14–17 Oct 2019

When to trust AI

Marta Kwiatkowska (University of Oxford)
9:55–10:10 Thursday, 17 October 2019
Location: King's Suite
Secondary topics: Ethics, Security, and Privacy
Average rating: 4.79 (19 ratings)

Machine learning solutions are revolutionizing AI, but their instability against adversarial examples—small perturbations to inputs that can catastrophically affect the output—raises concerns about the readiness of this technology for widespread deployment.

Marta Kwiatkowska uses illustrative examples to give you an overview of techniques being developed to improve the robustness, safety, and trustworthiness of AI systems.

Marta Kwiatkowska

University of Oxford

Marta Kwiatkowska is a professor of computing systems and fellow of Trinity College, University of Oxford. She’s known for fundamental contributions to the theory and practice of model checking for probabilistic systems. She led the development of the PRISM model checker, the leading software tool in the area. Probabilistic model checking has been adopted in diverse fields, including distributed computing, wireless networks, security, robotics, healthcare, systems biology, DNA computing, and nanotechnology, with genuine flaws found and corrected in real-world protocols. Marta was awarded two ERC Advanced Grants, VERIWARE and FUN2MODEL, and is a coinvestigator of the EPSRC Programme Grant on Mobile Autonomy. She was honored with the Royal Society Milner Award in 2018 and the Lovelace Medal in 2019, and is a Fellow of the Royal Society, the ACM, and the BCS, and a Member of Academia Europaea.

  • Intel AI
  • O'Reilly
  • Amazon Web Services
  • IBM Watson
  • Dell Technologies
  • Hewlett Packard Enterprise
  • AXA
