When to trust AI
Machine learning is revolutionizing AI, but its vulnerability to adversarial examples (small perturbations to inputs that can catastrophically change a model's output) raises concerns about the readiness of this technology for widespread deployment.
Marta Kwiatkowska uses illustrative examples to give you an overview of techniques being developed to improve the robustness and safety of AI systems and to strengthen trust in them.
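
The following minimal sketch illustrates the phenomenon the abstract describes, using the classic fast gradient sign method (FGSM) of Goodfellow et al. on a toy logistic-regression model. The model, weights, and epsilon value here are purely illustrative and are not drawn from the talk itself.

```python
import numpy as np

# A minimal sketch of the fast gradient sign method (FGSM) on a toy
# logistic-regression "model". All weights and inputs are illustrative
# stand-ins; real adversarial examples target trained deep networks.

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # stand-in for a trained model's weights
b = 0.1
x = rng.normal(size=8)   # an input the model classifies correctly
y = 1.0                  # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# Gradient of the cross-entropy loss with respect to the *input* x.
# For logistic regression this is simply (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM: step in the direction of the gradient's sign, bounded by eps,
# so no individual feature changes by more than eps.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")
print(f"max per-feature change: {np.max(np.abs(x_adv - x)):.3f}")
```

Even on this toy model, a small bounded perturbation can move the prediction toward the decision boundary; this is the kind of fragility that the robustness and verification techniques surveyed in the talk aim to guard against.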

Marta Kwiatkowska
University of Oxford
Marta Kwiatkowska is a professor of computing systems and fellow of Trinity College, University of Oxford. She's known for fundamental contributions to the theory and practice of model checking for probabilistic systems. She led the development of the PRISM model checker, the leading software tool in the area. Probabilistic model checking has been adopted in diverse fields, including distributed computing, wireless networks, security, robotics, healthcare, systems biology, DNA computing, and nanotechnology, with genuine flaws found and corrected in real-world protocols. Marta was awarded two ERC Advanced Grants, VERIWARE and FUN2MODEL, and is a coinvestigator of the EPSRC Programme Grant on Mobile Autonomy. She was honored with the Royal Society Milner Award in 2018 and the Lovelace Medal in 2019 and is a Fellow of the Royal Society, the ACM, and the BCS, and a Member of Academia Europaea.