When to trust AI
Machine learning solutions are revolutionizing AI, but their vulnerability to adversarial examples—small perturbations to inputs that can catastrophically change the output—raises concerns about the readiness of this technology for widespread deployment.
Marta Kwiatkowska uses illustrative examples to give you an overview of techniques being developed to improve the robustness and safety of AI systems and the trust we can place in them.
University of Oxford
Marta Kwiatkowska is a professor of computing systems and fellow of Trinity College, University of Oxford. She’s known for fundamental contributions to the theory and practice of model checking for probabilistic systems. She led the development of the PRISM model checker, the leading software tool in the area. Probabilistic model checking has been adopted in diverse fields, including distributed computing, wireless networks, security, robotics, healthcare, systems biology, DNA computing, and nanotechnology, with genuine flaws found and corrected in real-world protocols. Marta was awarded two ERC Advanced Grants, VERIWARE and FUN2MODEL, and is a co-investigator of the EPSRC Programme Grant on Mobile Autonomy. She was honored with the Royal Society Milner Award in 2018 and the Lovelace Medal in 2019 and is a Fellow of the Royal Society, ACM, and BCS, and a Member of Academia Europaea.