A practical guide to responsible AI: Building robust, secure, and safe AI
Who is this presentation for?
- Process owners and executive stakeholders who oversee the development, deployment, monitoring, compliance, or usage of models, as well as data scientists looking to better understand the limitations and considerations of AI in the enterprise
As AI is adopted in more diverse applications, businesses are beginning to recognize the need for appropriate oversight of such systems. Several areas of AI-related risk are emerging for businesses: performance risks, such as bias and opacity; security risks, including cybersecurity, open source, and adversarial attacks; and control risks, where models escape human oversight and go rogue. Nations and society at large must also weigh risks around ethics, job displacement, intelligence divides, and autonomous warfare.
Previously, Ilana Golbin and Anand Rao have shared approaches to address bias concerns and explain model decision making to different stakeholders. Here, they lead a deep dive into risks posed specifically around the security and control of models.
- Adversarial attacks: Because machine learning models learn complex relationships in data in order to make predictions, it becomes more challenging to anticipate how models could be fooled or gamed to produce a desired outcome.
- Cybersecurity risks: In many ways, AI shares the cybersecurity risks of conventional software; however, attacks on AI systems can be more difficult to detect.
- Model theft: Exposing models to the public or for wide use enables others to capture insights from the model and potentially reverse-engineer rough approximations of how it makes decisions, putting unique and expensive IP at risk.
- Inadequate governance: Organizations are often not prepared to address the unique risks AI can present in systems due to the lack of appropriate skills, transparency, and controls in AI-driven processes.
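To make the adversarial-attack risk above concrete, here is a minimal sketch (not part of PwC's toolkit; the model and weights are hypothetical) of the fast gradient sign method: an attacker nudges an input in the direction that most increases a classifier's loss, flipping or weakening its prediction with a small, hard-to-notice perturbation.

```python
import numpy as np

# Toy logistic-regression "model" with hypothetical, pre-learned weights.
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict_proba(x):
    """Probability that input x belongs to the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, y_true, eps=0.25):
    """Fast gradient sign method: shift each feature of x by eps in the
    direction that increases the cross-entropy loss for the true label."""
    p = predict_proba(x)
    grad_x = (p - y_true) * w  # gradient of the loss with respect to x
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.5, -0.5])
x_adv = fgsm_perturb(x, y_true=1.0)
print(predict_proba(x), predict_proba(x_adv))  # confidence drops after the attack
```

For deep networks the same idea applies, with the gradient obtained by backpropagation; defenses such as adversarial training and input sanitization are active areas of practice.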
Responsible AI is AI that reflects the ethical and regulatory environment in which an organization operates, is supported by robust end-to-end governance, is aware of potential bias, is explainable to different stakeholders, and is stable and secure. To help businesses address the risks of AI in the enterprise and fully realize its opportunities, PwC has developed a responsible AI framework and toolkit that enables end-to-end governance of systems, identifies and contextualizes ethical principles, considers the regulatory environment, and addresses performance needs and trade-offs with respect to bias, interpretability, robustness, security, and safety. You'll learn about specific client examples where security, safety, and governance considerations were addressed.
What you'll learn
- Understand the risks of AI and the five dimensions of responsible AI
- Anticipate some emerging concerns around the security and robustness of AI
- Identify some existing tools and capabilities required to address these concerns
Ilana Golbin is a director in PwC’s emerging technologies practice and globally leads PwC’s research and development of responsible AI. Ilana has almost a decade of experience as a data scientist helping clients make strategic business decisions through data-informed decision making, simulation, and machine learning.
Anand Rao is a partner in PwC’s Advisory Practice and the innovation lead for the Data and Analytics Group, where he leads the design and deployment of artificial intelligence and other advanced analytical techniques and decision support systems for clients, including natural language processing, text mining, social listening, speech and video analytics, machine learning, deep learning, intelligent agents, and simulation. Anand is also responsible for open source software tools related to Apache Hadoop and advanced analytics packages built on top of Python and R; research and commercial relationships with academic institutions and startups; and the research, development, and commercialization of innovative AI, big data, and analytic techniques. Previously, Anand was the chief research scientist at the Australian Artificial Intelligence Institute; program director for the Center of Intelligent Decision Systems at the University of Melbourne, Australia; and a student fellow at IBM’s T.J. Watson Research Center. He has held a number of board positions at startups and currently serves as a board member for a not-for-profit industry association. Anand has coedited four books and published over 50 papers in refereed journals and conferences. In 2007, he was awarded the most influential paper award for the decade from Autonomous Agents and Multi-Agent Systems (AAMAS) for his work on intelligent agents. He’s a frequent speaker on AI, behavioral economics, autonomous cars and their impact, analytics, and technology topics in academic and trade forums. Anand holds an MSc in computer science from Birla Institute of Technology and Science in India, a PhD in artificial intelligence from the University of Sydney, where he was awarded the university postgraduate research award, and an MBA with distinction from Melbourne Business School.