Put AI to work
June 26-27, 2017: Training
June 27-29, 2017: Tutorials & Conference
New York, NY

"Fairness cases" as an accelerant and enabler for AI adoption

Chuck Howell (MITRE), Lashon Booker (MITRE)
4:00pm–4:40pm Wednesday, June 28, 2017
Impact of AI on business and society
Location: Beekman Level: Intermediate
Secondary topics: Ethics, Governance, and Privacy

Prerequisite Knowledge

  • A working knowledge of AI system development and current or planned experience with AI systems where questions about the fairness of the recommendations or results could be a barrier to acceptance

What you'll learn

  • Explore tools and techniques that can reduce rework costs and delays and increase confidence in the fairness of AI systems

Description

Concerns about fairness in AI-based systems have been expressed in best-selling books (e.g., Weapons of Math Destruction), recent technical papers (e.g., “Equality of Opportunity in Supervised Learning” at NIPS 2016), and the White House report Preparing for the Future of Artificial Intelligence, to name just a few sources of this growing attention. As public, end-user, legal, and government attention to AI fairness grows, failure to adequately address these concerns is likely to be a barrier to the adoption and use of specific AI systems.

The development of safety-critical software in domains such as avionics, transportation systems, medical devices, and weapons systems is subject to extensive scrutiny for obvious reasons. Over the years, a variety of tools, techniques, and best practices have evolved to facilitate safety-critical software development and to support the communication of the reasons why the developer asserts that the system is safe for use.

Chuck Howell and Lashon Booker introduce the context of safety-critical software development, provide an overview of relevant tools and techniques from the safety-critical software community, and describe how they can be adapted to address fairness concerns for AI-based systems.

Topics include:

  • Tools, notations, and best practices associated with structured safety cases
  • Hazard analysis as applied to subtle and unexpected potential causes of mishaps (or, in this case, violations of fairness), including “misuse cases”
  • Instrumentation and monitoring of complex systems for anomaly detection and runtime verification
  • Tools and notations for incident investigation to expose subtle contributing causes to mishaps and to reduce the consequences of confirmation bias in the investigation (As Fred Brooks put it in The Design of Design, “Be careful how you fix what you don’t understand.”)
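To make the monitoring idea in the list above concrete, here is a minimal sketch of a runtime fairness monitor: it tracks positive-outcome rates per group over a sliding window and flags when the demographic-parity gap exceeds a tolerance. The class name, group labels, window size, and threshold are all illustrative assumptions, not part of the speakers' material.

```python
from collections import deque


class FairnessMonitor:
    """Illustrative runtime monitor (not the speakers' tooling).

    Records (group, positive_outcome) pairs in a sliding window and
    reports the demographic-parity gap: the difference between the
    highest and lowest per-group positive-outcome rates.
    """

    def __init__(self, threshold=0.2, window=1000):
        self.threshold = threshold          # assumed tolerance for the gap
        self.records = deque(maxlen=window)  # sliding window of decisions

    def observe(self, group, positive):
        """Log one decision: which group, and whether the outcome was positive."""
        self.records.append((group, bool(positive)))

    def parity_gap(self):
        """Max minus min positive-outcome rate across observed groups."""
        rates = {}
        for group in {g for g, _ in self.records}:
            outcomes = [p for g, p in self.records if g == group]
            rates[group] = sum(outcomes) / len(outcomes)
        if len(rates) < 2:
            return 0.0
        return max(rates.values()) - min(rates.values())

    def violation(self):
        """True when the current gap exceeds the configured threshold."""
        return self.parity_gap() > self.threshold
```

In use, each deployed decision would be fed through `observe`, and `violation()` would trigger an alert or incident-investigation workflow; real systems would add statistical confidence handling and richer fairness metrics.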

Chuck Howell

MITRE

Chuck Howell is the chief engineer for intelligence programs and integration at the MITRE Corporation, where he serves as the senior technical focal point for facilitating how MITRE addresses its intelligence customers’ key technical challenges. He contributes to oversight of technical activities across MITRE’s Intelligence programs, including participation in the development and integration of MITRE’s research program, direct technical support to projects, and review of technical aspects of intelligence community programs. Chuck has served as the chair of a DARPA panel refining a research agenda for building trustworthy systems, chair of a three-FFRDC study for DUSD (S&T) to develop a roadmap for S&T in software engineering, the MITRE lead for a team (MITRE, Aerospace, Johns Hopkins APL) that developed a recommended set of mission-assurance program guidelines for the Missile Defense Agency, and a principal investigator on multiple MITRE research programs addressing various aspects of software assurance, safety cases, autonomy, and error handling. He was a member of the Institute of Electrical and Electronics Engineers (IEEE) Software Engineering Body of Knowledge industrial advisory board.


Lashon Booker

MITRE

Lashon B. Booker is a senior principal scientist in MITRE’s Information Technology Technical Center. Previously, he worked at the Naval Research Laboratory, where he was eventually promoted to section head of the Intelligent Decision Aids section in the Navy Center for Applied Research in Artificial Intelligence. Lashon has published numerous technical papers in the areas of machine learning, probabilistic methods for uncertain inference, and distributed interactive simulation. He serves on the editorial boards of Evolutionary Intelligence and the Journal of Machine Learning Research and previously served as an associate editor of Adaptive Behavior and on the editorial boards of Machine Learning and Evolutionary Computation. He also regularly serves on the program committees for conferences in these areas. Lashon holds a PhD in computer and communication sciences from the University of Michigan.