Concerns about fairness in AI-based systems have been expressed in best-selling books (e.g., Weapons of Math Destruction), in recent technical papers (e.g., “Equality of Opportunity in Supervised Learning” at NIPS 2016), and in the White House report Preparing for the Future of Artificial Intelligence, to name just a few sources of this growing attention. As public, end-user, legal, and government attention to AI fairness grows, failure to adequately address these concerns is likely to become a barrier to the adoption and use of specific AI systems.
The development of safety-critical software in domains such as avionics, transportation systems, medical devices, and weapons systems is subject to extensive scrutiny for obvious reasons. Over the years, a variety of tools, techniques, and best practices have evolved to facilitate safety-critical software development and to support the communication of the reasons why the developer asserts that the system is safe for use.
Chuck Howell and Lashon Booker introduce the context of safety-critical software development, provide an overview of relevant tools and techniques from the safety-critical software community, and describe how they can be adapted to address fairness concerns for AI-based systems.
Chuck Howell is the chief engineer for intelligence programs and integration at the MITRE Corporation, where he serves as the senior technical focal point for how MITRE addresses its intelligence customers’ key technical challenges. He contributes to oversight of technical activities across MITRE’s intelligence programs, including participation in the development and integration of MITRE’s research program, direct technical support to projects, and review of technical aspects of intelligence community programs. Chuck has served as the chair of a DARPA panel refining a research agenda for building trustworthy systems, chair of a three-FFRDC study for DUSD (S&T) to develop a roadmap for science and technology in software engineering, the MITRE lead for a team (MITRE, Aerospace, Johns Hopkins APL) that developed a recommended set of mission-assurance program guidelines for the Missile Defense Agency, and a principal investigator on multiple MITRE research programs addressing various aspects of software assurance, safety cases, autonomy, and error handling. He was a member of the Institute of Electrical and Electronics Engineers (IEEE) Software Engineering Body of Knowledge industrial advisory board.
Lashon B. Booker is a senior principal scientist in MITRE’s Information Technology Technical Center. Previously, he worked at the Naval Research Laboratory, where he was eventually promoted to section head of the Intelligent Decision Aids section in the Navy Center for Applied Research in Artificial Intelligence. Lashon has published numerous technical papers in the areas of machine learning, probabilistic methods for uncertain inference, and distributed interactive simulation. He serves on the editorial boards of Evolutionary Intelligence and the Journal of Machine Learning Research and previously served as an associate editor of Adaptive Behavior and on the editorial boards of Machine Learning and Evolutionary Computation. He also regularly serves on the program committees for conferences in these areas. Lashon holds a PhD in computer and communication sciences from the University of Michigan.
©2017, O'Reilly Media, Inc.