People are slowly coming to realize that technology, especially artificial intelligence, is not neutral. It’s imbued with the same values, assumptions, and biases as its creators. However, AI also has incredible potential to identify the hidden biases in your organization’s processes and products so that you can work to remove them. By removing bias, you increase accuracy, which is better not just for society but also for your organization’s bottom line.
Kathy Baxter explains how to use AI to address bias in your organization rather than perpetuate it.
Kathy Baxter is the architect of ethical AI practice at Salesforce, where she develops research-informed best practices to educate employees, customers, and the industry on the development of ethical AI. She collaborates with external AI and ethics experts to continuously evolve policies, practices, and products—working to create a fairer, more just, and equitable society. You can read about her research on the Salesforce UX Medium channel. Kathy has 20 years of experience in the tech industry, at companies including Google, eBay, and Oracle. She holds an MS in engineering psychology and a BS in applied psychology from the Georgia Institute of Technology. The second edition of her book, Understanding Your Users, was published in May 2015.
©2019, O'Reilly Media, Inc.