Presented By
O’Reilly + Cloudera
Make Data Work
29 April–2 May 2019
London, UK

Privacy, identity, and autonomy in the age of big data and AI

Sandra Wachter (University of Oxford)
10:15–10:35 Thursday, 2 May 2019
Location: Auditorium
Secondary topics: Security and Privacy
Average rating: 4.65 (20 ratings)

Big data analytics and AI draw nonintuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value and create new opportunities for discriminatory, biased, and invasive decision making, often based on sensitive attributes of individuals’ private lives. European data protection law affords greater protection to the processing of sensitive data, or “special categories,” describing characteristics such as health, ethnicity, or political beliefs. Big data and AI also aim to identify small, nonintuitive patterns and meaningful connections between individuals and their data. The analytics behind much automated decision making and profiling aren’t concerned with singling out or identifying a unique individual but rather with drawing inferences from large datasets, calculating probabilities, and learning about types or groups of people.

Sandra Wachter discusses how these technologies expand the scope of potential victims of discrimination and other potential harms (e.g., privacy, financial, reputational) to include ephemeral groups of individuals perceived to be similar by a third party. European antidiscrimination laws, which are based on historical lessons, will fail to apply to ad hoc groups that are not defined by a historically protected attribute (e.g., ethnicity, religion). Sandra concludes by arguing that a right to reasonable inferences could provide a remedy against new forms of discrimination and greater protection for group privacy interests.


Sandra Wachter

University of Oxford

Sandra Wachter is a lawyer and research fellow (assistant professor) in data ethics, AI, robotics, and internet regulation/cybersecurity at the Oxford Internet Institute at the University of Oxford, where she also teaches internet technologies and regulation. Sandra is also a fellow at the Alan Turing Institute in London; a fellow of the World Economic Forum’s Global Futures Council on Values, Ethics, and Innovation; an academic affiliate at the Bonavero Institute of Human Rights at Oxford’s Law Faculty; and a member of the Law Committee of the IEEE. Sandra serves as a policy advisor for governments, companies, and NGOs around the world on regulatory and ethical questions concerning emerging technologies. Her work has been featured in the Telegraph, the Financial Times, the Sunday Times, the Economist, Science, the BBC, the Guardian, Le Monde, New Scientist, Die Zeit, Der Spiegel, Süddeutsche Zeitung, Engadget, and Wired. In 2018, she won the O2RB Excellence in Impact Award, and in 2017, the CognitionX AI Superhero Award.

Sandra specializes in technology, IP, and data protection law as well as European, international, human rights, and medical law. She’s also interested in the legal and ethical aspects of robotics (e.g., surgical, domestic, and social robots) and autonomous systems (e.g., autonomous and connected cars), including liability, accountability, and privacy issues. Internet policy, regulation, and cybersecurity issues are also at the heart of her research, where she addresses areas such as online surveillance and profiling, censorship, intellectual property law, and human rights and identity online. Her previous work also examined (bio)medical law and bioethics in areas such as interventions in the genome and genetic testing under the Convention on Human Rights and Biomedicine. Sandra studied at the University of Oxford and the University of Vienna and previously worked at the Royal Academy of Engineering and the Austrian Ministry of Health.