
Jack Stilgoe: Trustworthy AI

Abstract

As hype and investment in AI have grown, policymakers and companies have become increasingly concerned about public trust. In this talk, I will ask what trust in technology actually means and what it would take to build genuinely trustworthy AI. The UK government has committed to regulating AI outcomes rather than scrutinising how AI works or the purposes driving its development. But the lesson from previous emerging technologies is that public trust is multidimensional and fragile, and that a wait-and-see approach is unlikely to be trustworthy. As a case study, I will discuss our recent report for the Government’s Centre for Data Ethics and Innovation on responsible innovation for self-driving vehicles, and develop a model for governance that considers public trust not just in what AI does, but also in how it does it and why it is being developed.

The event is mainly in person, but Zoom access will be made available.

Bio

Dr Jack Stilgoe is a professor in science and technology studies at University College London, where he researches the governance of emerging technologies. He is principal investigator of the ESRC Driverless Futures project (2019-2021), which seeks to anticipate the politics of self-driving cars. He worked with EPSRC and ESRC to develop a framework for responsible innovation, which is now used by the Research Councils. Among other publications, he is the author of ‘Who’s Driving Innovation?’ (2020, Palgrave) and ‘Experiment Earth: Responsible innovation in geoengineering’ (2015, Routledge). He previously worked in science and technology policy at the Royal Society and the think tank Demos. He is a fellow of the Turing Institute.