Do large artificial intelligence language models have a duty to tell the truth?
The talk will propose a pathway to create a legal truth duty applicable to providers of both narrow- and general-purpose large artificial intelligence language models.
Date/Time: 25 Nov 2024, 03:00 PM to 25 Nov 2024, 04:00 PM
Venue: Zoom
Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education, and the development of shared social truths in democratic societies. LLMs produce responses that are plausible, helpful, and confident, but that contain factual inaccuracies, inaccurate summaries, misleading references, and biased information. These subtle mistruths are poised to cause a severe cumulative degradation and homogenisation of knowledge over time. This talk examines the existence and feasibility of a legal duty for LLM providers to create models that “tell the truth.” LLM providers should be required to mitigate careless speech and better align their models with truth through open, democratic processes. Careless speech is defined and contrasted with the simplified concept of “ground truth” in LLMs and with prior discussion of related risks, including hallucinations, misinformation, and disinformation. EU human rights law and liability frameworks contain some truth-related obligations for products and platforms, but these are relatively limited in scope and sectoral reach. The talk concludes by proposing a pathway to create a legal truth duty applicable to providers of both narrow- and general-purpose LLMs, and discusses “zero-shot translation” as a prompting method that constrains LLMs to better align their outputs with verified, truthful information.
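The abstract mentions “zero-shot translation” as a prompting method for constraining LLM outputs to verified information. As a rough illustration of one plausible reading of that idea, the sketch below builds a prompt that supplies verified source text and instructs the model to answer only by restating that text; the prompt wording, the example source sentence, and the query_llm() helper are illustrative assumptions, not material from the talk.

```python
# Illustrative sketch of a "zero-shot translation" style prompt: rather than
# letting the model answer from its own parametric knowledge, the prompt
# supplies verified source text and asks the model to restate ("translate")
# only that text. The wording and the query_llm() stub are hypothetical.

VERIFIED_SOURCE = "The EU AI Act entered into force on 1 August 2024."
# In practice, replace with text retrieved from a vetted, trusted source.

def build_zero_shot_translation_prompt(question: str, source: str) -> str:
    """Constrain the model to rephrase verified text rather than generate freely."""
    return (
        "Answer the question using ONLY the verified source text below. "
        "Do not add any fact that is not in the source. "
        "If the source does not contain the answer, reply 'Not in source.'\n\n"
        f"Verified source:\n{source}\n\n"
        f"Question: {question}\nAnswer:"
    )

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to any chat-completion API."""
    raise NotImplementedError("wire this to your LLM provider of choice")

if __name__ == "__main__":
    prompt = build_zero_shot_translation_prompt(
        "When did the EU AI Act enter into force?", VERIFIED_SOURCE
    )
    print(prompt)  # inspect the constrained prompt before sending it to a model
```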
How to access the online meeting:
25 Nov 2024, 03:00 PM (London)
Meeting ID: 881 3311 9231
Passcode: 175756