Workshop for the partners of the chair

Societal questions around large language models (LLMs)

Fabian Suchanek, professor at Télécom Paris, head of the NoRDF project

Large Language Models (LLMs), such as ChatGPT, LLaMA, and Mixtral, are playing an increasingly significant role in our lives and industries. However, they also present considerable challenges for our society, including issues surrounding intellectual property, security risks (such as hacking and prompt injection), the generation of fake news, influence on elections, micro-targeting, and environmental costs. In this presentation, I will provide an overview of the roughly twenty factors that we have identified as important to consider when developing or using LLMs.


Verifying large language models

Zacchary Sadeddine, PhD student at the NoRDF project

It is challenging to fully trust the responses of Large Language Models (LLMs), and it is therefore still necessary to verify their answers by hand. This task becomes more manageable if we ask the LLM to break its reasoning down into a “chain of thought,” i.e., a sequence of intermediate steps. We propose a new method that automatically detects errors in these chains of thought using textual implications. The verification is carried out by a completely symbolic reasoner, making it fully explainable, which improves the reliability of LLMs while also enabling their evaluation.
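
To make the general idea concrete, here is a minimal illustrative sketch of step-by-step chain-of-thought verification. It is not the NoRDF system described above: in place of the symbolic reasoner, it substitutes an off-the-shelf natural-language-inference model to check whether each step is textually implied by the question and the preceding steps. The model name, the entailment threshold, and the function `verify_chain` are assumptions for illustration only.

```python
from transformers import pipeline

# Hypothetical illustration: check each chain-of-thought step by asking an
# off-the-shelf NLI model whether the step is entailed by the question plus
# the steps that precede it. Model and threshold are assumptions, not the
# NoRDF project's symbolic reasoner.
nli = pipeline("text-classification", model="roberta-large-mnli")

def verify_chain(question: str, steps: list[str], threshold: float = 0.5) -> list[bool]:
    """Return, for each step, whether it is entailed by what came before."""
    verdicts = []
    premise = question
    for step in steps:
        # Premise = question + accepted context so far; hypothesis = this step.
        scores = nli({"text": premise, "text_pair": step}, top_k=None)
        entailment = next(s["score"] for s in scores if s["label"] == "ENTAILMENT")
        verdicts.append(entailment >= threshold)
        premise = premise + " " + step  # later steps may build on earlier ones
    return verdicts

chain = [
    "All birds have wings.",
    "A sparrow is a bird.",
    "Therefore, a sparrow has wings.",
]
print(verify_chain("Does a sparrow have wings?", chain))
```

A flagged step (a `False` in the output) points a human reviewer to exactly where the chain breaks down, which is what makes step-level verification cheaper than checking the final answer alone.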