Keynotes
Prof. Dr.
Lucie Flek
University of Bonn

Abstract
Can Language Models Be Truly Human-Centered? Rethinking Personalization, Empathy, and Social Intelligence
As language models increasingly drive Web applications — from customer care chatbots and educational tutors to well-being and healthcare platforms — I keep returning to a core question: What does it really mean for an AI system to be human-centered? Is it about being empathetic? Personalized? Robust and fair to all social groups? Or all of these — at the same time?
For me, this starts with social intelligence — not as a buzzword, but as a measurable and trainable capability. In this talk, I’ll walk through how we can ground model behavior in structured representations of human traits, beliefs, and goals, making social reasoning both explainable and steerable. This lets us improve not just the outputs of LLMs, but their reasoning processes in social scenarios.
I’ll share insights from my work on trait-aware personalization, perspectivism and perspective-taking, reasoning, and the robustness of these capabilities across user groups. I’ll explore the tensions I’ve encountered between adapting to individuals and maintaining generalization — and how affective alignment, if ungrounded, can backfire or even become manipulative when used irresponsibly. In a world where messaging is tailored to your fears, values, and vulnerabilities, transparency might be one of our only defenses. My aim is to offer both a reflection and a challenge: to build language technologies that not only respond to users — but genuinely understand them, and do so responsibly.
Short Bio
Lucie Flek is a full professor at the University of Bonn, leading the Data Science and Language Technologies group. Her main interests lie in machine learning research for natural language processing (NLP), including AI robustness and safety. Her application areas range from large language models and conversational systems, through clinical NLP and mental health research, to misinformation detection and social media analysis. Prof. Flek has been active in both academia and industry – she managed natural language understanding research programs at Amazon Alexa and contributed to the Google Shopping Search launch in Europe. Her academic work at the University of Pennsylvania and University College London revolved around user modeling from text and its applications in psychology and the social sciences. Her PhD at TU Darmstadt focused on meaning ambiguity, incorporating expert lexical-semantic resources into DNN classification tasks. She has served as Area Chair for Computational Social Sciences at numerous ACL* conferences, and as an editor of the NLP section of multiple AI journals. Before her career in natural language processing, Prof. Flek contributed to particle physics research at CERN in the area of axion searches.
Prof. Dr.
Daniele Quercia
King’s College London / Nokia Bell Labs

Abstract
Addressing Misconceptions: Dispelling Myths in Responsible AI Practices
In this talk, Daniele will dive deep into debunking some prevalent myths surrounding responsible AI, a task crucial for informed decision-making. By challenging these misconceptions, we can pave the way for ethical and effective AI practices.
Controversial opinions and discussion are heavily encouraged – themes include:
- The use of impact assessments.
- The use and components of risk scoring.
- AI will take your job.
- AI and human intelligence.
- AI regulation will stifle innovation.
- Bias should always be eliminated.
- AI will be a competitive advantage.
Short Bio
Daniele Quercia is Director of Responsible AI at Nokia Bell Labs Cambridge (UK) and Professor of Computer Engineering at Politecnico di Torino. He was named one of Fortune magazine’s 2014 Data All-Stars and spoke about “happy maps” at TED. He was a Research Scientist at Yahoo Labs, a Horizon senior researcher at the University of Cambridge, and a Postdoctoral Associate in the Department of Urban Studies and Planning at MIT. He received his PhD from University College London.
Prof. Dr.
Irina Shklovski
University of Copenhagen

Abstract
Why can’t we get it right? The challenges and limits of ‘Responsible’ AI
In 1987, Robert Kraut, then a researcher at Bell Labs, asked how technology can be designed “to exploit its usefulness without exploiting its users.” Nearly four decades later, we still don’t have an answer. Over the years we’ve seen a progression of critique of technology: from initial concerns about data and privacy, to bias and discrimination in algorithmic systems, to ethical concerns about large language models. The current solution seems to be “Responsible AI,” or RAI – comprising a wild variety of tools, methods, checklists, standards, compliance evaluations, and ways of thinking. Yet AI systems continue to fail us, plagued by the same problems of bias, privacy concerns, and overhyped promises that consistently fall short. I will discuss some reasons why problematic technical systems seem unavoidable and what it takes to create technical systems “responsibly,” focusing on the problems of data quality, when and how technical challenges become ethical concerns, and the limits of “being ethical” when developing technologies.
Short Bio
Irina Shklovski is Professor of Communication and Computing in the Department of Computer Science at the University of Copenhagen. She also holds a WASP-HS visiting professorship in the Department of Thematic Studies, Gender Studies at Linköping University, where she leads the Operationalising Ethics for AI project. Her main research areas include speculative AI futures, responsible and ethical technology design, information privacy, creepy technologies, and the sense of powerlessness people experience in the face of massive personal data collection. Current projects explore topics such as data quality, synthetic data, explainable AI, and evidence of moral stress among technologists resulting from efforts to design and develop AI systems responsibly. Her prior work focused on crisis response and recovery, mediated communication, population mobility, and war-time technology use.