Keynotes & Panels

Prof. Dr. Virginia Dignum, Umeå University

Beyond the AI Hype: Balancing Innovation and Social Responsibility

AI can extend human capabilities, but doing so requires addressing challenges in education, jobs, and bias. Taking a responsible approach involves understanding AI's nature, design choices, societal role, and ethical considerations. Recent AI developments, including foundational models, transformer models, generative models, and large language models (LLMs), raise questions about whether they are changing the paradigm of AI, and about the responsibility of those who develop and deploy AI systems. In all these developments, it is vital to understand that AI is not an autonomous entity but rather dependent on human responsibility and decision-making.

In this talk, I will further discuss the need for a responsible approach to AI that emphasizes trust, cooperation, and the common good. Taking responsibility involves regulation, governance, and awareness. Ethics and dilemmas are ongoing considerations, but they require understanding that trade-offs must be made and that decision processes are always contextual. Taking responsibility requires designing AI systems with values in mind and implementing regulations, governance, monitoring, agreements, and norms. Rather than viewing regulation as a constraint, it should be seen as a stepping stone for innovation, ensuring public acceptance, driving transformation, and promoting business differentiation. Responsible Artificial Intelligence (AI) is not an option but the only possible way forward in AI.

Virginia Dignum is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, where she leads the AI Policy Lab. She is also senior advisor on AI policy to the Wallenberg Foundations. She holds a PhD in Artificial Intelligence from Utrecht University (2004), is a member of the Royal Swedish Academy of Engineering Sciences (IVA), and is a Fellow of the European Artificial Intelligence Association (EURAI). She is a member of the United Nations Advisory Body on AI, the Global Partnership on AI (GPAI), UNESCO’s expert group on the implementation of AI recommendations, and the OECD’s expert group on AI; founder of ALLAI, the Dutch AI Alliance; and co-chair of the WEF’s Global Future Council on AI. She was a member of the EU’s High-Level Expert Group on Artificial Intelligence and leader of UNICEF’s guidance for AI and children. Her new book, “The AI Paradox”, is planned for publication in late 2024.



Prof. Dr. Isabel Valera, Saarland University

Isabel Valera is a full Professor of Machine Learning at the Department of Computer Science of Saarland University in Saarbrücken, Germany, and Adjunct Faculty at the MPI for Software Systems in Saarbrücken. She is a fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS), where she is part of the Robust Machine Learning Program and of the Saarbrücken Artificial Intelligence & Machine Learning (Sam) Unit. Prior to this, she was an independent group leader at the MPI for Intelligent Systems in Tübingen, Germany. She held a German Humboldt Post-Doctoral Fellowship and a “Minerva Fast Track” fellowship from the Max Planck Society. She obtained her PhD in 2014 and her MSc degree in 2012 from the University Carlos III in Madrid, Spain, and worked as a postdoctoral researcher at the MPI for Software Systems in Germany and at the University of Cambridge, UK.

Prof. Dr. Seth Lazar, Australian National University

Seth Lazar is Professor of Philosophy at The Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford. 

He is a member of the Executive Committee of the ACM Conference on Fairness, Accountability, and Transparency. Leading the Machine Intelligence and Normative Theory Lab (MINT), he directs research projects on the moral and political philosophy of AI, funded by the ARC, the Templeton World Charity Foundation, the IAG, and Schmidt Futures.




Prof. Dr. Bettina Berendt, Technical University of Berlin, Weizenbaum Institute, and KU Leuven


De-biased, Diverse, Divisive: On Ethical Perspectives Regarding the De-biasing of GenAI and Their Actionability

AI tech companies cannot seem to get it right. After years of evidence-based criticism of biases in AI, particularly in decision models, LLMs, and other generative AI, and after years of research on and toolboxes for de-biasing, many companies have implemented such safeguards in their services. However, ridicule and protests erupted recently when users discovered generated images that were “(overly?) diversified” with respect to gender and ethnicity, and answers to ethical questions that were “(overly?) balanced” with regard to moral stances. Is this seeming contradiction just a backlash, or does it point to deeper issues? In this talk, I will analyse instances of recent discourse on too little or too much “diversification” of (Gen)AI and relate this to methodological criticism of “de-biasing”. A second aim is to contribute to broadening and deepening the answers that computer science and engineering can and should give to enhance fairness and justice.


Bettina Berendt holds the Chair for Internet and Society at Technical University of Berlin, Germany. She is also a director of the Weizenbaum Institute in Berlin and a guest professor at KU Leuven, Belgium. Her research includes Data Science and Critical Data Science, especially with respect to privacy and data protection, discrimination and fairness, and AI and ethics, with a focus on textual and web-related data.

Bettina holds a PhD in Computer Science/Cognitive Science from the University of Hamburg, Germany, and a Habilitation in Information Systems from Humboldt University Berlin, Germany. She was an assistant professor in Information Systems at Humboldt University Berlin from 2003 to 2007 and an associate professor in Artificial Intelligence at KU Leuven, Belgium, from 2007 to 2019.