Frontiers in AI

‘Frontiers in AI’ is a series of short invited talks by members of the AI community currently doing particularly exciting and innovative work. The idea is to highlight important new results, techniques, and trends. These talks will be integrated into regular technical sessions with contributed talks on related topics. They should be of equal interest to those working in the same area of specialisation and those looking for a point of entry into that area.

Emir Demirović

TU Delft

Formal Guarantees by Design: Cases from Machine Learning, Controller Synthesis, and Combinatorial Optimisation

Bio: Dr. Emir Demirović is an Assistant Professor of Computer Science at TU Delft, where he leads the Constraint Solving (ConSol) research group. His work focuses on combinatorial optimization, constraint programming, and the integration of machine learning with optimization. As co-director of the Explainable AI in Transportation (XAIT) Lab, he also explores interpretable and trustworthy AI methods. Dr. Demirović earned his PhD from TU Wien in 2017, concentrating on SAT-based approaches for high school timetabling. He subsequently held research positions at the University of Melbourne, the National Institute of Informatics in Tokyo, and MCP in Vienna. His research has been published in leading venues such as AAAI, NeurIPS, CP, and CPAIOR. He has also contributed to algorithmic competitions, including the MaxSAT Evaluation and the ROADEF/EURO Challenge. His long-term vision is to automate complex decision-making processes currently handled by humans, enhancing efficiency and allowing experts to focus on creative tasks.

Nava Tintarev

Maastricht University

Measuring Explanation Quality — a Path Forward

Bio: Prof. Nava Tintarev is a Full Professor in Explainable AI at Maastricht University in the Department of Advanced Computing Sciences (DACS), where she also serves as the Director of Research. Her interdisciplinary work bridges computer science and human-centered evaluation, focusing on making AI systems more transparent and increasing user control. Prof. Tintarev specializes in developing interactive explanation interfaces for recommender systems and search technologies, emphasizing user empowerment and decision support. She is a lab director of the ICAI TAIM lab, working on trustworthy AI in media, and is an incoming board member of the Informatics Platform Nederlands. She was a founding principal investigator of ROBUST, a €87 million Dutch national initiative advancing trustworthy AI. Her research has also been supported by major organizations including IBM, Twitter, and the European Commission. Recognized as a Senior Member of the ACM in 2020, Prof. Tintarev has received multiple best paper awards for her research contributions. Her work underscores the importance of transparency and user agency, aiming to support better decision-making with AI.

Alessandro Abate

University of Oxford

Neural Proofs for Sound Verification of Complex Systems

Abstract: I discuss the construction of sound proofs for the formal verification and control of complex stochastic models of dynamical systems and reactive programs. Neural proofs are made up of two parts. Proof rules encode requirements for the verification of general temporal specifications over the models of interest. Certificates are then constructed from these proof rules with an inductive approach: neural networks are trained on samples from the dynamics, and their validity is then generalised via satisfiability-modulo-theories (SMT) queries that use full knowledge of the models. In the context of sequential decision-making problems over stochastic models, I also discuss how to generate policies/strategies/controllers that formally attain given specifications.

Bio: Alessandro Abate is Professor of Verification and Control in the Department of Computer Science at the University of Oxford. Earlier, he did research at Stanford University and at SRI International, and was an Assistant Professor at the Delft Center for Systems and Control, TU Delft. He received a Laurea degree from the University of Padua and his MS and PhD from UC Berkeley. His research work spans logics, probability, and AI.

Eleonora Giunchiglia

Imperial College London

A Posteriori Verification or A Priori Design? Navigating Deep Learning with Logical Requirements

Bio: Dr. Eleonora Giunchiglia is a Lecturer at Imperial College London’s I-X initiative and the Department of Electrical and Electronic Engineering. Her research focuses on enhancing the safety and trustworthiness of deep learning systems by integrating formal logical constraints into their design. Dr. Giunchiglia earned her PhD from the University of Oxford in 2022, where she explored the intersection of machine learning and logic. She subsequently held a postdoctoral position at TU Wien before joining Imperial College in 2024. Her work has led to the development of frameworks like PiShield and CCN+, which facilitate the creation of neural networks that comply with predefined constraints. These contributions have significant implications for fields such as autonomous driving and synthetic data generation. Beyond her research, Dr. Giunchiglia serves on the editorial board of the Neuro-Symbolic AI journal and is a junior member of the Future of Life Institute. Her commitment to AI safety underscores her dedication to developing technologies that are both innovative and aligned with human values.

Franz Wotawa

Graz University of Technology

On the Use of Artificial Intelligence for Autonomous Driving and Its Verification

Abstract: Extensive research has been devoted to autonomous systems, particularly in the field of autonomous driving, encompassing a range of tasks from object recognition to planning and control. In this presentation, we discuss research activities and solutions, with a particular focus on verification. Verification is essential for ensuring the safety and robustness of such systems under any scenario arising from interactions between the autonomous system and its environment. We further argue that verification during development alone appears to be insufficient, necessitating runtime monitoring based on regulations and physical laws, for which knowledge-based systems provide an excellent foundation.

Bio: Franz Wotawa received an M.Sc. in Computer Science in 1994 and a Ph.D. in 1996, both from the Vienna University of Technology. He is currently a professor of software engineering at the Graz University of Technology, where he headed the Institute for Software Technology from 2003 to 2009 and has done so again since 2020. His research interests include model-based and qualitative reasoning, theorem proving, mobile robots, verification and validation, and software testing and debugging. From October 2017 to 2024, he was the head of the Christian Doppler Laboratory for Quality Assurance Methodologies for Autonomous Cyber-Physical Systems. He has authored more than 480 peer-reviewed papers in journals, books, conferences, and workshops, and has supervised 107 master’s and 44 Ph.D. students. In 2016 he received the Lifetime Achievement Award of the International Diagnosis Community (DX) for his work on diagnosis. Franz Wotawa is a member of the Academia Europaea, the IEEE Computer Society, ACM, the Austrian Computer Society (OCG), and the Austrian Society for Artificial Intelligence, and a Senior Member of the AAAI.

Mykola Pechenizkiy

Eindhoven University of Technology

From Benchmarking to Understanding FairML

Abstract: Benchmarking is an important driver of (both real and illusory) progress in many subfields of machine learning (ML) research. In this talk we explore the challenges posed by benchmarking fairness-aware machine learning (fairML). FairML benchmarks inherit many of the difficulties of general ML benchmarks, but also pose some unique challenges. Most importantly, the inherently contextual and contestable nature of a social concept like fairness makes it difficult to quantify. Consequently, framing fairness as a black-box optimization problem trading off fairness and accuracy is misleading and can lead to undesirable side effects, or even harm those whom fairML interventions are intended to protect. We therefore call for a shift in focus from competitive benchmarks to evaluations that facilitate a deeper understanding of what fairML interventions do (and do not) achieve, and why.

Bio: Prof. Dr. Mykola Pechenizkiy is a Full Professor and Chair of the Data Mining group at the Department of Mathematics and Computer Science, Eindhoven University of Technology. His technical AI research focuses on predictive analytics and knowledge discovery from evolving data, with applications in industry, healthcare, and education. His research has been recognized with the IEEE ICDE 2023 Best Demo Award, the IDA 2023 Runner-up Frontier Prize, the IEEE DSAA 2022 Best Paper Award, the LoG 2022 Best Paper Award, and the ALA 2022 Best Paper Award. Since 2025, he has served as founding Director of the Center for Safe AI, where he leads the trustworthy and responsible AI interdisciplinary research program aiming to address both technical and socio-technical challenges.

Przemysław Biecek

Warsaw University of Technology

Model Science: Getting Serious about Explaining and Controlling AI Models

Bio: Prof. Dr. Przemysław Biecek is a Full Professor at both the Warsaw University of Technology and the University of Warsaw, specializing in mathematical statistics, machine learning, and explainable artificial intelligence (XAI). He leads the MI² Data Lab, focusing on developing tools and methods for responsible machine learning, with applications in healthcare, education, and public policy. His notable projects include the DALEX package and the DrWhy.AI framework, which support model interpretability and fairness assessments. Prof. Biecek’s research emphasizes the integration of statistical rigor with practical applications, aiming to enhance human decision-making through transparent AI systems. He has collaborated with organizations such as Samsung, IBM, and Disney, and is the founder of Solutions42.ai, a company dedicated to deploying responsible AI solutions. A strong advocate for data literacy, he actively contributes to the open-source community and promotes evidence-based approaches in AI. His work has been recognized in leading conferences, including ICML, CVPR, and ECAI.

Nadin Kokciyan

University of Edinburgh

Enabling Responsible AI with Humans

Bio: Nadin Kokciyan is an Associate Professor (Reader) in Artificial Intelligence in the School of Informatics at the University of Edinburgh and a Senior Research Affiliate at the Centre for Technomoral Futures, Edinburgh Futures Institute. Her research interests include human-centered AI, privacy, argument mining, responsible AI, and AI ethics. She leads the Human-Centered AI Lab (CHAI Lab) and is actively involved in initiatives that bridge technical AI development with ethical and societal considerations. She is currently a Turing BridgeAI Associate Advisor at the Alan Turing Institute, providing guidance to organisations on the ethics of AI. Nadin earned her PhD in Computer Engineering from Boğaziçi University, where she developed an AI-based privacy management framework to assist humans in making decisions. She held a postdoctoral position at King’s College London, where she worked on an AI-based health management tool that uses argumentation techniques to help users manage their health. Nadin regularly serves on the program committees of leading AI conferences such as AAMAS, IJCAI, AAAI, and ECAI. She has spent more than a decade in academia, and her expertise is in using AI to build decision-support tools that help people make informed decisions. In 2021 and 2025, she served as a guest editor for special issues on ‘Sociotechnical Perspectives of AI Ethics and Accountability’ and ‘Humans Meets AI’ in IEEE Internet Computing.
