Frontiers in AI

‘Frontiers in AI’ is a series of short invited talks by members of the AI community currently doing particularly exciting and innovative work. The idea is to highlight important new results, techniques, and trends. These talks will be integrated into regular technical sessions alongside contributed talks on related topics. They should be of equal interest to those working in the same area of specialisation and to those looking for a point of entry into it.

Emir Demirović

TU Delft

Constraints and Satisfiability

Bio: Dr. Emir Demirović is an Assistant Professor of Computer Science at TU Delft, where he leads the Constraint Solving (ConSol) research group. His work focuses on combinatorial optimization, constraint programming, and the integration of machine learning with optimization. As co-director of the Explainable AI in Transportation (XAIT) Lab, he also explores interpretable and trustworthy AI methods. Dr. Demirović earned his PhD from TU Wien in 2017, concentrating on SAT-based approaches for high school timetabling. He subsequently held research positions at the University of Melbourne, the National Institute of Informatics in Tokyo, and MCP in Vienna. His research has been published in leading venues such as AAAI, NeurIPS, CP, and CPAIOR. He has also contributed to algorithmic competitions, including the MaxSAT Evaluation and the ROADEF/EURO Challenge. His long-term vision is to automate complex decision-making processes currently handled by humans, enhancing efficiency and allowing experts to focus on creative tasks.

Nava Tintarev

Maastricht University

Humans and AI

Bio: Prof. Nava Tintarev is a Full Professor in Explainable AI at Maastricht University in the Department of Advanced Computing Sciences (DACS), where she also serves as the Director of Research. Her interdisciplinary work bridges computer science and human-centered evaluation, focusing on making AI systems more transparent and increasing user control. Prof. Tintarev specializes in developing interactive explanation interfaces for recommender systems and search technologies, emphasizing user empowerment and decision support. She is a lab director of the ICAI TAIM lab, working on trustworthy AI in media, and is an incoming board member of the Informatics Platform Nederlands. She was a founding principal investigator of ROBUST, a €87 million Dutch national initiative advancing trustworthy AI. Her research has also been supported by major organizations including IBM, Twitter, and the European Commission. Recognized as a Senior Member of the ACM in 2020, Prof. Tintarev has received multiple best paper awards for her research contributions. Her work underscores the importance of transparency and user agency, aiming to support better decision-making with AI.

Alessandro Abate

University of Oxford

TBD

Bio: Prof. Alessandro Abate is a Professor of Verification and Control in the Department of Computer Science at the University of Oxford, where he also serves as Deputy Head of Department. He leads the Oxford Control and Verification (OXCAV) group, focusing on the formal verification and optimal control of complex dynamical systems, particularly stochastic hybrid systems. His research integrates model-based techniques with data-driven methods, including Bayesian inference and reinforcement learning, to ensure the safety and reliability of cyber-physical systems in domains such as energy, automotive, and aerospace. Prof. Abate is a Faculty Fellow at the Alan Turing Institute and an IEEE Fellow, recognized for his contributions to the verification and control of stochastic hybrid systems. His academic journey includes research positions at SRI International and Stanford University, and a faculty role at TU Delft. He holds a Laurea degree from the University of Padua and an MS/PhD from UC Berkeley. His work has been acknowledged with several awards, including the 2024 HSCC Test-of-Time Award and the Outstanding Paper Award at AAAI 2023.

Eleonora Giunchiglia

Imperial College London

Machine Learning

Bio: Dr. Eleonora Giunchiglia is a Lecturer at Imperial College London’s I-X initiative and the Department of Electrical and Electronic Engineering. Her research focuses on enhancing the safety and trustworthiness of deep learning systems by integrating formal logical constraints into their design. Dr. Giunchiglia earned her PhD from the University of Oxford in 2022, where she explored the intersection of machine learning and logic. She subsequently held a postdoctoral position at TU Wien before joining Imperial College in 2024. Her work has led to the development of frameworks like PiShield and CCN+, which facilitate the creation of neural networks that comply with predefined constraints. These contributions have significant implications for fields such as autonomous driving and synthetic data generation. Beyond her research, Dr. Giunchiglia serves on the editorial board of the Neuro-Symbolic AI journal and is a junior member of the Future of Life Institute. Her commitment to AI safety underscores her dedication to developing technologies that are both innovative and aligned with human values.

Franz Wotawa

Graz University of Technology

Knowledge Representation and Reasoning

Bio: Franz Wotawa received his M.Sc. in Computer Science (1994) and his Ph.D. (1996), both from the Vienna University of Technology. He is currently a professor of software engineering at the Graz University of Technology, where he headed the Institute for Software Technology from 2003 to 2009 and has done so again since 2020. His research interests include model-based and qualitative reasoning, theorem proving, mobile robots, verification and validation, and software testing and debugging. From October 2017 to 2024, he headed the Christian Doppler Laboratory for Quality Assurance Methodologies for Autonomous Cyber-Physical Systems. He has authored more than 480 peer-reviewed papers in journals, books, conferences, and workshops, and has supervised 107 master’s and 44 Ph.D. students. In 2016 he received the Lifetime Achievement Award of the International Diagnosis Community (DX) for his work on diagnosis. He is a member of the Academia Europaea, the IEEE Computer Society, the ACM, the Austrian Computer Society (OCG), and the Austrian Society for Artificial Intelligence, and a Senior Member of the AAAI.

Mykola Pechenizkiy

Eindhoven University of Technology

Fairness, Ethics, and Trust

Bio: Prof. Dr. Mykola Pechenizkiy is a Full Professor and Chair of the Data Mining group at the Department of Mathematics and Computer Science, Eindhoven University of Technology. He also serves as Director of the Center for Safe AI. His research focuses on predictive analytics and knowledge discovery from evolving data, with applications in industry, healthcare, and education. He leads interdisciplinary programs on responsible data science and trustworthy AI, emphasizing fairness, transparency, and robustness in machine learning systems. Prof. Pechenizkiy has authored over 100 peer-reviewed publications and has significantly contributed to areas such as concept drift, feature engineering, and ethics-aware AI. He is an Adjunct Professor at the University of Jyväskylä and has held visiting positions at institutions including Columbia University and NYU. As President of the International Educational Data Mining Society, he actively promotes responsible AI practices.

Przemysław Biecek

Warsaw University of Technology

Machine Learning, Data Mining, Computer Vision, Fairness, Ethics, and Trust

Bio: Prof. Dr. Przemysław Biecek is a Full Professor at both the Warsaw University of Technology and the University of Warsaw, specializing in mathematical statistics, machine learning, and explainable artificial intelligence (XAI). He leads the MI² Data Lab, focusing on developing tools and methods for responsible machine learning, with applications in healthcare, education, and public policy. His notable projects include the DALEX package and the DrWhy.AI framework, which support model interpretability and fairness assessments. Prof. Biecek’s research emphasizes the integration of statistical rigor with practical applications, aiming to enhance human decision-making through transparent AI systems. He has collaborated with organizations such as Samsung, IBM, and Disney, and is the founder of Solutions42.ai, a company dedicated to deploying responsible AI solutions. A strong advocate for data literacy, he actively contributes to the open-source community and promotes evidence-based approaches in AI. His work has been recognized in leading conferences, including ICML, CVPR, and ECAI.

Nadin Kokciyan

University of Edinburgh

TBD

Bio: Nadin Kokciyan is an Associate Professor (Reader) in Artificial Intelligence in the School of Informatics at the University of Edinburgh and a Senior Research Affiliate at the Centre for Technomoral Futures, Edinburgh Futures Institute. Her research interests include human-centered AI, privacy, argument mining, responsible AI, and AI ethics. She leads the Human-Centered AI Lab (CHAI Lab) and is actively involved in initiatives that bridge technical AI development with ethical and societal considerations. She is currently a Turing BridgeAI Associate Advisor at the Alan Turing Institute, providing guidance to organisations on the ethics of AI. Nadin earned her PhD in Computer Engineering from Boğaziçi University, where she developed an AI-based privacy management framework to support human decision-making. She subsequently held a postdoctoral position at King’s College London, where she used argumentation techniques to develop an AI-based tool that helps users manage their health. She regularly serves on the program committees of leading AI conferences such as AAMAS, IJCAI, AAAI, and ECAI, and has spent more than a decade in academia. Her expertise lies in using AI to build decision-support tools that help humans make informed decisions. In 2021 and 2025, she served as a guest editor for the IEEE Internet Computing special issues ‘Sociotechnical Perspectives of AI Ethics and Accountability’ and ‘Humans Meets AI’.

Copyright 2023-2025 Prospero Multilab Srl. All rights reserved