AI and machine-learning technologies are transforming how software is built, both through AI-driven components embedded in systems and through generative tools such as large language models. Research and practical experience show that AI-based software behaves differently from traditional systems, introducing new kinds of dependencies (for example, on training data) and greater behavioral variability. This shift calls for new approaches to software quality assurance.
This research theme focuses on developing such approaches and on exploring how AI tools can themselves support quality assurance activities. Because critical systems require clear human accountability, we also investigate human-centered methods that promote transparency and explainability in AI-based systems.
Finally, AI and ML systems have long-term societal and environmental impacts. For this reason, our theme also provides a natural foundation for our sustainability research.