Metrics
Vision
All Software Center companies have efficient product development, release and deployment processes.
Mission
We help the companies design and develop modern measurement methods and tools by utilizing state-of-the-art analytics, AI and machine learning.
We use action research to increase the impact and adoption of the results (Action Research in Software Engineering), i.e., we work on-site at the companies.
Over the course of ten years of collaboration, our theme has resulted in over 50 models and tools. We have also published over 200 papers and books that disseminate the results to the public domain.
Examples of metrics we have designed and introduced at the companies:
- Release readiness: measuring the number of weeks that the product development team needs to release the product (Agile); a sketch follows this list. Paper: Release Readiness Indicator for Mature Agile and Lean Software Development Projects (Springer).
- Change waves: measuring the impact of a change on a software product. Paper: Identifying Implicit Architectural Dependencies Using Measures of Source Code Change Waves (IEEE).
- Defect inflow: predicting the number of defects that the development team needs to handle in the coming weeks; a sketch follows this list. Paper: Predicting weekly defect inflow in large software projects based on project planning and test status (Elsevier).
- Code quality: measuring and improving the impact of coding practices on software quality. Paper: Recognizing lines of code violating company-specific coding guidelines using machine learning (Springer).
- Engineering level: measuring the quality of code in a git repository; a sketch follows this list. Paper: PHANTOM: Curating GitHub for engineered software projects using time-series clustering (Springer).
- SimSAX project similarity: measuring the similarity of projects, for example to monitor process evolution; a sketch follows this list. Papers: LegacyPro—A DNA-Inspired Method for Identifying Process Legacies in Software Development Organizations (IEEE), and SimSAX: A measure of project similarity based on symbolic approximation method and software defect inflow (Elsevier).
- MeTeaM: measuring the maturity of software metrics teams. Paper: MeTeaM—A method for characterizing mature software metrics teams (Elsevier).
- MeSRAM: measuring the quality and quantity of measurement programs. Paper: MeSRAM – A method for assessing robustness of measurement programs in large software development organizations and its industrial evaluation (Elsevier).
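A minimal sketch of the release readiness idea, assuming a simplified burn-down model in which readiness is the open defect backlog divided by the net weekly defect removal rate; the function and the numbers are our illustration, not the exact indicator from the paper:

```python
# Sketch: release readiness as weeks-to-release, under a simplified
# burn-down assumption (remaining backlog / net removal rate).
# Illustrative only; the published indicator is more elaborate.

def weeks_to_release(open_defects: int,
                     weekly_removed: float,
                     weekly_inflow: float) -> float:
    """Estimate the number of weeks until the defect backlog reaches zero."""
    net_rate = weekly_removed - weekly_inflow  # net defects closed per week
    if net_rate <= 0:
        return float("inf")  # backlog is not shrinking: not releasable yet
    return open_defects / net_rate

# Example: 120 open defects, 40 closed and 25 reported per week -> 8 weeks.
print(weeks_to_release(120, 40, 25))
```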
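A minimal sketch of defect inflow prediction, assuming a plain linear regression over hypothetical weekly planning and test-status features; the paper evaluates its own models and predictors, which this example does not reproduce:

```python
# Sketch: predicting next week's defect inflow from planning/test-status
# features with a simple linear model. Feature names and data are
# hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per historical week: [test cases executed, work packages
# delivered, open defects at week start].
X = np.array([
    [120, 4, 80],
    [150, 5, 95],
    [ 90, 3, 70],
    [200, 6, 110],
    [170, 5, 100],
])
y = np.array([35, 42, 28, 55, 47])  # defects reported in each of those weeks

model = LinearRegression().fit(X, y)
next_week = np.array([[160, 5, 105]])
print(f"Predicted defect inflow: {model.predict(next_week)[0]:.0f}")
```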
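A minimal sketch of the PHANTOM idea, assuming weekly commit counts per repository have already been extracted; we derive a few simple time-series features and cluster with k-means, whereas the actual tool uses a richer feature set over more measures:

```python
# Sketch: separating steadily engineered repositories from burst-and-idle
# ones by clustering simple commit time-series features. The repository
# names and series are made up for illustration.
import numpy as np
from sklearn.cluster import KMeans

weekly_commits = {
    "repo-a": [12, 15, 11, 14, 13, 16, 12, 15],  # steady activity
    "repo-b": [40,  0,  0,  1,  0,  0,  2,  0],  # one burst, then idle
    "repo-c": [10, 12,  9, 14, 11, 13, 10, 12],
    "repo-d": [25,  1,  0,  0,  3,  0,  0,  1],
}

def features(series):
    x = np.asarray(series, dtype=float)
    return [x.mean(), x.std(), (x == 0).mean()]  # level, spread, idle ratio

names = list(weekly_commits)
X = np.array([features(weekly_commits[n]) for n in names])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for name, label in zip(names, labels):
    print(name, "-> cluster", label)
```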
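A minimal sketch in the spirit of SimSAX: each defect-inflow series is z-normalized, reduced by piecewise aggregate approximation, and mapped to a symbolic word, and two projects are then compared by their shared n-grams. The alphabet size, word length, and the Jaccard-style score below are illustrative choices, not the paper's exact definition:

```python
# Sketch: SAX (symbolic aggregate approximation) of two defect-inflow
# series, plus a similarity score over shared n-grams. Simplified take
# on the SimSAX idea; all parameters are illustrative.
import numpy as np

BREAKPOINTS = [-0.43, 0.43]  # Gaussian breakpoints for a 3-symbol alphabet
ALPHABET = "abc"

def sax(series, word_len=8):
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                               # z-normalize
    paa = [seg.mean() for seg in np.array_split(x, word_len)]  # PAA segments
    return "".join(ALPHABET[np.searchsorted(BREAKPOINTS, v)] for v in paa)

def ngram_similarity(a, b, n=3):
    grams_a = {a[i:i + n] for i in range(len(a) - n + 1)}
    grams_b = {b[i:i + n] for i in range(len(b) - n + 1)}
    return len(grams_a & grams_b) / len(grams_a | grams_b)  # Jaccard index

inflow_p1 = [10, 14, 22, 30, 28, 20, 12, 8, 9, 15, 25, 31, 27, 18, 11, 7]
inflow_p2 = [ 8, 12, 20, 27, 29, 22, 14, 9, 8, 13, 24, 30, 26, 17, 10, 6]
w1, w2 = sax(inflow_p1), sax(inflow_p2)
print(w1, w2, f"similarity={ngram_similarity(w1, w2):.2f}")
```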
Projects
- Continuous Product and Organizational Performance
- Stakeholder Communication
- Associated: MicroHRV
- Associated: T4AI
- Associated: Develop
- Finished: Quasar@Car - Quantifying meta-model changes
- Finished: VISEE - Verification and Validation of ISO 26262 requirements at the complete EE system level
- Finished: Longitudinal Measurement of Agility and Group Development
- Finished: Size and Quality between Software Development Approaches
- Finished: RAWFP - Resource Aware Functional Programming
Metrics blog
- Software-on-demand – experiments (September 4, 2025): miroslawstaron/screenPong, miroslawstaron/screenTerminal. I was keen on testing the software-on-demand hypothesis advocated by OpenAI in their last keynote, but it took me a moment to see how to test it. Then I realized that I could work on creating screensavers based on my ideas. Not the ones that change images; we don't need AI for that. […] Miroslaw Staron
- Measuring AI (August 20, 2025): How Do You Measure AI? | Communications of the ACM. Due to my background in software metrics, I have been interested in the measurement of AI systems for a while. What I found is that there are benchmarks and suites of metrics used for measuring AI. But… when GPT-5 was announced, most of the metrics that […] Miroslaw Staron
- Software on Demand: from IDEs to Intent (August 14, 2025): OpenAI's latest keynote put one idea forward: coding is shifting from writing lines to expressing intent. With GPT-5's push into agentic workflows—and concrete coding gains on benchmarks like SWE-bench Verified—the "software on demand" era is no longer speculative. You describe behavior; an agent plans, scaffolds, implements, runs tests, and iterates. Humans stay in the loop […] Miroslaw Staron
- GPT-5 – the best and the greatest? (August 11, 2025): In the last few days, OpenAI announced their newest model. The model seems to be really good. In fact, it is so good that the improvement over the previous ones is only 1% in some cases (from 98% to 99%). This means that we need better benchmarks to show how the models differ. Well, […] Miroslaw Staron
- Is Quantum the next big thing for the masses? (May 9, 2025): But what is quantum computing? (Grover's Algorithm) If you are looking at quantum computing as a programmer, people start "dumbing it down" for you by talking about superpositions and multiple bits in one. Well, that is not entirely true and is a misconception. In this video, the author explains how quantum computing works, based on the […] Miroslaw Staron
Theme 3 leader: Miroslaw Staron
Professor, Software Engineering division, Department of Computer Science and Engineering, University of Gothenburg
More information
Miroslaw.Staron@cse.gu.se
Phone: +46 31 772 10 81