Continuous Delivery

Vision

All Software Center companies have efficient product development, release and deployment processes.

Mission

We help the companies design and develop modern measurement methods and tools by utilizing state-of-the-art analytics, AI, and machine learning. We use Action Research to increase the impact and adoption of the results (Action Research in Software Engineering), i.e., we work on-site at the companies. Over the course of ten years of collaboration, our theme has produced over 50 models and tools. We have also published over 200 papers and books that disseminate the results to the public domain.

Projects

RSS Metrics Blog
  • Vulnerability detection, a new article (highlight) November 28, 2022
    sec23summer_449-mirsky-prepub.pdf (usenix.org) Cybersecurity has been, and will always be, a challenge for software systems. It is also perceived as an art when it comes to security analysis (or exploitation for that matter). There is no single tool, no single method that will make our software secure. This article is interesting because of the way that […]
    Miroslaw Staron
  • CoditT5: Pretraining for Source Code and Natural Language Editing November 23, 2022
    CoditT5: Pretraining for Source Code and Natural Language Editing (pengyunie.github.io) I’ve written about programming language models before, and it is no secret that I am very much into this topic. I like the way in which software engineering evolves – we become a more mature discipline and our tools become smarter by the hour (at […]
    Miroslaw Staron
  • Evaluating ML pipelines for real – spoiler alert: another pipeline (article review) November 10, 2022
Evaluating classifiers in SE research: the ECSER pipeline and two replication studies (springer.com) One of the most prominent problems with using research results in practice is the lack of replication packages, but this is far from being the only one. Another, maybe equally important, problem is the fact that the studies report performance […]
    Miroslaw Staron
  • Language models and security vulnerabilities – what works and what does not…. (article review) October 19, 2022
1176898.pdf (hindawi.com) Language models are powerful tools if you know how to use them. One of the areas where they can be used is in recognizing security vulnerabilities. In this article, the authors look into six language models and test them. The results show that there are more challenges than solutions in this area. The models […]
    Miroslaw Staron
  • 50 Language/Code models, let’s talk… September 21, 2022
    As you have probably observed I’ve been into language models for code analysis, design and recognition. It’s a great way of spending your research time as it gives you the possibility to understand how we program and understand how to model that. In my personal case, this is a great complement to the empirical software […]
    Miroslaw Staron
  • So, you want to automate your security assessment (beyond pentesting)… September 13, 2022
Automatic Security Assessment of GitHub Actions Workflows (arxiv.org) After my last post, and the visit to the workshop at MDU, I realized that there are a few tools that can already be used automatically. So, this paper presents one of them. What is interesting about this tool is that it uses github workflows, so […]
    Miroslaw Staron
  • Code reviews and cybersecurity… (article highlight) September 7, 2022
https://arxiv.org/pdf/2208.04261.pdf So I find myself on the train again, this time strolling towards MDU for their cybersecurity workshop. Not that I am an expert on just cybersecurity, but I know a bit about programming and design. I also know enough to see that a secure product needs to start with designing for security, not only […]
    Miroslaw Staron
  • What are code reviews really good for? September 1, 2022
Concerns identified in code review: A fine-grained, faceted classification – ScienceDirect Code reviews are time consuming. And effort intensive. And boring. And needed. Depending on whom we ask, we get one of the above answers (well, 80% of the time). The reality is that code reviews are not the most productive activity. Reading the code […]
    Miroslaw Staron
  • Machine learning in compilers??? August 26, 2022
    BenchPress: A Deep Active Benchmark Generator (arxiv.org) To be honest, I did not expect machine learning to be part of a compiler… I’ve done programming since I was 13, understood compilers during my second year at the university and even wrote one (well, without any ML, that is). Why would a compiler need machine learning, […]
    Miroslaw Staron
  • Testing deep neural networks (article highlight) August 21, 2022
A Probabilistic Framework for Mutation Testing in Deep Neural Networks (arxiv.org) Testing of neural networks is still an open problem. Due to the complexity of their connections, and their probabilistic nature, it is difficult to find defects. Although there are many approaches, e.g., using autoencoders or using surprise adequacy measures, testing of neural […]
    Miroslaw Staron
Theme 3 Leader: Miroslaw Staron

Professor, Software Engineering division, Department of Computer Science and Engineering, University of Gothenburg

More information

Miroslaw.Staron@cse.gu.se

Phone: +46 31 772 10 81