All Software Center companies have efficient product development, release and deployment processes.
We help the companies design and develop modern measurement methods and tools using state-of-the-art analytics, AI, and machine learning.
We use Action Research to increase the impact and adoption of the results (Action Research in Software Engineering), i.e., we work on-site at the companies.
Over the course of ten years of collaboration, our theme has resulted in over 50 models and tools. We have also published over 200 papers and books that make the results publicly available.
Examples of the metrics designed and introduced to the companies:
sec23summer_449-mirsky-prepub.pdf (usenix.org) Cybersecurity has been, and will always be, a challenge for software systems. Security analysis (or exploitation, for that matter) is also often perceived as an art. There is no single tool, no single method that will make our software secure. This article is interesting because of the way that […]
CoditT5: Pretraining for Source Code and Natural Language Editing (pengyunie.github.io) I’ve written about programming language models before, and it is no secret that I am very much into this topic. I like the way in which software engineering evolves – we become a more mature discipline and our tools become smarter by the hour (at […]
Evaluating classifiers in SE research: the ECSER pipeline and two replication studies (springer.com) One of the most prominent problems with using research results in practice is the lack of replication packages, but it is far from the only one. Another, maybe equally important, problem is that the studies report performance […]
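As a generic illustration of the reporting problem (this is not the ECSER pipeline itself), the sketch below evaluates a binary defect-prediction classifier with several metrics computed from the confusion matrix. On skewed data, a high accuracy can coexist with poor minority-class performance, which is exactly why reporting only one number is misleading.

```python
import math

def evaluate(y_true, y_pred):
    """Return common evaluation metrics for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    # Matthews correlation coefficient: robust on imbalanced data
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": (tp + tn) / len(y_true),
            "precision": precision, "recall": recall,
            "f1": f1, "mcc": mcc}

# A skewed sample: only 2 of 10 modules are defective.
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
metrics = evaluate(y_true, y_pred)
# 90% accuracy, but the classifier misses half of the defective modules.
```

Here accuracy is 0.9 while recall is only 0.5, which is the kind of discrepancy that a multi-metric evaluation surfaces.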
1176898.pdf (hindawi.com) Language models are powerful tools if you know how to use them. One of the areas where they can be used is recognizing security vulnerabilities. In this article, the authors look into six language models and test them. The results show that there are more challenges than solutions in this area. The models […]
As you have probably observed, I've been into language models for code analysis, design, and recognition. It's a great way of spending research time, as it gives you the possibility to understand how we program and how to model that. In my personal case, this is a great complement to the empirical software […]
Automatic Security Assessment of GitHub Actions Workflows (arxiv.org) After my last post, and the visit to the workshop at MDU, I realized that there are a few tools that can already be used automatically. So, this paper presents one of them. What is interesting about this tool is that it uses GitHub workflows, so […]
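To give a flavor of what such an assessment looks like (this is a toy sketch, not the tool from the paper), the example below scans a GitHub Actions workflow for two well-known risky patterns: the `pull_request_target` trigger, which runs with repository secrets on attacker-controlled pull requests, and third-party actions that are not pinned to a commit SHA. Real analyzers parse the workflow YAML file; a pre-parsed dict is used here to keep the sketch dependency-free.

```python
def audit_workflow(workflow):
    """Return a list of human-readable findings for a parsed workflow."""
    findings = []
    triggers = workflow.get("on", {})
    # pull_request_target exposes secrets to code from forked PRs
    if "pull_request_target" in triggers:
        findings.append("uses pull_request_target trigger (runs with secrets)")
    for job_name, job in workflow.get("jobs", {}).items():
        for step in job.get("steps", []):
            uses = step.get("uses", "")
            if "@" in uses:
                ref = uses.split("@", 1)[1]
                # a 40-char hex ref is a pinned commit SHA;
                # tags and branches are mutable and can be hijacked
                pinned = (len(ref) == 40
                          and all(c in "0123456789abcdef" for c in ref))
                if not pinned:
                    findings.append(
                        f"{job_name}: action '{uses}' not pinned to a commit SHA")
    return findings

workflow = {
    "on": {"pull_request_target": {}},
    "jobs": {"build": {"steps": [{"uses": "actions/checkout@v4"}]}},
}
issues = audit_workflow(workflow)  # two findings for this workflow
```

The checks themselves are assumptions chosen for illustration; published tools cover a much broader rule set (script injection via untrusted inputs, over-broad permissions, etc.).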
https://arxiv.org/pdf/2208.04261.pdf So I find myself on the train again, this time rolling towards MDU for their cybersecurity workshop. Not that I am an expert on just cybersecurity, but I know a bit about programming and design. I also know enough to see that a secure product needs to start with designing for security, not only […]
Concerns identified in code review: A fine-grained, faceted classification – ScienceDirect Code reviews are time consuming. And effort intensive. And boring. And needed. Depending on whom we ask, we get one of the above answers (well, 80% of the time). The reality is that code reviews are not the most productive activity. Reading the code […]
BenchPress: A Deep Active Benchmark Generator (arxiv.org) To be honest, I did not expect machine learning to be part of a compiler… I’ve done programming since I was 13, understood compilers during my second year at the university and even wrote one (well, without any ML, that is). Why would a compiler need machine learning, […]
A Probabilistic Framework for Mutation Testing in Deep Neural Networks (arxiv.org) Testing of neural networks is still an open problem. Due to the complexity of their connections, and their probabilistic nature, it is difficult to find defects. Although there are many approaches, e.g., using autoencoders or surprise adequacy measures, testing of neural […]
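The core idea behind mutation testing of a trained model can be sketched in a few lines (this is a minimal illustration, not the probabilistic framework from the paper): mutate the model's weights one at a time and count how many mutants the test set "kills", i.e., detects through a changed prediction. The toy linear classifier and the sign-flip mutation operator below are assumptions chosen for brevity.

```python
def predict(weights, x):
    """A toy linear classifier: 1 if the dot product is non-negative."""
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s >= 0 else 0

def mutation_score(weights, test_set):
    """Fraction of single-weight sign-flip mutants killed by the tests."""
    killed = 0
    for i in range(len(weights)):
        mutant = list(weights)
        mutant[i] = -mutant[i]  # mutation operator: flip one weight's sign
        # a mutant is "killed" if any test input changes the prediction
        if any(predict(mutant, x) != predict(weights, x) for x in test_set):
            killed += 1
    return killed / len(weights)

weights = [0.5, -1.0, 0.25]
test_set = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
score = mutation_score(weights, test_set)  # 1.0: all mutants are killed
```

A low mutation score would suggest the test set is too weak to notice even deliberate corruption of the model, which is the intuition the mutation-testing literature builds on.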
Theme 3 Leader: Miroslaw Staron
Professor, Software Engineering division, Department of Computer Science and Engineering, University of Gothenburg