Code quality is a term developers use to describe how good or bad a piece of code is.
But what does that mean? How do we measure code quality so we know when we’ve been successful at improving it?
Ultimately, what steps can you take to improve your code quality? Read on to find out.
The definition of quality code is contextual. However, in general terms good quality code is:
If you have good quality code, you will likely have reliable software, apps, and websites. Poor quality code will be ‘buggy’ and unreliable.
Code exists for a purpose.
Whether it’s been written as a learning exercise or is a vital part of a multi-billion dollar application used by millions around the world, better quality code helps digital products and their features to do their jobs more effectively and reliably.
In short, higher code quality means higher software quality.
Good quality code can help protect you and your users from risk, too. A well-written piece of software will be safer and more secure. That’s vital in an era when companies are increasingly expected to take protecting data seriously by regulators.
Even if the worst never happens, your mobile, desktop, or web application is still a part of your business. If it’s used internally (examples include HR management systems, CMSs, or CRMs), an unreliable app built on bad code will reduce the efficiency of your business’s operations and projects.
If the code supports an app used by external parties (like everyday users) then poor quality code could severely damage user experience (UX). As time goes on after release, the situation is likely to get worse as the confusing code is hard for developers to understand and make changes to.
This is the kind of problem which often emerges with so-called ‘spaghetti code.’
What counts as code quality for your own application should determine how you measure code quality. However, regardless of the specific digital product you’re working with, you should pay attention to the following areas:
Let’s look at each of these areas in more detail.
If your code can run for a long time without errors, it’s considered reliable. But reliability isn’t an all-or-nothing condition. To measure reliability you need to define the time period you want to measure and what counts as an error so you can quantify it.
How to measure code reliability
Sit down with your team and discuss the types of errors and bugs you’re seeing, as well as the problems your code might encounter in the future. Establish a system for recording them over time.
There are various ways of quantifying reliability, but one effective approach is to calculate the probability of failure with this formula:
Probability = Number of cases of failure / Total number of cases
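The formula above can be sketched in a few lines of Python. This is a minimal illustration that assumes you log each run of your code as a success or a failure; the function name and the sample data are hypothetical.

```python
# Minimal sketch: reliability as probability of failure, assuming each
# recorded case is logged as True (success) or False (failure).
def failure_probability(outcomes):
    """Probability of failure = number of failure cases / total number of cases."""
    if not outcomes:
        raise ValueError("need at least one recorded case")
    failures = sum(1 for ok in outcomes if not ok)
    return failures / len(outcomes)

# Example: 200 recorded runs, 3 of which failed.
runs = [False] * 3 + [True] * 197
print(failure_probability(runs))  # 0.015
```

Tracking this number over successive releases gives you a concrete trend line for reliability rather than a gut feeling.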
Static analysis tools can be used to evaluate source code before running a program, making them useful for detecting reliability issues before they arise.
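To give a flavor of what static analysis means in practice, here is a toy sketch using Python’s built-in ast module to flag bare `except:` clauses, a common reliability smell, without ever executing the code. Real static analyzers perform far deeper checks; this only illustrates the principle of inspecting source code before it runs.

```python
import ast

def find_bare_excepts(source):
    """Return the line numbers of bare `except:` handlers in the given source.

    A bare except silently swallows every error, including ones you
    almost certainly want to see, so analyzers commonly warn about it.
    """
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

sample = """\
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(sample))  # [3]
```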
Popular static analysis tools include:
Code maintainability is how easy or hard it is for software engineers to keep a digital product, such as a piece of software or a website, running over time by making necessary corrections and updates.
Code maintenance can be more or less challenging based upon the following factors:
Use a combination of human reviewers and automation when trying to optimize code maintainability.
While you should pay attention to areas such as style when evaluating the maintainability of code, developers can quantify contributing factors with conceptual tools including the Halstead complexity measures.
The Halstead complexity measures give a description of the complexity of a piece of code.
The first step is to calculate the following numbers:
- n1 = the number of distinct operators
- n2 = the number of distinct operands
- N1 = the total number of operators
- N2 = the total number of operands
From these numbers, eight measures can be calculated:
- Program vocabulary: n = n1 + n2
- Program length: N = N1 + N2
- Calculated program length: N' = n1 log2(n1) + n2 log2(n2)
- Volume: V = N log2(n)
- Difficulty: D = (n1 / 2) × (N2 / n2)
- Effort: E = D × V
- Time required to program: T = E / 18 seconds
- Number of delivered bugs: B = V / 3000
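The eight measures above translate directly into code. The sketch below assumes you have already counted the distinct and total operators and operands (n1, n2, N1, N2) for the code you are measuring; the example counts are hypothetical.

```python
import math

def halstead(n1, n2, N1, N2):
    """Compute the Halstead complexity measures from raw operator/operand counts."""
    n = n1 + n2                                          # program vocabulary
    N = N1 + N2                                          # program length
    N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)      # calculated program length
    V = N * math.log2(n)                                 # volume
    D = (n1 / 2) * (N2 / n2)                             # difficulty
    E = D * V                                            # effort
    T = E / 18                                           # time to program, in seconds
    B = V / 3000                                         # estimated delivered bugs
    return {"vocabulary": n, "length": N, "calculated_length": N_hat,
            "volume": V, "difficulty": D, "effort": E, "time": T, "bugs": B}

# Example counts from a small, hypothetical function.
m = halstead(n1=10, n2=7, N1=20, N2=15)
print(round(m["volume"], 1), round(m["difficulty"], 2))  # 143.1 10.71
```

Note that the time and bug figures are rough empirical estimates, not guarantees; they are most useful for comparing two pieces of code rather than as absolute predictions.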
For a full description of the process, you can check out this GitHub page.
Testability is an umbrella term for the various factors that affect how easy (or hard) it is to test code. Some of those factors include:
We’ve already seen how the Halstead complexity measures can be used to get a sense of the complexity of code, one of the determinants of testability.
Another conceptual tool in your toolbox for understanding complexity (and therefore testability overall) is Cyclomatic Complexity (CYC). If you’re interested in using this technique, this GeeksforGeeks page on CYC goes into more detail.
CYC is useful for measuring code testability because it helps determine the number of test cases required to detect errors, which is the whole point of testing (unless we’re talking about user testing, which is a different matter).
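As a rough illustration, cyclomatic complexity can be approximated by starting at 1 and adding 1 for each branching construct. The sketch below is a simplified version of that counting rule using Python’s ast module; dedicated tools handle many more cases (switch-like constructs, comprehension conditions, and so on), so treat this as illustrative only.

```python
import ast

# Branching constructs that each add one path through the code.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.And, ast.Or, ast.IfExp)

def cyclomatic_complexity(source):
    """Simplified cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = """\
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    else:
        return "positive"
"""
print(cyclomatic_complexity(sample))  # 3: two decision points, plus one
```

A complexity of 3 tells you that at least three test cases are needed to cover every independent path through the function.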
Code portability describes the drop-off (or, ideally, the lack of drop-off) in the usability of code when switching between different environments.
Have a coding standard in place for your developers that applies to the environments your code will be run in. Conduct testing in each and record the differences in performance levels and error frequency between them, if possible.
It can also be helpful to set compiler warnings as high as you can while using a minimum of two different compilers.
If your code has certain features like modularity or loose coupling, it’s more likely to be reusable.
Reusability is best measured by counting interdependencies: generally, the more there are, the less reusable the code. Run a static code analyzer to find out how many interdependencies exist in your code.
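One very rough proxy for interdependency is the number of distinct modules a source file imports: fewer imports generally suggests looser coupling and better reuse potential. The sketch below counts them with Python’s ast module; real analyzers also track call graphs and shared state, so this is a starting point, not a full dependency analysis.

```python
import ast

def imported_modules(source):
    """Return the sorted, distinct module names a piece of source code imports."""
    tree = ast.parse(source)
    modules = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module)
    return sorted(modules)

sample = """\
import os
import json
from collections import Counter
"""
print(imported_modules(sample))  # ['collections', 'json', 'os']
```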
To ensure code quality, bake it in from the start. That’s different from an approach based on writing code for a digital product and then only identifying bugs after completion through testing.
So, how do we keep code quality the priority from early on in Agile development? As Charles G. Cobb notes in The Project Manager's Guide to Mastering Agile: Principles and Practices for an Adaptive Approach, make developers directly responsible for the quality of their code from the beginning.
During Scrum meetings, identify code quality as a priority for which individual developers are accountable. Too often, ‘just getting it built’ is the sole focus, but as we’ve seen, this causes problems down the line. Remember that 50-80% of development spending goes purely to maintenance.
By cultivating a focus on quality, maintainable code during sprints, you can substantially reduce the cost of maintenance while building a solid foundation.
Here’s a list of ten top code review tools developers can use to help create quality code:
Ready to get to work making sure your own code quality is as strong as possible? At Develocraft, we’ve got you covered when it comes to code audits and reviews. If you have questions, whatever stage of the process you’re at, we’re happy to offer advice.