Software Quality Assessment

I recently came across some questions about software quality that got me thinking about how to assess existing software. As a software developer and consultant, I’ve participated in many conversations about software quality, and here are some tools I’ve used in those conversations.

1. Attributes of software quality in an existing system

Some of my principal concerns with software quality when I’m evaluating an existing system include:

a) How well does the software meet the user’s needs?
b) Can an average user reliably reach a desired outcome with the software?
c) How easy is the software to maintain and enhance?
d) How technically sound is the system? Primarily the severity and number of open defects, the frequency of system errors and crashes, and how stable users perceive the system to be.

2. Convincing others of your assessment

Historically, I’ve not been successful jamming ideas into people’s heads – they seem to get upset and resist. However, if I can involve them in a conversation about quality, try to understand their perception, and set an example for them, we are often able to come to terms on a common definition. Even if it isn’t perfect, at least I know what to do to meet their expectations (today).

I recently read a story that I think illustrates this idea. Two developers walk into a village, and the villagers tell them there is an evil monster on the loose – pointing at a watermelon. Both developers want to help the villagers understand that watermelons are not monsters. The first developer attacks the watermelon, easily defeats it, and proceeds to eat it, trying to show the watermelon’s helplessness. The villagers find this killing and eating of monster flesh monstrous. The second developer lightly agrees with the villagers – and proceeds to share his knowledge of melons.

My short answer (now that I’ve dragged you through the longer one) is that I try to get them to convince me. Often we arrive at a mutually acceptable definition – sometimes we don’t.

3. Quantifying software quality

Anything can be quantified (e.g. this software has blue quality). However, we lack rigorous, well-defined industry standards that would let us do this definitively for all software systems. The final attributes of software quality are often subjective and context dependent.

If an engineering team doesn’t have a scheme in place to quantify quality, it should create one. A basic scheme I’ve seen teams use in the past follows (with a small scorecard sketch after the list):

  1. Number of open severe defects
  2. Client satisfaction with product
  3. User satisfaction with product
  4. Ability of engineering team and client to openly and effectively communicate
  5. Difference between desired frequency of use and actual frequency of use of the system
  6. Ability of client to operate and maintain the system
  7. Ability of the engineering team to evolve the system
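As a rough sketch of how a team might turn a scheme like this into something trackable, here is a small weighted scorecard in Python. The attribute names, the weights, and the 1–5 rating scale are my own illustrative assumptions, not an industry standard:

  # Minimal quality scorecard sketch. The weights and the 1-5 rating
  # scale are illustrative assumptions; a real team would calibrate its own.
  WEIGHTS = {
      "open_severe_defects": 0.25,      # rated inversely: fewer open defects -> higher rating
      "client_satisfaction": 0.20,
      "user_satisfaction": 0.20,
      "team_client_communication": 0.10,
      "usage_vs_desired": 0.10,
      "client_can_operate": 0.05,
      "team_can_evolve": 0.10,
  }

  def quality_score(ratings: dict[str, float]) -> float:
      """Weighted average of 1-5 ratings, one per attribute."""
      return sum(WEIGHTS[name] * ratings[name] for name in WEIGHTS)

  if __name__ == "__main__":
      # Hypothetical ratings gathered in a team retrospective.
      ratings = {
          "open_severe_defects": 2,     # many severe defects open, so a low rating
          "client_satisfaction": 4,
          "user_satisfaction": 3,
          "team_client_communication": 4,
          "usage_vs_desired": 3,
          "client_can_operate": 4,
          "team_can_evolve": 2,
      }
      print(f"Quality score: {quality_score(ratings):.2f} / 5")  # -> 3.00 / 5

The specific numbers matter less than the act of agreeing on the attributes and watching the score move over time.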

4. Measuring software quality

A method I have used looks at some basic attributes of a software system for quality measurement (a sketch of the defect weighting in item 1 follows the list):

  1. Outstanding defects and frequency (critical defects count more)
  2. User satisfaction with system
  3. Cost of ongoing maintenance for defect removal
  4. Future plans for the system (low-quality systems tend to acquire a replacement plan sooner than good ones)
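
For item 1, here is a sketch of what “critical defects count more” can look like in practice. The severity tiers and weights below are assumptions for illustration, not values I’m prescribing:

  # Severity-weighted defect score: critical defects count more.
  # The tier names and weights are illustrative assumptions.
  SEVERITY_WEIGHT = {"critical": 10, "major": 3, "minor": 1}

  def weighted_defect_score(open_defects: dict[str, int]) -> int:
      """Sum of open defect counts, weighted by severity tier."""
      return sum(SEVERITY_WEIGHT[sev] * count for sev, count in open_defects.items())

  if __name__ == "__main__":
      # Hypothetical counts from a bug tracker export.
      print(weighted_defect_score({"critical": 2, "major": 5, "minor": 20}))
      # -> 2*10 + 5*3 + 20*1 = 55

Tracking a number like this release over release says more than any single snapshot.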

5. Relationship between software quality and system scope, cost and schedule

I generally follow the old saying “Good, Fast, Cheap – pick any two.” Give me two degrees of freedom, and I have a fighting chance to develop a system of reasonable quality, mainly because I can make meaningful tradeoffs to reach the quality goal. Nail all my feet to the floor, and I can’t move at all – and the Vorpal Duck comes and eats me.

6. Software quality as a variable constraint

I believe that tradeoffs between software quality and the variables of system scope, cost, and schedule happen all the time. My typical goal is to make these tradeoff decisions explicitly rather than implicitly. This requires reflection on the current situation and goals – as an individual and as a team. I believe that as system implementors, we represent the interests of our non-technical customers when developing the system. If we make tradeoffs by significantly lowering or raising quality, we should engage the customer in understanding the implications of the decision. Significantly lowering quality can cause the customer pain in the future. Significantly raising quality can cause the customer possibly unneeded schedule or financial pain in the present. This now (d)evolves into a discussion of good enough software.