Month: September 2012

The Nature of Quality

In the last days before vacation, discussions at the workplace raged over the subject of software quality. The topic is interesting and we easily got carried away, throwing flames at each other about programming techniques and programming mastery (we stopped just short of personal arguments). Eventually we didn’t reach any agreement, but the discussion was interesting and insightful. More precisely, some of us claimed that in software development quality always pays off, while others claimed that the same project goals may be achieved earlier (thus more cheaply) by employing quality-less programming (I am overemphasizing a bit).
In other words, the question the office tried to answer was: is the cost of quality enhancement worth paying?
It is hard to get to a final answer, and that’s possibly why there isn’t a clear consensus even among programmers. Take, for example, Lua’s designers. They were adamant about not limiting the language’s expressiveness in some questionable directions, because they targeted the language at programs of no more than a few hundred lines, written by a single programmer. In other words, they sacrificed some forms of quality because, in their opinion, pursuing those was just overhead without benefit.
Detractors of software quality have their reasons: programmers are human, and as such they may “fall in love” with their own creation or with the process. This causes either endless polishing and improvement, or the creation of overwhelming designs well beyond the scope of the project. If you question these programmers about their choices, the answer usually involves design for change, portability or re-usability. Seldom has an assessment been made to check whether these are really needed for the project, its future evolution, or the company.
It is clear that the point is finding the right compromise. “Anybody can build a bridge that stands, but it takes an engineer to build a bridge that barely stands.”
Unfortunately bridge engineering (and other older and more stable engineering fields) is not of much help beyond witty sayings. Bridge engineering does not have to take into account staged deliveries, demos, versioning, evolutionary maintenance (“yeah, nice bridge, now grow it three stories, add a panoramic wheel and a skiing slope”), customers changing their minds (“yeah, that’s exactly my bridge, now move it to the other river”), or specification problems (“I said ‘bridge’, but I meant ferry-boat”).
When talking about quality there is an important distinction to make: quality perceived by the user, and internal quality (perceived only by the programmers working on the software).
User-perceived quality is the opinion of the software’s customers. As such, it is the most important quality of your software. As an engineer you should write the program so as to spend the minimum needed to reach the required user-perceived quality, or slightly exceed it. No more, no less. (Just keep in mind that these aspects strongly depend on your customers.) This claim holds as long as you consider the whole project without any maintenance. Since maintenance is usually the most expensive part of a project (even up to 75% of the whole project cost), you may want to lower its impact by employing maintenance-oriented development techniques; this means that you must spend more during development than what is strictly needed to match the customer’s expected quality.
Internal quality is what we programmers usually refer to when we say “quality”: good code, well insulated, possibly OO, with patterns clearly identified, documented, no spaghetti, readable, and so on.
Unfortunately for software engineering supporters, programs do exist with fair, adequate, or even good perceived quality that have bad or no internal quality at all.
And this is the main argument against striving for internal quality.
I think that this reasoning has a flaw. It relies on the hypothesis that since you can have fair-to-good perceived quality without any internal quality, internal quality is an extra cost that can be saved. But usually this is not the case. Even if there were no direct relation between the two, would it be simpler to write a program where you keep just a handful of variables in mind to reach your goal, or one where you need to practice Zen in order to get a holistic comprehension of the whole, so as not to break anything while threading your way through?
And anyway, it is hard to believe that no relation whatsoever exists between the two natures of quality.
On this point both the literature and common sense agree that, from medium to large program sizes, internal quality affects several aspects of user-perceived quality. E.g. if the program crashes too often, perceived quality can’t be any good.
Moreover – in the same class of projects – the overall cost of the software across its entire life cycle is lower when internal quality is high. That is, desirable software properties really do help you understand and modify the software more confidently and with reliable results.
Quality-less supporters – at this point – fall back on picking at single aspects of internal quality. For example, they can target reuse and state that the practice is not worth it, since you pay a greater cost in the current project with the risk that you won’t reuse the code in another project, or that the context will be so different that it will be cheaper to rewrite the code from scratch.
In this case it is hard to provide good figures for the savings of designing and coding reusable components. This is unfortunate both for internal-quality supporters in their advocacy and for software architects deciding whether a component has to be reusable or not.
It is also unfortunate because there is a large band of components between those that obviously should be made reusable and those that obviously should not.
It also has to be considered that the techniques adopted to make a component reusable are – generally speaking – techniques that improve internal quality. Pushing this thought to the extreme, you can’t rule reusability out of your tool chest, because all the other techniques that improve internal quality drive you in the reusability direction, to the point that reusability comes either for free or at little cost.
Despite what I wrote, I think that a real problem does exist, and it occurs when the designer starts overdesigning too many components and the programmer starts coding functions that won’t be used in this project, just because.
A little overdesign can be a good idea most of the time (after all, design takes a small percentage of the development time); nonetheless you should always keep in mind a) the scope of the project and b) the complexity you are introducing in the code with respect to the complexity of the problem to solve.
Since you are overdesigning, you are not required to implement everything: just implement what is needed for the next iteration of the project.
At this point you shouldn’t rush into moving your reusable component to the shared bin; you should keep it in an “aging” area, granting it time to consolidate. Before being reused it should prove it can survive at least one project (or most of one). Then, once the component has been reused in at least two projects and properly documented, it can be moved to the shared folder.
What I can’t understand, and leaves me wondering, is that in many workplaces firmware is considered an accidental cost, something everyone would happily do without if only it were possible. The logical consequence would be to scientifically attempt to lower the cost of development and maintenance by finding the proper conditions. Instead, in many places there is blind shooting at firmware development, attempting to shrink every cost, with the effect that the cost actually increases. E.g. you save on tools and training, only to discover that your cost per LoC is higher. I am afraid that the reason is either mistrust of, or ignorance of, the whole body of research done on software development since the ’60s.