1 The Quality Concept
Everybody wants quality. At least everybody will tell you so when you ask them. So why do we keep seeing products and services of poor quality? How come projects keep deprioritizing quality? And, most importantly, what can we testers do to change this?
Let us first look at the concept of quality. The Project Management Institute (PMI) defines quality as a measure of “how well the characteristics match requirements” – no more, no less. The ISO 9000 standard defines quality as the “degree to which a set of inherent characteristics fulfills requirements”. In other words: give the customer what the customer wants.
Nobody would argue against that, would they? But let us examine this concept in more detail. There are two sides to this coin: characteristics and requirements.
A characteristic is defined by businessdictionary.com as a feature of an item that sets it apart from similar items. A feature, in its turn, is a means of providing benefits to a customer. However, it is important to understand that the customer does not care about the feature itself, only about the benefit it provides. Here we find the first potential quality trap.
Quality is often defined by the supplier as the number of features an item has. In one way that is natural – mankind has always wanted to stretch the limits. If you can climb a mountain you do it, no matter what others say. This “stretch syndrome” is the reason why we often find products where the benefits are hard to see among all the features. Many IT programs and systems suffer from this syndrome, from desktop software like Microsoft Office to enterprise solutions like SAP and Oracle. They get more powerful with every new version but also more difficult to take advantage of. Steve Jobs once said about features that “we are very careful about what features we add because we can’t take them away”. That is a principle more suppliers should adhere to.
If we return to the mountain analogy: it is good that you stretch your personal limits, but that does not mean you should force others to join you, or pester them afterwards with photos and videos showing how good you are.
Is this still a problem in today’s projects? I would say yes, but not as big as it used to be. Previously users had to adapt to the technology; now technology has to adapt to the users. The trend goes towards simplicity (goodbye manuals and complicated interfaces), specialization (a set of apps instead of one do-it-all application) and socialization (the important thing is not what the application can do, it is what you and your friends/colleagues can do with it). A good project manager also knows to avoid “gold plating” and steers away from features that the customer does not want. A good test manager will do the same – only the requirements will be tested, and any additional features will be left outside the test protocol and thus receive no credit. This leads us to the other side of the quality coin: the requirements.
2 The Requirements
What is a requirement? ISO 9000 defines a requirement as a “need or expectation that is stated, generally implied or obligatory”. Businessdictionary.com defines requirements as “demands that must be met or satisfied, usually within a certain timeframe”. In addition, PMI distinguishes between product requirements (what to deliver) and project requirements (how to deliver). These definitions contain three additional quality traps:
- The requirement may not capture what the customer wants
- The quality effort may be limited by time and cost
- The delivery (the project requirement) may be deemed more important than the deliverable (the product requirement)
Let us start with the first quality trap: what the customer wants. At first glance, requirements gathering sounds easy – simply ask the customer what she wants and give it to her. But does the customer always know what she wants? Does she always know what she can get? No!
In most cases the customer only knows what she has now. Imagine a customer in the Middle Ages who is used to shouting across the valley (or from a mountain, if she has stretched her limits). She may require a glass of water to be able to shout for longer, or perhaps a megaphone to shout louder. She may even dream of not shouting at all but using telepathy instead. But will she require a telephone if she has never heard of the concept? Probably not.
Our customer faces two major obstacles in the requirements gathering process: the visionary obstacle (the requirement is attainable but not imaginable) and the contextual obstacle (the requirement is imaginable but not attainable). She needs help to expand her vision (imagine long-distance calls) but also to understand the limits (be realistic). Steve Jobs put it this way: “you can’t just ask customers what they want and then try to give that to them. By the time you get it built, they’ll want something new.”
Thus it is important that requirements gathering is an iterative process in which the customer and the supplier gradually learn to understand each other’s perspectives. Only together can they produce requirements that unlock the potential of technology to power the business. As a tester, with one foot in the business world and one foot in the technology world, you have the perfect skill set to help bridge the gap between business and technology. In addition, by assessing the testability of each requirement you will in effect evaluate the ideas and praise or kill them before they reach the real world outside the project.
3 Quality vs Time and Cost
So far so good: by “testing” requirements we find defects before they have been realized in a product. However, gathering high-quality requirements does require an effort, so how do we handle time and cost limits? Obviously we cannot spend unlimited time and money on quality efforts, nor can we forgo quality just to keep our budget. Time, cost and quality are correlated, as demonstrated by Dr Martin Barnes in 1969 with his Iron Triangle. (Since then many new features have been added to the triangle, such as PMI’s scope, customer satisfaction, risks and resources. However, we said above that we should be careful about adding features, so I will keep it simple.)
In theory, we should find a balance between time, cost and quality: any change in one area should be evaluated based on its effect on the other areas. However, if you are an experienced tester you will know that time and cost are usually fixed points.
In every project there are strong forces in favour of closing the project according to plan. The project supplier does not want to decrease the project profit by adding time and cost. The customer’s management does not want to lose credibility by acknowledging that “their” project failed. The other project members consider their activities completed and do not want to drag on because the testers are not ready yet. Even the acceptance testers, who will eventually use the product, may no longer be interested, as they have to return to their ordinary work. The result is that any unplanned events will affect not time or cost but quality. We are now approaching the delivery–deliverable conflict and the essence of the quality dilemma.
4 The Quality Dilemma
Most testers will recognize the situation: the project is delayed. There is pressure to deliver “tangible” work items, such as executable code, report templates and nice senior-management graphs. Less tangible work items, such as documentation, reviews and preparations, are deprioritized. The deliverable may even be “descoped”, as in the fairy tale “The Master Tailor”, where a customer ordered a coat but only got a thumbkin. The only things that remain unchanged are the time schedule and the cost budget. As a tester, you will have to start your test later, most likely with more defects than expected, but you still have to end your test as planned.
But what about the business case for testing? By taking shortcuts to save time and money early in the project, you will have to spend more time and money later, won’t you? Well, that is correct in the long run, but by then the project has already closed and the stakeholders have moved on (except for the poor maintenance team and the end users, who will have to clean up the mess). But the project did deliver on time and within budget!
What happened? How did the delivery become more important than the deliverable? One reason is the intangible nature of quality work; another is the tendency to consider only what is measurable. You can measure time spent versus plan down to the minute, and costs expended versus plan down to the cent. But you can never put an exact value on your quality level.
The test manager will most likely be expected to report status in terms of passed test cases, open defects and the like, but do those figures say anything about the quality? What if we miss defects due to insufficient test case preparation? Or test the wrong things due to insufficient requirements gathering?
To make things even worse, your performance evaluation may suffer since your effort is not recognized until the end of the project and the better you perform (by detecting defects), the more you may be blamed for delaying the project!
By now, you may be discouraged from pursuing a tester career but bear with me, it gets better from now on!
5 The Ten Tester Commandments
Can you do anything about the quality dilemma? Yes you can, and doing it is one of the most interesting and exciting parts of testing! Here are ten commandments that you as a tester can follow to make the quality work not only better but also more fun:
1. Promote testing in your organization. This applies at both the corporate level and the project level. You can do this by setting up communities, giving presentations and so on. The purpose is to create awareness of the benefits of testing and appreciation of the testers’ performance.
2. Act as a mentor for the people you work with. This applies to both formal and informal team members, such as acceptance testers. Mentoring includes discussing objectives, coaching in the daily work and providing regular feedback. This is particularly important when working with acceptance testers, whose work may not be recognized by their managers.
3. Take advantage of the requirements traceability matrix. Not only can you ensure that each test case has clear acceptance criteria to be evaluated against, you can also use it for risk-based testing, where you evaluate business criticality and functional complexity. This will help you prioritize test cases and put a value on quality risks when project delays occur.
4. Bridge the gap between business and technology. Participation in the requirements gathering is an excellent opportunity to do this. Besides being a great opportunity to share knowledge and understand other areas of the project, it will help you assess testability and start preparing test cases early.
5. Be attentive to the end users’ reality. How are they working now, how will they work in the future, and what needs and expectations do they have? Use this information to adjust the test cases accordingly and to drive changes in requirements and designs if necessary. You will get more accurate test cases, and the acceptance testers will feel that they are more relevant.
6. Strive to put a value on quality. Although it will not be exact, an objective figure is always more convincing to management than a subjective feeling. You can accomplish this by going back to the requirements traceability matrix and measuring quality in terms of fulfilled/not fulfilled requirements. An even more powerful way to illustrate quality is a solution overview showing which areas are affected by the defects. You may also consider measuring the end users’ confidence; although subjective, it will give management a figure and give the end users a vent for their satisfaction or frustration.
7. Work proactively to prevent defects. A defect may arise at any time during the project, so you should not wait until test execution to find them. Promote static testing (reviews of requirements, designs, code etc.) and log defects as you would in dynamic testing (the actual execution of the test object). If “defect” is a sensitive word, use the more neutral word “deviation” instead.
8. Conduct root cause analyses. A root cause analysis will not only help you identify areas of improvement, it will also measure how well you prevent defects. Did you log all deviations, as mentioned in point 7? If so, did you find and close them in the same phase as they were introduced (i.e. were deviations in the requirements detected in the requirements review, or not until later)? If you constantly find deviations late, you have a business case for more testing!
9. Have fun. If you enjoy your work, others around you will too. Don’t be the negative guy pointing fingers at the developers’ work; be the positive guy who cooperates to make their work perfect. Share your enthusiasm with the acceptance testers (who may have been forced to do testing) and set an example for them that testing is fun.
10. Be proud of your work. As a tester, you help make the world a bit better every day!
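To make the traceability-matrix ideas above concrete, here is a minimal sketch of risk-based prioritization (business criticality times functional complexity) and a simple quality figure (share of fulfilled requirements). All requirement IDs, scores and the multiplicative risk model are invented for illustration, not taken from any standard tool; a real project may weight the factors differently.

```python
# Hypothetical sketch: a tiny requirements traceability matrix used for
# risk-based prioritization and a simple quality figure. All data invented.

from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    criticality: int   # business criticality, 1 (low) .. 5 (high)
    complexity: int    # functional complexity, 1 (low) .. 5 (high)
    fulfilled: bool    # did all test cases linked to this requirement pass?

    @property
    def risk_score(self) -> int:
        # Simple multiplicative risk model; weightings are an assumption.
        return self.criticality * self.complexity

def prioritize(reqs):
    """Return requirements ordered by descending risk score."""
    return sorted(reqs, key=lambda r: r.risk_score, reverse=True)

def quality_figure(reqs):
    """Fraction of requirements whose linked test cases all passed."""
    return sum(r.fulfilled for r in reqs) / len(reqs)

if __name__ == "__main__":
    rtm = [
        Requirement("REQ-1", criticality=5, complexity=4, fulfilled=True),
        Requirement("REQ-2", criticality=2, complexity=5, fulfilled=False),
        Requirement("REQ-3", criticality=4, complexity=1, fulfilled=True),
    ]
    for r in prioritize(rtm):
        print(r.req_id, r.risk_score)
    print(f"Requirements fulfilled: {quality_figure(rtm):.0%}")
```

When a delay forces you to cut test scope, a list like this lets you drop the low-risk tail first and report the quality impact as an objective figure rather than a feeling.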
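The root-cause metric from point 8 can also be sketched in a few lines: for each phase, what share of the deviations introduced there was also detected there ("phase containment")? The phase names and the deviation log below are invented for illustration.

```python
# Hypothetical sketch of a phase-containment metric for root cause analysis.
# Phase names and deviation data are invented for illustration.

from collections import Counter

PHASES = ["requirements", "design", "code", "test"]

def containment(deviations):
    """deviations: list of (introduced_phase, detected_phase) tuples.
    Returns {phase: fraction of deviations caught in the same phase}."""
    introduced = Counter(i for i, _ in deviations)
    contained = Counter(i for i, d in deviations if i == d)
    return {p: contained[p] / introduced[p] for p in PHASES if introduced[p]}

if __name__ == "__main__":
    log = [
        ("requirements", "requirements"),  # caught in the requirements review
        ("requirements", "test"),          # escaped all the way to test
        ("design", "design"),
        ("code", "test"),
    ]
    for phase, rate in containment(log).items():
        print(f"{phase}: {rate:.0%} contained")
```

A low containment rate for early phases means deviations are consistently found late, which is exactly the business case for more testing that point 8 describes.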