Acceptance tests, criteria, requirements – pinning down what we mean

In my test team we have been working on creating templates and generic processes, and so quite naturally we came to acceptance testing and how to handle it. We quickly realised that in the room we had different experiences and definitions of what acceptance testing was all about. We discussed our understanding of acceptance criteria and came up with a number of different ways of looking at it.

Requirements as criteria

This is where we have a list of requirements from the customer; these can be functional and non-functional, and it is understood that when all of these requirements are met the system can be put into production. This left us with the task of defining the requirements and then working out how to prove that they have been met.
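To make "working out how to prove they have been met" a little more concrete, here is a minimal sketch of a traceability check in Python. The requirement IDs, test case IDs and results are hypothetical examples of my own, not taken from any real project or tool:

    requirements = {
        # requirement: the test cases that prove it
        "REQ-001 login with company account": ["TC-101", "TC-102"],
        "REQ-002 response time under 2 seconds": ["TC-201"],  # non-functional
    }
    test_results = {"TC-101": "pass", "TC-102": "pass", "TC-201": "fail"}

    def unmet(requirements, test_results):
        # Requirements not yet proven by at least one passing test.
        return [req for req, tests in requirements.items()
                if not any(test_results.get(t) == "pass" for t in tests)]

    print(unmet(requirements, test_results))
    # ['REQ-002 response time under 2 seconds']

The point is simply that acceptance becomes a question you can answer mechanically: is the list of unmet requirements empty or not?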

I have seen this quite a lot recently on the more Agile side of the discussion; at the following link (http://tracks.roojoom.com/r/467#/trek?page=2) you can see acceptance criteria defined as the boundaries of a user story, or in other words the outline requirements; all the examples given are business rules or requirements on the behaviour.

Tests as criteria

This is where we have developed a list of test cases, and as long as all the tests are executed with a successful result the system can be put into production. This left us with the task of ensuring that the list of test cases covered enough of the system, and of defining what we mean by a successful result.

This seemed to be quite common amongst the test team: we had all seen system acceptance based on a number of user acceptance test cases being successfully executed. This in turn led us into a discussion of the difference between user acceptance test cases and system/function test cases. We had all seen examples where the user testing was basically a repetition of the testing carried out within the development project.

We agreed that user acceptance tests are best when they are based on workplace scenarios. These tests are not a bug hunt (we should already have hunted down as many bugs as possible in system/function testing) but a validation that the system does what it should, i.e. that it helps the user carry out their daily tasks in an efficient and effective manner; it should make their lives easier, not harder.
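As a sketch of what a workplace-scenario test can look like, here is a minimal example in Python. The InvoiceApp class is a hypothetical stand-in for the real system under test, included only so the scenario runs end to end; the point is that the test follows a user's daily task rather than re-checking individual functions:

    class InvoiceApp:
        # Hypothetical stand-in for the system under test; it ignores most
        # details and only tracks invoice status.
        def __init__(self):
            self.user = None
            self.invoices = {}

        def login(self, user):
            self.user = user

        def register_invoice(self, supplier, amount):
            invoice_id = len(self.invoices) + 1
            self.invoices[invoice_id] = "registered"
            return invoice_id

        def send_for_approval(self, invoice_id):
            self.invoices[invoice_id] = "awaiting approval"

        def approve(self, invoice_id):
            self.invoices[invoice_id] = "ready for payment"

        def status(self, invoice_id):
            return self.invoices[invoice_id]

    def test_clerk_registers_and_manager_approves_invoice():
        app = InvoiceApp()
        # Scenario: a clerk registers a supplier invoice and routes it
        # for approval, then the manager approves it.
        app.login("clerk")
        invoice = app.register_invoice(supplier="ACME", amount=1200)
        app.send_for_approval(invoice)
        app.login("manager")
        app.approve(invoice)
        # The acceptance question: did the daily task get done, start to finish?
        assert app.status(invoice) == "ready for payment"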


Activities as criteria

This is where we have put together a list of tasks, such as completing the documentation, buying and installing new hardware, and so on. As long as all the activities have been identified and successfully completed, we should have a system in production.

We had also seen variations on this where start and exit criteria for testing had been defined and then treated as acceptance criteria: for example, the test accounts and test data must be created and installed before testing can start, and all tests must be executed before the testing phase can exit. I think this kind of thinking is the equivalent of setting a 'Definition of Done' (DoD) for the team and customer. In projects using Scrum we often had DoD rules for when a task could be moved from one column to the next. This means that tasks that are completed really are completed to the same degree for everyone, but it doesn't really help get the system as a whole accepted.
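To make the checklist idea concrete, here is a minimal sketch in Python. The entry and exit items are the examples from above, and gate_passed is a hypothetical helper of my own, not from any real tool:

    entry_criteria = {
        "test accounts created": True,
        "test data created and installed": True,
    }
    exit_criteria = {
        "all planned tests executed": False,
    }

    def gate_passed(criteria):
        # Return (ok, list of unmet items) for a start or exit gate.
        missing = [item for item, done in criteria.items() if not done]
        return not missing, missing

    print(gate_passed(entry_criteria))  # (True, [])
    print(gate_passed(exit_criteria))   # (False, ['all planned tests executed'])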

Quality levels as criteria

This is where an attempt has been made to define quality: for instance, that there is at least one test for every requirement, that all tests for high-priority functions have been executed, that there are no known critical or high-priority bugs left in the system after testing is complete, and so on. There might even be some extra caveats, such as that even if all bugs are not fixed before going into production, there must be a release schedule to deal with them at a later date.
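As a sketch of what such a quality-level gate could look like, here is a minimal example in Python. The function, its parameters and the example counts are all hypothetical; the rules are simply the ones listed above:

    def ready_for_acceptance(reqs_without_tests, high_prio_tests_not_run,
                             open_bugs, fix_schedule_exists):
        # open_bugs maps a priority ('critical', 'high', ...) to a count.
        if reqs_without_tests > 0:        # at least one test per requirement
            return False
        if high_prio_tests_not_run > 0:   # all high-priority tests executed
            return False
        if open_bugs.get("critical", 0) > 0 or open_bugs.get("high", 0) > 0:
            return False                  # no known critical/high bugs left
        # Caveat: lower-priority bugs may remain only if there is a release
        # schedule to deal with them at a later date.
        return sum(open_bugs.values()) == 0 or fix_schedule_exists

    print(ready_for_acceptance(
        0, 0, {"critical": 0, "high": 0, "medium": 3}, True))  # True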

This is how I would try to define acceptance criteria. We need to remember what the final goal is: for most of us it is to have a system in production with a happy customer so that we can get paid, preferably within time and budget constraints. So for me, setting acceptance criteria is all about working out how to get the system accepted with the least amount of pain on all sides. I think this is a key activity in the management of expectations. Whether your strategy is to undersell, so that the customer gets more than they expect, or to oversell, so that the customer buys in the first place, you still need to know what the customer's expectations are and how they view and value things, and from there be able to judge whether you are going to fulfil those expectations or not.

I often start the process with the customer with a short discussion on how to categorise issues that turn up during testing: what is the difference between a bug, a design issue, or even a user error? I may then go on to discuss possible error sources; for instance, if we are tailoring an off-the-shelf product, the error may lie in the original product, in the configuration of the product, or in the new code. Defects found may be dealt with differently depending on where the error actually lies. I always discuss how to define priority: what is the difference between a high- and a low-priority defect, and what differences will that lead to when planning fixes?

I also like to build a model of the system with the architect and the customer, and use this model as a basis for discussion. I want the customer to identify the most sensitive parts of the model from a business point of view, by asking questions like: where are the most users working? Where do you get the most questions at the helpdesk? The aim is to identify where the pain can arise: which failures will lead to the most complaints and dissatisfaction. Yes, this is also one step in risk analysis and a key part of test planning based on risk evaluation, but in the context of understanding the customer it is a helpful step in setting acceptance criteria.
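One way to capture the outcome of that discussion is a crude pain ranking. The areas, numbers and weights below are hypothetical, standing in for the answers to "where do the most users work?" and "where do you get the most helpdesk questions?":

    areas = {
        # area: (daily users, helpdesk questions per week)
        "order entry": (300, 40),
        "reporting": (25, 5),
        "administration": (10, 2),
    }

    def pain_score(users, questions, w_users=1.0, w_questions=5.0):
        # Crude weighting: a helpdesk question signals more pain than a user.
        return w_users * users + w_questions * questions

    ranked = sorted(areas, key=lambda a: pain_score(*areas[a]), reverse=True)
    print(ranked)  # ['order entry', 'reporting', 'administration']

The numbers are less important than the conversation: agreeing on the weighting is itself a way of surfacing what the customer values.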

Maybe another helpful discussion at this stage is the project triangle: when a project tries to balance the time-plan against the budget against the list of required functionality, which one is going to give? (Funnily enough, I have seen big cultural differences here: in Sweden the tendency is to focus on delivering functionality even if the budget suffers; in Finland the focus was on keeping to the time-plan and throwing out functionality; in the US it was all about keeping to the budget.)

In summary, it doesn't matter how you define your acceptance criteria as long as you are aware of the different ways of seeing things and pick the way that is most suitable for the project. Do not lose sight of the end goal, which is not only to get the system into production but to have happy users who will keep it in production.

About Marie Kyletoft

I am a consultant at Softronic, working as a test leader. I fell into the testing business over 15 years ago, via TQM, ISO9000, process improvement and the like. Since then I have worked with function testing and acceptance testing, with embedded and web applications, in projects run with PRINCE and with SCRUM. Fairly varied, but there is more to learn out there.