Introduction
In this article I am going to introduce a variation of the burn down chart which focuses on testing, and which I have successfully used in a few projects. This chart helps me answer questions such as: when do we have to start testing, how much testing is left to do, and how is it going?
I felt the need to introduce a separate chart for testing, in addition to the sprint burn down chart, because of problems I have encountered when working with newly created SCRUM teams. One of these is that testing is not fully integrated into the team and there is still a strong tendency to ‘throw the code over the fence’. Testing always starts after the coding is ‘complete’, the testing is manual and so often takes longer than the development, and the regression testing is a major burden which the developers are not willing, or able, to help with. So, although everyone agreed that the definition of done included passing the tests, there was still a lonely tester dealing with it all on their own.
Solution I have adopted
I value planning, I like the burn down chart, and I believe that I can rely on our estimates, so I have tried to put it all together in one package to provide visual information for the team and management.
Test burn down – part one
The first question I tried to answer was: “when do I need to start testing?”
Step 1: Check the list of functions planned for the release/sprint and then identify how many test suites will need to be executed.
Step 2: Estimate how much time each test suite will take.
Step 3: Check the developer estimates to see when each function should become available for test.
Then I use this information to create a line graph:
What I am plotting here is the estimated number of test suites still to be completed against the week number (we were working with long release cycles). That is, in week 40 I have 20 test suites left to execute and I plan to have only 10 left by week 52. I create this graph as follows:
- I know that I need 3 weeks to run full regression testing, so I start by drawing in the line for the delivery date and then making a little ‘bump’ of time for the three weeks of regression testing, made up of 2 test suites.
- I know I have 20 test suites to cover function testing, with a total of 350 hours of testing. So now I have my latest start date, i.e. approximately 9 weeks (350/40 = 8.75) before the start of regression testing.
I also see from the development planning that they can actually start releasing to test 10 weeks before the start of regression testing, so I have a week in hand. If you find that in your project it is the other way around, then now (at the start of the project) is the time to start shouting that test can’t keep up with development without either a major injection of test resources or throwing out functionality!
- So, starting from week 46 (I choose to start as early as possible), I can start subtracting the time for each test suite.
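The backward calculation above can be sketched as a few lines of Python. The 20 suites, 350 hours and 3-week regression run come from the example; the 40 h/week capacity is implied by the article’s 350/40 division, and the delivery week below is a made-up illustration:

```python
import math

# Figures from the example: 350 h of function testing and a 3-week
# regression run. The 40 h/week capacity is implied by the 350/40
# division in the text; the delivery week passed in is hypothetical.
TOTAL_HOURS = 350
HOURS_PER_WEEK = 40
REGRESSION_WEEKS = 3

def latest_start_week(delivery_week):
    """Work backwards from delivery: regression first, then function test."""
    regression_start = delivery_week - REGRESSION_WEEKS
    function_test_weeks = math.ceil(TOTAL_HOURS / HOURS_PER_WEEK)  # 8.75 -> 9
    return regression_start - function_test_weeks

print(latest_start_week(64))  # e.g. delivery in week 64 -> start by week 52
```

If development cannot release anything to test by that week, you have found your resourcing problem at the start of the project rather than the end.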
So now I have a plan of how I expect things to go, but that is not enough: I have to track my progress too.
Test Burn Down – part two
Now I want to answer the question of how testing is progressing. I start with the same information as before, but now I display it as bars instead: there are 20 test suites to test the functionality and 2 test suites for the regression testing.
Note that instead of dropping to bars of height 2 at the end (during the regression test phase) I decided to add them to the total. This was a question of taste; I thought it looked better this way.
To start with, the bars are all grey because no testing has taken place yet, but the conditional formatting in Excel is set up so that as each week passes the grey bars change to traffic lights.
Once a week had elapsed I knew which tests had taken place, so I changed the bars to reflect this: red for test suites that should have started but didn’t, yellow for test suites that were started but not completed, and green for test suites that were complete. So now we can see that by week 52 seven test suites have been started.
I followed the principle that “done” means there is nothing left to do on the test suite. If there are open bugs associated with the test suite that are going to be fixed in this release, requiring a retest, then the test suite is not complete.
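The weekly colouring rule, including this definition of done, can be written down as a small function. This is my own sketch with hypothetical field names; the article implements the rule with Excel conditional formatting:

```python
def suite_colour(current_week, suite):
    """Traffic-light status for one test suite (field names are hypothetical)."""
    if suite["executed"] and suite["open_bugs"] == 0:
        return "green"   # done: nothing left to do, no open bugs to retest
    if suite["started"]:
        return "yellow"  # started but not complete
    if current_week >= suite["planned_start"]:
        return "red"     # should have started by now but has not
    return "grey"        # not yet due

# A suite that has been run but still has an open bug awaiting retest
# is not complete, so it stays yellow:
print(suite_colour(52, {"executed": True, "open_bugs": 1,
                        "started": True, "planned_start": 50}))  # yellow
```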
Test Burn Down – part three
The final step is to combine the bar and line charts in order to see progress against plan. It can be quite fiddly to get the scales to match, but it is worth it: now you can see that in this project the team managed to get a head start and begin testing a little earlier than estimated, and that although the completion rate was not exactly on target, by week 52 it matched the plan and testing was still ahead in terms of test suites started.
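Reading the combined chart boils down to comparing, week by week, the bar (suites actually left) with the line (suites planned to be left). A minimal sketch of that comparison, with made-up numbers rather than the project’s real data:

```python
# Planned suites left per week (the line) vs. suites not yet
# complete (the bars). All figures here are illustrative.
plan   = {50: 14, 51: 12, 52: 10}
actual = {50: 13, 51: 12, 52: 10}

for week in sorted(plan):
    delta = actual[week] - plan[week]
    if delta < 0:
        status = "ahead of plan"
    elif delta == 0:
        status = "on plan"
    else:
        status = "behind plan"
    print(week, status)
```

Bars poking above the line mean testing is falling behind; bars below it mean a head start, which is exactly what the project above saw early on.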
Now I can answer the question of how much testing is left to do and whether there is enough time in which to complete it, or whether we need to start throwing out functionality (I always seem to work on projects where the end date is fixed).
Conclusion
This chart is quite difficult to put together the first time, but it is worth the effort as so much information then becomes visible. Once the Excel sheet is set up, it is a simple matter to add the numbers every week.
I also found that I had to explain it twice to management, as it is not totally intuitive to a non-team member, but they quickly grasped that red bars above the blue curve were a bad thing.
Note
I would like to give thanks to the testers I worked with when developing the test burn down chart for their input and points of view: Katarina Zaib and Stefan Pettersson, and special thanks to Stefan for solving some of the Excel formatting for me.
Interesting reading!
/Johan
I would love to know more about how you can trust the time estimates (especially when bugs and retesting are included).
And what do you do when something big and unexpected happens?
Our estimation was quite good because we had worked with the products for several years and had a good feel for how long things take. For the regression tests we had written down how long they took and checked the times now and then to keep them up to date.
If functions get thrown out of the delivery there are a number of choices, and they depend on the working culture. I choose to keep the grey columns so that management does not forget that we had originally planned something else.
We have a template that we fill in, but even so it can be a little tricky to set up the chart, so I prefer not to start over. Statistics should be an aid, not a burden.
Nice article and an interesting use of a burn down chart. But aren’t you just avoiding solving the real problems (which you summarized very succinctly) – testing that only starts when coding is complete, reliance on manual testing and developers that aren’t willing or able to participate in testing?
True; this chart was just a way of tracking progress and didn’t tackle the underlying problems. But I don’t think that reducing the test time, sprint durations etc. means that you stop using this chart. If this is the information you are after, this is a good way of displaying it.
However, we also worked on trying to improve the situation. We introduced an automation tool and started building up a test case library, although that was a very long-term activity. We also got one development team to start working with TDD, and we had a funny conversation where we testers pointed out that, since the “definition of done” included testing and the developers had agreed to that, maybe they could start helping us with the testing. We suddenly started seeing a lot more ideas on how to introduce more automation (but never any offer to help with the manual testing!).