Tuesday, 29 April 2014
I came across this excellent test plan template in a comment by Ivor McCormack over on http://www.softwaretestingclub.com, and I have to share it!
Why? Because it sums everything up on one page, in a form that is easily digestible. I have previously advocated using PowerPoint for the test plan you use for communication, but this template captures all the important parts of test planning in something that can easily be absorbed and discussed.
I doubt that you can keep the plan to one page after adding the project details, but you can fit all the highlights and use it as the introduction in both the test plan document and your presentations for stakeholders.
I will definitely use this in the future, both as a checklist and as a driver for discussion when planning tests.
Happy test planning!
Friday, 25 April 2014
Just a quick note on processes and the (mis-)use of same, inspired by this picture:
It is clear that the fire brigade has a process for protecting fire hoses crossing a road, but it is clearly not fit for scenarios where the hose crosses rail tracks. Nonetheless, the firemen have executed it.
The picture made me think of a finding we had while running some Test Process Improvement projects some time ago. We were applying Sogeti’s TPI model to projects and programmes in order to optimize use of test resources.
The finding was related to the interview question “For re-tests, a simple strategy determination also takes place, in which a substantiated choice is made between variations of ‘test solutions only’ and ‘complete re-test.’”
And the answer given by the project manager was: "Yes! We have a defined strategy; the procedure is to execute all test cases for every release." Something that might look good on paper, but is a complete waste of time and resources, much like the firemen's effort to protect the hose from the train. At best the hose will be cut, leaving the time spent shielding it wasted; at worst the train is derailed.
Revisit those procedures from time to time, and ask why we do the things we do. Don't question everything, only what seems odd.
Have a nice weekend!
Monday, 21 April 2014
Problem: Lots of effort is spent on writing reports, little is spent on reading them.
Churchill once said: "This report, by its very length, defends itself against the risk of being read." Writing a long report is indeed an excellent strategy if the goal is to avoid having an audience; delivering the right information is hard.
Solution: Elevator pitch reporting and expectation management.
I admit that I have written extensive reports with lots of figures, metrics and long evaluations during my time as test manager in various organizations. Extensive reporting seems to become the solution when the reader really does not know what he wants, or when the organisation uncritically uses a reporting template.
The first thing is to get the scope of the report in place. It is all about expectation management when dealing with those who are to receive the report. I recommend that you bring a suggestion to this session, or you will face the easy answer: "Just give us all the numbers you have…"
This is where the elevator pitch approach will help you. Less is more in terms of test reports: a report needs to be as short and precise as possible, focusing only on the core of things. In my experience this goes for both written and oral reporting, and it will help you and the organization focus on the important issues. Furthermore, you will save time writing the reports, and the readers will save time reading them.
Feel like you are being asked to report everything, and that nothing of what you report is being used? Ask those requesting the reports: "What is this figure used for in the context where you are using the report?" If you get no answer, there is potential for LEANing your reporting. Engage your audience and agree on a proper level of reporting that saves time on both writing and reading the reports.
Monday, 14 April 2014
There has been a lot of fuss about the OpenSSL 'Heartbleed' bug, as you might have noticed; if not, check out http://heartbleed.com/.
I came across this comic on xkcd.com the other day: http://xkcd.com/1354/
It serves as an excellent reminder of why testing must include lots of negative tests. Making illogical requests like the one presented in the comic will make most people ask: "Why did you do it?", and any tester would immediately answer: "Because I can!"
It is not uncommon that we have discussions over defects that, according to some, cover scenarios that cannot or will not happen, and that, according to others (often testers), will happen because nothing prevents users or interfacing systems from performing the action. Nonetheless, the negative scenarios often expose far more (and higher-severity) defects than the positive test scenarios do…
There is lots of nice information on negative testing in this paper: http://www.workroom-productions.com/papers/PVoNT_paper.pdf
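The comic's scenario translates directly into a test pair. Below is a minimal Python sketch, where `echo_payload` is a hypothetical stand-in for a Heartbeat-style handler (not OpenSSL's actual API): the positive test checks a well-formed request, and the negative test checks that a request claiming more bytes than it sent is rejected rather than answered.

```python
def echo_payload(payload: bytes, claimed_length: int) -> bytes:
    """Echo back claimed_length bytes of payload, rejecting bad lengths.

    Hypothetical handler for illustration: a correct implementation must
    validate the claimed length against the payload actually received.
    """
    if claimed_length < 0 or claimed_length > len(payload):
        raise ValueError("claimed length does not match payload")
    return payload[:claimed_length]

# Positive test: a well-formed request is echoed back.
assert echo_payload(b"HAT", 3) == b"HAT"

# Negative test: the illogical request from the comic -- a 3-byte
# payload claiming 500 bytes -- must be rejected, not answered.
try:
    echo_payload(b"HAT", 500)
    assert False, "over-long length claim was accepted"
except ValueError:
    pass  # rejected, as it should be
```

The negative test is the one that would have caught Heartbleed-style behaviour: the positive path works in both the broken and the fixed implementation.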
Happy negative testing!
Friday, 11 April 2014
Problem: Predicting amount of defects in the week to come is difficult.
Being able to guess the number of defects to be found in the week(s) to come is very useful for resource estimation, risk evaluation and other test management activities. It does, however, require a little crystal ball technology, as the test manager has to make a qualified guess based on assumptions about the test items and resources.
Solution: Use your defect trends as a guide for the estimation.
Making a qualified guess requires that you use your metrics as a foundation for seeing trends. The low-tech approach that I use is to extrapolate the trend and see where the total will be at the end of next week and the week after. The cool thing about low-tech solutions is that they are easily applied and communicated.
Consider this example, where testing has been running for two weeks and the development team asks how much effort to reserve for bug-fixing. That question is in fact: "How many defects should we expect raised next week?"
Using the low-tech approach I simply draw a line that follows the trend from the previous weeks of testing. That gives an indication of where the defect count will end if all test conditions stay the same.
The following factors should then be taken into account before guessing the total defects for next week:
- Are the test resources and test cases for the week to come approximately the same as in the previous weeks?
- Will the items under test change, so that the test team gets new or complex deliveries for test?
- Is there a lot of retesting in the week to come? Will this take resources away from the test?
- Do you have planned maintenance in the test environment?
These assumptions are then used to adjust the slope of the trend line on which you base your guesstimate. In the example above the trend line is set higher than average, as the test items in week 3 are highly complex and under test for the first time. This lands the defect total at 85+, or an additional 35 defects in week 3 (calculated as the week 3 total minus the week 2 total).
You can also estimate the severity of the defects to come: combine your trend graph with your defect severity trends to add severity as a flavour to the estimate. In the example project we have the following severity distribution after week 2: 7% Critical, 13% Major, 25% Medium, 37% Minor and 18% Cosmetic. This leaves week 3's 35 estimated defects with the following spread: 2.5 Critical, 4.5 Major, 8.75 Medium, 13 Minor and 6.25 Cosmetic.
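As a rough sketch, the extrapolation and severity split boil down to a few lines of Python. The week 1 total and the complexity factor below are assumptions chosen to reproduce the figures quoted in the post (50 defects after week 2, roughly 35 new defects expected in week 3); the exact severity shares come out slightly different from the post's rounded numbers.

```python
# Cumulative defect totals at the end of weeks 1 and 2.
# The week 1 value is an assumption for illustration; week 2 = 50 is from the post.
cumulative = [22, 50]
weekly_rate = cumulative[-1] - cumulative[-2]    # 28 new defects in week 2

# Assumption: week 3 delivers new, highly complex items under test for the
# first time, so the slope of the trend line is adjusted upwards.
complexity_factor = 1.25
week3_new = weekly_rate * complexity_factor      # estimated new defects in week 3
week3_total = cumulative[-1] + week3_new

# Split the week 3 estimate by the severity distribution observed after week 2.
severity_share = {"Critical": 0.07, "Major": 0.13, "Medium": 0.25,
                  "Minor": 0.37, "Cosmetic": 0.18}
week3_by_severity = {sev: round(week3_new * share, 2)
                     for sev, share in severity_share.items()}

print(week3_total)        # 85.0
print(week3_by_severity)
```

Running this gives 2.45 Critical, 4.55 Major, 8.75 Medium, 12.95 Minor and 6.3 Cosmetic, which the post rounds for readability. The point is not the decimals but that the whole calculation is simple enough to redo, and defend, in front of the development team.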
There is no rocket science in the equation, only simple logic based on extrapolation of trends that is easily communicated to the recipient. Putting the graph with the trend line and your assumptions in an email is a short route to setting expectations. This approach also comes in handy in other situations, such as communicating the general quality trend of a delivery, or when talking to customers or steering committees while doing expectation management.
Happy testing & Enjoy your weekend!