Friday 27 September 2013

When defect priority loses its meaning... STOP!


Problem: All defects are critical just before launch – defect priority can easily lose its value & meaning

The clock is ticking and a release is approaching. Development only has a few days or hours before code freeze, and only the most urgent bugs can be addressed. This is where defect priority can lose its meaning, as people mark all bugs as critical, hoping for them to make the cut.

Solution: STOP! Check your defect metrics and call for a bug triage

The most extreme example I have seen was a large program where the percentage of priority 1 & 2 defects went from 17% to 92% in a week. This is what happened:

Program management announced to all stakeholders that the deadline was tight, and that the team would no longer pay attention to defects of severity 3 or lower. In the two days that followed this announcement, we could see a pattern forming. More than half of the existing defects had their priority changed, and the majority got priority 1 or 2. More than 90% of new defects were opened with priority 1 or 2.

This story holds two lessons:
  • Do not tell stakeholders that you will be ignoring their defects – They will not take it kindly.
  • Make sure to check your defect metrics from time to time and do a bug triage when needed.

Since then I have always kept a pie chart of defect priority (and severity) at hand in my projects – this is easy, as you get it for free in most defect tracking tools. Checking it from time to time offers a sanity check of the priority spread before doing a bug triage exercise.
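
If your defect tracking tool does not offer such a chart, a few lines of scripting against a defect export will do. Below is a minimal sketch, assuming a hypothetical CSV export with a "priority" column – the file and field names are made up for illustration:

    # Sanity check of the priority spread from a defect tracker export.
    # Assumes a hypothetical CSV with a "priority" column (1 = highest);
    # adjust the names to whatever your tracker actually exports.
    import csv
    from collections import Counter

    with open("defects.csv", newline="") as f:
        priorities = Counter(row["priority"] for row in csv.DictReader(f))

    total = sum(priorities.values())
    for priority, count in sorted(priorities.items()):
        print(f"Priority {priority}: {count:4d} ({count / total:.0%})")

    # A sudden jump in the share of priority 1 & 2 is your cue to call a triage.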

In the example above, priority got confused with severity, and everyone got jumpy about when, and if, things would get fixed. This meant that the first bug triage meeting was a battle of wills – who was prepared to give ground and sacrifice some bugs to lower priorities? The meeting did not change any priorities; we simply agreed that every stakeholder (business area) had to nominate 3 bugs they needed and 5 they wanted, and that set the new standard for priorities. Not very pretty, but it did the job, giving development clear priorities for completing bug fixing in the time left.

For more information on bug triage have a look at this: http://www.softwaretestingtimes.com/2010/04/bug-triage-meeting-process.

Have a nice weekend & Happy testing!

/Nicolai

Thursday 26 September 2013

Testing ROI - what's your story?

Problem:

There are numerous ROI calculations for various forms of testing. Be it manual or automated testing, in waterfall or agile contexts, someone has crunched the numbers for "testing as a whole" or for a specific industry.
It very often leads to some rather academic figures.

And very often to a setup where a lot of assumptions or "model projects" have to form the foundation for the calculations - and then another set of "best practices" on top of this, with estimates of how many minutes it takes to write a test script, review it and execute it, and how many defects to expect per test script. And then you have to put in the human factor - multiply by 1.07 and the ROI is... lost.


Solution:

Find a good "product" story for your project or for the QA department in your company. You will most likely not be able to put a flashy ROI on top of the story - or only partially.
I've been with a company that was very focused on software and hardware quality, yet what we needed was not an ROI to remind us that test was important. We all shared the same story, which was told when you joined the company: two failed product launches and we are out of business.
And QA didn't want to contribute to that in any way. At all. So we tested because we had to - because we did not want to fail. And that did not change for years. ROI calculations were updated during that time, better tools to support testing were introduced, and the test department matured in terms of processes and the participants' skills and experience, but at the end of the day our story was still the selling point. And it could be shared outside the QA department with other departments.

I'm not saying that ROIs are useless. Some are obviously so easy to pick apart that they are. On the other hand, good ROIs are great input for understanding how much effort and reuse it takes before automation is feasible. They are also good for telling the story of how many parameters impact testing. In short, the ROI process is good for getting discussions and understandings aligned internally. If there is a factor of 20 between two estimates or factors in an ROI, where is the consensus and why?

If you are tasked with doing an ROI to justify a test department in your company, you should consider something to be fundamentally wrong. Test departments are needed whenever there is software or hardware development. They can be outsourced, with various disasters waiting to happen - unless you also ROI the "communication challenges". And if you do that, please share your thoughts on this blog.
But we would much rather hear your product story.


Monday 23 September 2013

Test automation return on investment


Problem: Calculating & using Return on Investment (RoI) in test automation implementation


Engaging in discussions on RoI when implementing test automation is a tricky business. Discussions are often derailed by incorrect numbers, a strange formula being used, or stakeholders who for political or personal reasons want to influence the decision.

Solution: Expectation management and a simple RoI calculation to steer the discussion

RoI calculations are needed for the business case detailing the investment that test automation is. There is really no way around this, but I recommend that you start your test automation discussion with expectation management.

Expectation management is needed, as there will be many different opinions on what can, and what cannot, be done using test automation. The falsely expected benefits are especially dangerous, as they will pull the discussion in the wrong direction and set the project up for failure. When looking for falsely expected benefits and other mistakes, I suggest you start the discussion by having stakeholders list the tangible and intangible benefits they see. This will serve as input for the business case and give you an idea of how realistic the stakeholders' expectations are.

Test automation is an investment that takes a long time to yield RoI. The arguments about savings and cost reduction in particular should be examined closely, and that is where an RoI calculation comes in handy. Some years ago I got a copy of Dorothy Graham’s test automation calculator, a nice Excel sheet that allows you to make a simple RoI calculation. You can find it here, along with some notes on the subject, on Dorothy's blog.
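
To make the mechanics concrete, here is a minimal sketch of the kind of break-even calculation such a sheet performs. All numbers are invented for the example – plug in figures from your own context:

    # Illustrative break-even calculation for test automation.
    # All figures are assumptions for the example, not benchmarks.
    build_cost_per_test = 4.0    # hours to automate one test case
    maintain_cost_per_run = 0.1  # hours of upkeep per automated run
    manual_cost_per_run = 0.5    # hours to run the same test manually
    num_tests = 200

    # Find the number of regression runs where automation starts paying off.
    runs = 0
    while True:
        runs += 1
        automated = num_tests * (build_cost_per_test + runs * maintain_cost_per_run)
        manual = num_tests * runs * manual_cost_per_run
        if automated < manual:
            break

    # With these numbers: 4 + 0.1r < 0.5r  =>  r > 10, so it prints 11.
    print(f"Automation breaks even after {runs} full regression runs.")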

In my experience, RoI calculations can never stand alone – the numbers in the calculation need to be backed by the experience you get from doing the actual test automation. When introducing test automation to a project, I suggest you do a short proof of concept to ensure that the assumptions are right and the estimates are reasonable.
In short:
  • Engage stakeholders in a discussion on expectations and manage those
  • Do an RoI based on a simple calculation – remember to mention the intangible benefits
  • Make a proof of concept, where you check your numbers and assumptions

Want to know more? Check out Douglas Hoffman’s excellent paper on ’Cost Benefits Analysis of Test Automation’ found here: http://www.softwarequalitymethods.com/papers/star99%20model%20paper.pdf


Happy testing!

/Nicolai

Thursday 19 September 2013

Test policy considerations

Problem:

"Test" is a holistic expression used randomly across the organisation until 2 (weeks/days/hours) before go-live.

If you've worked with test for more than six months, you know the above is true. Some exceptions exist, but they are exceptions.

Solution:

Write an ultra-short test policy, and if organisational acceptance fails, at least you know that things will never change.

A fellow test blogger has written an interesting blog entry about test policies (in Danish). So, based on her headlines, we tried to come up with the shortest possible test policy that can be adapted to almost all organisations.

Why not a 20-page document? Because they are never read, never updated and never detailed enough to deal with the fact that IT development is one giant moving target, no two projects are alike and change happens fast.

So instead of spending weeks and months on that unpolished jewel, spend one hour, preferably with fellow test professionals, and see how briefly you are able to fill in the 9 sections below - or whether they are relevant for you at all.




Test policy for you

  1. Definition and purpose of test in the organisation. This organisation recognises the importance of structured test, with the aim of providing valuable and tangible information to relevant parts of the organisation and to the senior management group.
  2. Test is organised according to demand since it is a support activity to 1) projects of whatever size, 2) maintenance and 3) IT-operations. Test is a distinct discipline and thus also has its own distinct manager (insert name).
  3. Career tracks and competence development of the testers are subject to demand from the organisation. Competence planning will be carried out at least twice a year or at the end of each (project) assignment.
  4. The overall test process that must be followed is embedded in the 1) project model and the 2) development model that are implemented in the company. No stand-alone test models are accepted. The implementation of test activities and related gates are evaluated and adjusted when the models are up for revision.
  5. Standards that must be followed. Since test is a professional discipline, we follow the vocabulary of ISTQB as a standard. Where we work within a regulated industry, we follow the (insert name) standard(s).
  6. Other relevant internal and external policies that must be followed - refer to (4)
  7. Measurement of test value creation is based on the following three criteria:
    • Input from test in terms of blocking defects (number of accepted, requirements-traceable bugs that have required a fix)
    • Input from test in terms of implemented process improvement suggestions (time-to-market improvements)
    • Assistance with defect prevention activities (number of accepted bugs found during analysis and inspection of requirements and other static testing activities)
  8. Archiving and reuse of test results are determined at the end of each task or project.  A peer review will be conducted to determine which test artefacts are worth saving for re-use, and which existing test artefacts must be discarded.
  9. Ethics for the testers:
    • Bugs must be documented and made visible to the organisation
    • You own your own bugs until handover is accepted by the receiver
    • All testers are expected to speak up when they come across something "fishy" i.e. improper solutions, processes, implementations or the like.

Wednesday 18 September 2013

Cutting corners has a price…


Problem: Risk impact discussions can be problematic


Someone posted this picture on LinkedIn, and it made me think of the iron triangle. When working with quality you often have to talk about the cost of quality. Quality is fluffy and hard to explain, especially in large projects. As a tester you have valuable input for the risk logs and risk meetings - make sure that you get the point across to the receiver.


Solution: Use the iron triangle to illustrate the consequences of a risk to your peers

I often use the iron triangle when talking about the impact of risks and issues, because it is simple and speaks a language that most project managers understand.

This is how I do it:

Every time I identify something that threatens quality, I think about the triangle and ask the question: where will this hurt the most – cost, scope or schedule? I really like this angle on risk analysis, as it adds much to the risk index from traditional risk management.

What this approach adds is easier prioritization among risks based on the overall project goals, as most project managers will be able to say which corners of the triangle they can afford to cut. Furthermore, it links the risk impact score directly to something that project members can relate to.
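
To show how that mapping could look in a risk log, here is a small sketch – the risks, corners and scores are invented for illustration:

    # Tag each risk with the triangle corner it hits hardest, plus a 1-5 impact.
    # Risks, corners and scores are invented for the example.
    from dataclasses import dataclass

    @dataclass
    class Risk:
        description: str
        corner: str  # "Cost", "Scope" or "Schedule"
        impact: int  # 1 (low) to 5 (severe)

    risks = [
        Risk("Test environment delivered late", "Schedule", 4),
        Risk("Key feature under-specified", "Scope", 3),
        Risk("External consultants needed for data migration", "Cost", 2),
    ]

    # The project manager states which corner the project can least afford
    # to cut; risks hitting that corner float to the top of the risk log.
    protected_corner = "Schedule"
    risks.sort(key=lambda r: (r.corner != protected_corner, -r.impact))

    for r in risks:
        print(f"[{r.corner:8}] impact {r.impact}: {r.description}")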

Give it a try – I bet it will raise awareness about risk impact in your project. At the very least it will give you a nice discussion about which corner of the triangle your project will cut when things get tough.

One last thing: Cutting corners always costs quality...

Have a nice day & Happy testing!

/Nicolai

Monday 16 September 2013

Get to know your defects

How many times have you been in a project where the standard defect report has been published - and nobody cared?

The numbers are right, the defects are grouped according to severity, system area, functional area or whatever makes sense in the project. If you are in a well-managed project you'll also be presented with an overview showing changes over time. The report is filled with graphs for easy presentation of the numbers. Nobody is awake now. Why?

Well, the word "why" is exactly the reason nobody cares about standard reports: they hardly say a thing about why the defects are there.

It's comparable to a train wreck. It has happened. The dust has settled. What remains now is twisted metal and a lot of questions with few obvious answers.

In well-run projects there's room for the most fabulous defect activity - root cause analysis: the kind of work that really pays off both in the short run and in the long run.

The most fantastic book I've read in a while is actually this one.
It's an old book, from the time when the PC was a future concept, the Internet was research and tablets were science fiction. Yet it captures many of the conceptual problems we face when working in present-day projects.

The reason I mention this book is that the authors have the most wonderful and simple problem breakdown, which goes as follows:
  • What is the problem?
  • What is really the problem?
  • What is the problem, really?

Most projects only pay attention to the first bullet. Raise a defect, describe the initial problem. Done. Fix. Re-test. Done-done.

Instead of the "let's see how many defects we can close or downgrade" approach, try to apply the problem breakdown method to your defects. Maybe not all of them. Maybe just the trivial ones. You know the most severe ones will get the attention anyway, so start from the bottom instead.

Then you'll have a chance of understanding why so many defects are being reported. And only then might you also find actions to take to prevent defects. That is when you understand your defects.

Monday 9 September 2013

Load testing in the cloud


Problem: Scalability and setup can be problematic for load and stress tests

Load testing is one of the tougher disciplines: not only does it require automated test execution, it also requires high volumes of test cases running every second. Scalability can easily become a problem for your execution, and limitations in the infrastructure where the test runs end up dictating the test rather than your non-functional requirements.

Solution: Take your load test to the cloud

For quite some time we have used Microsoft Visual Studio & Azure to run distributed load testing. The setup consists of a test controller with multiple test agents hosted in Azure, with Visual Studio running the distributed test.

It is the test controller's responsibility to manage the test agents. The controller handles the execution of the test cases by nominating who (i.e. which test agents) performs a test and how the cases should be performed. It also distributes the relevant test cases to a given test agent when the test starts. When the test is finished, the controller collects the data from all test agents, and this forms the basis of the test results.

The test agent is responsible for executing the tests and simulating a given number of virtual users. The test controller tells it which tests to perform and how many virtual users to allocate for each test. The test agent reports status to the test controller during the test, and this information is used to generate the load test metrics.
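
Conceptually, a test agent is little more than a loop spawning virtual users and reporting timings back. The sketch below mimics that idea – a toy stand-in for what an agent does, not the Visual Studio controller/agent API, and the target URL and numbers are made up:

    # Toy illustration of what a single load agent does: spawn N virtual
    # users that hit an endpoint and report response times back.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET = "http://localhost:8080/"  # hypothetical system under test
    VIRTUAL_USERS = 25
    REQUESTS_PER_USER = 10

    def virtual_user(user_id):
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            urllib.request.urlopen(TARGET).read()
            timings.append(time.perf_counter() - start)
        return timings

    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = [t for user in pool.map(virtual_user, range(VIRTUAL_USERS))
                   for t in user]

    print(f"{len(results)} requests, average {sum(results) / len(results):.3f}s")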

For more information on test controllers and agents please see MSDN: http://msdn.microsoft.com/en-us/library/vstudio/dd648127.aspx

The trick is getting all the test controllers and agents set up, and then running the automated test on this setup. But once it is running, there is really no limit to the number of test agents you can spawn to do your bidding. There is a nice demo on YouTube showing a web performance test and load test running in Visual Studio 2010: http://youtu.be/yhkHtXcgWUc

Another thing that you might want to look into is the preview version of MS Visual Studio 2013 – it incorporates distributed load testing as a feature, cutting out most of the setup of controllers and agents and making distributed load testing much more readily available.

For an overview of the new features of VS 2013 have a look at Brian Harry’s Blog: http://blogs.msdn.com/b/bharry/archive/2013/06/03/visual-studio-2013.aspx

Preview version of VS 2013 can be found here: http://www.microsoft.com/visualstudio/eng/2013-preview

Have a nice day & Happy testing!

/Nicolai

Friday 6 September 2013

Applying test automation


Problem: Test automation is easily applied wrong, resulting in huge costs and little productivity gain.

While reading Brian Marick’s “Classic Testing Mistakes” I came to think of some of the test automation initiatives I have been involved in over the years. Some were successes, others were not, and failure was often due to the mistakes Brian outlines in “Theme Five: Technology Run Rampant”:
  • Attempting to automate all tests.   
  • Expecting to rerun manual tests.   
  • Using GUI capture/replay tools to reduce test creation cost.   
  • Expecting regression tests to find a high proportion of new bugs.

Solution: Be careful when introducing test automation - a proof of concept can help you in the right direction.

In my experience, there are several things that you can do to increase the chances of success when implementing test automation.

Manage expectations with the sponsor – he pays the bills and wants to know when the investment pays off. He is likely to pressure the test automation project to sign off on business cases that show significant return on investment, and he is not likely to be a testing professional. This means that you need to tell him what to expect – promises of great wealth are easy, but the truth about the long journey to get there needs to be told up front.
 
Have a strategy for what to automate – Brian is right: automation is not for everything; in fact, many test cases should never be automated. The strategy needs to state what to automate, what not to automate and, more importantly, how to determine what is a candidate for automation. Examples of what to automate could be:
  • Repetitive tests that run for multiple builds
  • Tests that are highly subject to human error
  • Tests that require multiple data sets
  • Frequently-used functionality that introduces high risk conditions
  • Tests that are impossible to perform manually
  • Tests that run on several different hardware or software platforms and configurations
  • Tests that take a lot of effort and time when doing manual testing
I would advise you to spend 5 minutes on this slideware – it will definitely help you establish a strategy for test automation.
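
If you want something more tangible than gut feeling, the criteria above can also be turned into a rough scoring checklist. A minimal sketch – the criteria weights are arbitrary and purely for illustration:

    # Rough automation-candidacy score based on the criteria above.
    # Weights are arbitrary illustrations; tune them to your own strategy.
    CRITERIA = {
        "repeats_across_builds": 3,
        "error_prone_manually": 2,
        "needs_multiple_data_sets": 2,
        "covers_high_risk_functionality": 3,
        "impossible_manually": 5,
        "runs_on_many_configurations": 2,
        "slow_to_run_manually": 2,
    }

    def automation_score(test_case):
        return sum(weight for name, weight in CRITERIA.items()
                   if test_case.get(name))

    login_regression = {"repeats_across_builds": True,
                        "error_prone_manually": True,
                        "runs_on_many_configurations": True}
    print(automation_score(login_regression))  # 7 - a decent candidate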

Choose the right tool and operator – "A fool with a tool is still a fool, and the wrong tool will make you a fool!" MAKE A PROOF OF CONCEPT! That goes for both the tool and the operator. Take the most complicated test case (that is a candidate for automation) and make the operator script it using the tool. If either the tool or the operator fails, you might want to reconsider whether they are right for the job.

Remember that test automation is not a silver bullet – it will make money if applied right, but it will be a costly adventure if applied wrong. There is a nice article on this subject here, if you want to explore the topic further.

Have a nice weekend & Happy testing!

/Nicolai

Wednesday 4 September 2013

Let's talk about our software


Problem: People tend to forget that testing starts way before any code is written.

One of the common mistakes is to wait to test until running code is available. Some testers spend their time writing huge test plans and a million test cases while waiting for some running code to test. Some might even review specifications, flow diagrams etc., but all of these activities still require somebody else to deliver some test basis. Do not get me wrong here: test cases and reviews are definitely worthwhile work products, but waiting around for someone else to deliver something to test is a waste.

Solution: Informal discussions for clarification, preferably BEFORE work is started.

Grab a cup of coffee and go talk to people – your project is full of people who all know something about the product. I love this exercise, as it allows me to interact with project stakeholders and explore functionality in the making.

In order to have a good discussion, there are rules of engagement that I suggest you keep in mind when you start debating ‘the state of the union’ with your peers.

I came across this article while googling for discussion rules: http://www.wikihow.com/Be-Good-at-Group-Discussion

I really like the tips listed in the above article – if applied in a software development context, I bet you will clear up many misunderstandings and raise the general level of quality. In my experience, talking about the product rarely happens, as developers and testers tend to rely on what is in the specifications and make assumptions about what is not. These assumptions often materialize as shortcomings later, and cost escalation modelling tells a tale of waste and cost that should be avoided.

Yesterday we made some money on this exercise, and this is how it happened:
One of our developers is currently working on a feature in the current sprint. The feature relies on data that it compares and uses to generate some reports.

Easy?! Yes, but after a cup of coffee things changed. The new perspective was sparked by a discussion of test scenarios that I had executed earlier in the test environment. While looking for test data, the developer found my scenarios and asked if I could give some details on the business behind them.

This led to an interesting discussion on the population of data models, the business behind the requirements and the design-to-be for the new feature. After this discussion the developer took a new course, incorporating some of the scenarios I brought to the table. I took the conclusions from the discussion and used them to draft the test charter that will be the first hit on the new functionality. Win-win for a cup of coffee and a chat.


Have a nice discussion & Happy testing!
/Nicolai


Tuesday 3 September 2013

Performance testing - going high

No problems or solutions in this post. Just another real-life example of how testing can be done.

I don't care how much FX or anything else was used during production. As one comment states, enjoy the beautiful scenery. To that I'd also add: Enjoy the beautiful idea.