Monday, 22 December 2014
I recently had the pleasure of entertaining a small group of BI professionals on the ever-interesting topic of “test”. My brief was very short, along the lines of “can you participate for an hour and talk about test?”. Of course I could.
So we went through the usual discussions: the general testing challenges, the more specific ones about the nature of BI and data warehouse solutions, the complexity of data, the heterogeneous system-of-systems setups we usually encounter in this area of business, and, last but not least, the more universal ones about always being last in the food chain and how that affects testing.
Before speaking I got a bit of inspiration from this excellent link.
A lot of problems were discussed and finally somebody raised his voice and said: “But I hoped you were able to provide us with the silver bullet of testing BI solutions.” After 20 seconds of silence I had to admit: “There is no silver bullet”.
Of course there is no silver bullet. There are several bullets, including the big ones for cannons, but the real one is looking at the organisation of BI and data warehouse teams and understanding their background: the way they developed from “merge this data into one Excel sheet” into “align the data from these 80 sources and give me a transparent and flexible data mart”. That is the essence of the testing challenges here. Most other branches of IT have understood and acknowledged these challenges and have adapted with proper processes and tools.
Within BI there seems to have been a belief that “we can test our way out of the problems” – and it has been one failure after the other. The combination of functional testers, test managers and “BI teams” trying to do end-to-end testing is not a happy one. Add to that missing or incomplete test environments and lots of configuration and reconfiguration happening all the time, and you have a failure waiting to happen.
If I were to spend my testing money in this field from scratch, I would bet it on testing the ETL part. This is where you have a relative chance of success, because:
- It is a relatively simple process (or a set of different processes with a similar goal).
- It is possible to do checks for every step.
- The input and output can be predicted, and to some extent it does not matter whether you have complete data sets.
- The ETL process can be repeated (fully or partially) for every error found, to verify that re-test and regression test results are as expected.
Doing end-to-end testing is the ultimate goal; ETL is the pragmatic start with a chance of success. It is similar to all other complex integrated test tasks, with some slightly different challenges related to BI.
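As a minimal sketch of what per-step ETL checks might look like, here are two classic reconciliations between an extract and its loaded target: row counts and a column checksum. All names and values (the `amount` field, the sample rows) are invented for illustration, not taken from any real system.

```python
# Two basic ETL reconciliation checks: did every row arrive, and did
# the values survive the transformation? Run after each ETL step.

def check_row_counts(source_rows, target_rows):
    """Every extracted row should arrive in the target (or be logged as rejected)."""
    return len(source_rows) == len(target_rows)

def check_sum(source_rows, target_rows, field="amount"):
    """A column checksum catches truncation/transformation errors that row counts miss."""
    return sum(r[field] for r in source_rows) == sum(r[field] for r in target_rows)

# Hypothetical extract and load results for one step:
source = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
target = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]

assert check_row_counts(source, target)
assert check_sum(source, target)
```

Because both the input and the expected output of each step are predictable, checks like these can be re-run after every fix, which is exactly what makes re-test and regression testing of ETL feasible.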
Friday, 19 December 2014
Problem: Structuring the retrospective in a way that facilitates lessons learned.
Preparing for a retrospective is needed in order to get valuable feedback and ensure that the participants are prepared for the session.
Solution: Try the retrospective starfish
There are many ways of doing retrospectives, some simpler than others, and in my mind simplicity is the key to success. If you expect people to spend time preparing, then you should make sure the process is understandable and that the product your peers need to produce is well defined.
I came across a method that was new to me a couple of months back, called the retrospective starfish. It is all about listing items under 5 simple headlines:
· Do more
· Do less
· Start doing
· Stop doing
· Continue doing
All input is consolidated under its respective headline, and the team then evaluates where to look for improvements in the next sprint, test phase etc. Try it out; it gives quick results and a nice overview of where your project is headed. Furthermore, it allows you to spot trends by comparing the starfish from sprint to sprint.
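The sprint-to-sprint comparison is simple enough to sketch as data: a board is just items under the five headings, and an item that reappears on the next board is a trend worth discussing. All items below are invented examples.

```python
# A starfish board per sprint: items grouped under the five headings.
# Comparing boards is a set intersection per heading.
HEADINGS = ["Do more", "Do less", "Start doing", "Stop doing", "Continue doing"]

sprint_12 = {
    "Do more": ["pair testing"],
    "Stop doing": ["testing on shared environments"],
    "Continue doing": ["daily stand-ups"],
}
sprint_13 = {
    "Do more": ["pair testing"],  # reappears: still not addressed
    "Start doing": ["automated smoke tests"],
    "Continue doing": ["daily stand-ups"],
}

def carried_over(previous, current):
    """Items that reappear from one retrospective to the next."""
    return {h: sorted(set(previous.get(h, [])) & set(current.get(h, [])))
            for h in HEADINGS
            if set(previous.get(h, [])) & set(current.get(h, []))}

print(carried_over(sprint_12, sprint_13))
```

A “Do more” item that keeps carrying over tells you the team agrees on the improvement but never makes room for it.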
You’ll find a nice description of the method here: https://www.thekua.com/rant/
Have a nice weekend!
Tuesday, 16 December 2014
Problem: Calculating or defining the risks and priorities driving the test
Risk-driven testing is for obvious reasons in need of risk-definitions or priorities. Obtaining these might be difficult in cases where the framework or organization does not support the risk-driven test setup.
Solution: Pursue simple estimates with the right stakeholders
I usually base my test priorities on the combination of technical complexity and business criticality, following below formula:
Technical complexity * Business Criticality = Test Risk or Priority
I usually apply a scale of 1 to 5, with 5 as the most complex/critical and 1 as the least. This means that every test item gets a rating from both a technical and a business perspective, following this model:
Use Case 1: Report Print
Use Case 2: Advanced filtering
Technical complexity is based on items like test data complexity and availability, requirements and code complexity, environment and technology, the number of integrations, and developer skill and knowledge of the business and the technology. You get an indication of this for free in the story estimation done as part of estimating the stories. Seek advice from the techies in the project in case you need input on this.
Business criticality ranges from Need-to-Have over Important Features to Nice-to-Have, again on a scale of 1 to 5. Obtain this from the business representative, product owner or whoever else represents the customer.
The alternative is to apply a shortcut: label all test items with the business importance as the only driver, using the scale Need-to-Have, Important Features and Nice-to-Have.
Be aware, however, that everything is Need-to-Have in the initial discussion with the customer; getting to a point where you have an even spread across the scale is hard. Furthermore, ignoring the technical complexity is not always advisable.
Thursday, 11 December 2014
Problem: Measuring the quality of delivered story points
One thing is to get the agile team working and to measure trends in the ability to deliver finished stories or story points; another is to monitor the quality of the delivered story points.
Solution: Use simple quality metrics for each story
The traditional V-model approach allows you to monitor defect detection ratio and test efficiency for the individual test phases. This gives an indication of the quality of the delivery and pinpoints where to look when optimising your test effort. The approach is, however, not viable in an agile setup where a release happens every other week.
That is where two simple metrics will help you in your retrospectives on improving the quality effort in the team: story rejection rate and defects per story.
Story rejection rate is measured per story delivered in a sprint. It is the binary answer to the question: “Did the customer accept the story, as presented, without any objections?” Yes is green, no is red, leaving you with a very clear pie or graph to monitor from sprint to sprint. From there it is simple math to derive the story point rejection rate, in case you break everything down to the point level.
Defects per story gives you an indication of the defect spread across your stories. It requires that you actually register the defects the development team finds, rather than just fixing them on the fly – but since it is good practice to accompany code changes with documentation like defect descriptions, this shouldn’t be a problem, should it? The reasoning behind measuring this is to follow up on two things: the general level of rework, and where the defects are found. A trend like the majority of defects being found in large (read: high story point) stories tells you that you might want to break the stories down to avoid confusion.
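A minimal sketch of the two metrics over one sprint's stories; the story names, points, acceptance flags and defect counts below are invented for illustration.

```python
# Per story: (story_points, accepted_without_objections, defects_found).
# Invented sample data for one sprint.
sprint = {
    "Login story":   (3, True,  1),
    "Report export": (8, False, 5),
    "Search filter": (5, True,  0),
}

def rejection_rate(stories):
    """Share of stories the customer did not accept as presented."""
    rejected = sum(1 for _, accepted, _ in stories.values() if not accepted)
    return rejected / len(stories)

def defects_per_story(stories):
    """Defect count by story, to spot where the rework concentrates."""
    return {name: defects for name, (_, _, defects) in stories.items()}

print(f"rejection rate: {rejection_rate(sprint):.0%}")
print(defects_per_story(sprint))
```

Note how the sample data shows the trend described above: the one rejected story is also the largest (8 points) and carries most of the defects, a hint that it should have been broken down.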
Go measure that agile delivery, and then use the figures in your retrospectives to give yourself an edge in improving the quality of each and every story point you deliver.