Saturday, 18 April 2015

Product risk: are we talking the talk or walking the walk?



Product risk is something we talk a lot about as testers... but do we only talk the talk, or are we also walking the walk?

Recently I visited a project where they had written in their test strategy that they were doing risk-based testing. They had completed a PRA (product risk analysis) and had a beautiful table in the strategy document showing all identified product risks, weighted according to damage and chance of failure. They had also identified on which test levels they should test, and with what intensity, to mitigate those risks… even identified the test techniques to use to get the best test done… completely by the book. I felt happy… ;-) so beautiful.

But then I took a look at the testing being done, the test designs and the test cases, and I started to wonder. Nowhere was it visible to me that the identified test strategy was being addressed, or that any test design techniques were being used. So I asked the testers in that project: have you considered the PRA that was conducted for the system? Have you used the test techniques? Have you even looked at the risk table when you designed and implemented the tests for this system? Sadly, the answer was NO. They hadn’t had time to use test design techniques, they said, and they had forgotten about the test strategy document.

Since then I have stumbled upon the same thing a couple of times. One of my friends, who is also a test manager, had conducted a PRA together with the business and the testers to get a picture of how the business saw the system. But when the result was presented to the testers and the test lead, the answer was: nice table, but we don’t use it anyway.

So how do we change this? How do we go from talking the talk to actually walking the walk? Or should we maybe just accept that we don’t?

I actually think that we should walk the walk. The process of identifying and classifying product risks as a foundation for a test strategy and for testing is the right thing to do, but maybe we could do it in another way. Maybe we shouldn’t just hide the result of the risk analysis in spreadsheets and tools. Maybe we should focus more on the conversation we have when we do the risk analysis and the knowledge we share, and less on the formalities?

For example, I am a great fan of Product Risk Analysis as described in TMap, but I have my own lightweight version of how to do it – I have taken a lot of the formality away and primarily focus on getting people to talk about risk: getting the right mix of people together around a whiteboard, getting them to talk about what THEY see as product risks, and, even more important, getting them to discuss both damage and chance of failure – explaining to each other why they see the risk as that high (or low).

The table that comes out of it is just like the one in TMap, but we have made it together: we have discussed, shared knowledge and even clarified potential misunderstandings about the scope during the workshop.

I even do the test strategy table (maybe not the test techniques… that depends on the testers), but rather than just putting it in a test strategy document I make it visible right next to the task board. When someone starts a new task or story, we talk about how it fits into the risk picture. And when a tester starts on a new feature, we take a look at the product risks identified and break them down into more detail for the given feature, ensuring the right focus and weighting of the test.
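To make that breakdown a bit more concrete, here is a minimal sketch of the kind of classification I have in mind. The scales, thresholds, risk classes and feature names below are purely illustrative assumptions of mine, not TMap’s official scales or data from any real project:

```python
# Illustrative sketch only: combining damage and chance of failure into a risk class.
# All values, thresholds and feature names are made up for the example.

DAMAGE = {"low": 1, "medium": 2, "high": 3}   # impact on the business if the function fails
CHANCE = {"low": 1, "medium": 2, "high": 3}   # likelihood that the function fails

def risk_class(damage: str, chance: str) -> str:
    """Combine damage and chance of failure into a simple risk class."""
    score = DAMAGE[damage] * CHANCE[chance]
    if score >= 6:
        return "A"   # heavy test effort, e.g. formal test design techniques
    if score >= 3:
        return "B"   # medium effort, e.g. exploratory testing plus a few designed cases
    return "C"       # light effort, e.g. sanity checks only

# The kind of entries that could hang next to the task board, one per feature/area.
risks = [
    ("Payment calculation", "high", "medium"),
    ("Printing of receipts", "low", "low"),
]
for feature, damage, chance in risks:
    print(f"{feature}: risk class {risk_class(damage, chance)}")
```

The point is not the numbers or the code; it is that the classification is simple enough to keep visible and to revisit in conversation every time a new feature is picked up.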

The main thing, in my humble opinion, is that we talk about risks to ensure that we have a common picture, and that we actually address them when we test – what form, shape or name we give it is less important.