Quality Radar Retrospective

February 25, 2012

I’m currently working as ScrumMaster with a Scrum team that is closing in on our first release to production. The entire release has been tough, I’d almost say mission impossible, just to get the basic functionality into the product. What the team, the Product Owner and the people around us have produced is nothing short of a miracle, but we’ve cut a few corners along the way and deferred several decisions until later. The thing is that “later” is now coming at us like a freight train. Yesterday I decided to run a retrospective to find out what the team considers to be our main quality issues.

Earlier in the release I had planned to run David Laing’s Complexity Radar retrospective but never got around to it. At this point I figured we’re not in a position to make any architectural changes, but we can still do some stabilizing activities and perhaps sand down some of the corners we’ve cut. So instead of looking at complexity, I changed the perspective to quality (whatever that is).

I opened up with a check-in: “Which day of the sprint was your favorite?” Most members pointed to the last day of the sprint, “because a lot of pieces fell into place”.

I know, I know … we’re still working on WIP issues. But that’s not why we’re here today.

After the check-in I presented the Quality Radar to them:

Everyone got five points to distribute among the dimensions.

0 points – Stable
1 point – Unknown quality
2 points – Known quality issues

Since the team is distributed over two locations, we used corkboard.me, where I had placed Post-its according to the Quality Radar above.

The team members put up Post-its on the radar with their points (ones and twos) and a comment on why they thought we had an issue.
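If you want to replay the tallying afterwards, here’s a minimal sketch (Python, with made-up dimensions and notes, not our actual board) of how the points on the Post-its could be summed per dimension to surface the biggest clusters:

```python
from collections import defaultdict

# Each Post-it: (dimension, points, comment). The dimensions and
# comments below are invented for illustration -- not our actual board.
notes = [
    ("maintainability", 2, "no handover plan to the support team"),
    ("maintainability", 1, "unsure who maintains the deploy scripts"),
    ("capacity",        2, "known slowdown under heavy batch load"),
    ("capacity",        1, "search has never been load tested"),
    ("security",        1, "auth module largely untested"),
]

# Sum the points per dimension.
totals = defaultdict(int)
for dimension, points, _comment in notes:
    totals[dimension] += points

# Rank dimensions by total points, biggest cluster first.
for dimension, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(dimension, score)
```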

My original plan was for team members to pair up (one from each city) and analyze the radar in pairs, but due to time limitations we held an open discussion instead.

We began by looking at the biggest cluster of notes, which was around maintainability. It turned out, though, that most concerns were not about the maintainability of the application but about the maintenance organization. Since this was out of the team’s reach, we decided that I’d raise these concerns within the organization, and we moved on to the second-largest cluster, which was around the application’s capacity. What came up was that not only did we have unknowns in this area, but there was also a known issue that had fallen through the cracks.

We decided that load testing would be a focus for the testers during the next sprint, with extra attention given to the known issue.

If you’ve read my blog before, you might have seen that I’m not particularly fond of the word “quality” because of its fuzziness, but in this case I left it to the team to decide what they meant by “quality”. I’m really glad we did this because it raised several issues beyond the ones we prioritized this time. I will use this format again, but I’ll make sure we have more time for analyzing the data and for suggesting and deciding on actions.

Another thing that bothers me is the scale, where I gave more significance to known issues than to unknown quality. This could easily draw people’s attention away from untested areas and toward minor issues they’ve already seen. I welcome any suggestions on how to address this risk.
