I’ve noticed that a lot of organizations seem to have problems with Scrum of Scrums. Some coaches refrain from recommending them altogether while others might use them with low expectations. Without making too many generalizations I’d like to describe one of my more positive experiences using Scrum of Scrums – as an indicator of our ability to work together.

My assignment was to coach a Scrum project with ~100 members and seven development teams distributed over three countries. One of the pains brought to my attention early on was how dysfunctional the Scrum of Scrums was. All the ScrumMasters and the project manager would have a teleconference three times a week where the ScrumMasters would take turns giving status reports(!) and complaining about the problems they had. Some of the ScrumMasters approached me asking if we shouldn’t have the meeting less frequently since “nothing new is being said. The same problems are being brought up in every meeting.” I asked them if they thought the real problem was the frequency of the meetings or their inability to solve problems between the meetings. They recognized my point and agreed to continue with the three weekly meetings for a while longer.



We began to move away from giving status reports and I also suggested that they start writing down the issues being brought up, and that unless someone claimed responsibility for actually working on an issue, no one would be allowed to keep complaining about it in this forum. My idea of writing down the issues was to create an Excel sheet or something similar in a shared folder, but apparently in this organization there would have to be a JIRA project to store such information. It also turned out that it would take about a month to create said JIRA project.

Since I had a hard time seeing that JIRA would be the way this problem got resolved, I also started working on other things in parallel. The first thing we did was to get all the ScrumMasters together to get to know each other. I managed to get the funding to fly all the ScrumMasters to one of our sites and hold a retrospective and some other workshops. It was a great day with people getting to know each other, but it still wouldn’t be enough. When, towards the end of that day, I asked if everyone in the room had everyone else on speed dial, the frightening answer was that no one had anyone else’s phone number in their cell. Solving that was easy; the hard part was getting everyone to actually use the numbers.

After this day together, I began asking each ScrumMaster to call all the others on a daily basis. Whether they had an issue to talk about or not, they should at least make a social call to see how the others were doing. This didn’t happen immediately; most thought they’d get away without making the calls, but I kept asking them about it in our one-on-ones.

I can’t tell exactly when the transformation happened, but before the new JIRA project had been set up, I noticed that fewer and fewer issues were being brought up during the Scrum of Scrums. Instead people were having social discussions and talking about problems in the past tense. When I started inquiring about this I learned that there were still a great many issues, but now they were being solved outside of the Scrum of Scrums. The teams (or at least their ScrumMasters) had begun caring about each other. One team even offered to send some of their developers to another country to help one of the teams there before the other team had worked up the courage to ask for external help.

In a little more than a month we went from having a meeting that didn’t help us coordinate any issues or solve any problems at all, to holding a meeting where there were no issues to coordinate and no problems to solve. This made me realize that the daily stand-up and the Scrum of Scrums might not really be any solutions in themselves, but rather indicators of how well we communicate within our teams – outside of the meetings.

This post is based on a lightning talk I gave at a client and is heavily inspired by the excellent book “A Practical Guide to Distributed Scrum” as well as my own experiences from working with distributed teams.

Paula Underwood, a Native American (Oneida – Iroquois) historian, wrote down the 10,000-year-old oral history of her tribe. Among the stories that she shared there is one particular learning story called “Who speaks for Wolf”.


The story describes a time when the tribe had outgrown its current habitat and was looking for a new place to live. They sent out many young men in different directions looking for the perfect spot for them to move on to. When the men came back the tribe evaluated the places found on different criteria such as access to water, suitability for growing their seeds, animals to hunt and so on. Finally they decided on an area that had the potential to fulfill their needs. The problem was that a large population of wolves also inhabited this very spot. One of the men in the tribe, called Wolf’s Brother, who was very close to our canine friends, spoke up against this decision. He told his peers that there wasn’t room enough for both man and wolf in this place, but his words were ignored.

Soon enough, though, the rest of the tribe realized the correctness of Wolf’s Brother’s prophecy: too many wolves were competing with them for the same food, and they wouldn’t be able to chase the pack away. Instead they decided to hunt down the wolves and exterminate them from the area. Luckily, they came to their senses at the last minute and realized that this would change the people into something they didn’t want to become; “a people who took life rather than move a little”. With this insight they changed their decision, moved to another area and left the wolves alone.

To keep this story from repeating itself, and to make sure that someone always took nature into consideration when any decision was made, the tribe would always raise the question:
“Tell me my brothers,
Tell me my sisters,
Who speaks for Wolf?”


When we are working in distributed teams, we are often confined to teleconferencing. And when we’re facilitating a teleconference it’s easy to forget that there are people on the other side of that line who don’t see what we see. It’s easy to fall into the trap and act as though everyone were in the same situation as we are. In order to not forget about our friends on the other side, it can be a good custom to make sure that there’s always someone who speaks for Wolf. Someone who looks after the interests of those on the other side of the line.

What you can do is to nominate someone in your team to be the patron of the people on the other side. Have someone, preferably someone who has also been one of the people on the other side, to watch for, and to call out non-remote friendly behaviors so they come to everyone’s attention.

So what are non-remote friendly behaviors?

One thing to look for is visual cues. Those usually don’t travel well across phone lines. Ask for visual cues that you don’t see or translate them when you do see them.

Say for example that you mention a new requirement that your team has been asked to bring into your next sprint. No one in the room opens their mouth but Paul and Jill are making gagging faces showing that they consider this to be a horrible idea at the moment. Let the people on the other side know what is happening.
“Okay, I don’t know what you’re thinking about this new requirement in Hyderabad but Paul and Jill are making really funny faces about it right now.”

Or perhaps someone makes a reference to some tension that happened in your last meeting and you’re not sure if this is water under the bridge or if the tension is still there. Ask!
“Yeah, that was quite a disagreement we had last week. Jane, are you smiling now or does this still put a frown on your face?”

Every now and then someone forgets about the non-present part of the meeting and starts to point at the screen while commenting, or even worse; starts to draw on the whiteboard. Let people know what is happening.
“Ok, now Peter is pointing at the column with last year’s figures, just so everyone knows what he’s referring to.”
“I’m sorry guys that you can’t see this but Jill just drew a pie chart here showing that 45% of the functionality must be done this quarter. Perhaps Jill can take a photo of it and email it to you after the meeting.”

Anyone should be able to call these things out but if you have a patron of the people on the other side, responsible for keeping an eye on these things, it will make everyone more aware of them.

Another problem, especially for new teams, is that it can be hard to tell whose voice it is you’re hearing. So always try to identify the speaker. Before you begin to say something it’s good to identify yourself.
“Okay, Jane here. I think we need to reconsider those numbers you just presented.”
But if Jane forgets to introduce herself, the patron can step in with a short:
“Thank you Jane for that comment.”
just to let everyone know who spoke out.

These are just a few examples of misbehaviors that cripple the communication within a team. There are many others and learning to see them takes time. But if your patron of the people on the other side calls out these misbehaviors, people will begin to see the patterns and start correcting themselves.

So tell me my brothers,
Tell me my sisters,
Who speaks for Wolf on your team?

I wrote in my previous post about the Scrum team I’m working in as a ScrumMaster and that we’re closing in on our first release to production. At this stage a lot of the work is related to getting production environments up and running and our user stories have taken on a more technical format and are formed more by the team than the Product Owner. Our PO had a story asking for a production environment but that one was way too fuzzy for the team to take in, so they had to break it into smaller stories/technical tasks. A lot of this work was done on the fly during the planning session and we needed to find definitions of done for the stories as they were forming.

The task of finding good definitions of done proved to be harder than anticipated for these stories. After a while I realized that what we were specifying tended to be more of a list of tasks than acceptance criteria. So we backed up the value chain by asking “Why” we wanted to do something and started arriving at better definitions of done. However, the crown jewel was when we were discussing the job of getting outsourced operations to monitor our application. What was the definition of done here? Putting the order for monitoring services in the mail? Getting a reply to our order? Having a signed contract? We weren’t really getting anywhere until all of a sudden one of the team members spoke up:

“I want to turn off one of our services and when I see operations dancing a jig on our doorstep, that’s when we’re done.”

I damn near got a tear in my eye hearing this suggestion. This is the kind of thinking that we need when we measure things. Whether it’s a level of service, quality or productivity we want to measure we always need to begin by looking at what we want to accomplish. We can’t demo usability by showing a pretty UI, we need to put a user in front of the UI to demo usability. We can’t demo quality in number of bugs found, we must demo quality in a fit for use product that is stable under usage and over time. And if we want to demo our ability to handle problems, we can’t do that by waving a contract. We demonstrate our ability to handle problems by handling a problem.

This episode reminded me of a story an acquaintance told me ten years ago about his neighbor. The neighbor had a burglar alarm connected to a security company. The security company promised in their service to arrive at the house within 20 minutes of an alarm. Twice every year this neighbor set the alarm off. He then pulled out a lawn chair, put on ear protection and sat down with a timer in his hand, and when the security company arrived after half an hour or forty-five minutes, he filed a complaint and got the service for free for another six months. This guy knew what the definition of done was; he also waited for operations to dance a jig on his doorstep.

If you want to measure or demo some qualitative aspect, don’t settle for the easy way out and try to quantify it. Put it to the ultimate test, that is the only way you’ll ever know for sure that you’ve done something right.

Quality Radar Retrospective

February 25, 2012

I’m currently working as ScrumMaster with a Scrum team that is closing in on our first release to production. The entire release has been tough, I’d almost say mission impossible, just to get the basic functionality into the product. What has been produced by team, Product Owner and other people around us is nothing short of a miracle, but we’ve cut a few corners along the way and several decisions have been deferred until later. The thing is that “later” is coming at us now like a freight train. Yesterday I decided to run a retrospective trying to find out what the team considered to be our main quality issues.

Earlier in the release I had planned to perform David Laing’s Complexity Radar retrospective but never got around to actually doing it. At this time I figured that we’re not in a position to do any architectural changes but we can still do some stabilizing activities and perhaps sand the edges of some of the corners that have been cut. So instead of looking at complexity I changed the perspective to quality (whatever that is).

I opened up with a check-in; “Which day of the sprint was your favorite?”. Most members pointed to the last day of the sprint “because a lot of pieces fell into place”.

I know, I know … we’re still working on WIP-issues. – But that’s not why we’re here today.

After the check-in I presented the Quality Radar to them:

Everyone got five points to distribute among the dimensions.

  • 0 points – Stable
  • 1 point – Unknown quality
  • 2 points – Known quality issues

Since the team is distributed over two locations, we used corkboard.me where I had placed Post-its according to the Quality Radar above:

The team members put up Post-its on the radar with their points (ones and twos) with a comment on why they thought we had an issue.

My original plan was then for team members to pair up (one from each city) and analyze the radar in pairs but due to time limitation we did it as an open discussion instead.

We began looking at the biggest cluster of notes, which was around maintainability. It turned out though that most concerns were not regarding the maintainability of the application but regarding the maintenance organization. Since this was out of the team’s reach we decided that I’d raise these concerns within the organization, and we moved on to the second largest cluster, which was around the application’s capacity. What came up was that not only did we have unknowns in this area but there was also a known issue that had fallen through the cracks.

It was decided that load testing will be a focus for testers during the next sprint with extra attention given to the known issue.

If you’ve read my blog before you might have seen that I’m not particularly fond of the word “quality” because of its fuzziness, but in this case I left it to the team to decide what they meant by “quality”. I’m really glad that we did this because it raised several issues beyond those we prioritized this time. I will use this format again in the future but I’ll make sure that we have more time for analyzing the data and for suggesting and deciding on actions. Another thing that’s bothering me is the scale, where I gave more significance to known issues than to unknown quality. This could easily draw people’s attention away from untested areas and into minor issues that they’ve seen. I welcome any suggestions on how to address this risk.

Assumption is the mother of all … flurps.

I would say that the majority of our failures in cooperation and collaboration are due to assumptions. We also fail largely due to poor or lacking communication, but that too is based on an assumption: that we don’t need to communicate in order to succeed.

I’ve seen so many problems arising from people making assumptions about other people. We prefix a lot of our statements with “I think that … ” or “I believe that …” and yet we continue acting based on the statement that follows. What is up with that? Shouldn’t the words “I believe …” raise a warning flag so red that it would have brought a tear to Karl Marx’s eyes?
My suggestion is that whenever you hear yourself or someone else utter words to that meaning, you should challenge yourself or the other person to validate the assumption.
I use those words all the time and I want someone to challenge me since that will provide me with great opportunities for learning.

When it comes to teamwork, I find that a lot of assumptions take the form of implicit expectations. We expect people around us to behave in certain ways and we assume that they know this and that they will act accordingly. This is a breeding ground for bad feelings and even less communication, because we often attribute people’s failure to live up to our expectations to spite on their part. In an attempt to lessen the frequency and impact of these hidden assumptions I’m designing an exercise to help make our expectations more explicit. I’ve only run it a couple of times so far and have updated the format every time, and I welcome any and all suggestions for improvement that you might have, but I will describe the exercise as I plan to run it the next time.

Part 1:
I begin the exercise by creating columns on a whiteboard, headed by the different roles in the project; project manager, product owner, ScrumMaster, developer, tester etc. I’m aware that Scrum only prescribes three roles but the goal of this exercise is to make things explicit, not wishful thinking. I also ask a representative for each role to act as secretary for the second part of the exercise. I then hand out pens and Post-it notes to everyone present (all roles from the columns need to be present for the exercise to be meaningful). Then everyone gets to write down their expectations on the different roles on the Post-its, and post them in the column for the role they have expectations on, as they write them. The expectations are written in a format similar to user stories;
“As a ‘role‘ I expect ‘another role‘ to ‘fulfill some expectation‘ so that ‘some benefit will occur‘.”
I might for example post the following expectation in the Product Owner column:
“As a ScrumMaster I expect the Product Owner to be available for the team members at all times during the sprint to answer their questions about requirements so that they don’t have to guess what the customer needs.”
I give this part about 10-15 minutes or stop it when no more notes are being produced. People usually need some time to think during this exercise and they tend to write during the time available.

Part 2:
Then I go through role by role and note by note, asking for clarifications if needed, and also asking the representative(s) for the role if they intend to live up to the expectation on the note. I will ask the secretary to take additional notes from the discussion. Usually people want to live up to the expectations but not without clarifying what is meant;
“What do you mean by ‘at all times’?” and “What do you mean by ‘available’?”
Sometimes the expectations are off, it can be a project manager expecting team members from a Scrum team to report progress to him or a developer expecting a product owner to add every technical story they can come up with to the backlog. The discussion around these expectations is really important because it will clarify and set boundaries. It will validate or invalidate assumptions. I have found that for a group of 15-20 persons this part of the exercise will take about one hour if it is heavily moderated. Make sure that you have enough time for this discussion because this is an investment that will pay for itself.

Part 3:
After the exercise I ask the secretaries to take home and rewrite the accepted expectations as a contract, with any clarifying notes from the discussion added. The expectations should be turned around and rephrased as commitments.
For example:
“As a Product Owner I commit to be available, at least by mail and phone, five days a week for the team members, to answer their questions about requirements so they don’t have to guess what the customer needs.”.
The contracts are then to be signed or at least acknowledged by everyone representing the role.

The main goal of the exercise is not to produce contracts but to bring out as many assumptions and expectations as possible into the light. The contracts are just reminders about the exercise and a way of repeating what was agreed upon one more time after everyone has had a chance to digest the exercise.

This exercise can be done as an activity at the beginning of a project as well as later on if you experience communication issues in your project and suspect that they might stem from hidden assumptions. If you run this exercise, please share your learnings here so I can continue to improve it.

You can’t stop people or yourself entirely from making assumptions, but you can tell others about yours and you can ask them about theirs.

Rotting Estimates

June 20, 2011

Have you ever been part of a late project where you constantly update your estimates and plans but they continuously get worse instead of improving? Where you finally get to the point where your estimates get out of sync in the time it takes to fetch a cup of coffee?

No? Then I congratulate you because it’s an extremely demoralizing and costly situation.

Yes? Then this post might be able to provide some insights to the dynamics at play.

The model

A colleague of mine just recently presented me with the generic project model built on the work of Pugh Roberts in the early 1970s.

The model is very simple but it gives a good starting point for discussing the recurring dynamics in projects. We pull work from our “Work to Do” box and implement it, either correctly or incorrectly. Work done correctly goes into our “Work Done” box while work done incorrectly goes into our box of “Undiscovered Rework”. Note that this has nothing to do with our physical deliveries, both work done correctly and work done incorrectly will move along the same physical path since we haven’t been able to distinguish the two from each other yet. When we do discover the need for rework we will assess the problems and move them back into our backlog of “Work to Do” again.
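The cycle above can be sketched as a tiny simulation. A hedged note: the stock names come from the model, but the per-sprint quality fraction and the discovery rate are illustrative assumptions of mine, not numbers from Roberts' work.

```python
# Minimal sketch of the Work to Do / Work Done / Undiscovered Rework cycle.
# The quality and discovery fractions are made-up illustrative values.

def simulate(total_work=100.0, velocity=10.0, quality=0.7,
             discovery_rate=0.2, sprints=10):
    to_do, done, undiscovered = total_work, 0.0, 0.0
    for _ in range(sprints):
        pulled = min(velocity, to_do)           # pull from "Work to Do"
        to_do -= pulled
        done += pulled * quality                # work done correctly
        undiscovered += pulled * (1 - quality)  # work done incorrectly
        found = undiscovered * discovery_rate   # rework we finally discover...
        undiscovered -= found
        to_do += found                          # ...flows back into the backlog
    return to_do, done, undiscovered

to_do, done, undiscovered = simulate()
print(f"to do: {to_do:.1f}, done: {done:.1f}, hidden: {undiscovered:.1f}")
```

Note that the three stocks always sum to the original total; work never disappears, it only moves between boxes, and the "Undiscovered Rework" box silently keeps some of it.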

What is “Undiscovered Rework”?

In this post I will mainly focus on the “Undiscovered Rework” box. This is our backlog of work that we think we have completed but that will not be accepted as it is. Some of the rework will be discovered when we implement new functionality, some will be found during testing and yet some will be discovered by our end users in production. Anything we produce where the quality is unknown carries the potential to end up in this box.

The sources of “Undiscovered Rework”

The amount of work in “Undiscovered Rework” tends to grow quite fast as a project goes along. A couple of factors that speed up this growth are:

  • Not having a good definition of done
  • Postponing testing until the end of the project

Both of these factors hide the true quality of our product and allow for different kinds of errors to pile up without us knowing it. If our feedback loops for determining the quality of our product are too long, or if they are missing entirely, there is really no limit to how much waste we can add to this future todo-list.

The implications

The big problem with “Undiscovered Rework” is that it hides the true progress of our project. It hides our status because we do not know how much work is actually done and we do not know how much work is actually left to do. It also corrupts our view of the rate at which we make progress.

Normally when working in an agile project where we use our velocity to predict future deliveries, our estimates narrow in and get better and better as we gather data over the sprints but this only holds true if we don’t let our hidden backlog grow. If we do not know the true quality of our product, the only thing our velocity tells us is at what rate we can produce crap. If we allow the amount of “Undiscovered Rework” to grow, our estimates will keep deteriorating over time.

An example

Let’s imagine we’re in a project where a serious amount of the testing is done at the end of the project. We begin this project with 15 user stories in our backlog and find that according to our velocity we can implement three user stories each sprint.

The thing is that one third of this work ends up in the “Undiscovered Rework” box. We move into our next sprint believing that we have finished requirements A, B and C and that we will be able to do requirements E, F and G during the next couple of weeks. The problem is that stories C and G will need to be redone completely later on (I’ve simplified the example by gathering all errors in one user story here).

After going for four iterations believing that we have a velocity of three, we look at the last three remaining items in our backlog and think that we are one iteration away from our goal. But testing will show that we actually have four more stories to complete from our previous sprints. So we actually have seven (!) stories to implement.

We are not talking about one more sprint anymore. That is more like two and a half sprints. But wait a minute, our true velocity was not three stories per sprint, we actually only managed to produce two stories of good enough quality per sprint, so that means that our seven remaining stories will actually take three and a half sprints to complete. Now we’ve gone from being almost done, to being halfway done.
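The arithmetic in this example can be checked in a few lines. The numbers are the ones from the example itself; "one story per sprint needs rework" simply restates the one-third assumption.

```python
# Re-running the example's numbers: 15 stories, apparent velocity 3,
# but one of every three "finished" stories secretly needs rework.

total_stories = 15
apparent_velocity = 3    # stories believed finished per sprint
rework_per_sprint = 1    # of those, how many will bounce back
sprints_so_far = 4

believed_done = apparent_velocity * sprints_so_far                     # 12
truly_done = (apparent_velocity - rework_per_sprint) * sprints_so_far  # 8

believed_left = total_stories - believed_done   # 3 stories, we think
actual_left = total_stories - truly_done        # 7 stories, in reality

true_velocity = apparent_velocity - rework_per_sprint    # 2 per sprint
print(believed_left / apparent_velocity)  # 1.0 sprint, what we believe
print(actual_left / true_velocity)        # 3.5 sprints, the reality
```

The gap between those two last numbers is exactly the "almost done" versus "halfway done" shock: believed progress is 12 of 15 stories, true progress is 8 of 15.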

The insights about the remaining work in the previous example will not happen all at once. They will usually dawn on us one at a time and without us being able to see the connections. The symptom that we and management will see is that work suddenly begins to slow down. So we begin to re-estimate our work when the first bug reports come in. During our first re-estimates we probably still believe that our velocity is three stories per sprint and we will just add some time for the bugs that have been found so far. Then as we move closer to testing and get faster response on our fixes, our true velocity will begin to show and we will need to re-estimate again. What often happens at this point is that pressure from management will force people to take shortcuts, so the rework becomes fixes and the fixes become workarounds, and these workarounds will create new problems for us. Stress generally forces people to make bad decisions that put the project even more in a pickle. If we are totally out of luck, management will fall back into old command-and-control ways of thinking at this point and force the project to go from pull to push while beginning to micro-manage people, thus slowing down progress even more. Now there is really no saying how long this project will take anymore.


Good estimation is about always being correct but adding precision as we learn more.

Most projects start out with a well-defined status (i.e. nothing has been done) and they stand a fair chance of making a somewhat decent initial estimate. Nurturing this estimate by collecting data while at the same time assuring quality will help bring precision to it. But if we allow for quality to be an unknown and still fool ourselves into believing that our data gathering will add precision to our estimates, then we are heading for a crash. The false sense of knowing where we stand will only make further estimates more and more off for every new data point gathered.

Turning this knowledge around, though, you can use it as a canary in your project. If you experience that your estimates begin to get worse over time instead of improving, it might be a sign that your project has some serious quality issues.

There are probably as many ways to look at Scrum commitments as there are teams doing Scrum but I think that I have seen a couple of patterns emerge that I’d like to share with you.

The Galley Slaves

The first team I’d like you to meet is called the Galley Slaves. When we first meet the Galley Slaves they are in the middle of sprint planning.

PO: This is our prioritized backlog. I’d like to remind you all that we’re way behind for the next milestone so I want you to commit to as much as possible this sprint. We have to get the top 18 user stories into the sprint or we’ll never make it.

Team: But that adds up to 65 story points and we’ve only managed 45 during our last two sprints.

PO: Yes, but remember that Peter was home sick for three days last sprint and John had to stay home with his kids for two days the sprint before that. You should be able to make it. This is no time to under-commit.

Team: Okay. We will commit to the top 18 stories then.

So in this situation we have a PO or PM that’s pushing the team to take in more work than they should. This is not the first time it’s happening so the team has a history of missing their previous “commitments” and have a disadvantage when trying to convince the PO that they should do even less. The team might even feel bad about not meeting their previous commitments and have some delusion about being able to catch up.

Next time we meet the Galley Slaves is mid-sprint.

Team: It seems like Peter came back from sick leave a bit premature. He had to go back home again right after sprint planning. It also looks like John caught the same thing because we haven’t seen him since then either. We will never be able to finish all of this on time.

PO: But you made a commitment to these stories …  I’ve communicated that you would do this to upper management. It looks like you’ll have to work this weekend now.

At this point we get to see even more push from above. The team has made a promise and other people will get into trouble if they don’t meet it.

But who did actually make the commitment? Usually in cases like this, one part of the organization has made promises to another part of the organization and now people whose butts are on the line try to shift the blame further down the organization.

The Flagellants

The second team I’d like you to meet is called the Flagellants. The flagellants were a medieval brotherhood who thought they could scourge themselves to absolution. In 1417 the church banned the flagellant movement but they are still very much alive in modern-day corporations.

Let’s see how sprint planning goes for the Flagellants.

PO: This is our prioritized backlog. I’d like for you to pick from the top the stories that you think you’ll be able to finish during this sprint.

Team: Okay, we’ll have to bring in at least the top 18 stories if we’re going to make the next milestone in two sprints.

PO: That will be 65 story points and you’ve only managed to deliver 45 points per sprint the last two sprints. Do you really think you can make this?

Team: If we are to get everything into the next release, we will have to go for it. So we will commit to these 18 stories.

This team obviously feels a strong responsibility for the entire product. As with the Galley Slaves, the Flagellants have a history of not meeting their commitments and they really want to make up for it. Every time.

Now we meet the Flagellants mid-sprint.

PO: So, how’s your commitment coming along? Your burndown chart has been flatlining for three days now and it was quite flat even before that.

Team: Yeah, but we’re on top of that. Peter has cancelled his vacation and we’ve all decided to work Saturday as well so we will catch up. You know we had some database problems at the beginning of the sprint but it looks like we will be able to solve those now.

This team is full of optimists, no doubt about that. They always believe that the last hurdle has been passed and everything will be downhill from now on. They also make their commitment at the wrong level. A sprint-level commitment can be contained and controlled to quite a large extent by the team, but any promises made by the organization at release or product level are outside of the team’s control.

Team Alfa

Finally I’d like to present you with team Alfa. These are my favorites. Let’s watch their sprint planning.

PO: This is our prioritized backlog. I’d like for you to pick from the top the stories that you think you’ll be able to finish during this sprint.

Team: We have picked these 12 stories. They correspond to our velocity during the last three sprints and we feel confident that we will be able to finish them. This is our commitment and we will do our best to deliver these stories during the upcoming sprint.

This looks healthy to me. The team is looking at their history and they make a commitment that is both feasible from a historic perspective and it is a commitment that they actually believe in themselves.

But alas, even team Alfa can run into troubles. Let’s listen in on them mid-sprint.

Team: Unfortunately we’ve fallen behind since this one story called for changes in a database outside of our control. We didn’t see that one coming. We have tried to work around the problem; Peter stayed here until eight o’clock last night but this will take quite some time to handle. We don’t think that we’ll be able to do the last two of the stories that we initially committed to anymore.

PO: Okay. Let me take the last two stories back to the backlog and re-plan them for the next sprint. In the meantime you do your best on the rest. Does that sound like a plan?

We will always make estimates that don’t come true. There will always be unforeseen events happening. What we need to do is to accept this as a fact and adjust our game to the reality.


What will happen to teams like the Galley Slaves and the Flagellants in the long run?

  • We will get quality issues. When we start looking at commitments as truly fixed, our sprints become miniature projects with fixed time, fixed resources and fixed scope. The only dimension left for the teams to compromise with is quality.
  • Disappointments. The teams will get disappointed in themselves for not meeting their commitments and they will lose credibility towards the rest of the organization.
  • Burnout. No one is able to handle this much negative pressure and overtime in the long run. The Flagellants are in the worst positions since they really don’t have anyone to complain to.

The reasons we ask for commitments are that we want to be able to look into the future for planning purposes, and that it gives the team a good intermediary goal to work towards. That is: a forecast of what will be ready when, and the motivational factor of having clear expectations set by oneself.

What we need to remember about these commitments is that they are still based on estimates and estimates come with a confidence level. We need to inspect and adapt even within the sprints.

Everyone is entitled to their view on what a commitment is, but there are two parts of the agile manifesto that (in my eyes) trump any personal interpretations of the term commitment:

“Responding to change over following a plan.”


“Agile processes promote sustainable development. The sponsors, developers, and users should be able
to maintain a constant pace indefinitely.”
