Scrum, Kanban and Unplanned Work

A colleague sent a link to an article, published in one of the popular online IT magazines, to a group of us and asked for comments. My goal here is not to criticize or fix that article, but to clear up the misconceptions it left some of my colleagues with.

A Scrum team plans stories for the sprint ahead and starts working. Very soon, some unplanned work arrives: technical support interruptions, bugs, requests from other teams that depend on this team, and so on. If the team is unwilling to interrupt their planned work on their user stories, asks their Scrum Master to “protect” them, and the Scrum Master succeeds at that, then some clearly urgent and important work doesn’t get done, leaving several stakeholders frustrated. If the team interrupts their sprint, that jeopardizes delivery of the planned user stories by the end of the sprint, leaving the Product Owner and some other stakeholders frustrated.

Then — honk if you have never seen this happen — somebody says: “This is the nature of our work. We cannot honour our timeboxes. We have to admit Scrum is not for us. We need to switch to Kanban!”

Things are much better in the imaginary Kanbanland. Whenever an urgent, important issue occurs, the team places a ticket into the Expedite lane on their Kanban board and swarms on it. The issue gets resolved very quickly. Dynamic prioritization happens all the time, so some items pass other items in the “flow” so that they can be done faster. The work items they pass get done a little slower, but still on time. Everybody is happy. Someone has this vision in their head and sells it to their colleagues. This has, of course, never happened to anyone you know.

Switching from Scrum to Kanban, Missing the Most of Both?

Embedded in this naive Kanban sales pitch are several misconceptions:

  1. that Kanban has some magic way of dealing with unplanned work and emergencies
  2. that Kanban makes it more convenient to deal with them
  3. that Scrum is too inflexible with its timebox and cannot accommodate unplanned work

There is no expedite magic in Kanban

Kanban board with questions about allocating planned and unplanned work

Drawing an expedite lane on a Kanban board doesn’t magically ensure that unplanned or expedited work gets done. What many Kanban practitioners actually do is allocate part of their capacity to unplanned/expedited work and the rest to planned work. They design explicit policies for sequencing work items belonging to different classes of service. That’s how their unplanned work gets done. Their board simply makes the allocation explicit.

The point of Kanban is actually to get rid of such work, not to keep doing it indefinitely through magic or skill. Kanban, when practiced at the depth that includes quantitative flow measurements, makes the economic cost of expedites explicit. Case studies presented at major conferences show that this cost is actually quite heavy. Monte Carlo simulations based on real-world project data confirm this finding and provide its mathematical basis. For example, in his LSSC’12 talk, Dan Vacanti showed an example where a 10% capacity allocation to expedites roughly doubled lead times for the standard class of service. Another analogy is a suburban highway closed during rush hour to make way for the presidential motorcade – when will the commuters get to work?
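Dan Vacanti’s numbers came from real project data; for intuition only, here is a minimal toy simulation (my own sketch, not his model) of a single team that finishes one unit of work per day and lets expedite tickets jump to the front of the queue. The arrival rates, the item size and the random seed are all made-up assumptions; the exact numbers will vary, but the direction won’t: a small stream of expedites inflates standard lead times far more than its share of the demand suggests.

```python
import random
import statistics

def mean_standard_lead_time(days=20_000, std_rate=0.15, exp_rate=0.0, item_size=5):
    """Toy flow simulation: one unit of work gets done per day;
    expedites preempt whatever is at the head of the queue."""
    random.seed(1)                      # same arrival pattern for both runs
    queue, lead_times = [], []          # queue items: [remaining_work, arrival_day, is_expedite]
    for day in range(days):
        if random.random() < std_rate:
            queue.append([item_size, day, False])
        if random.random() < exp_rate:
            queue.insert(0, [item_size, day, True])   # expedite jumps the queue
        if queue:
            queue[0][0] -= 1
            if queue[0][0] == 0:
                _, arrived, is_exp = queue.pop(0)
                if not is_exp:
                    lead_times.append(day - arrived + 1)
    return statistics.mean(lead_times)

print("standard work only:        ", round(mean_standard_lead_time()))
print("with a stream of expedites:", round(mean_standard_lead_time(exp_rate=0.02)))
```

The point is not the particular numbers; it is that the standard class of service pays for every queue jump, which is exactly the economic cost that the quantitative Kanban practices make visible.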

In the Corbis 2006-2007 case study, which is one of the case studies in the book every Kanban practitioner ought to have read by now, the arrival rate of expedite tickets was reduced to five per year for a company of 1300 employees, including over 100 in the IT department. (It may not be easy to connect these two numbers because the expedite statistics are mentioned on page 227, while the size of the company is described on page 52.) Gradually reducing restorative demand is the direction to go, regardless of process and improvement frameworks. Pursuing this direction with Kanban requires a certain depth of practice of the Kanban method, especially in the areas of demand analysis, measuring flow, Katas and models. Simply visualizing team workflows and controlling multitasking with WIP limits, which is what many teams “switching from Scrum to Kanban” for convenience end up doing, is not likely to be enough.

Scrum doesn’t tell you to defy common sense

At the same time, unwillingness to set aside planned work and switch to a clearly urgent, important issue, insisting instead on “honouring” the sprint plan and scheduling the urgent work for the next sprint, strikes me as a dogmatic interpretation of the Scrum framework. Scrum gives teams a process and an improvement framework, but it doesn’t tell you to defy common sense. If an unplanned work item shows up and there is no economic cost to not doing it and no benefit to the business or clients in doing it, then common sense says: delete it from your backlog and don’t do it. Otherwise, and this is more likely, there is some economic cost to not doing it, because it is very beneficial to your clients and your business (in Kanban we call this the cost of delay), and that means you have to find and allocate capacity to do this type of work quickly. How much capacity does your unplanned work require? Do some team members need to be relieved of planned user-story work for the sprint so that they are always available to handle the urgent unplanned work? How many, and for how many days? What does it take? What does the cost-of-delay vs. cost-of-capacity tradeoff look like for this team?

When the team plans their sprint, they should use only their planned-work capacity to select the user stories for it. This capacity measure may also be called the sustainable pace, the focus factor, and so on. They can then use that capacity to do their planned work in an uninterrupted, timeboxed fashion. If they plan against their total capacity, they are counting on magic to happen. There may be one problem with this approach: the planned-work capacity may turn out to be laughably small, because unplanned work dominates. Say the Product Owner expects the team to pick up 10 stories for the sprint, but they pick up only 3 and say: we need the capacity that could have gone to the other 7 stories to do all the unplanned work that we know will occur. PO: “WTH?”
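For teams that want to make this allocation explicit at sprint planning, the arithmetic is simple. Here is a minimal sketch; every number in it (days lost to unplanned work, team capacity, average story size) is a hypothetical placeholder to be replaced with your own team’s history.

```python
# Hypothetical inputs; substitute your team's actual history.
unplanned_days_per_sprint = [8, 11, 7, 10]   # person-days lost to unplanned work, last 4 sprints
total_capacity_days = 40                     # e.g. 5 people x 8 working days per sprint
avg_story_effort_days = 3                    # historical average effort per story

avg_unplanned = sum(unplanned_days_per_sprint) / len(unplanned_days_per_sprint)
planned_capacity = total_capacity_days - avg_unplanned      # capacity left for user stories
focus_factor = planned_capacity / total_capacity_days

stories_to_plan = int(planned_capacity // avg_story_effort_days)
print(f"focus factor ~{focus_factor:.2f}, plan roughly {stories_to_plan} stories")
```

If the resulting number of stories looks laughably small next to the Product Owner’s expectations, that is not a calculation problem; it is exactly the retrospective material discussed next.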

If the team’s planned-work capacity is so small, then they have two great questions for their next retrospective. Why is there so much unplanned work? What can we do to reduce the arrival rate of unplanned work? Scrum is both a process and an improvement framework. No matter what happens within the Sprint timebox, there is a retrospective at the end, where the team can identify and take some improvement steps. If unplanned work causes a lot of pain, ask those two focused questions.

Conclusion: Kanban for Improvement, Not for Convenience

Kanban can provide options for improvement. However, the powerful improvement-focused questions have to be asked regardless of the chosen improvement framework. If these questions are not being asked, that’s a problem. Those making the “switch” for convenience may be ignoring this problem and running the risk of missing the improvement part, which is the point of the Kanban method.

Posted in hands-on | 7 Comments

The #NoEstimates Game Debut at #ACCCA13

I’m continuing the series of reports from the 2013 Agile Coach Camp Canada that I started with the previous post.

The opening night of the camp was the games night, and its highlight was definitely the debut of Chris Chapman’s new game, which he had recently designed to demonstrate that estimation of upcoming work can sometimes get in the way of delivering value to customers. This game was inspired by Woody Zuill, who leads the informal #NoEstimates movement, which challenges software development practitioners to get out of their comfort zones and question the value they get out of their estimation activities. (Here is the collection of Woody’s blog posts on this topic.)

Chris and I had met for dinner in Toronto a week earlier, and he showed me the design of his game. We discussed it and I suggested a few improvements. He included them in the game design, purchased the materials, ran some experiments during the week, and invited me to help him facilitate the first run of the game at the coach camp.

The San Francisco Painted Ladies

The game participants formed two teams, each receiving the same project: to assemble a jigsaw puzzle. Chris and I chose the picture of the San Francisco Painted Ladies as our jigsaw puzzle picture, because it had many distinct architectural and landscape elements in it, giving the teams’ product owners lots of choices in defining value. The picture in the actual puzzle was a little bigger than the picture above and included a lot of green (grass) and blue (sky) pieces, which Chris removed from the puzzle boxes to simplify the project.

One of the teams had to skip estimation and simply try to get some work done. They would immediately ask their Product Owner (John MacIntyre played this role during the coach camp) what area of the puzzle he wanted them to work on and start assembling it right away. They would demonstrate progress to the Product Owner frequently and ask for feedback. If the Product Owner was satisfied with that area of the puzzle, he would give them another area to assemble next. The team and the Product Owner would keep going until we all reached our time limit.

Team 1 has assembled the top floor of one of the "Painted Ladies" houses

The other team was the control team in this experiment and had to follow a more traditional iterative Agile approach. They had an Iteration Zero first to populate the product backlog with a few “stories” that the Product Owner (Maria Kouras played this role) identified as the most valuable. After relative estimation of the stories using planning poker and quick prioritization of the backlog by the Product Owner, the team would select some stories into their Iteration One and start the work. The iteration was supposed to be followed by a quick retrospective and some necessary adjustments. The Product Owner would get an opportunity to reprioritize the backlog at that point. The team and the Product Owner would then keep repeating their iterations until the end of the game.

Both Product Owners were told that their definitions of value might change during the game, simulating the real-world possibility that customers (and the Product Owners representing them) can change their minds about what they want after they have seen the first delivered increments of the product.

Team 2 has not yet assembled anything

Chris facilitated the first team and I facilitated the second, so I can tell you mostly what was happening on that second team. The second team used their Iteration Zero to create several items in the backlog and estimate them on a Fibonacci scale, mostly in the 8-13-point range. After the Product Owner stated her priorities, the team decided they could select only one 8-point story for their first sprint. They could not deliver it during the sprint, the Product Owner didn’t accept anything, and it was clear that the stories were simply too large: what we in Agile software development would call epics. The team broke the story down into several smaller stories and, to estimate them, chose to skip planning poker and simply assume that each new story split off the original 8-point epic was worth 1 point. Their velocity in Sprint 2 was finally above zero, but it was exactly one story point.

The first team clearly managed to assemble a larger portion of the puzzle than the second team, but the real punchline was delivered by Yehoram Shenhar who played on Team 2: “Which team has learned more?” He was not suggesting it was his team. Here are the final results achieved by the teams by the end of the game:

The first team's result - much more delivered
Team 2 result

I noticed during the game that Team 2 ran all the Scrum process elements (writing user stories, prioritizing the backlog, playing planning poker) very smoothly, because they had several very experienced agilists on the team. My initial conclusion was that we needed such people on the “traditional” team in order to run the game. However, when Chris brought this game to our Agile user group gathering in Kitchener ten days later, we had less experienced players on this team, but the results were pretty much the same.

Finally, you should read Chris’s own blog posts with reflections on this game! (And also his follow-up posts here and here.)

Posted in conferences | 3 Comments

Open Space Without Proposals

The fourth annual Agile Coach Camp Canada took place in Toronto last month and was once again a great event for learning and sharing knowledge. I am starting a series of posts about what went on there, as I want to blog it before it fades from my memory.

I took a different approach to participating in open space sessions this year and explained it during my lightning talk on the opening night. I actually gave three lightning talks instead of one and fit them within two minutes. The last of the three, “the lightning talk about nothing”, was to explain this approach.

"The lightning talk about nothing" at ACC

“The lightning talk about nothing” at ACC

People come to conferences to have conversations, learn and share. I want to learn something and hope to meet someone who has knowledge of the subject so that I can learn from them. At the same time, someone wants to learn about a different subject that I happen to know something about. How can we find each other, how can we find time and place to talk about it, how can I tell them something about this subject that is useful to them?

Open space provides a marketplace where we can find each other, and it deals with the first two problems effectively. If you want to discuss something, you can propose a topic. There are usually 7 or 8 time slots at each coach camp (the main open space part lasts a day and a half) and many parallel tracks, so we can easily have 50-60 sessions on a variety of topics. We cannot cover everything before everyone is exhausted at the end of the camp. When it’s over, it’s over, and what happened is the best thing that could have.

People who took part in the same open space events with me during the last four years know that I always propose something, and sometimes more than one session during the same open space event. This time I decided to change that and propose nothing. I was not satisfied with the status of the third problem: how can I tell other participants something about a subject that is useful to them?

Knowledge sharing and exchange can take place at any moment of an hour-long (the typical duration) session. We can get to that moment in many ways and from many starting points. Proposing a topic in open space puts a label on our potential conversation and sets its starting point. The label may attract some people and repel others. So the theory I wanted to test during this coach camp was: if I refrain from proposing topics and labeling them, the conversation could start with somebody else’s proposal, based on their needs. If I then have anything useful to say from my experience and knowledge, it would come in the middle of a conversation started by them, in their context, and thus be more useful to them.

How Did the Experiment Go?

I think the test went pretty well. There were a couple of sessions where I could offer a lot of advice, but not starting with my labelled offering was key. We discussed Scrumban in one of these sessions. The coach who proposed it worked with a Scrum team that had some problems, and they were looking to find improvement options with Kanban. I could simply sit there and listen for more than half of the hour-long session about their situation. When I finally started talking, I was answering their concerns directly. We made some terminology fixes and reinforced the principles. I drew the Kanban depth model and we used it to discuss practices where adding depth might help in this particular situation. If I had started this session by proposing “Scrumban”, we would likely not have made such a connection. Another session where I ended up sharing a lot was the Waste-Watchers Anonymous, where I managed to show the group how we can do better than reducing Lean to “eliminate waste” and how much more there is to it, especially in knowledge-work fields. But I had to let someone else propose the topic, then sit, listen and understand where people were coming from.

Aside from those sessions, I went to even more sessions purely as a listener. I finally got an introduction to Temenos and understood what it is about (thanks to Michael Sahota), and got some professional coaching insights from Sue Johnston. Declan Whelan’s brother, Paul Whelan, who is an architect (not a software architect), led a session on architecture: how he deals with clients, critics, engineers, contractors, concerned citizens, etc., and how work works in his world. Ellen Grove started a swashbooking session (the term was coined by James Marcus Bach, a famous self-taught software tester, in his book The Buccaneer Scholar), where I discovered previously unfamiliar ways of visual facilitation.

And there was something else that I now forget. But if I had one or two sessions to go to because I proposed them, I would have certainly missed several of those listening experiences.

Conclusion

Of course, we couldn’t run open space if everybody decided to follow a #NoProposals approach. Most participants have to propose their topics, otherwise we can’t have the marketplace. But I definitely recommend that you try my experiment the next time you go to an open space: propose nothing.

Posted in conferences | 4 Comments

Quotable Zomblatt

Zomblatt is a social media meme that started in 2012. It has helped people understand and reflect on various aspects of Agile/Lean through small doses of humour, usually 140 characters at a time.

The best way I can think of to define Zomblatt is through quotes, so I’m giving a number of them here. Various people have said these things, mostly about a year ago. I don’t remember exactly who said what, so I can’t attribute them accurately, but I hope the Zomblatt regulars don’t mind. So, here goes:

  • Zomblatt team members can only speak when their coach finishes speaking and gives them permission.
  • There is no concept of the last responsible moment in Zomblatt. There is never an irresponsible moment.
  • Zomblatt cannot be adopted, only installed.
  • You can apply Zomblatt to any process, but you can’t apply it to Kanban, because Kanban is not a process.
  • Zomblatt is not only about the “process stuff.” Zomblatt cares about technical practices, such as Development-Driven Testing and Examples through Specification.
  • Zomblatt didn’t repeat Agile certification mistakes and certified all certificate-granting bodies first. Zomblatt Alliance, Zomblatt.org, Open Zomblatt are all certified.
  • etc.

Bill Lumbergh, ZBBBS


The highest certification level in Zomblatt is ZBBBS (Zomblatt Black Belt and Brown Suspenders). Bill Lumbergh of Initech Corp. was the first to attain it.

Feel free to add your favourite Zomblatt quotes in the comments!

Posted in Uncategorized | 2 Comments

#LKNA13 Wednesday in Tweets

From Douglas Hubbard’s keynote:

From the morning sessions:

From the ignite talks:

From the Brickell Key Award ceremony:

From the afternoon sessions:

And the summary:

And the preview of the next year’s event:

Posted in conferences | Leave a comment

#LKNA13 Tuesday in Tweets

From Stephen Parry’s keynote:

From the morning sessions:

As a side note, here is a small catalogue of these injuries:

  • Schedule Stress
  • Torn Trust
  • Termination Tension
  • Fear of Failure
  • Lost Respect
  • Brain Hernia (really complicated code)
  • Duplication Depression
  • Fragility Frustration
  • Merge Misery
  • Crushing Complexity
  • Outage Ordeal

From the ignite talks:

From the Monte-Carlo challenge:

From the afternoon sessions:

Observations:

And last, but not least:

Posted in conferences | Leave a comment

#LKNA13 Monday in Tweets

From Bob Lewis’ keynote:

About the unique, emergent scaled Agile/Lean/whatever system of work at Spotify:

Ignite talk highlights:

The announcement of the 2014 plans:

From the afternoon sessions:

Posted in conferences | Leave a comment

#NoEstimates Discussion at Agile Open Toronto

I was at Agile Open Toronto a week ago, and this post is a short report and follow-up on the “No-Estimates” session that took place there. The session was proposed and led by Chris Chapman, and 23 people, about one-fourth of all open-space attendees, showed up for it.

I believe Chris’ proposal was motivated in large part by Woody Zuill’s story of no-estimates. Also check out Woody’s story of Mob Programming, a novel collaboration technique that is an important part of his system.

The start of the discussion was uninspiring as we recycled often-cited notions such as:

  • Estimates are fragile, they are really “guesstimates.”
  • Estimates are not reliable if the timeframe is longer than one month. (Yeah, right, as if all Scrum teams always meet their two-week sprint commitments.)
  • More spin of “guesstimates” such as “wild-eyed guesses.”
  • Estimates are useless, but the process of estimation is useful to the team. (So, what prevents the team from discovering that useful knowledge without an exercise aimed at producing a useless number?)
  • There is value in the team’s talking about each story. (What? They don’t talk about it otherwise, on any other occasion?)
  • There is a correlation between the team’s cohesion and the quality of its estimates. (A reasonable objection, which didn’t stand up to scrutiny.)

Understanding It Deeper

The discussion eventually reached the point where some people started talking about deeper issues surrounding estimation. This is when, I think, the session became really valuable to the participants.

Trust. Estimation correlates with a low-trust culture. In a higher-trust organization, estimating the time to carry out this or that activity quickly loses its value. Business stakeholders can make decisions on what products and features to pursue based on their prudent assessments of risks and value, and they are trusted to do that. At the same time, technologists put their best effort behind the few most valuable products and features. Frequent delivery and continual “doneness” help maintain trust. Trust enables everyone involved to understand together that effort estimates have little correlation with delivery lead times and that “the time it takes” is never a number, but a probability distribution.

Decision-making. Effort estimation is pervasive in the software industry because it is assumed that if only we could obtain an accurate enough number, we could accurately calculate ROI, prioritize backlogs, make good promises to customers, and so on. However, “just give me the number” hides the reality that what we really need is not numbers, but decisions. Decisions on what to work on and in what sequence. How many bits of information do we need to make a decision? What information about the value of a new feature, or a bug fix, or the associated risks is already available? When people start pondering such questions, it often turns out that a lot of this information is already available and a precise effort estimate adds little to the decision. On the other hand, “just give me a number” often hides a weak understanding of the other factors involved in the decision and a weak decision-making capability.

Sizing features. T-shirt sizing was cited during the discussion as a way to estimate features to an order of magnitude and thus provide some useful bits of information to the decision-making process. It was quickly noted that T-shirts are not a great way to communicate the difference in sizes and that a lizard-based scale may be better for it. One of the participants gave an example of how, in her small software company, the development team established a steady delivery of iguana-sized features. The business stakeholders no longer need to ask the product developers to estimate each “iguana”; instead, they analyze risks, experiment to understand market value, and choose which iguana to work on, connecting the decision to the economic outcomes. They may even take a risk on an occasional Komodo dragon!


We didn’t get into exposing ROI as a vanity metric, or into what specific risk-management techniques can be used in place of deterministic estimates. So, to finish the post, I’d like to summarize some of the most common, well-established approaches that the Kanban community has put into practice with great success in recent years.

  • Make the value creation process explicit. This is no trivial matter – most Kanban boards are different, most Scrum boards are the same.
  • Establish a reliable delivery cadence that is economically sensible and helps maintain trust.
  • Make the commitment point explicit. Hint: if “backlog grooming” is part of your vocabulary, you don’t have an explicit commitment point.
  • Limit work in progress, which has the effect of transferring information and risk management upstream.
  • Treat everything upstream of the commitment point as an option. Options have value, options expire, don’t commit too early unless you know why.
  • Improve not only the production capability, but also the capabilities to replenish options and select and sequence commitments.
Posted in conferences | 1 Comment

T-Shirts, Rabbits, Lizards and Sizing Software Features

I was at Agile Open Toronto last weekend, which included a no-estimates session. That session and the open-space conference itself deserve separate blog posts, but for now I want to cover just one set of concerns, which relates to the sizing and estimation of software projects, features, user stories, epics, and other software work items.

T-shirt sizing (Small, Medium, Large, eXtra-Large) is a quick and easy method to roughly assess the size of a proposed software feature. My observation is, however, that many software people don’t appreciate the size scale that sizes S, M, L, and XL are supposed to represent. Relating it to a familiar everyday item like a T-shirt may add to the confusion. It is not obvious that the only thing the T-shirt feature sizing method has in common with wearable T-shirts is the size labels.

I decided to do a little experiment and found several road race T-shirts I collected over the years, such as this one:

(Photo: the Medium race T-shirt)

The shirts came in three different sizes and the organizers got them from the same vendor, so the sizing had to be consistent. The Medium shirt (shown in the picture) measured 21 1/2 inches across, the Large 23, and the X-Large 24. Assuming volume is proportional to the cube of the linear size, a Large shirt can fit a body about 22% bigger than a Medium one, and an X-Large about 14% bigger than a Large.
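For anyone who wants to check that arithmetic, here is the cube-law calculation with the measurements above (the cube assumption itself is, of course, only a rough model of how shirts fit):

```python
# Chest widths in inches, measured on the race shirts described above.
medium, large, xlarge = 21.5, 23.0, 24.0

# If the body a shirt fits scales with the cube of its linear dimension:
print(f"L  vs M: {(large / medium) ** 3 - 1:.0%} bigger")   # roughly 22%
print(f"XL vs L: {(xlarge / large) ** 3 - 1:.0%} bigger")   # roughly 14%
```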

Here lies the danger of confusion, especially if we take this talk of T-shirt sizes outside the development team to the business and forget to explain that “t-shirt sizes” is only our technical jargon, and what it actually means. We may be mistakenly communicating that the relationship between two sizes is “fits a little tight.” Now imagine what the business may ask of you late in the project based on such an understanding of sizing.

The Fibonacci Code

Another T-shirt-size confusion I’ve observed is taking the size labels as proxies for the user story sizes used in the planning poker game. The most popular sizing sequence in this game is based on the Fibonacci series, in which every number is approximately the sum of the previous two: 1, 2, 3, 5, 8, 13, 20, … Some people mistakenly assume that Small can mean 2, so that 3 is Medium, 5 is Large, and so on. That again fails to communicate the size scale properly, by suggesting that the next bigger size is less than twice as big.

While your team plays planning poker and uses numbers from a small range (e.g. 2-5, or 1-8 at the most), you’re only making more refined estimates within the same T-shirt size. When people start raising numbers like 13, 20, or 40, you’re into the next T-shirt size.

The Power Law

What T-shirt sizes are really intended to communicate is that the features are of different orders of magnitude. The next bigger size is really several times as big and outside the normal variation range of its smaller neighbour. The rationale for such a power-law size scale is to keep categorization mistakes to a minimum. Fitting a proposed feature into a range on the power scale allows us to establish probabilistic estimates based on the history of our delivery of features within that range.
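What such a probabilistic estimate might look like in practice: take the recorded lead times of features that landed in one size band and read off percentiles, rather than estimating each new feature individually. A minimal sketch, with entirely made-up historical data:

```python
import statistics

# Hypothetical lead times, in days, of previously delivered "Medium" (iguana-sized) features.
medium_history = [6, 9, 7, 14, 8, 11, 10, 7, 21, 9, 12, 8]

cuts = statistics.quantiles(medium_history, n=20)   # 5% steps, 19 cut points
p50, p85 = cuts[9], cuts[16]                        # 50th and 85th percentiles

print(f"Half of our Medium features were done within {p50:.0f} days;")
print(f"85% of them were done within {p85:.0f} days.")
```

An 85th or 95th percentile like this can then serve as a service-level expectation for that class of work, which is far more honest than a single deterministic number.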

Software teams and organizations have done this successfully over the years. To give two examples of case studies, I would point the reader to Rick Simmons’ Upstream Kanban and Henrik Kniberg’s Lean from the Trenches. What these case studies have in common is that the teams came to realize that the extra-large and large sizes don’t really work for them, because they naturally have a lot of variation, making the delivery too unpredictable and, from the business point of view, often a risk that is not worth taking. Further, each T-shirt size essentially represents a class of service, and a recent inquiry into the economics of classes of service concluded that each of them carries an economic cost; it is therefore possible to have too many classes of service, so that their cost exceeds the benefit and sub-optimizes the system.

I don’t remember who proposed the Lizard Scale several years ago, but I think it was Jeff Patton. The scale is: gecko, iguana, Komodo dragon, Godzilla. Each animal is clearly bigger than, and in a different class from, the previous one. This scale is essentially the same as the T-shirt scale, but without the confusion that referring to T-shirts may cause. Teams may or may not want to use this scale literally, as long as they and their business stakeholders understand what the labels S, M, L and XL mean.

Posted in hands-on | 4 Comments

Scrum Commitments, Little’s Law, and Variability

I have recently had a discussion with a Scrum Master whose team was struggling quite a bit, completing exactly zero stories for two straight iterations. This problem is often framed as overcommitment: how can we make the team commit to what they can deliver and deliver what they have committed to? The Scrum Master was also a recent graduate of an Agile training program, as I could tell by the very revealing language: “teaching the team a lesson”, “honouring commitments”, and looking at the Scrum Guide first and at the situation second, trying to link the problem to “violations” of Scrum rules.

Let’s re-frame the problem first. This is not about coaching the team to “perform.” Even the concept of a “team” is not really helpful here. What we are dealing with is an ecosystem: one part of it is the team, other parts are its customers, partners within the organization, and various stakeholders. Some of these interests may be represented by the Product Owner (although how the Product Owner can be a leaky abstraction, and other criticisms of this role, is a wholly different topic). The problem is not how to make the team execute on some tasks, but how to understand the way this ecosystem, as a system, co-evolves to create value by delivering its software. Our job is to understand the system’s capability and improve it continually and forever.

Before we embark on that improvement journey, it would be reasonable to ask what “laws of physics” our system may be governed by.

One such law is Little’s Law from queuing theory. If we have conservation of flow (stories enter the system at sprint planning and exit at the demo), then the amount of work in progress divided by the completion rate must equal the average cycle time.

Cycle Time = Work in Progress / Throughput

If you do one-week sprints, your minimally acceptable throughput rate is 0.2 stories per day, representing a velocity of one user story per sprint (five working days). Since the amount of work in progress is greater than or equal to 1, Little’s Law quickly establishes that the best cycle time we can achieve in this case is 5 days. Fortunately, this cycle time fits into the one-week sprint, although it leaves no room for error. We can say at the very least that the cycle time must consistently be no more than 5 days, and ideally 4, to account for the transaction costs (such as sprint planning and the demo).

If the team multitasks (carrying several times more WIP than necessary), its cycle time increases linearly, making it likely the work will not fit into the four-day budget. For example, when the cycle time was three days, they met their commitments. Double the WIP and the cycle time becomes six days, and they are now missing commitments. When human beings are involved, task-switching also decreases throughput while WIP increases, making the cycle-time growth worse than linear.
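The linear part of that relationship is just Little’s Law rearranged; here is a trivial sketch using the numbers from above (and remember that real humans also lose throughput when they task-switch, so reality is worse than this):

```python
def avg_cycle_time_days(wip, throughput_per_day):
    """Little's Law: average cycle time = WIP / throughput."""
    return wip / throughput_per_day

throughput = 0.2   # one story per five working days, as above

for wip in (1, 2, 3):
    days = avg_cycle_time_days(wip, throughput)
    verdict = "barely fits the sprint" if days <= 5 else "does not fit the sprint"
    print(f"WIP = {wip}: average cycle time = {days:.0f} days ({verdict})")
```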

Another force affecting the team is variability. Cycle time is not really a number, but a probability distribution. People tend to underestimate its variance and the effects of that variance. I hear quite a few Scrum Masters and coaches talk about velocity as a “hard number” or a “meaningful metric”; they miss these metrics’ probabilistic nature.

(Chart: cycle time is not a number, but a probability distribution)

In order to meet weekly sprint commitments, you need the upper control limit of the cycle time to be no more than four days. For example, if the cycle time has a Gaussian distribution with a mean of 2 days and a standard deviation of 0.5, the three-sigma control limits are 0.5 and 3.5, and you have a nearly 100% probability that a story will be delivered within the sprint. If the mean is 4 days and the distribution is symmetrical, you have a 50% probability of missing your sprint commitment.
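To put numbers on that, here is a quick calculation of the probability of blowing the four-day budget, assuming (generously, as the next paragraphs explain) normally distributed cycle times. The mean-2 and mean-4 cases are the ones from the text; the mean-3 row is an extra data point added for comparison.

```python
from statistics import NormalDist

budget_days = 4   # effective time available within a one-week sprint

for mean, sigma in [(2.0, 0.5), (3.0, 0.5), (4.0, 0.5)]:
    p_miss = 1 - NormalDist(mu=mean, sigma=sigma).cdf(budget_days)
    print(f"mean {mean} days, sigma {sigma}: P(story misses the sprint) = {p_miss:.1%}")
```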

Delivering half of the sprint commitment

Even if my example of an average cycle time of 2 days with a sigma of 0.5 helped anyone start thinking probabilistically, it is too good to be a real-world example. It was only an illustration; real-world distributions are a lot worse!

They are almost never normal (Gaussian); lognormal, exponential and “unique” distributions are much more common, and the standard deviation is a greater fraction of the mean as well. The expansion-of-work phenomenon and the asymmetry of the distributions ensure that you almost never benefit from “less-than” outcomes, but always pay the price of “longer-than” ones.

Options for Improvement

There are two basic options for improving due-date performance. One is reducing variation. Here are the main culprits that contribute to increasing it:

  • multitasking
  • story size
  • various blockers (due to multitasking, new technologies, and other sources)

The other option is to reduce the average story size, which is doubly beneficial because it tends to reduce the harmful component of variation as well. As you can see from the made-up example (average 2, sigma 0.5), committing to stories even as small as half the sprint length assumes unrealistically low variability.

As a rule of thumb, divide the sprint length by a factor of 4 to 6; that’s how long the average duration (not the development effort) of your stories should be. Err on the side of making your stories smaller, using XP techniques like Product Sashimi and Elephant Carpaccio.

Lastly, the Scrum Master had received some advice to increase the sprint length, but was reluctant to do so. I agree with him on that. I have seen teams take this step as a matter of convenience; they ended up covering up problems and avoiding the deeper learning they needed to go through. Instead, keep the week-long sprints, reduce work in progress, and use the actual cycle-time and velocity data to guide improvement.

Posted in hands-on | 11 Comments