## Cycle Time Revisited

The cycle time has been a hot topic for me lately. A debate is going on on kanbandev.

Meanwhile, someone at a client site has asked my advice on continuous delivery (of software) and whether “the cycle time” is a useful metric for it, based on an article they found somewhere on the Internet and read over the weekend. The article was written purely at the practice level, without context; the author assumed that whatever cycle time meant to them was “the cycle time” and didn’t bother with any definitions. That was a lot to sort out.

# There is no such thing as “the cycle time”

It is an overloaded term and its uses should always be qualified.

In the manufacturing domain, where there is a stable definition of cycle time and where the term can be used without qualifiers, it means something very different from how the author used the term. It means the (average) interval between successive deliveries. For example, if the cycle time of a car assembly line is 45 seconds that means, on average, every 45 seconds a new car rolls off (actually, at nearly perfect 45-second intervals, because there is very little variability in the manufacturing process). A total of 60 * 60 / 45 = 80 cars are produced every hour. (Note that the lead time to manufacture a car is significantly longer than 45 seconds.)

If we’re to adopt this definition in the software development domain, the cycle time means the reciprocal of the deployment frequency. For example, if a team demoes their user stories every two weeks, but actually ships only after multiple increments are integrated into a six-month release plan, then their cycle time is 6 months. If they ship at the end of every sprint, then their cycle time is 2 weeks. If they deploy 50 times a day on average, their average cycle time is approximately 29 minutes. The average cycle time of the software delivery process at Amazon is reportedly 11 seconds.
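This reciprocal arithmetic is easy to check (a minimal sketch; it assumes deployments spread over a 24-hour day, which is how the 29-minute figure is obtained):

```python
def cycle_time_minutes(deployments_per_day):
    """Average cycle time as the reciprocal of deployment frequency,
    over a 24-hour day."""
    return 24 * 60 / deployments_per_day

# 50 deployments a day on average:
print(cycle_time_minutes(50))  # -> 28.8, i.e. approximately 29 minutes
```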

# The Software G-Forces

Kent Beck, one of the eXtreme Programming pioneers, proposed a model called the Software G-Forces. He showed the scale of deployment frequencies (cycle times), from yearly to quarterly to monthly to weekly to daily to hourly or less. He also showed how the distribution of where software companies fit on this scale changed over time. There are no best practices for delivery. Whatever practices you’ve got are the right practices for delivering at the frequency you’ve got.

Speaking in terms of delivery frequency also avoids the terminological overload of “cycle time.”

# Cycle time and continuous delivery

Beck further contends that if you want to double or triple your frequency, doing so will likely break your existing process. You have to figure out which new practices to add to your process and which practices to part with. Addition and subtraction of practices will always be contextual. For example:

1. Delivering once every 6 months may involve little test automation, but if you want to deliver every 2-3 months, you will probably need to invest in test automation. The full manual regression test may take too long.
2. Going from once every few months to once every few weeks is likely to require automated provisioning of various testing, staging, certification and production environments. “Snowflake” servers may stand in the way.
3. Delivering once per two-week sprint or more frequently generally requires a properly designed, unit-testable code base with a good amount of unit test coverage. Are all developers fluent in the SOLID principles? There will be no trivial bugs for testers to catch – are they trained in exploratory testing?
4. Delivering more frequently than once a week tends to break processes based on time-boxed frameworks.
5. To deliver more than once a day, you may need to solve the bottlenecks in your deployment pipeline.
6. And so on – you can easily turn this into a very long checklist of practices.

# What about local cycle times?

Other uses of “cycle time” are possible in the software domain, but they all mean the time through a local activity (or a contiguous sequence of activities) and all need to be qualified by where they start and end (something the author neglected to do). As it turned out upon careful examination of the article still influencing my client, its author didn’t mean any of the above definitions. He meant the time from a code commit to seeing it in production: the local cycle time of the deployment pipeline. (In the Kanban method terminology, his clock started at the second commitment point.)

By optimizing the local cycle time of the deployment pipeline, the author was effectively solving Problem #5 from the long list above. This immediately raised the question whether Problem #5 was contextually relevant to my client. (The answer was, without giving away anything: to some teams, yes; to other teams, no.)

# Conclusion

Back to the original question: is “the cycle time” a useful metric to inform progress towards continuous delivery? Generally speaking, no, unless we can all agree on the definition, which is clearly not happening.

Meanwhile, using the delivery frequency can be much more productive, provided you avoid using it to rank your teams. Team A delivers once a month; Team B, once a day. Is Team B better than Team A? No. Is Team B working on what it will take to deliver three times a day, or are they just cranking out user stories? If the latter, that’s not so good. Is Team A working on what it will take to deliver every two weeks? If yes, great!

If you’re still looking for a best practice from this memo, here it is. It’s okay to deliver only twice a year out of a legacy codebase with mostly manual testing. It’s not okay to say: when we rewrite this codebase in a few years, it will be more conducive to deployments at the end of each sprint. It is also not okay to say: we’re deploying every day and are far ahead of those dinosaurs and their twice-a-year deployments. Instead, it may be preferable that each team have a goal to deploy 2-3 times more frequently than they do now. The leaders can communicate and set such expectations with the teams.


# Update (2018)

For the up-to-date version of this STATIK A3 paper, please see this more recent post. You can also download the paper from my firm’s website.

# The Original Post (largely relevant still)

I’d like to introduce a new simple tool, the STATIK canvas. STATIK is an acronym that refers to the Systems Thinking Approach to Introducing Kanban in Your Technology Business.

Kanban (in the knowledge work context) is an evolutionary improvement method. It uses virtual Kanban systems as a way to catalyze improvement and adaptive capabilities. A Kanban system is introduced into the environment which comprises the service delivery team and its customers and partners. This is a critical moment. Systems thinking is key!

It is not the goal of this post to explain STATIK, it is rather to introduce the canvas and let people download and try it. Therefore I’ll skip this explanation and encourage the readers to explore the key STATIK resources:

# Instructions

The proposed STATIK canvas is roughly the size of an A3 paper. It is intended to be filled in by pencil and capture only the most important stuff. The following are instructions by section.

## 1. Context for Change

This section captures the internal and external (from the customer’s perspective) sources of dissatisfaction and variability. Stories collected in this section often contain words that reveal work item types, hidden risk information, odd patterns of demand, unmet expectations (used in Section 2 – Demand Analysis), external and specialist dependencies (Section 3 – Workflow Mapping), and implicit classes of service (Section 4 – Classes of Service).

## 2. Demand Analysis

This section contains the demand analysis template introduced by Dave White. The following information is collected for each work item type:

1. Source – where the requests to deliver this type of work item arrive from
2. Destination – where the results of work are delivered to
3. Arrival rate. This must be a number of requests per unit of time. (“We have 300 items in the backlog” is not good enough. If you get this answer, ask where they come from and how.)
4. Nature of Demand – note any patterns.
5. Customer’s delivery expectations, even if unreasonable.

## 3. Workflow Mapping

Map the workflow for each work item type. Note the similarities and variations. Pay attention to concurrent and unordered activities, too. Note external dependencies, specialist risks, etc.

## 4. Classes of Service

For each work item type, specify which class(es) of service are being provided, their policies and delivery expectations.

## 5. Input and Output Cadences

Specify them for every work item type.

## 6. Kanban Board Visualization Design

This section is intended to be a simple sketch helping the delivery team, manager and coach figure out the major outlines of the visual board. These may include swim lanes, two-tier structure, use of colour, etc. There should not be any need to make this section a miniature replica of the actual board.

# Learning

Here are a number of things I hope to learn by trying out this canvas. I expect its design to improve as a result.

• Whether the canvas is helpful to capture the thinking process of introducing (or updating the design of) a Kanban system
• Whether it is helpful to hang the canvas near the Kanban board to help remember why certain visual Kanban board elements are the way they are
• The relative proportions of the sections
• Level of detail or instructions needed in each section
• Whether the “Roll Out” section belongs in the canvas
• Any surprises, things I don’t expect to learn

## #BeyondVSM: Understanding and Mapping Your Process of Knowledge Discovery

This short post will serve as the “table of contents” for a series of six posts I wrote this year about mapping processes in creative industries based on knowledge creation and information discovery and not by handoffs of work between people and teams.

• Understanding Your Process as Collaborative Knowledge Discovery: the first post in the series explores the problem and the new ways of looking at it
• Examples of using this approach to map processes of knowledge discovery in two different industries
• Mapping Your Process as Collaborative Knowledge Discovery. I wrote about how to actually create such process maps with real people who do the work, why, and how to use these maps in Kanban system design. This post turned into three, each covering different layers:
• Recipes: how to actually do it and what not to do, without much explaining why. Sorry, the post was already long enough. Of course, recipes are not enough, but that’s what I have the next two posts for!
• Observations: what actually happened as I tried this new approach. The experience informed various tips on how to do it.
• Thinking

## Lead Time and Iterative Software Development

I have introduced my forecasting cards and written about lead time distributions in my recent blog post series. Now I’d like to turn to how these concepts apply in iterative software development, particularly the popular process framework Scrum.

Let’s consider one of the reference distribution shapes (Weibull k=1.5), which often occurs in product development, particularly software. I went through various points on this curve and replaced them with what they ought to mean in this specific context.

Scrum teams often complain that their user stories are not finished in the same sprint they were started. I have often observed in such situations that their stories are simply too large.

Even if typical stories were smaller than the duration of the sprint, say 7-8 days in a 10-business-day, two-week sprint, that was not small enough. The teams, Scrum masters and product owners held, perhaps subconsciously, the notion that we can “keep the average and squeeze the variance”, that is, keep the 7-8-day average but limit variability — estimate, plan and task better — so that the right side of the distribution fits within the timebox. Recent lead time distribution research, examining many data sets from different companies (including those using iterative Agile methods), refutes this notion. One of the key properties of common lead time distributions is that the average and standard deviation are not independent.

Another suggestion — keeping the average story to half the sprint duration, so that the ends of the bell curve give us zero in the best case and the sprint duration in the worst case — is another illusion. Lead time distributions are asymmetric!

The real strategy is to left-shift the whole distribution curve.

This Kanban-sourced knowledge led to many quick wins as the Scrum teams, their Scrum masters and product owners I coached gave themselves a goal to systematically make their stories smaller. They simply asked, what can we do to double the count of delivered stories in the next few sprints, covering roughly the same workload in each sprint? After the doubling, ask the same question again until the stories are small enough.

# How Small?

How small do user stories need to be? We can turn to our forecasting cards, which give the control-limit-to-average ratios between 3.9 and 4.9 for the two most common distribution shapes (1.25 and 1.5). In the extreme case, we have to assume the exponential distribution (I have observed quasi-exponential distributions in some cases in incremental software development), which gives us the ratio of 6.6. The ratio of average lead time to sprint duration in the range of 1:4 to 1:6 can be used as a guideline.

To make this rule of thumb a bit more practical, let’s take into account three practical considerations: (1) the lead time is likely to be measured in whole days, (2) the number of business days in a sprint is likely to be a multiple of five, and (3) the median (half longer, half shorter) is easier to use in feedback loops than the average.

The control-limit-to-median ratios for the same distribution shapes are (consulting the forecasting cards again) 4.5 to 6.1; in the extreme case, 9.5. Therefore, half of the stories in one-fifth of the sprint duration can be used as a guideline. In the extreme cases, we may need one-tenth instead of one-fifth.
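The ratios quoted above follow from the Weibull quantile function and mean; here is a minimal sketch re-deriving them (scale set to 1, since the ratios do not depend on it):

```python
import math

def weibull_quantile(p, k):
    """Inverse CDF of the Weibull distribution with shape k and scale 1."""
    return (-math.log(1 - p)) ** (1 / k)

for k in (1.25, 1.5, 1.0):  # 1.0 is the Exponential, the extreme case
    average = math.gamma(1 + 1 / k)
    median = weibull_quantile(0.5, k)
    ucl = weibull_quantile(0.99865, k)  # same probability as Gaussian 3-sigma
    print(f"k={k}: UCL/average = {ucl / average:.1f}, UCL/median = {ucl / median:.1f}")
```

This reproduces the 3.9 and 4.9 control-limit-to-average ratios, the 4.5 and 6.1 control-limit-to-median ratios, and the 6.6 and 9.5 figures for the Exponential extreme.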

None of this is news to experienced Scrum practitioners, particularly those with eXtreme Programming backgrounds. The XP tribe has appreciated the value of small stories for a long time, and has invented and evangelized techniques, such as Product Sashimi, to make them smaller.


## Introducing Lead Time Forecasting Cards

I’m introducing a simple tool: lead time forecasting cards.

Each card displays a pre-calculated distribution shape, using Weibull distribution with shape parameters 0.75, 1 (Exponential distribution), 1.25, 1.5, 2 (Rayleigh distribution), and 3. (Since I printed the first batch, I realized I need to include k=0.5 in the collection.)

For each distribution, the following points are marked with rainbow colours:

• mode
• median
• average
• percentiles (63rd, 75th, 80th, 85th, 90th, 95th, 98th, and 99th)
• the upper control limit (99.865%)

The scale of each card is such that the lead time average is 1. Your average is different, so multiply it by the numbers given in the table on each card.

I will be bringing a small number of printed cards to the upcoming conferences, training classes and consulting clients. The goal is, of course, to get feedback, refine them, and then make them more widely available.


## Lead Time Distributions and Antifragility

This post continues the series about lead-time distribution and deals with risks involved in matching real-world lead-time data sets to known distributions and estimating the distribution parameters.

One of the key ideas of Nassim Nicholas Taleb’s book Antifragile is the notion of convexity, demonstrated by this option payoff curve. The horizontal axis shows the range of outcomes, the vertical axis shows the corresponding payoff. With this particular option, the payoff is asymmetric. Our losses in case of negative outcomes on the left are limited to a small amount. But our gains on the right side (positive outcomes) are unlimited. Note that this is due to the payoff function’s convexity. If the function was only increasing as a straight line, both our losses and gains would be unlimited.

A concave payoff function would achieve the opposite effect: limited gains and unlimited losses.

An antifragile system exposes itself to opportunities where benefits are convex and harm is concave and avoids exposure to the opposite.

In the book’s Appendix, Taleb considers a model that relies on Gaussian (normal) distribution. Suppose the Gaussian bell curve is centered on 0 and the standard deviation (sigma) is 1.5. What is the probability of the rare event that the random variable will exceed 6? It’s a number, and a pretty small one, which anyone with a scientific calculator can calculate.

Gaussian distribution analysis: probability of a rare event as a function of sigma is a convex function.

Right? Wrong. We don’t really know that sigma is 1.5. We simply calculated it from a set of numbers collected by observing some phenomenon. The real sigma may be a little bit more or a little bit less. How does that change the probability of our rare event? There is a chart in the Appendix, but I rechecked the calculations, and here it is — it’s a (very) convex function.

If we overestimate sigma a little bit (it’s really less than what we think it is; the left side of the chart), we overestimate the probability of our rare event a little bit. But if we underestimate sigma a little bit, we underestimate the probability of our rare event — a lot.
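This asymmetry is easy to reproduce. In the sketch below, the threshold of 6 and sigma of 1.5 come from the example above, while the ±10% perturbation is my own choice for illustration:

```python
import math

def p_exceed(threshold, sigma):
    """P(X > threshold) for a Gaussian centred on 0 with the given sigma."""
    return 0.5 * math.erfc(threshold / (sigma * math.sqrt(2)))

estimate = p_exceed(6, 1.5)  # ~3.2e-05, the "known" answer
for true_sigma in (1.35, 1.5, 1.65):  # what if sigma is off by 10% either way?
    ratio = p_exceed(6, true_sigma) / estimate
    print(f"sigma = {true_sigma}: {ratio:.1f}x our estimated probability")
```

Whatever the exact numbers, the convexity shows up as asymmetry: a 10% overestimate of sigma moves the answer by far less, in absolute terms, than a 10% underestimate.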

# Convexity Effects in Lead Time Distributions

Weibull distribution analysis: probability of exceeding SLA as a function of shape parameter

Weibull distribution analysis: probability of exceeding SLA as a function of scale parameter (shape parameter k=3).

Let’s apply this convexity thinking to lead time distributions of service delivery in knowledge work. Weibull distributions with various parameters are often found in this domain. Let’s say we have a shape parameter k=1.5 and a service delivery expectation: 95% of deliveries within 30 days. If we are spot-on with our model, the probability to fail this expectation is exactly 5%. How sensitive is this probability to the shape and scale parameters?

With respect to the shape parameter, the probabilities to exceed the SLAs are all convex decreasing functions (I added the SLAs based on 98th and 99th percentiles to the chart). If we underestimate the shape parameter a bit, we overstate the risk a bit; if we overestimate it a bit, we understate the risk — a lot.

For the other distribution shape types (k<1, k>2), it is the same story: the risk from underestimating versus overestimating the shape parameter is asymmetric.

What about the scale parameter? It turns out there is less sensitivity with respect to it. The convexity effect (it pays to overestimate the scale parameter) is present for k>2 (as shown by the chart), it is weaker for 1<k<2, and the curves are essentially linear for k<1.
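To make the shape-parameter sensitivity concrete, here is a sketch of the 30-day, 95% example (holding the scale parameter fixed at the value implied by the k=1.5 model, a simplifying assumption for illustration):

```python
import math

K_MODEL, SLA, TARGET = 1.5, 30.0, 0.95
# Choose the scale so that the 95th percentile lands exactly on the 30-day SLA:
scale = SLA / (-math.log(1 - TARGET)) ** (1 / K_MODEL)

def risk_of_missing_sla(k):
    """P(lead time > SLA) if the true shape parameter is k."""
    return math.exp(-((SLA / scale) ** k))

for k in (1.2, 1.4, 1.5, 1.6, 1.8):
    print(f"k = {k}: {risk_of_missing_sla(k):.1%} of deliveries miss the SLA")
```

At the model’s k=1.5 the risk is exactly 5%; if the true shape is a bit lower, the risk grows quickly (about 9% at k=1.2), while a higher true shape reduces it more slowly, which is the convex asymmetry described above.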

# Conclusions

When analyzing lead time distributions of service delivery in creative industries, it is important not to overestimate the shape parameter. The understatement or overstatement of risk due to an error in the shape parameter is asymmetrical.

Matching a given lead-time data set to a distribution doesn’t have to be a complicated mathematical exercise. We should also not fool ourselves about the precision of this exercise, especially given our imperfect real-world data sets. Using several pre-calculated reference shapes should be sufficient for practical uses such as defining service level expectations, designing feedback loops and statistical process control. If we find our lead-time data set fits somewhere between two reference shapes, we should choose the smaller shape parameter.

## How to Match to Weibull Distribution without Excel

A bit more than one year ago, I wrote a short, but fairly technical post on how to do this without complicated statistical tools, only using something found in many modern offices: spreadsheet software such as Excel.

Here is the problem we are trying to solve. Weibull distribution occurs often in service delivery lead times in various creative industries. We need to match given lead time data sets and find the distribution parameters (shape and scale), which help in understanding risks, establishing service-level expectations and creating forecasting models.

My post and the spreadsheet containing the necessary formulas (just copy and paste your own data) are still valid. However, I would like to propose a simpler method with an even lower barrier to entry. It is less precise, but still reasonably accurate, and you can use it to think on your feet without more complicated tools at hand.

# 3 Main Shapes And 2 Boundary Cases

The three main shapes of Weibull distribution curves can be told apart by observing the convexity and concavity of the left side and the middle of the curve. (On the right side, they are all convex.) The three cases correspond to the shape parameter being (1) less than 1, (2) between 1 and 2, (3) greater than 2. The Exponential (k=1) and Rayleigh (k=2) distributions give the boundary cases separating the three broad classes.

Let’s take a look at the charts.

Weibull distribution, shape parameter k=0.75

Weibull distribution, shape parameter k=1.5

Weibull distribution with shape parameter k=1.25

When the shape parameter is less than 1, the distribution curve (probability density function) is convex over the entire range. This shape parameter range occurs often in IT operations, customer care and other fields with a lot of unplanned work. The lowest value of the shape parameter that I have observed is 0.56. I chose k=0.75 as the representative of this class. The main visual features of this distribution are, besides convexity: the right-shifted average (for example, for k=0.75, the average falls on the 68th percentile) and a wide spread of common-cause variation (for example, for k=0.75, the ratio of the 99th percentile to the median is about 15:1).

When the shape parameter is between 1 and 2, the curve is concave on the left side and through the peak and turns convex on the back slope. This shape parameter range occurs in product development environments. I have observed shape parameters close to 1, close to 2 and everything in between, but parameters between 1 and 1.5 more often than between 1.5 and 2. I chose k=1.5 and k=1.25 as two representatives of this class. The main visual feature of this distribution is the asymmetric “hump” rising steeply on the left side and sloping gently on the right. The mode (the distribution peak) is left-shifted (as I wrote earlier, it falls on the 18th and 28th percentiles, respectively, for k=1.25 and k=1.5). The median is slightly, but appreciably, less than the average (87% of the average for k=1.5). The common-cause variation spread is significant, but narrower than for k<1.
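These visual features can be checked against the Weibull formulas (a quick sketch with the scale set to 1, on which the percentile positions do not depend):

```python
import math

def weibull_cdf(x, k):
    """CDF of the Weibull distribution with shape k and scale 1."""
    return 1 - math.exp(-(x ** k))

for k in (1.25, 1.5):
    mode = ((k - 1) / k) ** (1 / k)   # the peak of the density
    median = math.log(2) ** (1 / k)
    average = math.gamma(1 + 1 / k)
    print(f"k = {k}: the mode falls on the {weibull_cdf(mode, k):.0%} percentile, "
          f"median/average = {median / average:.0%}")
```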

Weibull distribution with shape parameter k=3

In software engineering, the adoption of Agile methods may lead to left-shifting of the shape parameter. This corresponds to the team's growing ability to deliver software solutions in smaller batches faster. This was another reason for including k=1.25 as a reference point.

Exponential distribution, also Weibull distribution with shape parameter k=1

When the shape parameter is greater than 2, the curve is convex on the left side, then turns concave as it goes up towards the peak, and turns convex again on the back slope. This distribution shape often occurs in phase-gated processes. I have only a few observations of this type of distribution curve, and the greatest value of the shape parameter I have observed so far is 3.22. I chose k=3 as the representative of this class. The main visual feature of this distribution is a much more symmetric peak (although it is still slightly asymmetric). Compared to k<2, the spread of common-cause variation is narrower relative to the average; however, processes with this type of lead time distribution tend to have very long average lead times.

Rayleigh distribution, also Weibull distribution with shape parameter k=2

The Exponential distribution (k=1) provides the boundary between two classes of shapes (k<1 and 1<k<2). It is a very well-studied distribution, thanks to all the research on queuing theory and Markov chains. This distribution has a unique property: its hazard rate, the propensity to finish yet-unfinished work, remains constant and independent of time. When the hazard rate decreases with time, we get distributions such as those with k<1; when it increases slowly, we get distributions such as the ones with 1<k<2 shown above.

Similarly, the Rayleigh distribution (k=2) provides the boundary between the 1<k<2 and k>2 classes.

# Matching Your Data Set to Weibull Distribution

The first step is to narrow down the shape parameter range by observing convexity-concavity on the left side and in the centre.

The second step is to compare your lead time distribution histogram to the few reference shapes provided in this post and see whether it is close enough to one of them or somewhere between two of them. Ratios of various percentiles to the average can help. This will pin the shape parameter within a fairly narrow range (e.g. 1.25 to 1.5). This is not very precise, but accurate enough for many practical applications, such as reasoning about service-level expectations, control limits and feedback loops via the median.

# Math Check

Finally, there are two quantitative shortcuts to estimating Weibull shape and scale parameters.

A good approximation of the scale parameter (thanks again to Troy for pointing this out) is the 63rd percentile. Indeed, the value of the Weibull cumulative distribution function where the random variable (lead time) equals the scale parameter is independent of the shape parameter and is given by:

F(λ) = 1 − exp(−(λ/λ)^k) = 1 − 1/e ≈ 0.632

The value of the quantile (inverse cumulative distribution) function at this “magic point” is indeed the scale parameter, as shown by:

Q(1 − 1/e) = λ · (−ln(1/e))^(1/k) = λ · 1^(1/k) = λ

The “magic point” has another important property, expressed by the following formula:

Q′(1 − 1/e) = eλ/k, or equivalently k = eλ / Q′(1 − 1/e)

If the lead time data set is smooth enough to allow approximating the slope of the quantile function at the 63rd percentile (using, for example, finite differences), then this formula can be used to estimate the shape parameter.
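Here is a sketch of that estimation on synthetic data. The sample is drawn from a Weibull with known shape 1.5 and scale 20, purely to show the mechanics; the estimator uses the relation k = eλ/Q′(1 − 1/e), which follows from the Weibull quantile function:

```python
import math
import random

random.seed(1)
# Synthetic "lead time data set": Weibull with scale 20, shape 1.5
data = sorted(random.weibullvariate(20, 1.5) for _ in range(10_000))

def q(p):
    """Crude empirical quantile: the p-th order statistic."""
    return data[min(int(p * len(data)), len(data) - 1)]

p_star = 1 - math.exp(-1)       # the "magic" 63.2nd percentile
scale_est = q(p_star)           # should roughly recover the scale, ~20
h = 0.02                        # finite-difference step for the slope
slope = (q(p_star + h) - q(p_star - h)) / (2 * h)
shape_est = math.e * scale_est / slope  # should roughly recover the shape, ~1.5
print(f"scale ~ {scale_est:.1f}, shape ~ {shape_est:.2f}")
```

With real, noisier data sets, a wider finite-difference step (or some smoothing of the quantiles) may be needed.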

# Summary

• Weibull distribution occurs often in lead time data sets in creative industries. Estimating distribution parameters (shape and scale) is desirable so that we can use them in various quantitative models.
• The linear regression method is still preferable for finding the shape and scale parameters.
• In situations when the tools are not accessible, the shape and scale parameter can be approximated by comparison with several reference shapes. It is important to differentiate between three shape parameter ranges (k<1, 1<k<2, k>2) by observing convexity and concavity on the left side and the centre of the distribution curve.
• Relatively low precision (two-tenths for the shape parameters) is still sufficient for practical, quick thinking about the service delivery capability described by the lead time data set.
• There are two math tricks based on the unique properties of Weibull distribution and the 63rd percentile that can be used to roughly estimate the distribution parameters.

## Inside a Lead Time Distribution

My earlier post discussed:

• how to measure lead time
• how to analyze lead time distribution charts
• how to get better understanding of lead time by breaking down multimodal (work item type mix) lead time data sets into unimodal (by work item type)
• how to establish service-level expectations based on lead time data

As a reminder, the context is creative endeavours in knowledge-work industries.

Now let’s take a closer look at the structure of lead-time distributions and their common elements.

Mode. The mode is the most probable or, in other words, the most often occurring number in the data set, or the location of the peak of the distribution curve. In lead time distributions I often observe in delivery processes in knowledge-work industries, the mode is often left-shifted and tops the distribution’s asymmetric “hump.” The probability of lead time being less than the mode is small: 18% to 28% observed in distribution shapes common in product development.

The mode is also what people tend to remember well. When asked how long it takes to deliver this type of solution, they give the answer their memory is trained on. The trouble is, their memory may be trained on the 18th percentile, so beware of the remaining 82%.

Median. The median is also known as the 50th percentile. If we sorted all the numbers in our data set, this one would be right in the middle. Half of the deliveries take more time, half take less. Lead time distributions observed in creative industries are asymmetrical, and the median is usually less than the average, sometimes significantly. A median-to-average ratio of 80% to 90% is quite common in product development. It can drop to 50% in operations and customer care.

Because the median is located to the left of many other important points on the distribution, it is very useful in establishing short feedback loops to continuously validate forecasting models and project and release plans. If half of the lower-level work items (such as features in a product release, or tasks in a project) are still being delivered within the original median time (in other words, the median is not drifting to the right), the original forecast is still sound.

This insight is due to David J. Anderson, Dan Vacanti and their staff at Corbis. It is a rarely publicized part of their well-known case study from 2006-07. While the project was large and took a long time to deliver in total, it contained a feedback loop that took only a few days to run, closed almost every day, and revalidated the project forecast. The understanding of the median and its role in the distribution enabled this feedback loop.

Average. The average is the easiest to calculate. It connects with the work-in-process (WIP) and throughput in a simple equation known as Little’s Law.
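For example (made-up numbers, purely to show the arithmetic):

```python
# Little's Law for a stable system:
# average lead time = average WIP / average throughput
avg_wip = 24          # work items in process, on average
avg_throughput = 3.0  # work items finished per day, on average
avg_lead_time = avg_wip / avg_throughput
print(avg_lead_time)  # -> 8.0 days
```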

The 63rd percentile. In the Weibull family of distributions, which are often observed, there is one special point where the over and under probabilities don’t depend on the shape parameter. This point is the 63rd percentile (more precisely, 1 − 1/e ≈ 63.2%), where the lead time equals the distribution’s scale parameter.

Credit to Troy Magennis for pointing this out. Thanks to its unique properties, this percentile can be used in estimating the scale parameter when matching lead time data sets to Weibull distribution.

The 75th percentile. The percentiles between 50th and 75th can be used to estimate the shape parameter if we’re dealing with Weibull distribution and the shape is known to be between 1 and 2, which is common in product development. This owes to another unique property of the 63rd percentile. The math of this estimation is not the purpose of this post, so I’ll save it for future writing.

Higher percentiles (80th and up). The higher percentiles are used in establishing service-level expectations. The average or median are insufficient to define these expectations, because they don’t deal with probabilities of progressively rarer events that the delivery will take longer. The 80th (one in five will take longer), 85th, 90th, 95th, 98th and 99th percentiles are often used for this purpose. These percentiles can be taken from lead time data sets or calculated from Weibull distribution “navigation tables” as multiples of the average (the N-th percentile-to-average ratio depends only on the shape parameter).
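Such a “navigation table” can be generated in a few lines, because each percentile-to-average ratio depends only on the shape parameter (a sketch with the scale set to 1):

```python
import math

def percentile_to_average(p, k):
    """The p-th percentile of a Weibull with shape k, as a multiple of its average."""
    return (-math.log(1 - p)) ** (1 / k) / math.gamma(1 + 1 / k)

for k in (1.25, 1.5):  # two shapes common in product development
    row = "  ".join(f"{p:.0%}: {percentile_to_average(p, k):.2f}x"
                    for p in (0.80, 0.85, 0.90, 0.95, 0.98, 0.99))
    print(f"k = {k}:  {row}")
```

Multiply your own lead time average by these factors to obtain the corresponding percentiles.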

The upper control limit. For the purposes of statistical process control and identifying outliers (special-cause variation), we can establish an upper control limit. In creative industries, it is not necessary to establish control limits by calculation. Collaborative inquiry — this project took X days to deliver, do we agree on a single obvious cause that led to the delay? — can be used instead to differentiate between special- and common-cause variation.

If we need to calculate the upper control limit, then adding three standard deviations to the average is plainly incorrect, because the lead time distribution is never Gaussian. With the Weibull distribution, we can instead set the limit at the same cumulative probability as the average plus three sigma on the Gaussian, which is 99.865%. Credit to Bruno Chassagne, who took this approach assuming the Exponential distribution in IT Operations work and presented the results at the Lean Kanban Benelux 2011 conference. Owing to the properties of the Weibull distribution, the control limit is proportional to the average lead time and depends only on the shape parameter. In product development, a control-limit-to-average ratio of 4 to 6 is common; in operations and customer care, 10 to 12.
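Because the scale parameter cancels out, the control-limit-to-average ratio can be computed from the shape parameter alone. A sketch of that calculation (the shape values below are illustrative, not fitted to any particular data set):

```python
import math

P = 0.99865  # same cumulative probability as average + 3 sigma on a Gaussian

def control_limit_ratio(shape):
    """Upper control limit as a multiple of the average lead time,
    for a Weibull distribution with the given shape (scale cancels out)."""
    percentile = (-math.log(1 - P)) ** (1 / shape)  # 99.865th percentile at scale = 1
    average = math.gamma(1 + 1 / shape)             # mean at scale = 1
    return percentile / average

# Heavier-tailed (smaller) shapes give higher ratios
for shape in (0.8, 1.0, 1.62, 3.22):
    print(shape, round(control_limit_ratio(shape), 1))
```

For the Exponential distribution (shape 1), the ratio reduces to −ln(1 − 0.99865), about 6.6; smaller shapes, typical of operations work, push it well past that.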

# Conclusion

Different points on a lead time distribution chart play different roles. Understanding those roles can help us:

1. assess service delivery capabilities
2. set service-level expectations
3. create delivery forecasts
4. create short feedback loops
5. understand workers’ biases
6. manage variation

## Analyzing the Lead Time Distribution Chart

This post is about one of the key measurements of flow we use in Kanban: lead time. We’ll talk a bit about how to measure it and analyze and use the results.

Example: a work item on a Kanban board with the start time marked

Loosely defined, the lead time is how long it takes a work item to get through the system. There are several variations of this definition. The customer lead time is essentially from concept to cash. In non-manufacturing, knowledge-work kanban systems, we also use the kanban system lead time, measured from the kanban system’s first commitment point to the first infinite buffer.

We also use the term cycle time. In knowledge-work kanban systems, it is always local and qualified where it is to and from. There is no “the cycle time.” There are interesting reasons behind these definitions, but they are beyond the scope of this post.

Example: a policy card on a Kanban board with a reminder to mark lead time

Measuring lead time is very easy. All it takes is to note when a work item started and when it was delivered, and take the difference. We can do this using timestamps from electronic work management systems. If we use physical Kanban boards, we can also do this by marking the start and end dates in the bottom corners of the card. The “Robinson Crusoe method”, placing a mark on the card every day, has proved ineffective in the modern office.

After delivering several items, we’ve got a lead time data set. We can now plot a histogram and analyze the distribution.
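As a small sketch of the measurement step, here is the date arithmetic for a few cards (the dates are hypothetical):

```python
from datetime import date

# Hypothetical start and delivery dates marked in the corners of three cards
cards = [
    (date(2013, 3, 4), date(2013, 3, 20)),
    (date(2013, 3, 6), date(2013, 4, 2)),
    (date(2013, 3, 11), date(2013, 3, 18)),
]

# Lead time of each card, in days
lead_times = [(delivered - started).days for started, delivered in cards]
print(lead_times)  # [16, 27, 7]
```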

# Example

Here is an example of lead time distributions from an organization developing custom IT solutions.

This is not an uncommon lead time distribution shape, with a left-shifted asymmetric “hump” and a long, fat “tail.” The best fit for this data set is the so-called Weibull distribution with the shape parameter 1.62. (Weibull is a parametrized family of distributions. Varying the shape parameter tweaks the distribution curve into several distinct shapes. But this is a topic for another post.)

The delivery takes 34 days on average. Eighty-five percent of solutions are delivered in 61 days, 98% take up to 96 days. We know these probabilities if we know nothing else about a project.

# Drilling Down

This organization’s projects are not all the same. They deliver solutions of several different types. The following histogram shows the lead time distribution of four major project types. (For reasons of anonymity, these project types are only identified by colour-coding.)

We can see on this chart that each project category has a different distribution. Blue implementations typically take longer to deliver than green projects. There are disproportionately many red reports in the distribution’s tail, even though their distribution peak is left-shifted. Purple integrations take quite a long time even in the best case.

The first bit of information known about a project is which of these categories it falls into. In Kanban, we call these work item types. This information is clear, unambiguous, and requires no time or effort to get. Let’s drill down our lead time data set by work item type.

Here is the lead time histogram for “green” projects which account for a bit more than half of all deliveries.

Green projects are delivered slightly faster than the average for all projects, in 30 days vs. 34. The 85th percentile of deliveries also takes less time, 54 days vs. 61; and the 98th percentile takes 85 days vs. 96.

The best distribution fit for green projects is, again, Weibull with shape 1.62. One important quality of this distribution is that the percentiles (the 85th, the 98th and others) depend only on the average and the shape parameter. Anyone can calculate the average. Finding the shape parameter takes some statistical skill, but the good news is, it doesn’t change much with time. (I plan to write later about simpler ways to find the shape parameter with sufficient precision.) Knowing the shape parameter and the average, we can calculate the 85th, the 98th and any other percentile we need. We can also, of course, take these percentiles from the data set.
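That calculation can be sketched in closed form: the average pins down the scale parameter, and the percentile follows from the inverted Weibull CDF. The figures below are illustrative inputs, not the article's fitted results:

```python
import math

def weibull_percentile(average, shape, p):
    """The p-th percentile of a Weibull distribution, given its average
    and shape. The average determines the scale parameter, because
    average = scale * gamma(1 + 1/shape)."""
    scale = average / math.gamma(1 + 1 / shape)
    return scale * (-math.log(1 - p)) ** (1 / shape)

# Illustrative figures: shape 1.62, average lead time 30 days
p85 = weibull_percentile(30, 1.62, 0.85)
p98 = weibull_percentile(30, 1.62, 0.98)
print(round(p85), round(p98))
```

Both percentiles scale linearly with the average, which is why pre-computed percentile-to-average ratios (per shape) are enough for quick estimates.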

Let’s now take a look at red reports.

The delivery process the company uses for this type of project is different, and this shows in the shape parameter of the best-fit distribution. It is again Weibull, but the shape parameter is much smaller: 1.23. The average delivery takes 35 days, the mode (the most common lead time) is much lower, 85% of deliveries take up to 61 days, and 98% take up to 96 days.

Looking at other work item types, blue implementation projects follow a distribution similar to green projects: shape parameter 1.65, average 40 days, 85% delivered in 66 days, 98% in 81 days.

The organization’s process for delivering purple integration projects is the most regimented and uses several phase gates. This is also reflected in the lead time distribution shape (shape parameter: 3.22). It is more symmetrical, with very few “small” (short lead-time) projects. The average delivery time is 56 days; 85% are delivered within 70 days, 98% within 100 days.

# Establishing Service-Level Expectations

We can now establish service level expectations for different work item types delivered by this organization. The following table summarizes data for four work item types. For the sake of comparison, I included percentile estimates obtained using parametrized distributions. (A little secret: I have “navigation tables” with pre-calculated ratios for all common distribution shapes, so I simply took the average lead time and multiplied it by numbers from the table using my phone’s calculator).

| Work item type | Shape | Average lead time | 85% (data set) | 98% (data set) | 85% (distribution) | 98% (distribution) |
|---|---|---|---|---|---|---|
| Green Projects | 1.62 | 30 | 54 | 85 | 51 | 83 |
| Red Reports | 1.23 | 35 | 61 | 96 | 63 | 112 |
| Blue Implementations | 1.65 | 40 | 66 | 81 | 68 | 110 |
| Purple Integrations | 3.22 | 56 | 70 | 100 | 78 | 99 |

Notice that the 98th-percentile estimates obtained using parametric distributions are fairly conservative. This is appropriate as small data sets (there were only 19 Red and 14 Blue work items in the data set) don’t capture probabilities of rare events well.

# Summary

Lead time is easy to measure. There are several definitions and it’s important to understand where to measure from and to. Many services deliver a mix of work item types. Therefore it is important to drill down our lead time data by work item type, so that separate service level expectations can be established for each. Properties of common lead-time distributions are well-studied and can be used to support the SLEs and forecasting.


## Mapping Your Process as Collaborative Knowledge Discovery – Part 3 (Thinking)

This will be the last post in the series about process visualization in professional services, specifically within the context of applying the Kanban method and one of its core practices, visualization. Earlier in this series:

Now it’s time to write about the thinking behind the knowledge discovery process, the mapping techniques I presented, and the explanation of what I observed by applying them.

Simply put, we have to respect the complexity of the system – and the group of people working together to perform some creative tasks to deliver their service is a complex adaptive system. Basic understanding of the Cynefin framework is essential here.

The problem of evolving processes in our complex system and discovering its future improved state falls into the Complex domain. The Cynefin framework warns us about two kinds of things we should not be doing:

1. implementing best practices, which belong in the Obvious domain
2. applying expert analysis, which would be appropriate in the Complicated domain

# A Model of a System Will Affect the System – How Weird Is That?

The Kanban method is an evolutionary method appropriate in the Complex domain. When we introduce a kanban system within the context of the Kanban method, the kanban system is essentially a probe launched into the complex system of workers, their organizational partners and customers. The probe will co-evolve with the system. How well or poorly this probe is designed may affect the effectiveness and success of the evolutionary change. Designing the kanban system and introducing it appropriately in the given context is one of the key skills of Kanban coaches.

In the last several decades, an approach known as Value Stream Mapping (VSM) has become very popular. A value stream map lets us see the value-adding and non-value-adding activities that make up the process of delivering value to the customer. The implication here is that we can improve the process by reducing the non-value-adding activities, or waste. This has been done so many times as to create a widespread, though inaccurate, opinion that Lean is about eliminating waste. The hidden assumption here is that we are treating our process improvement problem as a Complicated-domain problem. It is the Complicated domain that permits such reduction and aggregation.

The process visualization we are trying to create as part of our kanban system introduction is a model of our system of work. In the Complicated domain, a good model would be detailed and accurate enough (and VSM certainly delivers that) to enable us to expertly analyze the system and find improvements. But in the Complex domain, which does not permit an approach based on reduction and aggregation, the quality of our model is determined primarily by what response it provokes from the system.

It was observed by Alisson Vale, one of the pioneers of Kanban in professional services and a 2010 Brickell Key Award winner, that workflow visualizations with too much detail provoke inertia. If a Kanban board has too many columns, even when that number faithfully reflects the steps of the accurately mapped process, it fatigues the workers and slows down the pace of evolutionary improvement.

# Heuristics

The approach to Kanban visualization that I presented in the earlier posts in this series (1 and 2), and my observations from using it in practice, are largely consistent with the complex nature of the problem and the heuristics of the Complex domain.

One such heuristic (a negative one) is retrospective coherence. It warns us about the danger of trying to replicate successes from one system to another. A more effective approach is not to repeat the causes of failure. Inertia due to kanban board detail is one clear example. Mapping the process by a small group of experts is another. Yet another is steering the conversations in front of the board away from “hey, this is waste, let’s eliminate it” and towards safe-to-fail experimentation. After all, in kanban system visualizations based on the knowledge discovery process, all dominant activities are value-adding!

Other heuristics (positive ones) are fine-grained objects and disintermediation. Indeed, the groups of people participating in the mapping exercise find the right granularity of activities for them. The resulting workflow visualization comes directly from them, and not through an intermediary such as an expert process analyst.

Complex systems are also known to be path dependent. We have seen this with “dev” and “test” captions on software delivery Kanban boards. Whereas in a naively visualized process these columns may continue to visualize functional silos, the groups that have taken the path through the knowledge discovery process mapping have a deeper understanding of them as collaborative knowledge-producing activities. “Dev” and “test” are merely convenient shortcuts.

As other kanban system users and Kanban method practitioners try the knowledge discovery process visualization approach, I expect their own ways, recipes and observations to differ from mine. However, I predict that we will have a lot in common when it comes to the complexity thinking behind what we do and the use of Complex domain heuristics. I would love to hear their stories.
