Introducing the STATIK Canvas

I’d like to introduce a simple new tool: the STATIK canvas. STATIK stands for the Systems Thinking Approach to Introducing Kanban in Your Technology Business.

Kanban (in the knowledge work context) is an evolutionary improvement method. It uses virtual Kanban systems as a way to catalyze improvement and adaptive capabilities. A Kanban system is introduced into the environment which comprises the service delivery team and its customers and partners. This is a critical moment. Systems thinking is key!

The goal of this post is not to explain STATIK, but to introduce the canvas and let people download and try it. I’ll therefore skip that explanation and encourage readers to explore the key STATIK resources:


OK, so here is the file to download.


The proposed STATIK canvas is roughly the size of a sheet of A3 paper. It is intended to be filled in with a pencil and to capture only the most important stuff. The following are instructions, section by section.

1. Context for Change

This section captures the internal and external (from the customer’s perspective) sources of dissatisfaction and variability. Stories collected in this section often contain words that reveal work item types, hidden risk information, odd patterns of demand, unmet expectations (used in Section 2 – Demand Analysis), external and specialist dependencies (Section 3 – Workflow Mapping), and implicit classes of service (Section 4).

2. Demand Analysis

This section contains the demand analysis template introduced by Dave White. The following information is collected for each work item type:

  1. Source – where the requests to deliver this type of work item arrive from
  2. Destination – where the results of work are delivered to
  3. Arrival rate. This must be a number of requests per unit of time. (“We have 300 items in the backlog” is not good enough. If you get this answer, ask where they come from and how.)
  4. Nature of Demand – note any patterns.
  5. Customer’s delivery expectations, even if unreasonable.

3. Classes of Service

For each work item type, specify which class(es) of service are being provided, their policies and delivery expectations.

4. Input and Output Cadences

Specify them for every work item type.

5. Kanban Board Visualization Design

This section is intended to be a simple sketch helping the delivery team, manager and coach figure out the major outlines of the visual board. These may include swim lanes, two-tier structure, use of colour, etc. There should not be any need to make this section a miniature replica of the actual board.


Here are a number of things I hope to learn by trying out this canvas. I expect its design to improve as a result.

  • Whether the canvas is helpful to capture the thinking process of introducing (or updating the design of) a Kanban system
  • Whether it is helpful to hang the canvas near the Kanban board to help remember why certain visual Kanban board elements are the way they are
  • The relative proportions of the sections
  • Level of detail or instructions needed in each section
  • Whether the “Roll Out” section belongs in the canvas
  • Any surprises, things I don’t expect to learn
Posted in Kanban | Leave a comment

The Best of 2014

I’m starting the new year with a short summary of the best of this blog in the past year.

I wrote two extended series of posts on two different topics. Each topic deserved more than the 300 to 1,000 words that fit into one typical blog post. So I wrote several posts, varying in depth, focus and appropriateness for different audiences.

  • The first series was about my knowledge-centric approach to visualizing processes in creative, intellectual fields of work. Look not for process steps and hand-offs of work, but for various ways people collaborate to create knowledge.
  • The second series was about Lead Time. I tried to cover several new insights into probability distributions of delivery times in intellectual work and how we could use them practically.
    • Inside a Lead Time Distribution is the key post in the series. It goes over the key points on a typical distribution curve and shows how we can practically use them. It is interesting that the ways we use the data differ significantly from the left to the right side of the curve. So I chose to paint those various points using rainbow colours so the rich picture they reveal doesn’t look like a boring bar chart.
    • Two practical (and technical) posts remained popular during the year. How to Match to Weibull Distribution in Excel (using only spreadsheet software) first appeared in 2013. Last year, I updated the formulas in the attached spreadsheet to automate a few operations and make it even easier to use. Also last year, I added a twin post to it, How to Match to Weibull Distribution Without Excel. You can still do math (the old way), but if you are willing to give up some precision, the technology becomes very simple – you can visually match several known patterns.
    • The remaining posts in the series should come up if you search the site for “lead time.”

The knowledge-discovery process stuff is fairly complete and stable at this time. The lead time, probabilistic approach and forecasting topics will likely see new developments in 2015. If you’d like to learn more about how to apply this knowledge and practical experience in your company, please feel free to connect with me by email or Skype so that we can discuss it.

Besides the two post series, my popular post from the past, The Elusive 20% Time, was turned into a contribution to the new book More Agile Testing by Lisa Crispin and Janet Gregory. (The contribution was in the area of organizational practices helping achieve greater software quality.) I posted the condensed, cleaned-up, copyedited version of this article on this blog under the title The Still Elusive 20% Time. (There is a pending comment on it at the time I’m writing this, which deserves a reply and probably a new blog post – stay tuned.)

Of all other posts, the best in terms of 2014 page views turned out to be one written in 2013: Scrum, Kanban and Unplanned Work. It contains one of my trademark phrases, switching from Scrum to Kanban, missing the most of both. It also rebuts some of the Kanban misconceptions that I continued to hear from practitioners and their under-informed coaches throughout the year.

A quick review of my posts shows some of them are due for an update, while others seem like old baggage and have lost their relevance. I will fix this in 2015 if time allows.


#BeyondVSM: Understanding and Mapping Your Process of Knowledge Discovery

This short post will serve as the “table of contents” for a series of six posts I wrote this year about mapping processes in creative industries by knowledge creation and information discovery, rather than by handoffs of work between people and teams.

  • Understanding Your Process as Collaborative Knowledge Discovery: the first post in the series explores the problem and the new ways of looking at it
  • Examples of using this approach to map processes of knowledge discovery in two different industries
  • Mapping Your Process as Collaborative Knowledge Discovery. I wrote about how to actually create such process maps with real people who do the work, why, and how to use these maps in Kanban system design. This post turned into three, each covering different layers:
    • Recipes: how to actually do it and what not to do, without much explanation of why. Sorry, the post was already long enough. Of course, recipes are not enough, but that’s what I have the next two posts for!
    • Observations: what actually happened as I tried this new approach. The experience informed various tips on how to do it.
    • Thinking

Lead Time and Iterative Software Development

I have introduced my forecasting cards and written about lead time distributions in my recent blog post series. Now I’d like to turn to how these concepts apply in iterative software development, particularly the popular process framework Scrum.

Let’s consider one of the reference distribution shapes (Weibull k=1.5), which often occurs in product development, particularly software. I went through various points on this curve and reinterpreted what they ought to mean in this specific context.

Lead time distributions and the timebox. The chart shows the mode, the median, the average, and the 75th percentile relative to the sprint duration

Scrum teams often complain that their user stories are not finished in the same sprint they were started. I have often observed in such situations that their stories are simply too large.

Even if typical stories were smaller than the duration of the sprint, such as 7-8 days in a 10-business-day, two-week sprint, that was not small enough. The teams, Scrum masters and product owners held, perhaps subconsciously, the notion that we can “keep the average and squeeze the variance”, that is, keep the 7-8-day average but limit variability (estimate, plan and task better) so that the right side of the distribution fits within the timebox. Recent lead time distribution research, examining many data sets from different companies (including those using iterative Agile methods), refutes this notion. One of the key properties of common lead time distributions is that the average and the standard deviation are not independent.
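
This property is easy to check numerically. Here is a minimal sketch, assuming (as this series argues) that lead times follow a Weibull distribution; it uses the standard Weibull moment formulas to show that the standard-deviation-to-average ratio is fixed by the shape parameter alone:

```python
from math import gamma, sqrt

def weibull_cv(k):
    """Coefficient of variation (std/mean) of a Weibull distribution.

    mean = scale * G(1 + 1/k), variance = scale^2 * (G(1 + 2/k) - G(1 + 1/k)^2),
    so the std/mean ratio depends only on the shape parameter k.
    """
    m = gamma(1 + 1 / k)           # mean for scale = 1
    v = gamma(1 + 2 / k) - m ** 2  # variance for scale = 1
    return sqrt(v) / m

# The ratio is pinned by the shape: you cannot hold a 7-8-day average
# and independently squeeze the spread.
print(round(weibull_cv(1.5), 3))  # ~0.679: std is about 68% of the mean
print(round(weibull_cv(1.0), 3))  # 1.0: exponential case, std equals mean
```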

Another suggestion — keeping the average story to half the sprint duration, so that the ends of the bell curve give us zero in the best case and the sprint duration in the worst case — is also an illusion. Lead time distributions are asymmetric!

Leftshifting diagram: as the lead time distribution curve shifts to the left, very few data points don't fit into the timebox

The real strategy is to left-shift the whole distribution curve.

This Kanban-sourced knowledge led to many quick wins as the Scrum teams, their Scrum masters and the product owners I coached gave themselves a goal to systematically make their stories smaller. They simply asked: what can we do to double the count of delivered stories in the next few sprints, while covering roughly the same workload in each sprint? After the doubling, they asked the same question again, until the stories were small enough.

How Small?

How small do user stories need to be? We can turn to our forecasting cards, which give the control-limit-to-average ratios between 3.9 and 4.9 for the two most common distribution shapes (1.25 and 1.5). In the extreme case, we have to assume the exponential distribution (I have observed quasi-exponential distributions in some cases in incremental software development), which gives us the ratio of 6.6. The ratio of average lead time to sprint duration in the range of 1:4 to 1:6 can be used as a guideline.
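
For readers who want to check the card values, here is a minimal sketch using the standard Weibull quantile and mean formulas (with the 99.865% upper control limit taken from the cards):

```python
from math import gamma, log

def limit_to_average(k, p=0.99865):
    """Ratio of the p-th percentile (upper control limit) to the mean
    for a Weibull distribution with shape k; the scale cancels out."""
    percentile = (-log(1 - p)) ** (1 / k)  # quantile for scale = 1
    mean = gamma(1 + 1 / k)
    return percentile / mean

for k in (1.5, 1.25, 1.0):
    print(k, round(limit_to_average(k), 1))
# 1.5 -> 3.9, 1.25 -> 4.9, 1.0 (exponential) -> 6.6, matching the cards
```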

To make this rule of thumb a bit more practical, let’s take three considerations into account: (1) the lead time is likely to be measured in whole days, (2) the number of business days in a sprint is likely to be a multiple of five, and (3) the median (half of the stories take longer, half take less) is easier to use in feedback loops than the average.

The control-limit-to-median ratios for the same distribution shapes are (consulting the forecasting cards again) 4.5 to 6.1; in the extreme case, 9.5. Therefore, half of the stories in one-fifth of the sprint duration can be used as a guideline. In the extreme cases, we may need one-tenth instead of one-fifth.
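
The same check works for the median-based ratios; this sketch uses the Weibull median, which is ln(2)^(1/k) times the scale:

```python
from math import log

def limit_to_median(k, p=0.99865):
    """Ratio of the p-th percentile (upper control limit) to the median
    for a Weibull distribution with shape k; the scale cancels out."""
    return ((-log(1 - p)) / log(2)) ** (1 / k)

for k in (1.5, 1.25, 1.0):
    print(k, round(limit_to_median(k), 1))
# 1.5 -> 4.5, 1.25 -> 6.1, 1.0 (exponential) -> 9.5
```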

None of this is news to experienced Scrum practitioners, particularly those with eXtreme Programming backgrounds. The XP tribe has long appreciated the value of small stories, and has invented and evangelized techniques, such as Product Sashimi, to make them smaller.


Introducing Lead Time Forecasting Cards

I’m introducing a simple tool: lead time forecasting cards.

The set of six (so far) forecasting cards

Each card displays a pre-calculated distribution shape, using the Weibull distribution with shape parameters 0.75, 1 (Exponential distribution), 1.25, 1.5, 2 (Rayleigh distribution), and 3. (Since printing the first batch, I have realized I need to include k=0.5 in the collection.)

For each distribution, the following points are marked with rainbow colours:

  • mode
  • median
  • average
  • percentiles (63rd, 75th, 80th, 85th, 90th, 95th, 98th, and 99th)
  • the upper control limit (99.865%)

The scale of each card is such that the average lead time is 1. Your average is different, so multiply the numbers given in the table on each card by your average.

I will be bringing a small number of printed cards to the upcoming conferences, training classes and consulting clients. The goal is, of course, to get feedback, refine them, and then make them more widely available.


Lead Time Distributions and Antifragility

This post continues the series about lead-time distribution and deals with risks involved in matching real-world lead-time data sets to known distributions and estimating the distribution parameters.

Convex option payoff curve. Losses are limited on the left, gains are unlimited on the right

One of the key ideas of Nassim Nicholas Taleb’s book Antifragile is the notion of convexity, demonstrated by this option payoff curve. The horizontal axis shows the range of outcomes; the vertical axis shows the corresponding payoff. With this particular option, the payoff is asymmetric. Our losses in case of negative outcomes on the left are limited to a small amount, but our gains on the right side (positive outcomes) are unlimited. Note that this is due to the payoff function’s convexity. If the function were simply an increasing straight line, both our losses and gains would be unlimited.

A concave payoff function would achieve the opposite effect: limited gains and unlimited losses.

An antifragile system exposes itself to opportunities where benefits are convex and harm is concave and avoids exposure to the opposite.

In the book’s Appendix, Taleb considers a model that relies on the Gaussian (normal) distribution. Suppose the Gaussian bell curve is centered on 0 and the standard deviation (sigma) is 1.5. What is the probability of the rare event that the random variable will exceed 6? It’s a number, and a pretty small one, which anyone with a scientific calculator can find.

Gaussian distribution analysis: probability of a rare event as a function of sigma is a convex function.


Right? Wrong. We don’t really know that sigma is 1.5. We simply calculated it from a set of numbers collected by observing some phenomenon. The real sigma may be a little bit more or a little bit less. How does that change the probability of our rare event? There is a chart in the Appendix, but I rechecked the calculations, and here it is — it’s a (very) convex function.

If we overestimate sigma a little (it is really less than what we think, the left side of the chart), we overestimate the probability of our rare event a little. But if we underestimate sigma a little, we underestimate the probability of our rare event a lot.
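
The whole argument fits in a few lines. This sketch recomputes the tail probability for slightly wrong sigmas (only the standard normal error function is needed) and checks the convexity:

```python
from math import erf, sqrt

def tail_prob(threshold, sigma):
    """P(X > threshold) for a zero-mean Gaussian with the given sigma."""
    return 0.5 * (1 - erf(threshold / (sigma * sqrt(2))))

base = tail_prob(6, 1.5)   # what we think the risk is
low = tail_prob(6, 1.4)    # if sigma is really a bit smaller
high = tail_prob(6, 1.6)   # if sigma is really a bit bigger
print(f"{low:.1e} < {base:.1e} < {high:.1e}")
# Convexity: the average of the two "slightly wrong" probabilities
# exceeds the middle one, so errors in sigma hurt asymmetrically.
assert (low + high) / 2 > base
```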

Convexity Effects in Lead Time Distributions

Weibull distribution analysis: probability of exceeding SLA as a function of parameter


Weibull distribution analysis: probability of exceeding SLA as a function of scale parameter (shape parameter k=3).


Let’s apply this convexity thinking to lead time distributions of service delivery in knowledge work. Weibull distributions with various parameters are often found in this domain. Let’s say we have a shape parameter k=1.5 and a service delivery expectation: 95% of deliveries within 30 days. If our model is spot-on, the probability of failing this expectation is exactly 5%. How sensitive is this probability to the shape and scale parameters?

With respect to the shape parameter, the probabilities of exceeding the SLAs are all convex decreasing functions (I added SLAs based on the 98th and 99th percentiles to the chart). If we underestimate the shape parameter a bit, we overstate the risk a bit; if we overestimate it a bit, we understate the risk — a lot.
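
Here is a small numerical sketch of the same scenario: fit the scale so that a k=1.5 Weibull meets the 95%-within-30-days expectation exactly, then vary the true shape while keeping the fitted scale:

```python
from math import exp, log

SLA_DAYS, TARGET = 30, 0.95
scale = SLA_DAYS / (-log(1 - TARGET)) ** (1 / 1.5)  # fitted under k=1.5

def risk(k):
    """P(lead time > SLA) if the true shape is k but the scale stays
    at the value fitted under the k=1.5 assumption."""
    return exp(-((SLA_DAYS / scale) ** k))

print(round(risk(1.5), 3))  # 0.05 by construction
print(round(risk(1.3), 3), round(risk(1.7), 3))
assert risk(1.3) > risk(1.5) > risk(1.7)        # decreasing in k
assert (risk(1.3) + risk(1.7)) / 2 > risk(1.5)  # and convex
```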

For the other distribution shape types (k<1, k>2), it is the same story: the risk from underestimating versus overestimating the shape parameter is asymmetric.

What about the scale parameter? It turns out there is less sensitivity with respect to it. The convexity effect (it pays to overestimate the scale parameter) is present for k>2 (as shown by the chart), it is weaker for 1<k<2, and the curves are essentially linear for k<1.


When analyzing lead time distributions of service delivery in creative industries, it is important not to overestimate the shape parameter. The understatement or overstatement of risk due to a shape parameter error is asymmetrical.

Matching a given lead-time data set to a distribution doesn’t have to be a complicated mathematical exercise. We should also not fool ourselves about the precision of this exercise, especially given our imperfect real-world data sets. Using several pre-calculated reference shapes should be sufficient for practical uses such as defining service level expectations, designing feedback loops and statistical process control. If we find our lead-time data set fits somewhere between two reference shapes, we should choose the smaller shape parameter.


How to Match to Weibull Distribution without Excel

A bit more than a year ago, I wrote a short but fairly technical post on how to do this with no complicated statistical tools, using only something found in many modern offices: spreadsheet software such as Excel.

Here is the problem we are trying to solve. The Weibull distribution occurs often in service delivery lead times in various creative industries. We need to match a given lead time data set to the distribution and find its parameters (shape and scale), which help in understanding risks, establishing service-level expectations and creating forecasting models.

My post and the spreadsheet containing the necessary formulas (just copy and paste your own data) are still valid. However, I would like to propose a simpler method with an even lower barrier to entry. It is less precise, but still reasonably accurate, and you can use it to think on your feet without more complicated tools at hand.

3 Main Shapes And 2 Boundary Cases

The three main shapes of Weibull distribution curves can be told apart by observing the convexity and concavity of the left side and the middle of the curve. (On the right side, they are all convex.) The three cases correspond to the shape parameter being (1) less than 1, (2) between 1 and 2, (3) greater than 2. The Exponential (k=1) and Rayleigh (k=2) distributions give the boundary cases separating the three broad classes.

Let’s take a look at the charts.

Weibull distribution curve with shape parameter 0.75. The curve is convex over the entire range when the shape parameter is less than 1.


Weibull distribution curve for the shape parameter 1.5


Weibull distribution with shape parameter 1.25


When the shape parameter is less than 1, the distribution curve (probability density function) is convex over the entire range. This shape parameter range occurs often in IT operations, customer care and other fields with a lot of unplanned work. The lowest value of the shape parameter that I have observed is 0.56. I chose k=0.75 as the representative of this class. The main visual features of this distribution are, besides convexity: the right-shifted average (for example, for k=0.75, the average falls on the 68th percentile) and a wide spread of common-cause variation (for example, for k=0.75, the ratio of the 99th percentile to the median is about 15:1).

When the shape parameter is between 1 and 2, the curve is concave on the left side and through the peak, turning convex on the back slope. This shape parameter range occurs in product development environments. I have observed shape parameters close to 1, close to 2 and everything in between, but parameters between 1 and 1.5 more often than between 1.5 and 2. I chose k=1.5 and k=1.25 as two representatives of this class. The main visual feature of this distribution is the asymmetric “hump”, rising steeply on the left side and sloping gently on the right. The mode (the distribution peak) is left-shifted (as I wrote earlier, it falls on the 18th and 28th percentiles for k=1.25 and k=1.5, respectively). The median is slightly, but appreciably, less than the average (87% of the average for k=1.5). The common-cause variation spread is significant, but narrower than for k<1.
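
These percentile figures follow from the standard Weibull formulas; a quick sketch to verify them:

```python
from math import exp, gamma, log

def mode_percentile(k):
    """Percentile on which the mode (peak) of a Weibull pdf falls, k > 1."""
    mode = ((k - 1) / k) ** (1 / k)  # mode for scale = 1
    return 1 - exp(-(mode ** k))     # CDF evaluated at the mode

def median_over_mean(k):
    """Ratio of the median to the mean; the scale cancels out."""
    return log(2) ** (1 / k) / gamma(1 + 1 / k)

print(round(mode_percentile(1.25), 2))  # ~0.18: the 18th percentile
print(round(mode_percentile(1.5), 2))   # ~0.28: the 28th percentile
print(round(median_over_mean(1.5), 2))  # ~0.87: median is 87% of the mean
```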

Weibull distribution with shape parameter k=3


In software engineering, the adoption of Agile methods may lead to left-shifting of the shape parameter. This corresponds to the team's growing ability to deliver software solutions in smaller batches faster. This was another reason for including k=1.25 as a reference point.

Exponential distribution, also Weibull distribution with shape parameter k=1


When the shape parameter is greater than 2, the curve is convex on the left side, turns concave as it goes up towards the peak, and turns convex again on the back slope. This distribution shape often occurs in phase-gated processes. I have only a few observations of this type of distribution curve, and the greatest value of the shape parameter I have observed so far is 3.22. I chose k=3 as the representative of this class. The main visual feature of this distribution is a much more symmetric peak (although it is still slightly asymmetric). Compared to k<2, the spread of common-cause variation is narrower relative to the average; however, processes with this type of lead time distribution tend to have very long average lead times.

Rayleigh distribution, also Weibull distribution with shape parameter k=2


The Exponential distribution (k=1) provides the boundary between two classes of shapes (k<1 and 1<k<2). It is a very well-studied distribution, thanks to all the research into queueing theory and Markov chains. This distribution has a unique property: its hazard rate, which is the propensity to finish yet-unfinished work, remains constant and independent of time. When the hazard rate decreases with time, we get distributions such as those with k<1; when it increases slowly, we get distributions such as the ones with 1<k<2 shown above.
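
The hazard rate of the Weibull distribution has the closed form h(t) = (k/λ)(t/λ)^(k−1), which makes the three regimes easy to see in code:

```python
def weibull_hazard(t, k, scale=1.0):
    """Hazard rate h(t) = (k/scale) * (t/scale)**(k-1): the propensity
    to finish still-unfinished work at age t."""
    return (k / scale) * (t / scale) ** (k - 1)

# k=1 (Exponential): constant hazard, independent of the item's age.
assert weibull_hazard(0.5, 1.0) == weibull_hazard(5.0, 1.0)
# k<1: hazard decreases with age (old items tend to stay stuck).
assert weibull_hazard(5.0, 0.75) < weibull_hazard(0.5, 0.75)
# 1<k<2: hazard slowly increases with age.
assert weibull_hazard(5.0, 1.5) > weibull_hazard(0.5, 1.5)
```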

Similarly, the Rayleigh distribution (k=2) provides the boundary between the 1<k<2 and k>2 classes.

Matching Your Data Set to Weibull Distribution

The first step is to narrow down the shape parameter range by observing convexity-concavity on the left side and in the centre.

The second step is to compare your lead time distribution histogram to the few reference shapes provided in this post and see whether it is close enough to one of them or somewhere between two. Ratios of various percentiles to the average can help. This will pin the shape parameter within a fairly narrow range (e.g. 1.25 to 1.5). This is not very precise, but accurate enough for many practical applications, such as reasoning about service-level expectations, control limits and feedback loops via the median.

Math Check

Finally, there are two quantitative shortcuts to estimating Weibull shape and scale parameters.

A good approximation of the scale parameter (thanks again to Troy for pointing this out) is the 63rd percentile. Indeed, the value of the Weibull cumulative distribution function where the random variable (lead time) equals the scale parameter λ is independent of the shape parameter and is given by:

F(λ) = 1 − exp(−(λ/λ)^k) = 1 − 1/e ≈ 0.632

The value of the quantile (inverse cumulative distribution) function at this “magic point” is indeed the scale parameter, as shown by:

Q(1 − 1/e) = λ · (−ln(1/e))^(1/k) = λ · 1^(1/k) = λ

The “magic point” has another important property, expressed by the following formula:

Q′(1 − 1/e) = e·λ/k, or equivalently k = e·λ / Q′(1 − 1/e)

If the lead time data set is smooth enough to allow approximating the slope of the quantile function at the 63rd percentile (using, for example, finite differences), then this formula can be used to estimate the shape parameter.
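
As a sketch of how this could work in practice (the function name and the finite-difference step are my own choices, not from the original post), here is an estimator that reads the scale off the 63rd percentile and the shape off the local slope of the empirical quantile function, using k = e·λ / Q′(1 − 1/e):

```python
from math import e
import random

def estimate_weibull(data, dp=0.02):
    """Rough Weibull (shape, scale) estimates via the 63rd-percentile
    shortcut: scale ~ Q(1 - 1/e), shape ~ e * scale / Q'(1 - 1/e),
    with Q' approximated by a central finite difference."""
    xs = sorted(data)
    def q(p):  # empirical quantile with linear interpolation
        i = p * (len(xs) - 1)
        lo = int(i)
        return xs[lo] + (i - lo) * (xs[min(lo + 1, len(xs) - 1)] - xs[lo])
    p_star = 1 - 1 / e
    scale = q(p_star)
    slope = (q(p_star + dp) - q(p_star - dp)) / (2 * dp)
    return e * scale / slope, scale

# Sanity check on synthetic lead times with known parameters.
random.seed(7)
sample = [random.weibullvariate(20, 1.5) for _ in range(5000)]
k_est, scale_est = estimate_weibull(sample)
print(round(k_est, 2), round(scale_est, 1))  # should land near 1.5 and 20
```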


  • Weibull distribution occurs often in lead time data sets in creative industries. Estimating distribution parameters (shape and scale) is desirable so that we can use them in various quantitative models.
  • The linear regression method is still preferable for finding the shape and scale parameters.
  • In situations when the tools are not accessible, the shape and scale parameter can be approximated by comparison with several reference shapes. It is important to differentiate between three shape parameter ranges (k<1, 1<k<2, k>2) by observing convexity and concavity on the left side and the centre of the distribution curve.
  • Relatively low precision (within about two-tenths for the shape parameter) is still sufficient for practical, quick thinking about the service delivery capability described by the lead time data set.
  • There are two math tricks based on the unique properties of Weibull distribution and the 63rd percentile that can be used to roughly estimate the distribution parameters.