The Weibull Training Wheels

My old post, Inside a Lead Time Distribution, remains popular. I wrote it more than five years ago based on what I learned through practice and research 5-7 years ago. People continue to read it, reference it, and ask me questions about interpreting lead time distribution charts. And sometimes they ask me specifically about the Weibull distribution.

The post largely stood the test of time, but some clarifications are in order.

We need to acknowledge the special role the Weibull distribution played in the attempts to understand the nature of lead time (time-in-process) in professional services and knowledge work. A good number of empirical lead time data sets matched Weibull reasonably well. And of course there were data sets that didn’t. A simplifying assumption, that the lead time data from yet another service would likely match Weibull, led us to some discoveries and insights. Then it turned out that we can remove the “Weibull assumption”, but the insights and the practical advice we derive from them are still valid.

Thus Weibull distribution served us as some kind of intellectual crutch or training wheels. As the Kanban method matured through years of worldwide practical application, its guidance related to lead time became proven and could stand on its own. The training wheels became unnecessary.

Weibull helped me understand the practical meaning of statistical hazard functions in knowledge work. Hazard is simply the ratio of two probabilities: the probability that a previously unsolved problem will get solved in the next instant and the probability that the problem will stay unsolved from the beginning until that instant. As a reminder, we’re in the business of solving problems of some type collaboratively and delivering solutions to customers, who want to know the lead time it will take.

Several possibilities:

• If our hazard function is constant, the lead time will have the exponential distribution.
• If our hazard function is decreasing (which can happen due to poor prioritization, implicit classes of service, poorly managed dependencies on unpredictable services — sounds like problems that can actually occur in professional services), the lead time distribution will be sub-exponential. This domain is colloquially known as Extremistan. Nassim Taleb calls this the Lindy effect. A problem that stayed unsolved for so long is likely to stay unsolved even longer. Services with lead time in this domain are wildly unpredictable and likely not fit for their customer’s purpose. The practical advice here is: apply risk reduction and mitigation to “trim the tail” and make the service fitter for purpose.
• If our hazard function is increasing faster than linearly, the lead time distribution will have little variability. All risks in our service are thin-tailed. That’s a signal we’re about to be disrupted. Can you think of something your business should do about it?
• If our hazard function is increasing, but slower than linearly (which can happen thanks to our incorporating customer feedback into our problem-solving activities and thanks to good management of delays and dependencies, keeping them out of Extremistan), then we land in what I called the Domain of Well-Managed Knowledge Work. Geeks can call it Borderline Mediocristan.

Of all distributions in the Weibull family, those with shape parameter 1&lt;k&lt;2 are in this happy well-managed knowledge work middle domain. Weibull k=1.5 provides an example of a distribution from this domain. Weibull with k&lt;1 are in Extremistan, the “fix-this-shit” domain of unpredictable services. Weibull with k&gt;2 are in Mainland Mediocristan, the domain of “you aren’t really doing knowledge work, ripe for disruption.” Weibull k=1 and k=2 mark the boundaries.
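For readers who like to see this in code, here is a minimal sketch of the Weibull hazard function and the three-domain classification described above. The function names are mine, and a scale parameter of 1 is assumed for simplicity:

```python
def weibull_hazard(t, k, lam=1.0):
    """Weibull hazard function: h(t) = (k / lam) * (t / lam) ** (k - 1)."""
    return (k / lam) * (t / lam) ** (k - 1)

def domain(k):
    """Classify a Weibull shape parameter k into the three lead-time domains.

    k = 1 (exponential) and k = 2 (Rayleigh) mark the boundaries.
    """
    if k < 1:
        return "Extremistan: decreasing hazard, fat tail, unpredictable service"
    if k > 2:
        return "Mainland Mediocristan: super-linear hazard, thin tail"
    return "Domain of Well-Managed Knowledge Work"

# Decreasing hazard for k < 1, constant for k = 1, increasing for k > 1:
assert weibull_hazard(2.0, 0.5) < weibull_hazard(1.0, 0.5)
assert weibull_hazard(2.0, 1.0) == weibull_hazard(1.0, 1.0)
assert weibull_hazard(2.0, 1.5) > weibull_hazard(1.0, 1.5)
```

The same classification applies once we drop the Weibull assumption: we only need to know whether the hazard is decreasing, increasing sub-linearly, or increasing faster than linearly.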

Interestingly, we can now remove the “Weibull assumption” (real-world lead time distributions may or may not be Weibull) and the three domains are still there. Any distribution fits in one of the three. And we can now ask, what domain does the signature of your process fit in? And then we say, here’s pragmatic actionable guidance for you, three different ways.

Assuming for the moment (not entirely correctly, as you know now) that a lead time distribution is kind of Weibull allowed us to think in simple categories of shape and scale. The shape (with Weibull this literally means the shape parameter) encapsulates all information about the pattern of risks and delays in the service. The scale parameter tells us what time units sit under the horizontal axis.

Now remove the Weibull assumption. The pattern of risks and sources of delay still determines the shape, only now it’s not some number, but the outline of the lead time distribution chart the manager is bringing to the Service Delivery Review. And you still need to know the time scale under the x-axis: how fast does your service deliver in a typical, medium-happy scenario? Is it hours? days? weeks? months? If your primary source of this understanding is your “empirical” data, looking at the 60th to 70th percentile range is a healthy habit. Mathematicians out there can figure out the upper bound of the confidence interval for the estimation of the median lead time. Given the typical sizes of lead time data sets managers in professional services collect and deal with, this upper bound is likely to fall east of the 60th percentile. This is a pure coincidence with the 63rd-percentile shape-invariant point on the Weibull curve. And this applies to any distribution, Weibull or not.
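For the mathematically inclined, here is a minimal sketch of that distribution-free confidence interval for the median, built on order statistics and the Binomial(n, 0.5) distribution. The function name and the 95% confidence default are my choices for illustration:

```python
import math

def median_ci_upper_rank(n, confidence=0.95):
    """Distribution-free confidence interval for the median of n observations.

    Uses order statistics: the number of observations below the true median
    is Binomial(n, 0.5), regardless of the lead time distribution's shape.
    Returns the 1-based rank of the order statistic that serves as the
    upper confidence bound.
    """
    target = 1 - (1 - confidence) / 2
    cdf = 0.0
    for k in range(n + 1):
        cdf += math.comb(n, k) * 0.5 ** n  # add P(K = k); cdf is now P(K <= k)
        if cdf >= target:
            return k + 1  # upper bound is the (k+1)-th smallest observation
    return n

# For 30 data points the upper bound is the 21st order statistic --
# the 70th percentile of the sample, east of the 60th.
```

For sample sizes of a few dozen to about a hundred, the bound consistently lands between the 60th and 70th percentiles, which is the healthy-habit range mentioned above.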

As you can see, the Weibull distribution was training wheels for us several years ago, but these training wheels have by now fallen off. The Kanban method’s practical advice on managing lead time of professional services stands on its own. We can also understand sources of delay and model them realistically. We can use such models to see the impact of improvement actions. We can then prioritize those actions that make the greatest impact for our customers.


Time to Update the STATIK A3

I created the so-called “STATIK canvas” several years ago. Several people from faraway countries have asked me about it recently. I’m glad they are using this paper or some variations of it.

STATIK stands for the Systems Thinking Approach To Introducing Kanban. You apply this approach to model and understand the business environment in which some service is delivered to customers (which may be a product development or a project workflow, to be clear) and to design and introduce a Kanban system with the intent to stimulate this service to improve.

Mike Burrows, who wrote a good guide to STATIK (Part III of his book, Kanban from the Inside), called it Kanban’s hidden gem in his conference speeches. I’ve always maintained that the STATIK A3 must then be like a cheap plastic container for the gem. It is still useful and it still works, though.

Now that I see people use this paper, I’d like to make some changes to it and to better communicate my intent for it. Which is what this post is about.

First, I want to stop calling this thing a “canvas.” It was a bad choice of a word on my part. The word “canvas” has recently become meaningless as people produced pieces of paper subdivided into areas to capture information and called them canvases, after the famous Business Model Canvas. Well, the Business Model Canvas is a canvas, and it defines the word, in my view. It represents the most closely connected elements, such as customer segments and sales channels to reach them, as regions bordering each other. It uses geometry effectively to visually reinforce the coherence of a business model. Such logic is missing from most other “canvases.” With my STATIK paper it wasn’t even the intention.

(P.S. I’ve revisited my original 2015 post introducing this STATIK paper and found it still largely relevant. Except for the irritating word “canvas.”)

Second, I believe there are better reasons to call this paper an A3. Technically, it fits the A3 paper size (or 11-by-17 inch for North Americans). There are deeper reasons, of course. There was a time in my career when I studied and practiced A3 Thinking seriously. It was part of my search to become a more effective coach and consultant. I got good mentoring in A3 and read the original literature on it. I realized there were different layouts of A3, but they all had two things in common. They were all effective at resisting the natural human tendency to jump to conclusions, such as changing something in the problematic process in the belief that the change would solve the problem. But even if we were to apply some deliberate problem-solving approach, that would still leave us exposed to another tendency: assuming we already know which problems are the most worth solving at the moment and leaning on familiar, perhaps ineffective, ways to frame them. I learned how the good A3 designs were effective at resisting this “jumping to problems” as well.

One natural bias we can observe in the Kanban method is of course how people rush to draw Kanban boards visualizing some process. They rush it without understanding all the design choices they’re making implicitly along the way or why they’re making them. This is the “jumping to conclusions” that STATIK counters and that I definitely want the STATIK A3 to resist visually. The STATIK A3 also counters the “jumping to problems” bias. Two key fields occupy the top inch of the paper. One asks you to define the service you’re trying to improve. If you simply “gathered the team to do STATIK,” it may stop you right there. Another one asks: who is the manager here taking responsibility for delivering services to customers? Is this person in the room?

The rest of the STATIK A3 layout has somewhat odd proportions that reinforce similar messages. For example, the Workflow Mapping section allows capturing multiple variations of workflows for different work item types. When people use the STATIK A3 in various geographical and industry contexts, they might find different proportions and layouts to be more effective for the same intent. The intent is, of course, not to jump to solutions and not to jump to problems. Instead, analyze carefully what is going on in the service process, what the service provider and customers want to improve in it, and how that informs various Kanban system design choices. And only then will some of those choices make it onto the blurry two-dimensional projection we call the visual Kanban board.


Happy 50th Birthday, David J Anderson!

If I’m not mistaken, the community of practitioners, users, coaches and experts of Kanban has a special occasion today. It’s the 50th birthday of the pioneer of our method, David J Anderson.

Among David’s many innovations, there’s one that deserves a special mention on such an occasion. It’s his discovery of the work of Ray Immelman on tribal social behaviour and its implications for leadership in the modern workplace. Immelman’s book, Great Boss, Dead Boss, summarizes his model of tribal behaviour and leadership and gives practical guidance on this subject. This book (ideally) or David’s condensed interpretation of Immelman’s insights given in his book Lessons in Agile Management (at the least) has become required reading for Kanban coaches. This knowledge, fortified by practice, helps our coaches be more effective change agents in many complex situations.

Great Boss, Dead Boss is a business novel. Its protagonist Greg faces a crisis at work and has to learn new skills quickly. Greg finds a mentor named Butch, who guides him to discover many (twenty-two, to be precise) “tribal attributes” — ways to read group behaviour in the workplace. The new skills help Greg steer clear of hidden dangers and magnify the effectiveness of his actions. Like Alex from Goldratt’s The Goal, he achieves something nobody thought possible and turns around his struggling plant.

Near the end of the novel, Greg discovers a twenty-third attribute, one he believes is important and his mentor possesses without realizing it. Greg writes it down:

“Strong leaders have capable mentors whose psychological limits exceed their own.”

I want to use today’s occasion on behalf of all “Gregs”, many men and women of the Kanban community, to give David appreciation for being such a mentor for many of us over many years.

Cheers!

Fitness for Purpose Diary, Part 3: Replace Scale with Taxonomy

I’m continuing to document incremental innovations beyond the net promoter score that occurred in recent years, leading to the current state of fitness-for-purpose analysis techniques. Previously in this series:

3. Apply segmentation

David J Anderson documented the next two steps in his blog post, Fitness for Purpose Score, so I’m linking to it here as I insert these two steps into the longer story of evolution of these techniques and my experience and perspective on them.

The first of these two steps, which is the subject of this post, was a significant departure from the NPS. The 11-point scale was gone, replaced with a taxonomy of six levels. Some text defined each level, thus providing the customer with options to answer the question about their satisfaction with our product or service.

An important episode from my own experience with the contrast between numeric scales and taxonomies occurred several years ago when I became an accredited Kanban trainer (AKT). Early in each class, trainers ask participants to assess their Kanban knowledge and experience. And the way we don’t want to do this is to ask: “Rate yourself on the scale from 0 to 100.” The participant might answer 42 and we’d just wonder what that means. Answers depend too much not on the respondents’ real experience, but on how they calibrated the scale. Imagine a Kanban user with experience only in a certain flavour of proto-Kanban, which is usually listed third when we cover six proto-Kanban patterns in training. (I don’t have to imagine, as I’ve met many such trainees in classes — and the point of the class is to take their game to the next level. The bigger problem than the shallowness of such Kanban implementations is often that they are not informed choices — again, another point of taking training is to learn the options.) But they have lots of experience with it, so they may rate themselves 10 out of 10 or, allowing for some possibility there’s still something for them to learn in the method, 8 or 9. But from the perspective of a more properly calibrated respondent, they’d certainly belong well on the left side of the scale. Their eight is not another participant’s eight. Thus the numeric scale has too much calibration error and fails to be factual from the point of view of the product vendor or service provider (in this case, me as a trainer) conducting the survey.

Fortunately, even before I joined the AKT program, the community of Kanban trainers had already found a solution to this problem. It was a six-level taxonomy, where substantial team-level Kanban experience was defined as level three. (Definitions of the other levels are not the point here, so I’ll skip them. If you’ve taken any certified Kanban training class, you’ve seen them.) I believe this proven solution inspired the six-level taxonomy that appeared in the first Fitness for Purpose survey, which replaced the NPS.

Template

Below is the temporary template consisting of six fitness levels. Temporary, because we would soon adjust its wording in the very next innovation step. The top two levels mean “promoters”: positive, satisfied customers. The level below them is neutral, and the bottom three levels are for customers with various degrees of dissatisfaction.

1. Product/service exceeded customer expectations, delighted them
2. Product/service fully met customer’s expectations
3. Mostly satisfied, but with some minor concerns or reservations
4. Significant concerns or unmet customer needs
5. Significantly dissatisfied customer
6. Nothing useful, complete loss and waste of time

And, of course, don’t forget to ask the customers to provide a narrative (explain why they chose one of these six answers), and segment the survey by components of the product or service offering.
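To make the tallying concrete, here is an illustrative sketch. The satisfied/neutral/dissatisfied grouping of levels follows the post; the NPS-style score is my own assumption, added only to show one way of summarizing the responses:

```python
def f4p_tally(responses):
    """responses: iterable of taxonomy levels, 1 (delighted) to 6 (total loss).

    Groups levels 1-2 as satisfied, 3 as neutral, 4-6 as dissatisfied.
    """
    satisfied = sum(1 for r in responses if r <= 2)
    neutral = sum(1 for r in responses if r == 3)
    dissatisfied = sum(1 for r in responses if r >= 4)
    return satisfied, neutral, dissatisfied

def f4p_score(responses):
    """An NPS-like summary (illustrative assumption, not part of the post):
    share of satisfied minus share of dissatisfied customers."""
    satisfied, _, dissatisfied = f4p_tally(responses)
    return (satisfied - dissatisfied) / len(responses)

# One delighted, one fully satisfied, one neutral, one with significant concerns:
assert f4p_tally([1, 2, 3, 4]) == (2, 1, 1)
```

Segmenting these tallies by product or service component works exactly as with the earlier NPS surveys.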

Next

The next innovation occurred almost immediately after this one, but I’ll still save it for a separate blog post. The problem this innovation was to solve was: when customers choose one of the taxonomy levels, what question are they really answering and what criteria do they apply to judge our product or service? We still had a bit further to go to make survey responses more objective and factual.

Fitness-for-Purpose Diary, Part 2: Segmentation

I’m continuing my attempt to document, in a series of blog posts, the incremental improvements leading to today’s state of fitness-for-purpose analysis techniques. Previously in this series:

Segmentation was the next logical step. If I could subdivide my product into several modules, I would ask customers to fill out the brief two-question NPS+narrative survey for each module. This wasn’t a burden for customers, as the number of modules was always small and the survey itself was extremely brief and simple.

The results were useful: I saw which part of my service earned a high NPS, such as +0.7, and which got a disappointing -0.3, instead of only seeing a mediocre +0.2 from the overall survey. This information was immediately actionable, as I knew where to focus my improvement efforts.
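A sketch of this per-module computation might look as follows. The module names and scores are made up for illustration; the classification thresholds are the standard NPS ones (9-10 promoters, 0-6 detractors):

```python
from collections import defaultdict

def nps(scores):
    """NPS on the -1..+1 scale: share of promoters (9-10) minus detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores)

def nps_by_module(responses):
    """responses: iterable of (module, score) pairs from the segmented survey."""
    by_module = defaultdict(list)
    for module, score in responses:
        by_module[module].append(score)
    return {module: nps(scores) for module, scores in by_module.items()}

# Hypothetical survey data for a product with two modules:
responses = [("reporting", 10), ("reporting", 9), ("reporting", 8),
             ("billing", 3), ("billing", 6), ("billing", 8)]
per_module = nps_by_module(responses)
# "reporting" scores positive, "billing" negative: focus improvement on billing.
```

The overall NPS of the same data would average away exactly the signal that makes the segmented view actionable.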

If any of this sounds very simple and obvious, that’s because it is! Yet, when I am a customer, I still see, even several years later, simple, no-segmentation, one-or-two-question NPS surveys or page after page of questions about parts of my experience that have nothing to do with my satisfaction. Maybe it is not so simple after all.

Template

Question 1. How likely is it that you will recommend this firm (service, product) to a colleague or friend? Scale of answers: 0 (very unlikely) to 10 (very likely).

(blank space for the customer’s story)

Repeat this for every part or module of the product or service offering.

Next

We can of course apply segmentation not only to our product or service offerings, but also to customer populations. This is not as easy as it sounds. If we use customer narratives as input into our segmentation process, we run the risk of our conclusions becoming circular logic: customers who tell this type of stories tend to tell this type of stories.

Using external (to our NPS survey) sources of information isn’t without problems either. Canadian organizational improvement coach Bernadette Dario provided a counter-example, which David J Anderson developed into a fictional, but realistic character, who has become known in the Enterprise Services Planning community as Neeta. Neeta is a modern professional woman and mother of elementary-school-aged children. She belongs to only one demographic segment, no matter how we define those. She is only one persona if we were to use personas as our customer research tool. Yet Neeta’s actual consumer behaviour reveals multiple personalities even when buying the same product. We want to understand these personalities, but neither the traditional demographic segmentation nor the more modern “agile” personas technique offer us a path to get there.

Resolving this challenge and further innovations were still ahead.


Fitness-for-Purpose Diary, Part 1: Starting With (and moving away from) NPS

I’d like to document the evolution of concepts and techniques of the fitness-for-purpose analysis. Fitness for purpose has been a hot topic in the Kanban community in the last several years, particularly at the Leadership Retreat level. It is also a key and still evolving concept in Enterprise Services Planning. This story will take a series of blog posts, perhaps one post for each incremental innovation.

With fitness-for-purpose analysis, we try to get our customers’ stories, understand why they choose to buy or not to buy our products and services, how fit the customers find our products for their (the customers’) purposes, and how we can make our products fitter.

Why do we want our products fitter? Because that would mean happiness for the customers, success for the business owners, and pride of workmanship for the workers making and delivering the products to customers.

The Starting Point: Net Promoter Score (NPS)

The journey began several years ago when many discovered the Net Promoter Score (NPS) and started applying it in their business. The credit for NPS belongs to Fred Reichheld, a management consultant who developed it after many years of investigation of customer loyalty and its business impact. Management guru Steve Denning further popularized NPS in his speeches, columns, and his book The Leader’s Guide to Radical Management: Reinventing the Workplace for the 21st Century.

What is NPS? Every time a company delivers some product or service to its customer, it asks a simple question (seemingly simple, but it took Reichheld years of experimentation to come up with its precise wording): How likely is it that you will recommend this firm or service or product to a colleague or friend? Many of us as consumers have seen this question in recent years, especially after making an online purchase or a travel reservation.

The range of possible answers is an eleven-point scale, from 0 (not at all likely) to 10 (extremely likely). Customers answering 9 or 10 are promoters; 7 or 8, neutral or passively satisfied; 6 or less, detractors. Now we know what percentage of our customers are promoters and what percentage are detractors. The difference between the two is the NPS. It’s a number between -1 and +1, +1 of course meaning our product delights all our customers without exception and -1 meaning the opposite. Many NPS surveys produce scores above zero, but notably short of 1.
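The computation is simple enough to sketch in a few lines (the -1..+1 scale matches this post’s convention; the function name is mine):

```python
def nps(scores):
    """Net Promoter Score on the -1..+1 scale used in this post.

    Promoters answer 9 or 10, passives 7 or 8, detractors 0 through 6.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores)

# Five promoters, three passives, two detractors out of ten responses:
assert nps([10, 10, 9, 9, 9, 8, 7, 7, 6, 3]) == 0.3
```

Note that the passives drop out of the numerator entirely; only the balance of promoters and detractors moves the score.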

What’s the Problem with NPS?

Understanding of NPS’ limitations, criticism of it, and the wish for something better appeared eventually. Said a bank executive from a Central European country (which should remain unnamed): “We conduct NPS surveys regularly. The score goes up and down, but it doesn’t let us understand what we did to move it or what we should do to make it go up or stay at a higher level. It is not actionable!” David J Anderson captured this bank’s story.

NPS was probably never intended to meet this executive’s needs. But there was, no doubt, desire for a better technique.

First Small Step: Add a Narrative

The first incremental improvement was so obvious that many people discovered it and started using it independently. Simply add a second question to the survey: “Why did you choose this particular answer (on the scale from 0 to 10)?” Give the customer some space to answer. The customer would get a chance to explain their choice and tell their story, perhaps unexpected and insightful from the company’s point of view.

My Experiences

I’d like to share two experiences with this simple NPS+narrative format, one when I was the customer and one when I was the service provider.

As a customer, I remember filling out an NPS survey in this format after taking part in the week-long Problem Solving Leadership (PSL) workshop taught by Esther Derby and Jerry Weinberg. I noticed during the workshop that about half of its participants were having a life-changing experience. (I know several people who would describe their PSL experience this way.) I belonged, however, to another subgroup, to whom the workshop was simply very useful. So I scored it a 9 and used the provided space to explain my choice briefly.

On the other side, I once taught a one-day workshop on process metrics. The participants seemed quite engaged, but my NPS survey produced a disappointing result: -0.3. When I read the participants’ stories, they all went basically like this: “Alexei, I took this workshop because I wanted guidance on the topic and have a high comfort level with math. And the workshop was very useful! But most of my colleagues aren’t like me. Out of all people in my company, I should be the one taking this workshop, and not them. So, while I personally found the workshop very useful, I wouldn’t recommend it to them.”

I was relieved to read such comments and I learned several things from them. First, I was lucky to attract the right type of customer to my offering, and, perhaps more importantly, avoid attracting the wrong type of customer. Second, I needed better techniques to understand both different components of my offerings and different customer segments.

Thus the next improvement could be summarized in one word: segmentation. But that’s a story for another blog post.

Summary

• This post begins to document the evolution of fitness-for-purpose (F4P) analysis techniques leading up to today’s F4P cards and box scores.
• NPS was the starting point of this journey.
• The first incremental improvement was obvious: give customers space to tell a short story.

Knowledge Discovery Process Revisited

It occurred to me during a recent training class to make a small, but important change to my Knowledge Discovery Process diagram.

Please refer to this old, popular post from two and a half years ago, Understanding Your Process as Collaborative Knowledge Discovery, for the background and summary of the knowledge discovery process concept we use in the workflow visualization practice of the Kanban method.

As a very brief summary of this concept, we try to visualize the process by paying particular attention to information arrival and discovery. The process begins from a relatively poor state of knowledge, which becomes richer as we approach the downstream delivery point (hence the German translation: Bereicherungsprozess). We see this accumulation of knowledge as a series of dominant activities punctuated by shifts in activity or collaboration pattern.

What’s the problem? The horizontal axis label was “time” in my original knowledge discovery process diagram because I couldn’t think of anything better. (Actually, the “original” drawing improved on and replaced the S-curve visualization that I had used for two and a half years before that while searching for better ways to communicate the concept visually.) When teaching the concept, I always took care to explain that the horizontal axis is not linear, that is, equal horizontal segments on the chart don’t mean equal time intervals. There was also no assumption that the activities take equal, comparable or deterministic times. I was also aware that using calendar time on the horizontal axis was a great way to hide queues in the process (this is the fatal flaw of Gantt charts – one of the insights from Don Reinertsen’s Lean Product Development 2nd Generation workshop).

Then it occurred to me during a recent training class that I can simply relabel the horizontal axis as “progress.” This training class followed my reading of Walter Isaacson’s biography of Benjamin Franklin where the title of one of the chapters was “A Pilgrim’s Progress.”

This solves the problem. Points on the left still occur in time before the points on the right, but the chart doesn’t say anything else on how the progress relates to the time scale. No hint of linearity or determinism.

The updated knowledge discovery process diagram now looks like this.


Kanban Is Not a Card, It’s a Space

A participant of my recent training class told me afterwards of an insight most important to him. “Kanban is not a card, it’s a space.”

I received this feedback during a long streak without any blog posts. So I thought I could log it and share it in a short blog post. If it was useful to one participant, it might be useful to more people in the audience. And not every post must be 500-1000 words long, as most of my more popular posts turned out to be.

Let’s look at the following Kanban board (看板), representing kanban with slots.

The “Implement” column has two available slots. They are kanban (かんばん) — the permission-giving signals allowing two work items to enter this space, representing the available capacity in this workflow activity. Looking a bit to the left, at “Create-Done”, we find three work items to choose from: D, A, and V. Now we have to make a pull decision: choose two out of these three.

Let’s now look at another board, visualizing the same Kanban system in the same state, but using a different visualization style.

All kanban here are virtual. We figure out their quantity by subtracting the number of cards from the work-in-process limit above each column. Thus the “Implement” column has capacity-space for the same two work items.

Note that the work items, the demand on our delivery process, are not all equal. They often come from different customers and market segments, with different risks and customer expectations attached. I did the minimum for illustration purposes and used two colours to visualize such distinctions.

Our capacity is also not the same. We may have some degree of specialization or decide to allocate capacity bands to serving certain sources of demand. Let’s look at the following board, the same as the first one, except we decided to use 20% of our implementation capacity (one work item in five) to serve the work item type visualized in orange.

One of our pull decisions follows from our explicit capacity allocation policy: we pull work item V. We have options A and D for another pull decision.
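The pull mechanics of this small episode can be sketched in code. The WIP limit of 5, the card names, and the simple one-item-reserved policy are assumptions for illustration; the post doesn’t specify the exact numbers:

```python
def free_kanban(wip_limit, cards):
    """Virtual kanban: permission signals = WIP limit minus cards in the column."""
    return max(wip_limit - len(cards), 0)

# Assumed board state: "Implement" limited to 5 and holding 3 cards,
# leaving two kanban -- two spaces into which work may be pulled.
signals = free_kanban(5, ["card1", "card2", "card3"])
assert signals == 2

# Options waiting in "Create-Done"; one of them (V) is the orange work item
# type that our explicit 20% capacity-allocation policy says to pull first.
options = [("D", "blue"), ("A", "blue"), ("V", "orange")]
policy_pull = [name for name, colour in options if colour == "orange"][:1]
remaining_signals = signals - len(policy_pull)  # one free pull decision left
assert policy_pull == ["V"] and remaining_signals == 1
```

Whether the kanban are drawn as empty slots or derived from a WIP limit, the arithmetic of available spaces is the same.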

I’m really zooming in here on a very small episode of Kanban training and leaving a lot out of focus: what other visualization styles exist, what are the trade-offs of using each, and a whole week (minus five minutes) of other material. This is because I want to focus on one takeaway.

Summary

The cards on the Kanban board are not kanban! The cards represent deliverables, customer requests and needs. The kanban are permission-giving signals, representing the currently available capacity. They can be visualized very obviously with slots or as virtual kanban. In short, they are not cards, they are the spaces.


Forecasting Cards

Those who met me at conferences in the last two years (some clients, too) have probably seen these Forecasting Cards. They’re easy to recognize by their rainbow colours. I created them to start conversations about lead time and to communicate the related ideas and findings (such as from this popular post: Inside a Lead Time Distribution).

These cards and the conversations we had while looking at them motivated some people to start measuring their time to market and to understand the probabilistic nature of delivery processes they’re trying to manage in their professional service enterprise.

I updated the cards several times and added some explanations based on the feedback, but I haven’t made such changes recently. So, I’m going to share the current version of the Forecasting Cards as an easy download. This will likely be their last revision.

In the meantime, I’ve started working on the next set of cards, which will communicate some insights about risk taking and fat-tailed distributions.

Kanban Guide Is Here – How Not To Read It

An important event in the Kanban community occurred about two months ago. The Essential Kanban – Condensed Guide was released. It’s a compact 20-page book, authored by Andy Carmichael and David J. Anderson. As you can guess, this book is very much up-to-date on the most recent developments in Kanbanland, but is at the same time very short and accessible.

Andy — I’ve known him for a year in person and somewhat longer virtually — has a talent for word golf, putting important concepts in few words. At the Lean Kanban UK 2013 conference, he came up with the shortest definition of Kanban, fitting in only 140 characters.

The Condensed Guide went through a review process by progressively widening circles of reviewers. Starting with a relatively small number of Kanban coaches and trainers, the authors shared the refined versions of the Guide with broader circles, then with Lean Kanban conference attendees, and then with the public. Being one of the reviewers, I know Andy put a lot of work into the Guide. I hope the Brickell Key award committee recognizes his contribution with a nomination this year (here’s the link where you can do what I did about that).

Now I’d like to talk about how not to read this Guide.

I’ve heard the following phrases (or variations of them) quite often in exchanges with various Agile coaches. I should qualify that they came not from Agile beginners, but from people with lots of experience, bona fide peer-reviewed status within the Agile community, and some pricing power when it comes to charging clients for Agile advice. Let’s listen:

• Agile Manifesto doesn’t say that…
• Scrum Guide says…
• According to the Agile Manifesto…
• Where in the Scrum Guide did you find that?

While I suppose some Kanban users will make similar references to the condensed Kanban guide, I expect experts to do so rarely.

I was asked this week to give advice on some metrics-related material. My response was that the material was sound, but I pointed out it was a mismatch for the organization’s low maturity. However, in a different organizational unit of the same client company, I made the opposite recommendation.

I couldn’t derive these two diametrically opposed pieces of advice from published values and principles. I didn’t do it by gut feel. I could have just said, let’s do it this way and then inspect and adapt, but that would’ve been nothing more than a case of intellectual laziness. I suppose there could have been some practice or algorithm to lead me to these two conclusions using different inputs. But I’d prefer having no such practice. Otherwise, we’d have to teach people to do things at the practice level. We’d get into arguments about the right way to do the practice. This would lead to dogma. Some of us would become purists, while others would practice the practice-but.

Instead, I simply made sense of a large number of available stories, of success and failure, my own and told by my peers (we meet often at conferences and leadership retreats, co-train, collaborate on engagements and keep up frequent correspondence). Even though these real-world stories are messy, gathering them, making sense of them and deriving useful heuristics are all teachable skills.

Conclusion

The Kanban community values observing what people actually do, how they act, what we can reasonably infer about their thinking. Kanban experts value paying attention to storytellers, making sense of their stories, and continuously questioning contextual appropriateness of their own actions and recommendations. We value it more than the printed word. We understand the knowledge worth having — and from the clients’ point of view, worth paying for — is and will be messy and conflicted.

In the four situations above, Kanban coaches didn’t assign much value to what the four respective Agile coaches said or to the printed word of the Agile Manifesto or the Scrum Guide. Instead, we simply paid attention to what the Agile coach actually recommended to their client in a given situation. Because we’ve come to value one thing more than the other.

The Essential Kanban – Condensed Guide will no doubt educate many current and future Kanban practitioners. But I don’t expect experts or proficient practitioners of the method to do the following very often: open the Guide on some page and say, here it says so.