Fitness-for-Purpose Diary, Part 3: Replace Scale with Taxonomy

I’m continuing to document the incremental innovations beyond the Net Promoter Score that have occurred in recent years, leading to the current state of fitness-for-purpose analysis techniques. Previously in this series:

  1. Start with Net Promoter Score
  2. Add narratives
  3. Apply segmentation

David J Anderson documented the next two steps in his blog post, Fitness for Purpose Score, so I’m linking to it here as I insert these two steps into the longer story of the evolution of these techniques, along with my experience and perspective on them.

The first of these two steps, which is the subject of this post, was a significant departure from the NPS. The 11-point scale was gone, replaced with a taxonomy of six levels. Some text defined each level, thus providing the customer with options to answer the question about their satisfaction with our product or service.

An important episode from my own experience with the contrast between numeric scales and taxonomies occurred several years ago, when I became an accredited Kanban trainer (AKT). Early in each class, trainers ask participants to assess their Kanban knowledge and experience. The way we don’t want to do this is to ask: “Rate yourself on a scale from 0 to 100.” The participant might answer 42 and we’d just wonder what that means. Answers may depend not so much on the customers’ real input as on how they calibrated the scale. Imagine a Kanban user with experience only in a certain flavour of proto-Kanban, which is usually listed third when we cover six proto-Kanban patterns in training. (I don’t have to imagine, as I’ve met many such trainees in classes, and the point of the class is to take their game to the next level. A bigger problem than the shallowness of such Kanban implementations is often that they are not informed choices; again, another reason to take training is to learn the options.) But they have lots of experience with it, so they may rate themselves 10 out of 10 or, allowing for some possibility that there’s still something for them to learn in the method, 8 or 9. From the perspective of a more properly calibrated respondent, however, they’d certainly belong well on the left side of the scale. Their eight is not another participant’s eight. Thus the numeric scale carries too much calibration error and fails to be factual from the point of view of the product vendor or service provider (in this case, me as a trainer) conducting the survey.

Fortunately, even before I joined the AKT program, the community of Kanban trainers had long found a solution to this problem. It was a six-level taxonomy, where substantial team-level Kanban experience was defined as level three. (Definitions of other levels are not the point here, so I’ll skip them. If you’ve taken any certified Kanban training class, you’ve seen them.) I believe this proven solution inspired the six-level taxonomy that appeared in the first Fitness for Purpose survey, which replaced the NPS.


Below is the temporary template consisting of six fitness levels. Temporary, because we would soon adjust its wording in the very next innovation step. The top two levels mean “promoters”: positive, satisfied customers. The level below them is neutral, and the bottom three levels are for dissatisfied customers with varying degrees of dissatisfaction.

  1. Product/service exceeded customer expectations, delighted them
  2. Product/service fully met customer’s expectations
  3. Mostly satisfied, but with some minor concerns or reservations
  4. Significant concerns or unmet customer needs
  5. Significantly dissatisfied customer
  6. Nothing useful, complete loss and waste of time

And, of course, don’t forget to ask the customers to provide a narrative (explain why they chose one of these six answers), and segment the survey by components of the product or service offering.
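The grouping of the six levels into promoters, neutrals, and detractors can be sketched in a few lines of code. This is my own illustration, not part of the original survey; the function name and the sample answers are made-up assumptions.

```python
# Map the six taxonomy levels to the promoter/neutral/detractor grouping
# described above: levels 1-2 positive, 3 neutral, 4-6 negative.
SENTIMENT = {1: "promoter", 2: "promoter",
             3: "neutral",
             4: "detractor", 5: "detractor", 6: "detractor"}

def summarize(answers):
    """Count survey answers (taxonomy levels 1-6) by sentiment group."""
    counts = {"promoter": 0, "neutral": 0, "detractor": 0}
    for level in answers:
        counts[SENTIMENT[level]] += 1
    return counts

print(summarize([1, 2, 2, 3, 4, 6]))  # {'promoter': 3, 'neutral': 1, 'detractor': 2}
```

The point of the taxonomy is that each answer is a defined statement rather than a self-calibrated number, so this grouping is unambiguous.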


The next innovation occurred almost immediately after this one, but I’ll still save it for a separate blog post. The problem this innovation was to solve: when customers choose one of the taxonomy levels, what question are they really answering, and what criteria do they apply to judge our product or service? We still had a bit further to go to make survey responses more objective and factual.

Posted in Enterprise Services Planning

Fitness-for-Purpose Diary, Part 2: Segmentation

I’m continuing my attempt to document, in a series of blog posts, the incremental improvements leading to today’s state of fitness-for-purpose analysis techniques. Previously in this series:

  1. Start with Net Promoter Score (NPS)
  2. Add customer narratives

Segmentation was the next logical step. If I could subdivide my product into several modules, I would ask customers to fill out the brief two-question NPS+narrative survey for each module. This wasn’t a burden for customers, as the number of modules was always small and the survey itself was extremely brief and simple.

The results were useful: I could see which part of my service earned a high NPS, such as +0.7, and which got a disappointing -0.3, instead of only seeing a mediocre +0.2 from the overall survey. This information was immediately actionable, as I knew where to focus my improvement efforts.
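The per-module arithmetic can be sketched as follows. This is a hedged illustration: the module names, sample scores, and function names are my own assumptions, and the NPS here is on the -1..+1 scale used throughout this series.

```python
from collections import defaultdict

def nps(scores):
    """NPS on the -1..+1 scale: fraction of promoters (9-10)
    minus fraction of detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores)

def segmented_nps(responses):
    """Compute a separate NPS for each module of the offering.

    `responses` is a list of (module, score) pairs, one per filled-out
    per-module survey."""
    by_module = defaultdict(list)
    for module, score in responses:
        by_module[module].append(score)
    return {module: nps(scores) for module, scores in by_module.items()}

# Illustrative data: one module scores well, the other poorly,
# while the blended overall score would look merely mediocre.
responses = [("reporting", 10), ("reporting", 9), ("reporting", 7),
             ("search", 4), ("search", 9), ("search", 2)]
print(segmented_nps(responses))  # reporting ≈ +0.67, search ≈ -0.33
```

The blended score over all six responses here would be about +0.17, which hides exactly the contrast the segmented view reveals.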

If any of this sounds very simple and obvious, that’s because it is! Yet, when I am a customer, I still see, even several years later, simple, no-segmentation, one-or-two-question NPS surveys, or page after page of questions about parts of my experience that have nothing to do with my satisfaction. Maybe it is not so simple after all.


Question 1. How likely is it that you will recommend this firm (service, product) to a colleague or friend? Scale of answers: 0 (very unlikely) to 10 (very likely).

Question 2. Why did you choose your answer to Question 1?

(blank space for the customer’s story)

Repeat this for every part or module of the product or service offering.


We can of course apply segmentation not only to our product or service offerings, but also to customer populations. This is not as easy as it sounds. If we use customer narratives as input into our segmentation process, we run the risk of our conclusions becoming circular: customers who tell this type of story tend to tell this type of story.

Using external (to our NPS survey) sources of information isn’t without problems either. Canadian organizational improvement coach Bernadette Dario provided a counter-example, which David J Anderson developed into a fictional, but realistic character, who has become known in the Enterprise Services Planning community as Neeta. Neeta is a modern professional woman and mother of elementary-school-aged children. She belongs to only one demographic segment, no matter how we define those. She is only one persona if we were to use personas as our customer research tool. Yet Neeta’s actual consumer behaviour reveals multiple personalities even when buying the same product. We want to understand these personalities, but neither the traditional demographic segmentation nor the more modern “agile” personas technique offer us a path to get there.

Resolving this challenge and further innovations were still ahead.

Posted in Enterprise Services Planning

Fitness-for-Purpose Diary, Part 1: Starting With (and moving away from) NPS

I’d like to document the evolution of concepts and techniques of the fitness-for-purpose analysis. Fitness for purpose has been a hot topic in the Kanban community in the last several years, particularly at the Leadership Retreat level. It is also a key and still evolving concept in Enterprise Services Planning. This story will take a series of blog posts, perhaps one post for each incremental innovation.

With fitness-for-purpose analysis, we try to get our customers’ stories, understand why they choose to buy or not to buy our products and services, how fit the customers find our products for their (the customers’) purposes, and how we can make our products fitter.

Why do we want our products fitter? Because that would mean happiness for the customers, success for the business owners, and pride of workmanship for the workers making and delivering the products to customers.

The Starting Point: Net Promoter Score (NPS)

The journey began several years ago when many discovered the Net Promoter Score (NPS) and started applying it in their business. The credit for NPS belongs to Fred Reichheld, a management consultant who developed it after many years of investigation of customer loyalty and its business impact. Management guru Steve Denning further popularized NPS in his speeches, columns, and his book The Leader’s Guide to Radical Management: Reinventing the Workplace for the 21st Century.

What is NPS? Every time a company delivers some product or service to its customer, it asks a simple question (seemingly simple, but it took Reichheld years of experimentation to come up with its precise wording): How likely is it that you will recommend this firm or service or product to a colleague or friend? Many of us as consumers have seen this question in recent years, especially after making an online purchase or a travel reservation.

The range of possible answers is an eleven-point scale, from 0 (not at all likely) to 10 (extremely likely). Customers answering 9 or 10 are promoters; 7 or 8, neutral or passively satisfied; 6 or lower, detractors. Now we know what percentage of our customers are promoters and what percentage are detractors. The difference between the two is the NPS. It’s a number between -1 and +1: +1 means our product delights all our customers without exception, and -1 means the opposite. Many NPS surveys produce scores above zero, but notably short of 1.
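As a rough sketch, the score described above can be computed like this. The function name and the sample responses are illustrative assumptions; the scale is the -1..+1 fraction used in this post.

```python
def net_promoter_score(scores):
    """Fraction of promoters (answers of 9 or 10) minus fraction of
    detractors (answers of 6 or lower), over all responses."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores)

# Ten illustrative answers: 4 promoters, 3 passives, 3 detractors.
responses = [10, 9, 8, 7, 6, 3, 9, 10, 2, 8]
print(net_promoter_score(responses))  # (4 - 3) / 10 = 0.1
```

Note that the passives (7s and 8s) count in the denominator but in neither group, which is why a survey full of merely satisfied customers still yields a score near zero.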

What’s the Problem with NPS?

Understanding of NPS’s limitations, criticism of it, and the wish for something better appeared eventually. A bank executive from a Central European country (which shall remain unnamed) said: “We conduct NPS surveys regularly. The score goes up and down, but it doesn’t let us understand what we did to move it or what we should do to make it go up or stay at a higher level. It is not actionable!” David J Anderson captured this bank’s story.

NPS was probably never intended to meet this executive’s needs. But there was, no doubt, desire for a better technique.

First Small Step: Add a Narrative

The first incremental improvement was so obvious that many discovered it and started using it independently. Simply add a second question to the survey: “Why did you choose this particular answer (on the scale from 0 to 10)?” and give the customer some space to explain their choice and tell their story, perhaps unexpected and insightful from the company’s point of view.

My Experiences

I’d like to share two experiences with this simple NPS+narrative format, one when I was the customer and one when I was the service provider.

As a customer, I remember filling out an NPS survey in this format after taking part in the week-long Problem Solving Leadership (PSL) workshop taught by Esther Derby and Jerry Weinberg. I noticed during the workshop that about half of its participants were having a life-changing experience. (I know several people who would describe their PSL experience this way.) I, however, belonged to another subgroup, to whom the workshop was simply very useful. So I scored it a 9 and used the provided space to explain my choice briefly.

On the other side, I once taught a one-day workshop on process metrics. The participants seemed quite engaged, but my NPS survey produced a disappointing result: -0.3. When I read the participants’ stories, they all went basically like this: Alexei, I took this workshop because I wanted guidance on the topic and I have a high comfort level with math. And the workshop was very useful! But most of my colleagues aren’t like me. Out of all the people in my company, I should be the one taking this workshop, not them. So, while I personally found the workshop very useful, I wouldn’t recommend it to them.

I was relieved to read such comments and I learned several things from them. First, I was lucky to attract the right type of customer to my offering, and, perhaps more importantly, avoid attracting the wrong type of customer. Second, I needed better techniques to understand both different components of my offerings and different customer segments.

Thus the next improvement could be summarized in one word: segmentation. But that’s a story for another blog post.


  • This post begins to document the evolution of fitness-for-purpose (F4P) analysis techniques leading up to today’s F4P cards and box scores.
  • NPS was the starting point of this journey.
  • The first incremental improvement was obvious: give customers space to tell a short story.
Posted in Enterprise Services Planning

Knowledge Discovery Process Revisited

It occurred to me during a recent training class to make a small, but important change to my Knowledge Discovery Process diagram.

Please refer to this old, popular post from two and a half years ago – Understanding Your Process as Collaborative Knowledge Discovery — for the background and summary of the knowledge discovery process concept we use in the workflow visualization practice of the Kanban method.

As a very brief summary of this concept, we try to visualize the process by paying particular attention to information arrival and discovery. The process begins in a relatively poor state of knowledge, which becomes enriched as we approach the downstream delivery point (hence the German translation: Bereicherungsprozess). We see this accumulation of knowledge as a series of dominant activities punctuated by shifts in activity or collaboration pattern.

What’s the problem? The horizontal axis label was “time” in my original knowledge discovery process diagram because I couldn’t think of anything better. (Actually, the “original” drawing improved and replaced the S-curve visualization that I used for two and a half years before that while searching for better ways to communicate the concept visually.) When teaching the concept, I always took care to explain that the horizontal axis is not linear, that is, equal horizontal segments on the chart don’t mean equal time intervals. There was also no assumption that the activities take equal, comparable or deterministic times. I was also aware that using calendar time on the horizontal axis was a great way to hide queues in the process (this is the fatal flaw of Gantt charts – one of the insights from Don Reinertsen’s Lean Product Development 2nd Generation workshop).

Then it occurred to me during a recent training class that I can simply relabel the horizontal axis as “progress.” This training class followed my reading of Walter Isaacson’s biography of Benjamin Franklin where the title of one of the chapters was “A Pilgrim’s Progress.”

This solves the problem. Points on the left still occur in time before the points on the right, but the chart doesn’t say anything else on how the progress relates to the time scale. No hint of linearity or determinism.

The updated knowledge discovery process diagram now looks like this.

Knowledge discovery process diagram circa 2016

Posted in coaching, Kanban, training

Kanban Is Not a Card, It’s a Space

A participant in my recent training class told me afterwards about the insight most important to him: “Kanban is not a card, it’s a space.”

I received this feedback during a long streak without any blog posts. So I thought I could log it and share it in a short blog post. If it was useful to one participant, it might be useful to more people in the audience. And not every post must be 500-1000 words long, as most of my more popular posts turned out to be.

Let’s look at the following Kanban board (看板), representing kanban with slots.

Kanban board where kanban are represented with slots. The "Implement" column has two slots available.

The “Implement” column has two available slots. They are kanban (かんばん) — the permission-giving signals allowing two work items to enter this space, representing the available capacity in this workflow activity. Looking a bit to the left, “Create-Done”, we find three work items to choose from: D, A, and V. Now we have to make a pull decision: choose two out of these three.

Let’s now look at another board, visualizing the same Kanban system in the same state, but using a different visualization style.

Kanban board with virtual kanban, visualizing the same Kanban system

All kanban here are virtual. We figure out their quantity by subtracting the number of cards from the work-in-process limit above each column. Thus the “Implement” column has capacity-space for the same two work items.
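The subtraction described above is trivial, but worth writing down. A minimal sketch, with illustrative column names and WIP limits of my own choosing:

```python
def available_kanban(wip_limit, cards_in_column):
    """Number of virtual kanban (permission-giving signals) in a column:
    the free capacity left under the work-in-process limit."""
    return max(0, wip_limit - cards_in_column)

# Illustrative board state: the "Implement" column has a WIP limit of 4
# and currently holds two cards, leaving two kanban, i.e. two spaces.
print(available_kanban(4, 2))  # 2
```

Whether the board shows these signals as explicit empty slots or leaves them virtual, the quantity is the same; only the visualization style differs.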

Note that the work items or the demand on our delivery process are not equal. They often come from different customers and market segments, with different risks and customer expectations attached. I did the minimum for illustration purposes and used two colours to visualize such distinctions.

Our capacity is also not the same. We may have some degree of specialization or decide to allocate capacity bands to serving certain sources of demand. Let’s look at the following board, the same as the first one, except we decided to use 20% of our implementation capacity (one work item in five) to serve the work item type visualized in orange.

Kanban board with slots. Pull decisions may be different than in the previous example due to a capacity allocation policy.

One of our pull decisions follows from our explicit capacity allocation policy: we pull work item V. We have options A and D for another pull decision.
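One way to sketch such a pull decision in code, under the 20% allocation described above, follows. The data shapes, names, and the tie-breaking order among remaining candidates are my own assumptions; a real pull decision would also weigh risks and customer expectations, not just list order.

```python
def pull(candidates, free_slots, reserved_orange_slots):
    """Pick work items for the free slots, honouring a capacity
    allocation policy that reserves some slots for 'orange' items."""
    pulled = []
    orange = [c for c in candidates if c["type"] == "orange"]
    others = [c for c in candidates if c["type"] != "orange"]
    # First, fill reserved capacity with orange items, if any are waiting.
    while reserved_orange_slots > 0 and orange and len(pulled) < free_slots:
        pulled.append(orange.pop(0))
        reserved_orange_slots -= 1
    # Then fill the remaining slots from all remaining candidates.
    remaining = others + orange
    while len(pulled) < free_slots and remaining:
        pulled.append(remaining.pop(0))
    return [c["name"] for c in pulled]

# The board state from the example: items D, A (blue) and V (orange)
# in "Create-Done", two free slots in "Implement", one reserved for orange.
candidates = [{"name": "D", "type": "blue"},
              {"name": "A", "type": "blue"},
              {"name": "V", "type": "orange"}]
print(pull(candidates, free_slots=2, reserved_orange_slots=1))  # ['V', 'D']
```

The allocation policy forces V into one slot; the choice between A and D for the other slot is exactly the remaining pull decision the post describes.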

I’m really zooming in here on a very small episode of Kanban training and leaving a lot out of focus: what other visualization styles exist, what are the trade-offs of using each, and a whole week (minus five minutes) of other material. This is because I want to focus on one takeaway.


The cards on the Kanban board are not kanban! The cards represent deliverables, customer requests and needs. The kanban are permission-giving signals, representing the currently available capacity. They can be visualized very obviously with slots or as virtual kanban. In short, they are not cards, they are the spaces.

Posted in Kanban, training

Forecasting Cards

Lead time curve with rainbow colours

Those who met me at conferences in the last two years (some clients, too) have probably seen these Forecasting Cards. They’re easy to recognize by their rainbow colours. I created them to start conversations about lead time and to communicate the related ideas and findings (such as from this popular post: Inside a Lead Time Distribution).

These cards and the conversations we had while looking at them motivated some people to start measuring their time to market and to understand the probabilistic nature of delivery processes they’re trying to manage in their professional service enterprise.

I updated the cards several times and added some explanations based on the feedback, but I haven’t made such changes recently. So, I’m going to share the current version of the Forecasting Cards as an easy download. This will likely be their last revision.


In the meantime, I’ve started working on the next set of cards, which will communicate some insights about risk taking and fat-tailed distributions.

Posted in decision making, Kanban

Kanban Guide Is Here – How Not To Read It

An important event in the Kanban community occurred about two months ago. The Essential Kanban – Condensed Guide was released. It’s a compact 20-page book, authored by Andy Carmichael and David J. Anderson. As you can guess, this book is very much up to date on the most recent developments in Kanbanland, but it is at the same time very short and accessible.

Andy — I’ve known him for a year in-person and somewhat longer virtually — has a talent for word golf, putting important concepts in few words. At the Lean Kanban UK 2013 conference, he came up with the shortest definition of Kanban fitting in only 140 characters.

The Condensed Guide went through a review process by progressively widening circles of reviewers. Starting with a relatively small number of Kanban coaches and trainers, the authors shared refined versions of the Guide with broader circles, then with Lean Kanban conference attendees, and then with the public. Being one of the reviewers, I know Andy put a lot of work into the Guide. I hope the Brickell Key award committee recognizes his contribution with a nomination this year (here’s the link where you can do what I did about that).

Now I’d like to talk about how not to read this Guide.

I’ve heard the following phrases (or variations of them) quite often in exchanges with various Agile coaches. I should qualify that these came not from Agile beginners, but from people with lots of experience, bona fide peer-reviewed status within the Agile community, and some pricing power when it comes to charging clients for Agile advice. Let’s listen:

  • Agile Manifesto doesn’t say that…
  • Scrum Guide says…
  • According to the Agile Manifesto…
  • Where in the Scrum Guide did you find that?

While I suppose some Kanban users will make similar references to the condensed Kanban guide, I expect experts to do so rarely.

I’ve been asked this week to give advice on some metrics-related material. My response was that the material was sound, but I pointed out that it was a mismatch for the client’s low organizational maturity. However, in a different organizational unit of the same client company, I made the opposite recommendation.

I couldn’t derive these two diametrically opposing pieces of advice from published values and principles. I didn’t do it by gut feel. I could have just said, let’s do it this way and then inspect and adapt, but that would’ve been nothing more than a case of intellectual laziness. I suppose there could have been some practice or algorithm to lead me to these two conclusions using different inputs. But I’d prefer having no such practice. Otherwise, we’d have to teach people to do things at the practice level. We’d get into arguments about the right way to do the practice. This would lead to dogma. Some of us would become purists, while others would practice the practice-but.

Instead, I simply made sense of a large number of available stories, of success and failure, my own and told by my peers (we meet often at conferences and leadership retreats, co-train, collaborate on engagements and keep up frequent correspondence). Even though these real-world stories are messy, gathering them, making sense of them and deriving useful heuristics are all teachable skills.


The Kanban community values observing what people actually do, how they act, what we can reasonably infer about their thinking. Kanban experts value paying attention to storytellers, making sense of their stories, and continuously questioning contextual appropriateness of their own actions and recommendations. We value it more than the printed word. We understand the knowledge worth having — and from the clients’ point of view, worth paying for — is and will be messy and conflicted.

In the four above situations, Kanban coaches didn’t assign much value to what the four respective Agile coaches said or to the printed word of the Agile Manifesto or the Scrum Guide. Instead, we simply paid attention to what the Agile coach actually recommended to their client in a given situation. Because we’ve come to value one thing more than the other.

The Essential Kanban – Condensed Guide will no doubt educate many current and future Kanban practitioners. But I don’t expect experts or proficient practitioners of the method to do the following very often: open the Guide on some page and say, here it says so.

Posted in books, Kanban, Learning