Likbez (ликбез) is a neologism that entered the Russian language in the 1920s, after the revolution. Formed similarly to many new words of the day by abbreviating long phrases, it meant the “elimination of illiteracy” (ликвидация безграмотности). At the turn of the 20th century, for which the data are indisputable, the literacy rate in the Russian Empire did not exceed 25%. Also not disputed is that it was near 100% by the mid-century in the post-war Soviet Union.
It can and should be noted that the Soviet literacy program was not voluntary: millions of people were coerced to take part in it, and the new Soviet state coerced many people into many other things, exacting a huge human toll. The tsar's government also had some designs on increasing literacy in the early 20th century, and the lower classes of Russian society had developed some coping mechanisms to compensate for the Romanov dynasty's failure to meet their basic education needs. It can be argued that such programs and mechanisms would have closed part of the literacy gap by the mid-20th century. But then another question has to be thrown onto this pile: whether the fall of the Romanovs in 1917 was not itself a consequence of how they ran the country for three centuries. All of these things cross into political territory, which is not the subject matter of this blog.
The word likbez meant both the nationwide literacy program and the physical place in a town or village where people would go after work to learn to read and write. Nowadays it also means a remedy for obvious ignorance of the basics of some subject matter.
Likbez is that rare word I needed
Such ignorance was in front of me yesterday when a colleague, who arrived prepared with some PowerPoint slides, outlined a six-step "continuous improvement" method. Step Six was "Implement Solutions."
This person did come from the part of the company that has been slower to learn and more difficult to engage than the rest. What I and other coaches are going to have to do about this is a private and confidential matter, so I'm not going to go into it in this post. Also, for obvious reasons, I cannot post the spectacularly bad diagram of this six-step process, so a textual description will have to do.
I can google and buy a reprint of a New England Journal of Medicine article on treating some illness, and show you the PowerPoint slides I made from my read of it. If you happened to fall ill, would you like me to be your doctor? My treatment plan comes from an authoritative source! You would probably say no. If there were a real doctor in the room, they would probably notice that I don't get something very basic about medicine. It would come out despite my preparation and best intentions, and it would happen in ways I don't understand. They would call me out on it.
This post is for those who may face such situations at work — they are not uncommon in the software and IT fields — so that they can recognize such situations and do something about them. This is not about how to pursue continuous improvement, but about how to stop producing BS.
Let’s go there
If you could analyze, design and implement continuous improvement solutions, you wouldn’t need continuous improvement! Think about it again.
There are many ways to pursue continuous improvement and it almost doesn’t matter if you use PDCA cycles or PDSA or OODA or POOGI or Build-Measure-Learn or YMCA or LMAO. Here are some of the basic elements that exist in all of them.
Continuous improvement relies on the scientific method. The main sign that the scientific method is in use is not large amounts of data and complicated formulas. It is the presence of models, hypotheses, and experimentation. We create models of our system, formulate a hypothesis, and design an experiment that can validate or invalidate it. Experiments can succeed or fail; that is how they create information that leads to knowledge. If an experiment always succeeds, it produces no information.
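The claim that an experiment which always succeeds produces no information can be made precise with a toy calculation (my own illustration, not part of any of the methods discussed here): in information-theoretic terms, observing an outcome that had probability p before the experiment carries -log2(p) bits of surprise, so an outcome that was certain in advance carries exactly zero bits.

```python
import math

def surprise_bits(p_outcome: float) -> float:
    """Information, in bits, gained by observing an outcome
    that had probability p_outcome before the experiment ran."""
    return -math.log2(p_outcome)

# An experiment we were certain would succeed teaches us nothing:
print(surprise_bits(1.0))  # 0.0

# A genuine 50/50 experiment teaches us one full bit either way:
print(surprise_bits(0.5))  # 1.0
```

This is why an experiment designed so it cannot fail is theater, not learning: whatever happens, we knew it already.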
Any continuous improvement method at the very minimum includes steps to validate that something we did was actually an improvement. That validation has to be conducted after we implement the change, but designed before we start implementing it: experiments have no validity without predictions. We also need to cope with every possible outcome of the experiment that can alter the state of our system (what if it succeeds? what if it fails?). To illustrate with one method as an example, these are, respectively, the C (Check, or S for Study) part of the PDCA/PDSA cycle (the fitness test) and the A (Act) part of the same cycle.
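The shape of such a cycle can be sketched in code. This is a minimal illustration of my own (all names are hypothetical, not from any of the frameworks named above); the point it encodes is that the prediction, the fitness test, is fixed before the Do step runs, and that Act handles both outcomes:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Experiment:
    hypothesis: str
    # Check criterion: designed up front, before any change is made.
    prediction: Callable[[float], bool]
    # Do step: make the change and return a measurement.
    run: Callable[[], float]

def pdca(exp: Experiment) -> str:
    # Plan: hypothesis and prediction are already fixed at this point.
    measured = exp.run()                  # Do
    improved = exp.prediction(measured)   # Check: compare against prediction
    # Act: cope with either outcome -- keep the change or undo it.
    return "adopt" if improved else "roll back"

# Example: we predict a change will cut cycle time below 10 days.
exp = Experiment(
    hypothesis="Smaller batches reduce cycle time",
    prediction=lambda days: days < 10.0,
    run=lambda: 8.5,  # stand-in for a real measurement
)
print(pdca(exp))  # adopt
```

Note what the six-step method from the slides is missing: there is no place in it where a prediction exists before "Implement Solutions," so there is nothing against which to check.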
The earlier steps of a continuous improvement process are subordinate to the later steps. How we go about an early step depends on what we want to learn at a later step. The learning steps come later in the process because there is nothing to test for fitness early on.
For example, in the Build-Measure-Learn loop (the main adaptation mechanism of the Lean Startup method), Measure subordinates to Learn — we don’t measure more than what we need to measure in order to learn — and Build subordinates to Measure — we don’t build more than what we need to build in order to measure. The order of the Five Focusing Steps in Eli Goldratt’s Process of OnGoing Improvement is also not coincidental.
Metacognition. A fancy word for learning to learn, this is classic Deming, part of the System of Profound Knowledge. "Going meta," that is, context-specific conversation about how knowledge is created and managed, and about the boundaries of experimentation, is essential. Are you hearing "this is too meta for me"? You're in make-believe land.
You Wouldn’t Need Continuous Improvement, If…
…if you could analyze, design, and implement continuous improvement solutions, there would be no need to empirically accumulate the knowledge that separates the current condition from the desired, improved state. A one-step improvement process would suffice:
Step 1: Book your tee time
Proof by contradiction? Or we need a math likbez, too?