I showed in my previous post how we can understand a delivery process as a process of knowledge discovery.
The resulting process diagram showed how knowledge continuously arrives and accumulates through a sequence of dominant activities. The points separating those activities are not handoffs between functional specialists, but rather shifts in collaboration patterns. That post included an example of such a diagram for a generic Agile software delivery process.
Now it's time for some examples of looking at processes this way, and I want to draw them from the world outside software delivery. We're going back to early 2003 and my first build-measure-learn loops in what we would now call a lean startup. I lived in New York City at the time and was one of the few engineers at a startup Internet advertising firm.
What Are We Delivering?
The endpoint of this process is the business validation of the next, updated version of our product. Note that what we’re discussing is not the process of creating and delivering the product itself. The product is already made and ready to meet its users. How did we produce validated learning from that point on?
The diagram of this process could look like this:
One activity dominated this process early on: ensuring that our VP of advertising operations didn’t have to yell the next morning, “Dude, where’s my revenue?” This potential exclamation can be rephrased as a statistical hypothesis and tested on a small percentage of users, with a control group, of course. We aimed to prove that our latest features didn’t do any harm. We wanted to be 100% sure that our advertising campaign managers could take the updated product, run the same portfolio of ads and earn the same revenue for the day as the control group using the previous product.
Note this had nothing to do with regression testing, which had already been done as part of building the product. We could have extrapolated our confidence from regression testing to our hypothesis testing, but one of the rules on Madison Avenue is: don't assume.
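The "no harm" check can be framed as a one-sided hypothesis test: reject the hypothesis that the test group earns meaningfully less per user than the control group. Here is a minimal sketch in Python; every number in it (revenue figures, sample sizes, the two-cent tolerance) is invented for illustration, and the large-sample normal approximation is my simplification, not necessarily what we computed back then:

```python
import math
import random

def mean_var(xs):
    """Sample mean and unbiased sample variance."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, v

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def no_harm_p_value(control, test, margin=0.0):
    """One-sided two-sample z-test (large-sample normal approximation).
    H0: the test group earns at least `margin` less per user than control.
    A small p-value rejects H0, i.e. supports 'the new version does no harm'."""
    mc, vc = mean_var(control)
    mt, vt = mean_var(test)
    se = math.sqrt(vc / len(control) + vt / len(test))
    z = (mt - mc + margin) / se
    return 1.0 - normal_cdf(z)

random.seed(2003)
# Hypothetical revenue-per-user samples, in dollars:
control = [random.gauss(0.50, 0.10) for _ in range(5000)]  # previous product
test = [random.gauss(0.50, 0.10) for _ in range(500)]      # updated product, small slice of users

p = no_harm_p_value(control, test, margin=0.02)  # tolerate at most $0.02 degradation
print(f"no-harm p-value: {p:.4f}")
```

The point of the margin parameter is that "the same revenue" is never literally identical between two groups; you decide in advance how much degradation you're willing to rule out.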
Running such experiments involved a core product engineer (usually me), the VP of Advertising Operations, a senior Advertising Campaign Manager, and an Operations guy. As statistically significant results came in and our confidence grew, this activity began to fade and a new one started to dominate.
Our next activity was to measure the improvement due to the new features, such as increased revenue per user. This was a different hypothesis: the test group, using new ads or a new algorithm, would outperform the control group. The new experiment required a new group of collaborators. We needed new ads, so we brought in a Graphic Designer, a UI Programmer and our Creative Director. The experiment could prove, for example, that the new geographical targeting algorithm yielded higher click-through rates in the travel category. As the results rolled in and we gained confidence for the full user-base rollout, this activity also eventually faded.
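The uplift hypothesis is the classic two-proportion comparison: is the test group's click-through rate higher than the control's by more than chance would explain? A minimal sketch, with invented click counts (the CTRs and sample sizes below are illustrative, not real campaign data):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ctr_uplift_p_value(clicks_c, n_c, clicks_t, n_t):
    """One-sided two-proportion z-test.
    H0: the test group's CTR is no better than the control group's.
    A small p-value suggests the new targeting really lifts click-through."""
    p_c = clicks_c / n_c
    p_t = clicks_t / n_t
    pooled = (clicks_c + clicks_t) / (n_c + n_t)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_c + 1 / n_t))
    z = (p_t - p_c) / se
    return 1.0 - normal_cdf(z)

# Hypothetical travel-category numbers: control at 1.0% CTR, test at 1.3%
p = ctr_uplift_p_value(clicks_c=1000, n_c=100000, clicks_t=1300, n_t=100000)
print(f"uplift p-value: {p:.6f}")
```

Note the direction of the test has flipped relative to the earlier "no harm" experiment: there we tried to rule out a loss, here we try to demonstrate a gain.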
We then faced the last test: Could we use the improved product and the proof of improvement to attract more paying customers? For example, there was a regional airline at the time, flying out of one hub airport to several vacation destinations. They would never buy a nationwide advertising campaign, spending like American, United or Delta. Could our new product, targeting ads with high precision, turn this airline into our customer? (It did.) This activity required yet another collaborator, Sales, to work with the same advertising campaign managers and the creative staff, while the role of engineers was minimal. So we saw yet another shift in the collaboration pattern.
In this example, we’ve looked at a delivery process in a professional services field as a knowledge discovery process. We’ve visualized it as a sequence of dominant activities. All once-dominant activities fade and yield to new activities. Each time that happens, it signals not a handoff of work to another functional department, but a shift in the collaboration pattern.
Another example to come.