How to Do Agile Software Development with Offshore Resources

Doing Agile with offshore resources is an important topic to many software professionals. To succeed with Agile using offshore resources, you have to understand two principles:

  • The process of doing Agile with offshore resources is highly dependent on the geography of the country where you do the development as well as where the resources are located.
  • It may involve drilling.

Now let’s use several countries as examples to illustrate these principles.

Canada

[Images: flag and political map of Canada]

Canada is a country with a very large land area as well as a very long coastline. It is very rich in both onshore and offshore resources. We’ve got massive hydroelectric resources at Niagara Falls, in B.C. and Quebec, various metal ores in Northern Ontario, potash in Saskatchewan, and, to the scorn of all non-Albertans, oil sands in Alberta. Doing Agile with these resources is conceptually simple: you have to produce the resource, sell it to a customer (that is, exchange it for money), use the money to hire people, and let the people do the work. However, some businesses have already got money to hire people, so they can skip the first two steps.

Given the abundance of onshore Agile resources in Canada, there is absolutely no need to drill for them offshore in the pristine waters covering our vast continental shelf. You can do Agile without drilling here. Thus, Canadians’ answer to how to do Agile with offshore resources is: don’t.

Denmark

[Images: flag and map of Denmark]

Denmark is Canada’s neighbour. Believe it or not, the two countries share a land border after they carved up Hans Island. Actually, it is a small European country perfectly located to harness wind energy. The wind turbines can be located both on land and offshore and many of the wind farms are actually located offshore as you can see in the picture.

[Image: Middelgrunden offshore wind farm]

This Agile resourcing strategy is uniquely suited to the country’s geography.

[Image: map of the Greenland ice sheet]

But being a small country, Denmark lacks other kinds of resources, so this strategy may reach its limits someday. However, one of Denmark’s possessions is Greenland, which has huge gas deposits under its continental shelf. Using that kind of offshore resource to do Agile would involve drilling. Canadians’ advice to their neighbours on how to do Agile software development with offshore resources in Greenland is: don’t.

Switzerland

[Images: flag and map of Switzerland]

Our last example is Switzerland. How do you do Agile software development with offshore resources in Switzerland? Sorry, but this is a stupid question, because Switzerland is a landlocked country.

Posted in Uncategorized | 1 Comment

Some Remarks on the History of Kanban

I recently had to reply to a LinkedIn thread about the history of the (software/IT) Kanban method, which stated that the method originated from Taiichi Ohno’s Toyota Production System and went in a direction I didn’t find useful. I hope my remarks were useful to the original poster as well as several others, and I want to repost them here for ease of future reference and, possibly, for others’ benefit.

The main point of my remarks was that the Kanban method as we know it today has many other influencers and origins besides Ohno and TPS. Two such influencers were, of course, W. Edwards Deming and Eliyahu Goldratt. Deming’s 14 Points and the System of Profound Knowledge guide Kanban change agents worldwide.

Goldratt’s influence manifested in what is now considered the first software Kanban implementation, dating back to 2004. It was a successful attempt to apply Goldratt’s Theory of Constraints, particularly its Five Focusing Steps, in the software world, after modeling an existing process as a drum-buffer-rope system. The details of this implementation can be found in a certain blue book with a cartoon on the cover, and I assume, if you’re interested in the subject of this post, you have already read that book. It is important to note that there was little relationship between Goldratt’s Theory of Constraints as applied in that particular case and the TPS. As noted in the foreword to the above-mentioned book, improvements within TPS had little to do with bottlenecks, the main concern of the Theory of Constraints, and a lot to do with reducing batch sizes.

The start of the 2006-07 Kanban implementation that followed was influenced in large part by Donald Reinertsen, who had researched product development flow for many years before that. Among Reinertsen’s insights were that variability has economic value in knowledge work and that kanban systems were better suited to it than drum-buffer-rope. Such realizations had to be preceded by a deep inquiry into the nature of knowledge work, which was carried out in the middle of the last century by Peter Drucker. David Anderson, in his book “Lessons in Agile Management”, actually credits Drucker as one of his key influencers.

The year 2007 also stands out in this story as the year of discovery of the Kanban method as opposed to a kanban system. Establishing the distinction between the two marked a significant departure from the TPS and other Lean manufacturing systems.

Post-2007, as the popularity of the Kanban method grew, questions about its applicability in a variety of contexts arose. Developed by Dave Snowden for many years prior to that, independently and for unrelated purposes, the Cynefin framework turned out to be instrumental in answering such questions.

The fourth element of Deming’s System of Profound Knowledge is psychology. Deming realized that, since the systems of work we’re trying to improve involve human beings, understanding how humans behave and make decisions had to be an important part of the improvement; however, he really left it to future generations to figure out. On the boundary of psychology and economics, a whole new field evolved in the second half of the last century, known as behavioural economics. Many researchers worked in this field, but one who stands out is its pioneer Daniel Kahneman, who not only made key discoveries in this field but also documented them in a popular form. Psychology and behavioural economics have become an important part of the practice of the Kanban method today.

Thus the “watershed” of the Kanban method circa 2013 has many “tributaries”, of which the TPS is only one. Those other sources should be studied by those who want to apply the Kanban method effectively as change agents.

Posted in Uncategorized | 6 Comments

The Software Engineering Moneyball

Estimation has been a hot topic in the software development blogosphere recently. Morgan Ahlström contributed to it with a nice blog post several days ago: Why Estimates Don’t Matter.

This post was heavily trafficked via social media, and several people discussed more nuanced positions on this matter (for example, in these discussions led by Martin Burns: here and here).

What I read in both the blog post and these discussions is the new thinking that is starting to take root in our industry.

Moneyball

Moneyball is a book (and a movie based on the book) about the new thinking that revolutionized the game of baseball. The Oakland Athletics, led by their General Manager Billy Beane (played in the movie by Brad Pitt), started to apply it during the 2002 season. The result: the team with the next-to-last payroll among the 30 teams in the league, having lost its three biggest stars to richer teams in free agency, won the most games in the season and posted an all-time-record 20-game winning streak. The Athletics’ ways were soon adopted by the Boston Red Sox, a team of much greater financial means at the time, who used this approach to win the 2004 World Series.

Moneyball movie poster

Billy Beane noticed that the traditional methods the scouts used to evaluate players were little more than guesswork. Scouts often made recommendations based on players’ looks and vague descriptions like “love his swing” and “five-tool player.” A former five-tool player himself, who declined a Stanford scholarship to take a scout’s offer but failed to make it in the big leagues, Billy Beane believed that a more rational, data-driven approach to the game, tying decisions to desired outcomes (winning more games), could be his competitive advantage as a GM.

Throughout their Moneyball season, the Oakland A’s discovered that many baseball beliefs were not supported by rational analysis, and debunked them. For example, they stopped stealing bases. Conventional wisdom said stealing bases was a great way to “manufacture runs” and put pressure on the opposing team. The economic analysis, however, showed that base-stealing tactics made virtually no difference in the number of games won over the 162-game season. The gain from advancing the runner was offset by the risk of being caught stealing and ending an inning early, with a high on-base-percentage hitter on deck, killing a potential high-scoring rally.

The fictional character Peter Brand (representing Oakland’s assistant GM Paul DePodesta in the movie) summarized the approach by saying: “There is an epidemic failure within the game to understand what is really happening…. When other teams look at Johnny Damon (a great defensive outfielder, leadoff hitter and base-stealing threat, whom the A’s lost before the 2002 season), they see a star who is worth $8m a year. When I look at Johnny Damon, I see an imperfect understanding of where runs come from.”

Folks, This Is About Us

You can repeat it almost word for word: there is an epidemic failure in the software industry to understand how all the activities we undertake to develop a software product contribute to customer value and quality.

What many developers, managers, and customers see when they look at estimation is a cornerstone practice of software engineering. What Morgan Ahlström sees is an imperfect understanding of value, quality, risk, predictability, and how estimation affects them. He creates an economic model that allows him to get a better understanding. Similar questions can be asked about many practices. What is the economically optimal sprint length? (Not in general, of course, but in your context.) What are the economic trade-offs of sequencing features in this order or that? How much market payoff is lost due to delays incurred because a specialist serves multiple teams?
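To show what such an economic model might even look like, here is a toy sketch of the first question, the economically optimal sprint length. The cost function and all the numbers below are my own illustrative assumptions (an EOQ-style trade-off between per-sprint ceremony cost and the cost of delaying delivered value), not anything from Ahlström’s model:

```python
import math

# Toy model (assumed, for illustration only): each sprint carries a fixed
# "transaction cost" (planning, demo, deployment ceremony), while value that
# sits undelivered accrues a holding cost proportional to the sprint length.
transaction_cost = 4        # person-days of ceremony per sprint (assumed)
holding_cost_per_day = 1    # cost of delayed value per day of sprint length (assumed)

# Cost per day for a sprint of length L:  transaction_cost / L  +  holding * L / 2.
# Minimizing (basic calculus, or the EOQ formula) gives L* = sqrt(2K / h).
optimal_length = math.sqrt(2 * transaction_cost / holding_cost_per_day)
print(f"optimal sprint length under these assumptions: {optimal_length:.1f} days")
```

A real model would plug in measured ceremony costs and an actual cost-of-delay curve; the point is only that “what is the optimal sprint length?” becomes a calculation in your context rather than a matter of taste.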

Economic models of product development flow, helping us understand how various practices contribute to our desired outcomes and informing our process design, are the Moneyball of our industry. Just as in professional baseball over the last decade, they are a source of competitive advantage. One of the big challenges will be helping the “scouts” change.

What are the five tools, the stolen bases, the runs batted in, the earned-run average and the fielding percentage of your software delivery process?

Posted in life | 2 Comments

It’s Not About Estimation, It’s About Risk

The Pragmatic Bookshelf has recently published an article by Ron Jeffries, Estimation is Evil.

I didn’t enjoy it for a number of reasons. The word evil turns the article into a religious argument and sensationalizes it needlessly. I would have preferred to see a model allowing us to calculate the economic value of estimation, a way to see how it can be negative in some cases, and a case study where it turned out to be negative. Besides, I wasn’t even sure the author wanted to make an anti-estimation point as he spent a large portion of the article explaining how to decompose a big unknown into estimable chunks and argued that estimation is essential to selecting work from the product backlog into the iteration backlog. He also devoted the middle third of the article to how the Chrysler C3 project should have been chartered, which seemed tangential to estimation.

Something interesting happened when a colleague of mine sent the article to an internal mailing list of managers and Scrum masters, mostly new to Agile. The mailing list, usually quiet, came back to life as several people felt compelled to write to their peers with their detailed interpretations of the article. Good for them, but most of what was written resembled blind people touching an elephant.

The Elephant In The Room Is Risk Management

If I were to reduce Ron Jeffries’ article to one paragraph, it would be the one where he says, spot-on, that “the very beginning” is the worst possible moment to make decisions about “all” “requirements.” (This point has nothing to do with estimation, but you’ve already got my point about that.) It shows the relationship between information, decision-making, and risk.

When the decision is forced at the point when the minimum amount of information exists, it carries the most risk. If we defer the decision until more information arrives or somehow create more information before the fixed decision point, we can reduce risk. Estimation is but one possible source of such information and it may be unreliable, expensive to obtain or redundant in the presence of information supplied by other sources. We must discover and pay attention to other sources of information and remember that the goal is not to obtain as much of it as possible, but to make a decision and reduce risk.

The main risk-management modes are: (a) avoid, (b) reduce, (c) transfer (insure), and (d) retain. We can systematically use information arrival to (a) make some decisions risk-free so the question of avoidance becomes moot, (b) reduce other risks, (c) make insurance cheaper due to the risk reduction, (d) make the remaining risks acceptable.

The best moment to predict the Super Bowl XLVII winner was at the end of the game. Predicting that the Baltimore Ravens would win it before the playoffs was a much riskier decision. The Ravens were the fourth seed in their conference and eight of the twelve playoff teams had better records. The probability of a loss was significantly greater than 50% at that point. As the playoffs progressed, information arrived continually, steadily reducing the risk of this bet (bookmakers’ odds, being a fairly efficient market for this particular type of risk, reflected that). After the Ravens’ final defensive play, the risk was completely gone.

You Cannot Bet On the Super Bowl After It’s Over

Of course you can’t. But that has everything to do with bookmakers’ business model (to make money, they need you to carry bigger risk than theirs, so if you carry no risk, they can’t accept your bet) and nothing to do with our ability to use information to inform risky decisions.

Move decision points to the right on the time axis (deferred commitment) and move information arrival points to the left.

Malleability of Software Creates New Risk-Management Opportunities

The last bit of advice may sound difficult to implement in many areas, but many more opportunities for it exist in the world of software. Many Agile engineering practices can speed up information arrival. For example, continuous integration, if done right, can produce information on our product’s integration status tonight — as opposed to a month later. Agile management practices (and engineering practices can strengthen them) can help defer decision points. For example, if a team delivers working software every week, as opposed to every month, decisions about what to select from the product backlog into the iteration backlog can be made three weeks later — and with the benefit of all information that arrived during those three weeks. The difference between potentially shippable and shipped can be weeks’ or months’ worth of information — and all the risks the business has to carry that are intertwined with that information.

I have done a lot of work in Service-Oriented Architecture over the years and observed how many technical and business risks arise at integration boundaries. I have found that the appreciation of continuous integration as a primarily social, rather than technical, practice; personas; specification by example; and James Grenning’s learning-test technique (which he describes in the chapter that he wrote for Robert C. Martin’s Clean Code book) are all excellent tools for discovering tomorrow’s problems today. Risk thinking combined with appropriate technical practices simply makes the difference between predictable delivery and panic. I have also observed that many ignore this and pay the price.

Lean And Not So Lean Enterprises

The second part of Ron Jeffries’ article that stood out for me (and again it has nothing to do with estimation) is his story of Agile’s first successes and its first failure with the Chrysler C3 project. Jeffries explained how the way the C3 project was chartered — as a complete replacement of Chrysler’s existing payroll systems — set it up for failure.

Every time we start work — on a large project, a small project, a user story, a feature, or an epic — it is a transaction, in which two sides exchange risk. A lot of such transactions happen in the software industry every day without any due diligence, in other words, consideration of all risks involved and how they can be managed with the help of information.

Whether they see it that way or not, and whether they like it or not, the Vendor and the Client are tied together in a Lean (or not-so-Lean) Enterprise. The Vendor promises to deliver something valuable to the Client, using their unique expertise; the Client, using their unique expertise, plans to add more value to the deliverable and deliver the total to their Customer. It’s the same value stream. If the Vendor fails or falls short, the Client’s plan fails as well. They trade risk when they enter the transaction.

Very often, all the consideration given to this risk fits in one sentence: “This client requires a fixed commitment to the deliverable.” Another way is “a more Agile approach”: an assumption is made that the interpretation of all risk information can be encoded in a “groomed”, prioritized backlog, and the Product Owner is vested with God-like powers to produce a prioritized order every two weeks. When Wall Street types use this sort of “risk management” with our money, you know what we all want done to them.

Ron Jeffries tells us a really valuable story of how Chrysler, by choosing to charter the C3 project one way over another, unwittingly underwrote a huge amount of risk and, after a string of successes, ultimately footed the bill for the failure.

Conclusion

I don’t find the back-and-forth estimation debate useful. Estimation is only one of many information sources involved in making decisions and managing risk. The real question is how it factors into the decision you have to make today.

Posted in hands-on | 2 Comments

Commitment, Forecast and the Toyota Kata

My last post, On WIP Limits, Velocity and Variability, explained how variability can combine with certain habits and the ignorance of pull systems to create a situation where a Scrum team repeatedly delivers roughly half of the user stories committed to the iteration. This leads to the questions: should they commit to fewer stories, do away with commitment, replace it with a forecast, or do something else?

Before we dive into that question, let’s introduce Toyota Kata. Toyota Kata (which is actually a combination of two katas, improvement and coaching) is a systematic technique for discovering improvement opportunities, quickly capitalizing on them, and building the organizational capability to do it. Application of Toyota Kata in the manufacturing sector was documented by Mike Rother in his 2009 book. A trend that emerged in the software/IT Kanban community in 2012 is to adapt the Toyota Kata approach to knowledge work. Hakan Forss, a Kanban coach from Sweden, experiments, blogs, and speaks about it a lot. Yuval Yeret, an Agile and Kanban coach from Israel, writes about it in his book, Holy Land Kanban.

One of the important elements of Toyota Kata is finding the improvement direction, which is usually done with the help of a shared vision. The vision is usually unattainable, like zero defects or a very small batch size that is wildly uneconomical given the current changeover times. However, the unattainable vision helps find the next target condition, which is attainable and lies in the direction of the vision. The improvement then focuses on solving the problems that prevent the system of work from reaching the target condition. This is dramatically different from the “suggestions” method, where the suggestions can lead in various directions from the current condition. Some techniques used to select suggestions, such as voting, can further reduce the problem-solving focus.

Commitment as an Unattainable Vision

The struggling, unlimited-work-in-progress, over-committing, under-delivering Scrum team from the start of this post (and the previous one) could benefit from this approach. While acknowledging that the commitment is unattainable every iteration due to the common-cause variation, they can use it to find the next target condition. The problem-solving then could focus on the real problems – user stories too large to flow well, expanding work, lack of knowledge sharing through pair programming, division of work that prevents thinly sliced stories, etc.

By contrast, replacing the commitment with forecast, or reducing commitment to “velocity” can have the effect of covering up those problems and hindering improvement.

Posted in hands-on | 1 Comment

On WIP Limits, Velocity and Variability

A prominent agilist posted a question on Twitter that led to a long discussion: “Can Scrum teams use velocity as the WIP limit for a sprint? Will it take them closer to pull-based planning?”

Several replies quickly followed from many in the Kanban community, pointing out various problems:

Different units. WIP limits are measured in units of work (e.g. the count of work items), while velocity (and throughput) are measured in amount of work completed per unit of time (e.g. items per week). Therefore, you cannot compare them as numbers; it’s like comparing kilometres and kilometres per hour.

Constraint on throughput. Using velocity (the average number of stories or story points the team completes within a sprint) as a limit on what they commit to in the upcoming sprint would effectively serve as a self-imposed constraint on throughput. Such a constraint has nothing to do with limiting work-in-progress. Kanban actually suggests the opposite, as it sets balancing capability and demand as a goal. If throughput can serve as a measure of capability in a given context, then, to the extent that balancing capability and demand involves growing capability, we want to increase throughput rather than limit it. This can be achieved, indirectly, by limiting work-in-progress, which is, again, different from the work completion rate.

The discussion served as another reminder that Kanban remains in a blind spot of the Agile community. Agile gurus cannot be counted on to figure out some Kanban basics, while the Kanban community holds frequent, intense conferences where they discuss more complex stuff. But let’s save this for another day and see how this discussion can help real-world teams.

Just Add Variability

Scrum team using velocity as a WIP limit guideline – effectively not limiting WIP

Let’s say a Scrum team decided to set their in-sprint work-in-progress limit equal to their velocity, that is, the average number of stories they’ve delivered per sprint. This means a WIP limit of N and an average of N stories delivered per sprint. Assuming a somewhat stable system, we can apply Little’s Law and estimate the average story’s cycle time to be equal to the duration of one sprint. Now, if there were absolutely no variability, all N stories would be done exactly in one sprint. However, even minimal variability will ensure that only about half of the stories are done by the end of the timebox. (Reference: Donald Reinertsen, Managing the Design Factory, page 18.)
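A quick Monte Carlo sketch makes the arithmetic concrete. The numbers below are illustrative assumptions, not data from any real team: cycle times average exactly one sprint, with modest symmetric variability around that average.

```python
import random

random.seed(1)  # deterministic run for the illustration

SPRINT_DAYS = 10   # a two-week sprint (assumed)
N_STORIES = 8      # WIP "limit" set equal to average per-sprint throughput
N_SPRINTS = 10_000

fractions_done = []
for _ in range(N_SPRINTS):
    # Cycle times average one sprint; a symmetric triangular distribution
    # supplies the modest variability.
    cycle_times = [random.triangular(5, 15, 10) for _ in range(N_STORIES)]
    done = sum(1 for t in cycle_times if t <= SPRINT_DAYS)
    fractions_done.append(done / N_STORIES)

avg_done = sum(fractions_done) / N_SPRINTS
print(f"average fraction of stories finished within the sprint: {avg_done:.2f}")
```

With the average cycle time pinned to the sprint length, any symmetric variability leaves roughly half the stories unfinished at the timebox, which is exactly the under-delivery pattern described above.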

This must sound very familiar to mainstream Scrum adopters. A standup meeting goes like this: “Yesterday, I took part in sprint planning and started working on story X; today, I’m working on story X and nothing is blocking me”, “Yesterday, …I started working on story Y”, “…story Z…” And then the team delivers half the stories they committed to.

Lean Workflow


Lean workflow in-sprint with a WIP limit based on understanding of Little’s Law and effects of variability.

Lean workflow – a very simple way to combine Scrum and Kanban – involves limiting work-in-progress within each sprint. (Reference: William Rowden, Lean Workflow: A Parable in Pictures.) It has the effect of shortening the story cycle time, and the team can choose the limit such that the upper control limit of the cycle time fits within the timebox. It follows from the variability considerations presented above that the in-sprint WIP limit should be a fraction of the number of stories planned for the sprint. In duration terms, the expected story cycle time should be a fraction of the sprint length.
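The arithmetic behind choosing that fraction is just Little’s Law. The numbers here are assumed for illustration:

```python
# Little's Law: average cycle time = average WIP / average throughput.
# Illustrative, assumed numbers: a team finishing 8 stories per 10-day sprint.
throughput_per_day = 8 / 10   # stories completed per working day

for wip_limit in (8, 4, 2):
    avg_cycle_time = wip_limit / throughput_per_day
    print(f"WIP limit {wip_limit}: average cycle time {avg_cycle_time:.1f} days")
```

At a WIP limit of 8 the average story takes the whole sprint, so variability pushes many past the timebox; at a limit of 2 the average drops to 2.5 days, leaving room for even a badly behaved story to finish in-sprint.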

XP and Pair Programming

It is my observation that people who came to Agile many years ago from eXtreme Programming are generally much better than today’s mainstream Scrum adopters at creating well-performing teams and avoiding dysfunctions. By prescribing pair programming, XP takes an approach which can halve the work-in-progress and the cycle time, accomplishing essentially the same goal as the lean workflow.

Emergent Practices

It is clear that, after setting the “lean workflow fraction” to 1/2, XP practitioners didn’t stop there. Seeing something that worked, extreme programmers found more ways to slice user stories into even smaller pieces of value, creating techniques such as Product Sashimi and Elephant Carpaccio.

On the other hand, mainstream Scrum teams not practicing lean workflow, whether in its “mathematical” or XP form, may see the opposite practice emerge – aggregation of smaller, related stories into something “doable” by one programmer within two weeks. “I’m working on the FooBar Service, that’s my story for this sprint.” (Where there is one obvious way to slice the FooBar Service into at least eight user stories and slightly less obvious ways to slice it into more than eight.) Honk if you’ve never heard this!

Posted in hands-on | 2 Comments

A Software G-Forces Experiment

I ran a quick experiment with a group of colleagues yesterday. Everybody picked up two coloured stickies and placed them somewhere on Kent Beck’s “Software G-Forces” frequency scale.

Software G-Forces is a reference to Kent Beck’s model, where he establishes a logarithmic scale of software deployment frequencies. His model postulates that many technical, organizational, and management practices that make up software development processes depend on this frequency. Changing the frequency creates forces that bend the process into a new shape, causing practices to appear and disappear. See the full video where Beck explains his model here.

In our experiment, each green sticky represented a software organization where one of us worked before the current job. The purple stickies represented the job before that. The current job was excluded from the experiment because that would have skewed the results as nearly all stickies would land in the same column. The experiment was of course unscientific, but some conclusions can still be drawn from it.

Here is the result:

Distribution of software organizations by deployment frequency

The experiment quickly revealed a hockey-stick pattern, similar to the distribution Kent Beck showed in his talk. The hockey-stick handle descends into the daily-hourly fringe. The peak is at quarterly, and the organizations with quarterly deployment cycles outnumber the annuals, who are facing extinction.

Where Are the Things Going?

The distribution of colour shows the direction things are going, although a statistician would argue, probably correctly, that the difference in the positions of green and purple stickies is not statistically significant. The sample is simply too small for that.

The purple stickies, representing earlier times, are on average just left of quarterly. The green stickies, representing more recent times, are on average between quarterly and monthly. The weekly column has all green stickies.

Posted in life | Leave a comment

Scrum and Kanban Combinations As Improvement Options

I’d like to summarize several patterns where Scrum (a well-known Agile process) combines with Kanban (an evolutionary improvement method). These patterns are pretty obvious to many agile coaches and people in the Lean-Kanban community. But I feel I need to outline them for a wide audience of practitioners “in the trenches” – especially those who say “we’re sticking with Scrum for now” or “we’ve invested in Scrum and are not yet ready to try a new process.” The important thing all these patterns have in common is that they are options for improvement – just options, no commitment to substitute them for something else – and all of them involve some proficiency or understanding of Kanban.

Combo #1: Scrum with Lean Workflow

So you’re doing Scrum. Now, although the Scrum framework doesn’t prescribe it, do this: map the value stream of a delivered user story (what would count as “done” at the end of the iteration under the team’s definition of done). Establish an end-to-end pull system with a work-in-progress (WIP) limit at every step (design, development, testing, demo, etc.). This type of pull system is known as a kanban system. Use tight WIP limits (encouraging pair programming and developer-tester pairing) and the stop-starting-start-finishing mentality to cut the cycle time of each story. This reduces the risk of missing the sprint forecast badly or having many unfinished stories at the end of the sprint. This is the most obvious combination.
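The pull discipline can be sketched in a few lines of code. The class and column names below are my own hypothetical illustration, not from any tool: a story may only be pulled into a column with spare capacity under that column’s WIP limit.

```python
# Minimal sketch of a kanban-style pull system (names are assumed, for
# illustration): pulling into a full column is refused, forcing the team
# to finish work downstream before starting more.
class KanbanBoard:
    def __init__(self, limits):
        self.limits = limits                      # e.g. {"dev": 2, "test": 1}
        self.columns = {name: [] for name in limits}

    def pull(self, story, column):
        if len(self.columns[column]) >= self.limits[column]:
            return False                          # at the WIP limit: stop starting, start finishing
        for items in self.columns.values():       # remove the story from its current column
            if story in items:
                items.remove(story)
        self.columns[column].append(story)
        return True

board = KanbanBoard({"dev": 2, "test": 1})
assert board.pull("A", "dev")
assert board.pull("B", "dev")
assert not board.pull("C", "dev")   # dev is full; C must wait
assert board.pull("A", "test")      # A moves downstream, freeing dev capacity
assert board.pull("C", "dev")       # now C can be pulled
```

Tight limits like these are what create the pairing pressure mentioned above: when no column has spare capacity, the cheapest way to make progress is to swarm on work already in flight.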

Combo #2: Scrum with Personal Kanban

You’re doing Scrum. After the iteration planning, you know what stories have been selected and how they break down into tasks. The scrum board shows who is working on what. But that is not all they’re working on! A team member may have two or three tasks in progress right now plus, perhaps, an improvement task from the last retrospective. Also, this week they have to take a training class and file some administrative reports. Oh, and we forgot: two interviews of job candidates are coming up. And that’s before the production issues that will inevitably show up and cause an interruption.

It is very easy to get a killer dose of multitasking, even if the team board shows you’re working on two stories. It is very easy to make sub-optimal decisions when most work items are hidden. Invisible inventory and not limiting work in progress lead to poor flow for the individual and affect the whole team. Personal Kanban brings the invisible inventory back into the open. It limits work-in-progress to establish good flow and to spark improvement on the personal level. That helps the whole team.

Combo #3: Scrum with Portfolio Kanban

This pattern is the opposite of the previous one. You still do Scrum on the team level. As you scale up, apply Kanban to the higher-tier flow: epic stories, large features, and projects. It is those high-level items that break down into user stories delivered in each sprint on the team level. At higher levels, executives scrumming over epic projects in three-month iterations – that doesn’t work very well.

Combo #4: Scrumban

Kanban itself is not a process. As an evolutionary change method, it applies to an existing process with the goal to improve this process. What if the starting process is Scrum? Then you have a special case of Kanban implementation known as Scrumban.

The relationship between Scrum, Kanban, and Scrumban.

Since Kanban does not prescribe the target process (that is, the process you arrive at after improving the starting process with Kanban), there is no prescribed target process in Scrumban. The starting process, Scrum, has synchronized cadences: scheduling, delivery, retrospective. It may happen that the target process still has synchronized cadences – in that case you’re still doing Scrum, only a better instance of Scrum. Or it may happen that you arrive at a flow-based process with decoupled cadences.

Conclusion

The above are four popular combinations. You can probably come up with more. The important thing they all have in common is that they help you improve your Agile process and teamwork, and they all require some knowledge of Kanban. You can use one or several of these options at the same time, without a risky commitment to replacing your existing process.

Posted in hands-on | 1 Comment

Your Deployment Frequency And Your Tool Vendor’s

I recently facilitated a process for soliciting requirements for an Agile work management system for a medium-size enterprise. You know, the kind of system that needs to scale beyond a scrum team, and even a scrum of scrums, to many boards across the enterprise, support reporting and analytics on multiple levels, integrate with various existing systems, and accommodate a variety of workflows. Many people proposed requirements, and it was interesting to see how a large number of oddities like, “I want to see my burndown chart on Android; no, Blackberry” converged to a smaller set of really important stories.

Last week I was at a demo event organized by one of the vendors, and it was during the demo that I realized a lot of these requirements are rooted in one principle. If the vendor whose product you’re looking at is not aligned with this principle, you don’t need a very long RFP.

Your vendor’s deployment frequency must be at least as high as your organization’s target deployment frequency.

To understand this thesis, watch Kent Beck’s 2011 lecture Software G-Forces: The Effects of Acceleration. In his talk, Beck presents a scale of deployment frequencies – yearly, quarterly, monthly, weekly, daily, hourly. How high your frequency is largely determines a set of practices, both technical and organizational, that your company has to have. It has to follow some practices and get rid of others to sustain its frequency. For example, if you deploy yearly, you can have a siloed QA department to do testing at the end; if you want to deploy more often, you need a more collaborative organization. If you want to deploy weekly, you have to remove “code freeze” from your vocabulary. If you want to deploy at least once a day, you may need a continuous delivery pipeline. Changing the frequency creates forces that bend your process and your organization to a new shape. (Watch the whole video here or on YouTube – I believe it’s a must-see for any software professional today.)
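The idea that each frequency tier implies practices to adopt and practices to drop can be expressed as a simple lookup. The mapping below is a rough illustration built from the examples in the paragraph above, not a transcript of Beck’s talk, and the helper function is hypothetical.

```python
# Illustrative sketch: each deployment frequency tier implies practices
# to adopt and practices to drop. Only tiers with examples are listed.
gforce_practices = {
    "quarterly": {"adopt": ["collaborative QA"],
                  "drop":  ["siloed QA department at the end"]},
    "weekly":    {"adopt": ["automated regression tests"],
                  "drop":  ["code freeze"]},
    "daily":     {"adopt": ["continuous delivery pipeline"],
                  "drop":  ["manual release checklists"]},
}

SCALE = ("yearly", "quarterly", "monthly", "weekly", "daily", "hourly")

def practices_to_change(current, target):
    # Everything you must adopt/drop moving from the current tier to the target.
    lo, hi = SCALE.index(current), SCALE.index(target)
    return [gforce_practices[f] for f in SCALE[lo + 1:hi + 1]
            if f in gforce_practices]

# Moving from yearly to weekly deployments crosses two tiers with known changes.
assert len(practices_to_change("yearly", "weekly")) == 2
```

The lookup makes the "g-forces" metaphor concrete: each step up the scale is not just a schedule change but a bundle of organizational changes you cannot skip.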

In the past, yearly deployments were the norm and very few deployed more often. Today, the quarterly frequency seems to be the norm for the industry, but there are quite a few firms with Web-based products that practice continuous delivery and deploy daily. There are firms that deploy every 12-18 months, but they are facing extinction. The trend is towards increasing the frequency.

The great thing about Kent Beck’s model is that it allows practitioners to be completely non-judgemental when it comes to differences between systems of work and practices in their organizations. There is no need to argue which way of organizing code branches is better. There is no right or wrong way; there are just the ones that are a good match for a particular frequency range and the ones that aren’t!

What does that mean if we need to choose a tool?

Your tool vendor’s area of subject matter expertise is Agile itself – collaborative systems of work, principles and practices of Agile teams and organizations. We have just learned how those depend on the deployment frequency. If you’re ahead of your tool vendor in terms of frequency, it’s very likely that the vendor lacks expertise in operating at that frequency. Why would you then buy their product?

You can sit through the demo and take notes of where their product lacks features you want (from a long list of desired features) or where their software codifies behaviours that straitjacket your collaborative teams. When you have taken enough notes, you will see the common theme in them: these guys have too much time between deployments compared to you.

Posted in tips | 1 Comment

Report from Agile Coach Camp Canada: Using Lean Techniques in Professional Development

I’m continuing a series of posts summarizing sessions from Agile Coach Camp Canada 2012, which took place at the end of June in Ottawa.

In one of the sessions, we discussed how lean thinking can be applied to the professional development and learning of software and IT professionals. The current paradox is that while people in our profession try to apply agile techniques to coding, architecture, requirements analysis, and team and organizational leadership, learning all this good stuff still relies on up-front planning, early commitment, and outdated to-do lists. It’s urgent that we start applying what we have learned about Agile and Lean to learning itself!

The Overwhelming Demand

The amount of software-development-related information generated every year is enormous. First, there are several enterprise technology stacks, such as .NET, Java, and Ruby on Rails, plus several mobile platforms; in each of their developer communities, the programming languages and frameworks evolve and new products, tools, and techniques appear every year. As a result, every year there are more books to read, more new skills to practice, webinars and podcasts to listen to, user groups to join, conferences to attend, and training classes to take. And this is only the beginning. Whole new paradigms emerge, such as cloud computing, functional programming, and big data, and each continually generates a huge volume of information to process. Then there is Agile, with its technical and organizational practices; then there are Lean, Kanban, Lean Startup, Real Options, Beyond Budgeting, Cynefin…

Personal Kanban to Help Cope with Demand and Establish Flow

The value in individual professional development is delivered when people are happy with the things they have learned – new knowledge and skills that help them discover (or rediscover) their purpose and drive, that help them do today’s jobs and that help them in the long run, that fit into their career path or open up new paths, that align with the needs of their companies. In fewer words, professional learning is highly contextual. We must establish the flow of such value. But if we simply try to react to every new bit of information generated by the abundant sources we’ve identified above, very soon we find ourselves going to a user group meeting every other day. And on the other days, we have 250 blog subscriptions waiting for us and 25 books, each opened to Chapter 2. We have as much flow as rainwater pooling in the middle of a tarp.

Personal Kanban is proven to establish flow in an individual’s value streams by visualizing and limiting work in progress. Visualize the backlog of all learning opportunities: books you’re interested in reading, conferences you might want to attend, training classes that might be useful, webinars to watch, and user groups to join. Then limit your work in progress. This means you need to know, or learn, your capacity: how many of these things you can juggle at the same time. Maybe it’s one or two books, maybe three user group meetings per month or per quarter, maybe some number of conferences and open space events. Then select learning opportunities from the backlog only as long as they fit your personal Learning-in-Progress limit. Remember that professional learning is contextual, so select opportunities as they fit your current situation.
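The pull rule just described – only pull when under your Learning-in-Progress limit, and prefer what fits today’s context – can be sketched in a few lines. All the titles, contexts, and the limit below are invented examples.

```python
# Sketch of a personal learning kanban: pull from the backlog only when
# under the Learning-in-Progress (LIP) limit, preferring today's context.
LIP_LIMIT = 3

backlog = [
    {"title": "Book on Kanban",     "context": "process"},
    {"title": "FP webinar series",  "context": "technical"},
    {"title": "Local user group",   "context": "community"},
    {"title": "Testing conference", "context": "technical"},
]

in_progress = [{"title": "Book on Lean", "context": "process"}]

def pull_next(backlog, in_progress, current_context):
    # Pull only when there is capacity; prefer what fits today's context.
    if len(in_progress) >= LIP_LIMIT:
        return None
    for item in backlog:
        if item["context"] == current_context:
            backlog.remove(item)
            return item
    # Nothing matches the context: fall back to the front of the backlog.
    return backlog.pop(0) if backlog else None

item = pull_next(backlog, in_progress, "technical")
assert item["title"] == "FP webinar series"
```

Note that nothing is scheduled in advance: the decision about what to learn next is deferred until there is actually capacity to learn it.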

Everything Is an Option

It is important to understand that everything in your learning backlog is an option, not an obligation. Your backlog is not a to-do list! To-do lists – annual professional development plans, if your company does them, are a good example – are prematurely prioritized lists of needless, context-free commitments. They destroy the value of professional learning. You commit at the beginning of the year to take a training class on the Foo programming language and a series of webinars on the Bar framework, but as the year goes on, the real hit turns out to be FizzBuzz. Its creator was in town last month and you missed it.

A couple of years ago, several colleagues and I started an Agile book club. Its value stream eventually evolved to look something like this:

Here, the club is retrospecting on one of the books, having already selected the next one. The club had rules not to start reading the next book until a retrospective had been held for the previous one, and not to start selecting the next book until the current book had been read and a retrospective scheduled for it.

First, notice the relative size of the backlog. It is huge! However, it is made entirely of options. We may refer to them by authors’ names and titles, but they are really not books – they are options to read their respective books. It is important to understand that many options will expire before we exercise them.

The same reasoning applies to learning opportunities other than books: training classes, conferences, webinars, user group meetings, and so on.

Selection over Prioritization

Second, notice that the backlog is not prioritized. We tried to establish a numeric priority order in the book club, only to learn quickly that the order changed dramatically before the next selection opportunity (which usually took place once every two or three months). The Lean-Kanban community has learned to value selection over prioritization, and professional development is a good example where the application of this principle is particularly important. What we are really trying to accomplish is a pull decision that respects our Learning-in-Progress limit and selects the learning opportunities most valuable in today’s context. Prioritization – assigning a rational number to each backlog item and selecting the largest number(s) – is one means to accomplish that, and not the only one. Prioritization assumes that our backlog is a set with the same mathematical properties as the field of rational numbers (density and a total, transitive ordering). This assumption is more often false than true; it is definitely untrue for a backlog of professional learning opportunities.
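The contrast between the two approaches can be made concrete. In the sketch below (all item names and contexts are invented), prioritization would assign each item a stored number and sort; selection instead evaluates relevance at the moment of pull, against whatever today’s context happens to be.

```python
# Illustrative contrast: no stored priority numbers; relevance is
# evaluated just in time against today's context, up to capacity.
backlog = [
    {"title": "Foo language class",    "relevant_to": {"project-a"}},
    {"title": "Bar framework webinar", "relevant_to": {"project-b"}},
    {"title": "FizzBuzz meetup",       "relevant_to": {"project-a", "project-b"}},
]

def select(backlog, todays_context, capacity):
    # A pull decision: filter by today's context, respect the capacity limit.
    fits = [item for item in backlog if todays_context & item["relevant_to"]]
    return fits[:capacity]

picked = select(backlog, {"project-b"}, 2)
assert [i["title"] for i in picked] == ["Bar framework webinar", "FizzBuzz meetup"]
```

When the context changes – say, project-a is cancelled next month – nothing needs to be re-ranked, because no ranking was ever stored.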

Conclusion

Lean Professional Development can be quickly summarized as follows: use Personal Kanban to establish a kanban system pulling from the backlog of learning opportunities. Empirically find your own capacity and Learning-in-Progress limit. Remember that professional development is highly contextual. Use Real Options thinking (or Lean Procrastination) and treat all backlog items as options. Make sure the backlog is regularly replenished. Abandon prioritization of your learning backlog and use contextual, just-in-time selection that respects your capacity limits.

Posted in conferences | 2 Comments