A key principle of systems thinking is that optimizing any single part of a system through process improvement does not necessarily improve the capability of the system as a whole. Improving widget production from 55 per hour to 100 per hour does not help much if shipping can only handle 70 per hour. If your organization is highly compartmentalized, individual groups may strive to improve their own sphere of influence only to be frustrated by the lack of overall improvement.
Consider the time it takes a truck to travel a highway route during rush hour. Some project managers would look at a 10-mile stretch of road and, knowing that the speed limit is 60 mph, conclude that you should be able to transit that stretch in 10 minutes. When they measure that the vehicle is taking 45 minutes to travel 10 miles at 7:30 in the morning, they incorrectly conclude that the driver isn’t fully capable or the vehicle is defective. Without taking the time to understand what the driver is experiencing, they might schedule driver training or a tune-up for the truck. Worse yet, they may conclude that they need to send additional trucks along the route at the same time. Obviously, the real problem is the large batch of 20,000 cars trying to utilize a road that was designed to handle 5,000.
The challenge is to identify the leveraged changes that can improve the system as a whole. Identify changes that improve the entire system and avoid optimizing any single component first. For example, more detailed requirements do not necessarily reduce cycle time if they bog down the remaining steps in the coding and testing chain.
Mathematical analysis of queues has determined that putting large batches of work through a system actually increases cycle time rather than decreasing it. If you have ever played “the penny game,” you were able to observe that as the batches passed between stages became smaller, the overall delivery capacity improved. Industry experience shows that large batches also increase defects, further reducing quality and increasing cycle time as defects must be reworked through the system.
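The penny-game effect can be sketched with a simple pipeline model. This is an illustrative assumption, not the game's official rules: N items flow through S sequential stages, each stage hands work downstream only when a full batch is finished, and stages can work on different batches in parallel.

```python
# Hypothetical penny-game model: n_items flow through n_stages sequential
# stages; each stage passes work on only when a full batch is complete.
def pipeline_cycle_time(n_items, n_stages, batch_size, per_item=1):
    """Time until the last batch clears the last stage, assuming stages
    work in parallel on different batches (pipelined flow)."""
    n_batches = n_items // batch_size
    batch_time = batch_size * per_item  # time one stage spends on one batch
    # The first batch crosses every stage; the remaining batches follow behind.
    return (n_batches + n_stages - 1) * batch_time

for b in (20, 5, 1):
    print(b, pipeline_cycle_time(20, 4, b))  # 20 -> 80, 5 -> 35, 1 -> 23
```

Even in this deterministic toy model, shrinking the batch from 20 to 1 cuts total cycle time from 80 to 23 time units, because downstream stages stop sitting idle while an upstream stage finishes a large batch.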
In terms of predictability, queueing theory shows mathematically that large batches of backlog items create a stochastic queue. Stochastic queues are neither deterministic nor linear; their high variability makes them dynamic and impossible to predict with a linear equation. This means that accurate, deterministic up-front estimation is unreliable for predicting system cycle time because of the inherent variability in the system.
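The nonlinearity can be seen in the simplest textbook queue. As an assumption for illustration only, model a work stage as an M/M/1 queue (random arrivals, random service times); the average time an item spends in the system is W = 1 / (μ − λ), which blows up nonlinearly as utilization approaches 100%:

```python
# M/M/1 queue sketch (an illustrative assumption, not a claim about any
# specific team): average time in system W = 1 / (service_rate - arrival_rate).
def mm1_time_in_system(arrival_rate, service_rate):
    """Average time a work item spends waiting plus being served."""
    assert arrival_rate < service_rate, "queue is unstable at 100%+ utilization"
    return 1.0 / (service_rate - arrival_rate)

SERVICE_RATE = 10.0  # hypothetical: 10 items per day
for util in (0.5, 0.8, 0.9, 0.95):
    w = mm1_time_in_system(util * SERVICE_RATE, SERVICE_RATE)
    print(f"{util:.0%} utilization -> {w:.2f} days in system")
```

Going from 90% to 95% utilization doubles the average time in system, which is exactly the kind of behavior a linear, deterministic estimate cannot capture.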
When adopting Kanban, first look at all of the queues in the system rather than just focusing on development velocity. This is even more challenging in a complex environment that has multiple queues. For example, a team may have a feature queue, but the process of checking in, building, and deploying software for test is another set of queues that the team must deal with throughout a release. Even in its simplest form, there is some iteration between the flow stages within a single team.
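The core Kanban mechanic behind these queues, pull subject to a work-in-progress limit, can be sketched in a few lines. The column names and limits here are illustrative assumptions:

```python
# Minimal sketch of a Kanban board with per-column WIP limits.
class KanbanBoard:
    def __init__(self, limits):
        self.limits = limits  # e.g. {"dev": 2, "test": 1} (hypothetical)
        self.columns = {name: [] for name in limits}

    def pull(self, item, column):
        """Pull an item into a column only if its WIP limit allows it."""
        if len(self.columns[column]) >= self.limits[column]:
            return False  # limit hit: work queues upstream, making the delay visible
        self.columns[column].append(item)
        return True

board = KanbanBoard({"dev": 2, "test": 1})
print(board.pull("feature-A", "dev"))  # True
print(board.pull("feature-B", "dev"))  # True
print(board.pull("feature-C", "dev"))  # False: dev WIP limit reached
```

The point of the refused pull is not to block work for its own sake; it surfaces the upstream queue immediately instead of letting it hide inside a busy stage.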
Steel threads to other teams are another example of this complex queue relationship. The teams and the program do not experience a simple linear queue. The diagram below shows the complex interaction between the teams and the delivery process.
Once the queues are identified, create a monitoring system that lets you spot backlogs and delays at any time and take quick action.
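Such a monitor can start very small. As a sketch (the queue names, timestamps, and two-day threshold are all assumptions), flag any queue whose oldest item has been waiting longer than an agreed limit:

```python
# Sketch of a simple flow monitor: flag queues whose oldest item has
# waited longer than max_wait. Thresholds and names are hypothetical.
from datetime import datetime, timedelta

def stale_queues(queues, now, max_wait=timedelta(days=2)):
    """Return the names of queues whose oldest item exceeds max_wait."""
    flagged = []
    for name, arrival_times in queues.items():
        if arrival_times and now - min(arrival_times) > max_wait:
            flagged.append(name)
    return flagged

now = datetime(2024, 1, 10)
queues = {
    "awaiting-review": [datetime(2024, 1, 5)],  # waiting 5 days
    "awaiting-deploy": [datetime(2024, 1, 9)],  # waiting 1 day
}
print(stale_queues(queues, now))  # ['awaiting-review']
```

Run against live board data, a check like this turns delays into same-day signals rather than surprises at the next status meeting.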
Retrospective metrics are too late to support the level of decision making necessary to avoid delays and maintain flow. Measuring team velocity at a monthly program status meeting misses the opportunity to make corrections throughout the month.
The real improvement from Kanban is not simply moving to a visual board with work-in-progress limits; it comes from understanding the complexity and barriers that exist in your software development process.