DevOps and The Theory of Constraints (II)

We know that failing to set up an efficient delivery pipeline diminishes the ability of the business to capitalize on its applications. So let’s explore how applying the five steps of the process of continuous improvement introduced by the ToC can bring out sensible solutions which, in essence, represent the core values of techniques such as Agile and DevOps.

Step #1: Identify the bottleneck.

A bottleneck is any resource whose capacity is equal to or less than the demand placed upon it. And a non-bottleneck is any resource whose capacity is greater than the demand placed on it.

As mentioned earlier, DevOps is about understanding the entire SDLC as one system, so we should be optimizing the capacity of the whole system rather than that of local areas. In this context, to absorb the effects of statistical fluctuations and dependent events in the delivery pipeline, some teams have to have more capacity than others, especially those at the end of the line. In our case, we have identified the testing and operations teams as clear bottlenecks in the delivery pipeline: statistical fluctuations and the position of these work centers in the chain force them to run at over 100% of capacity to fulfill all the demand.
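
As a minimal illustration of this first step, the sketch below compares the demand placed on each work center with its capacity and flags any work center whose capacity is equal to or less than the demand as a bottleneck. The work-center names and weekly figures are made up for the example.

    # Identify bottlenecks: resources whose capacity <= the demand placed upon them.
    # All names and figures below are illustrative, not taken from a real pipeline.
    work_centers = {
        # work center: (capacity, demand) in work items per week
        "development": (30, 20),
        "testing":     (15, 20),
        "operations":  (10, 20),
    }

    for name, (capacity, demand) in work_centers.items():
        utilization = demand / capacity
        label = "BOTTLENECK" if capacity <= demand else "non-bottleneck"
        print(f"{name:<12} utilization {utilization:>5.0%}  ->  {label}")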

 

Step #2: Exploit the bottleneck.

Trying to balance capacity with demand is an illusion: although capacity averages out over time, at any given moment a team will always be either too small or too large. Instead, we need to balance the flow of work with the demand.

Translated to the IT world, the idea is to use the testing and operations teams to control the flow through the delivery pipeline into the market (for SaaS products) or into the business (for internal apps). Taking a real-world example, three release cycles a year, each divided into monthly code drops, are far too slow compared with actual market demand. What we need to do is make the most of the testing and operations resources and make them more productive by throttling the release of work to them.

Agile methodologies introduce techniques to mitigate this effect by reducing batch sizes into smaller user stories and increasing the release frequency to weekly sprints. This way of working makes sure there is always enough work for the teams so that they can work at a steady pace. More importantly, queues and waiting time at the bottlenecks are reduced as a result of controlling the flow coming from the development team.
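
A simple way to picture this throttling is a work-in-process (WIP) limit on the hand-off between development and the bottleneck: development only releases a new user story to testing when the testing queue has room. The sketch below is a minimal, assumed illustration of that rule; the limit and story names are placeholders, not a prescription for any particular tool.

    from collections import deque

    # Hypothetical WIP limit on the hand-off queue feeding the bottleneck (testing).
    WIP_LIMIT = 3
    testing_queue = deque()

    def release_to_testing(user_story):
        """Development only releases work when the bottleneck queue has room."""
        if len(testing_queue) >= WIP_LIMIT:
            return False  # hold the story in development; don't pile up inventory
        testing_queue.append(user_story)
        return True

    for story in ["story-1", "story-2", "story-3", "story-4"]:
        accepted = release_to_testing(story)
        print(story, "-> released" if accepted else "-> held back (WIP limit reached)")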

What remains clear is that hiring more testing or operations resources to increase capacity will not, on its own, solve the problem. All we achieve by doing so is to create new bottlenecks or to degrade the capacity of the delivery pipeline as an end-to-end system.

Step #3: Subordinate any other decision to the bottleneck.

One of the important aspects of exploiting the bottlenecks is that forcing more work into the system will not automatically increase throughput; instead, it will create more bottlenecks. So, in essence, we need to keep some buffers of capacity at the non-bottlenecks, for several reasons:

  • To be able to deal with variations and statistical fluctuations.
  • To let non-bottlenecks take over some required (but low-value-adding) work from the bottlenecks.
  • To keep the entire system working at the pace of the bottleneck and avoid overloading it with work in process (inventory).

Translated to the IT space, this means that we should keep some buffer of capacity at the development team so that the flow of work coming out of it does not generate piles of inventory (that is, different versions of code and artifacts) at testing and operations. This is where the core techniques introduced by Agile and DevOps, which the software industry has praised for years, become crucial to control the flow of work and subordinate the entire delivery pipeline to the bottlenecks:

  • Controlling the flow between Business and Dev: One of the key behaviors promoted by Agile is to work continuously with the business in order to define user stories and make sure the product backlog is always filled for the development team.
  • Controlling the flow between Development and Testing: To this end, test-driven development (TDD) encourages putting the testing team at the front of the delivery pipeline, in collaboration with the development Scrum teams, allowing testers to write test scripts directly from the requirements, even before the actual code for a requirement is built (a.k.a. testing as code). It is true that not all test cases can be covered by scripts and some smoke testing is still needed in later phases, but this technique helps reduce inventory queues and speed up release times. Additionally, as the ToC suggests, we can use some of the capacity buffer at the development team to help testers write test scripts, since tests can be treated as code (see the test-first sketch after this list).
  • Controlling the flow between Development and Operations: In exactly the same way that TDD puts the testing team at the front of the delivery pipeline, DevOps promotes involving the operations teams alongside the development Scrum teams at the early stages of the project in order to plan and design dynamic infrastructure capacity based on the application requirements. Today this is possible thanks to new configuration management tools combined with cloud technologies, which allow systems administrators to write scripts that provision, size and configure the infrastructure and platforms required by the applications (a.k.a. infrastructure as code; see the provisioning sketch after this list).
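
To make the test-first idea concrete, here is a minimal sketch of a test written from a requirement before the code exists. The requirement, the function name apply_discount and the pytest-style assertions are assumptions chosen purely for illustration.

    # Tests written first, from the (made-up) requirement:
    # "an order over 100 EUR gets a 10% discount".
    # apply_discount does not yet exist when these tests are written (testing as code).

    def test_order_over_100_gets_10_percent_discount():
        assert apply_discount(order_total=200.0) == 180.0

    def test_small_order_gets_no_discount():
        assert apply_discount(order_total=50.0) == 50.0

    # The implementation is then written to make the tests pass.
    def apply_discount(order_total):
        return order_total * 0.9 if order_total > 100 else order_total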
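
And here is a minimal infrastructure-as-code sketch, assuming an AWS environment and the boto3 library; the AMI ID, instance type and tag are placeholders, and a real team might well prefer a declarative tool instead. The point is simply that the environment an application needs is described in a versionable script rather than built by hand.

    import boto3  # assumes AWS credentials are configured in the environment

    def provision_app_server(environment):
        """Provision one application server for the given environment (placeholder values)."""
        ec2 = boto3.client("ec2")
        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder AMI
            InstanceType="t3.medium",          # sized from the application requirements
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "environment", "Value": environment}],
            }],
        )
        return response["Instances"][0]["InstanceId"]

    # e.g. provision_app_server("staging")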

Overall, one of the key side effects of bringing all the work centers together and putting the bottlenecks at the front of the delivery pipeline is that teams receive quick and valuable feedback from each other in a very natural way. The good thing about these feedback loops between teams is that the output of one team can quickly be factored in as input for the others.

 

Step #4: Elevate the bottleneck.

The position this step takes in the process of improvement is key to its success. Adding more people, more machines and more tools is what many managers would intuitively do first, but this is something we need to avoid. To ensure payback from the capital expenditure aimed at improving the bottleneck, such actions should only be implemented after the other improvements have taken place.

Once again, the solution comes from one of the core principles DevOps is built upon: automation technologies, which are now mature enough for enterprise use, applied to the build, test and deploy processes will help lift the performance of the operations and testing teams. Implementing automation in the delivery pipeline has a direct and positive impact: throughput goes up, inventory goes down and operational expenses are reduced accordingly.
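
As a rough sketch of what such automation can look like, the script below chains build, test and deploy stages and stops the pipeline at the first failure. The commands (mvn, pytest and a hypothetical deploy.sh) are placeholders; real pipelines normally run in a CI/CD server, but the principle of codifying the stages is the same.

    import subprocess
    import sys

    # Placeholder commands; in a real pipeline these would match your build tooling.
    PIPELINE = [
        ("build",  ["mvn", "clean", "package"]),
        ("test",   ["pytest", "tests/"]),
        ("deploy", ["./deploy.sh", "staging"]),  # hypothetical deployment script
    ]

    for stage, command in PIPELINE:
        print(f"== {stage} ==")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"Pipeline stopped: '{stage}' stage failed.")
            sys.exit(result.returncode)

    print("Pipeline finished: build, test and deploy completed.")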

On the flip side, when changes are introduced to boost performance at the testing and operations teams, it takes time before the investments materialize into actual results such as the ones mentioned above, owing to the learning curve of new automation tools and the knowledge transfer required by new staff members.

 

Step #5: And start over again!

Although we have already introduced a few enhancements, there is always room for improvement! This is, in fact, one of the mantras of the ToC.

One of the side effects of applying the four previous steps is that management gains new tools and metrics to detect what can be improved. In essence, tools like Kanban help us spot quickly and visually where a bottleneck is arising, even before it has a negative impact on the delivery pipeline. Additionally, newly introduced indicators such as team velocity, lead time for changes, release frequency, defect rates and on-time delivery rates help us measure the performance of the delivery pipeline against the business goal: increasing throughput while simultaneously reducing both inventory and operational expenses.
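
As an illustration of how two of these indicators can be derived, the sketch below computes the lead time for changes (commit to deployment) and the release frequency from a small list of deployment records; the dates and the record structure are made up for the example.

    from datetime import datetime
    from statistics import mean

    # Made-up deployment records: (commit timestamp, deployment timestamp)
    deployments = [
        (datetime(2016, 5, 2, 10, 0),  datetime(2016, 5, 4, 16, 0)),
        (datetime(2016, 5, 9, 9, 30),  datetime(2016, 5, 10, 18, 0)),
        (datetime(2016, 5, 16, 11, 0), datetime(2016, 5, 19, 15, 0)),
    ]

    lead_times_hours = [(deployed - committed).total_seconds() / 3600
                        for committed, deployed in deployments]
    period_days = (deployments[-1][1] - deployments[0][1]).days or 1
    releases_per_week = len(deployments) / (period_days / 7)

    print(f"Average lead time for changes: {mean(lead_times_hours):.1f} hours")
    print(f"Release frequency: {releases_per_week:.1f} deployments per week")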

Summary

The process of continuous improvement introduced by the ToC in manufacturing provides a powerful framework for IT leaders who have a vision of changing the role of IT in the organization. These are pace-setters who believe IT will be a key stakeholder in helping the business achieve its goals (such as improving the product portfolio, increasing sales or improving customer retention) and who recognize application development as a key driver for doing so. That is why many companies will be facing the challenge of becoming world-class software development firms.
