A Condensed History of Progressive Delivery
August 20, 2019
Blog
As packaged software became decoupled from hardware, there was an all-around shift in thinking about the way we deliver software. Progressive Delivery is the latest step in that shift, designed for speed and risk mitigation.
Software development has changed over time, much in the way coding languages have. These changes have run parallel to major changes in software delivery mechanics.
Back in the day, users obtained new software by purchasing physical media, like disks or CDs. Because of this, software development was designed to accommodate long cycles that represented hardware design, fabrication, and validation intervals. Waterfall complemented this staged process very well.
As packaged software became decoupled from hardware, there was an all-around shift in thinking. Even the way we delivered software was reevaluated.
Quite simply, software development could move faster than hardware cycles. So teams looked to incorporate this into waterfall delivery models by leveraging a validation stage that enabled feedback through beta testing or early access programs. The goal was to maximize customer value, but only minimal changes could be made because this process didn’t start until the validation phase.
Dexterous Delivery Models
The desire to obtain feedback and incorporate change led to the basic tenets of Agile and Scrum delivery models. Both assert that movement toward a goal should be broken into small, changeable tasks, which creates a big advantage for development teams. Rather than plotting a course and staying on autopilot, these teams could make minor course corrections during the bigger journey to their destination.
Meanwhile, the waterfall method left one of two choices when projects changed midstream: either stay the course and wait for the next cycle, or abandon work that was no longer relevant. The second option typically meant teams were unable to deliver additional value to existing customers, and would probably have difficulty raising additional capital for research and development.
At the same time, the industry was evolving, and software delivery models transitioned from packaged software to Software as a Service (SaaS). Once the application or service lived in the cloud, a few things changed in the way software was delivered. Most importantly, physical delivery ceased.
Without the need for software to be physically handed from supplier to consumer, it could feasibly be updated at any time or place. This “Continuous Delivery” model allowed teams to deliver updates continuously, and customers to consume those updates immediately.
However, immediate updates also meant immediate risks. Specifically, the risk was that finding bugs always took longer than fixing them.
“Flagging” the Code
As teams migrated to a Continuous Delivery model, they required infrastructure that could reduce the risk of shipping hazardous code. This is when teams turned to the established technique of “feature flagging” to separate the act of code deployment from the actual release of features.
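To make that separation concrete, here is a minimal sketch of a feature flag in Python. The flag store, flag key, and checkout functions are all illustrative stand-ins, not any vendor’s API; in practice teams use an SDK backed by a flag management service.

```python
# Deployed vs. released: the new code path ships to production but stays
# dark until its flag is turned on. FLAGS is a stand-in for a real flag
# store or SDK; the flag and function names are hypothetical.

FLAGS = {"new-checkout-flow": False}  # deployed, but not yet released

def is_enabled(flag_key: str, default: bool = False) -> bool:
    """Look up a flag, falling back to a safe default if it is missing."""
    return FLAGS.get(flag_key, default)

def legacy_checkout(cart: list) -> str:
    return f"legacy checkout, {len(cart)} items"

def new_checkout(cart: list) -> str:
    return f"new checkout, {len(cart)} items"

def checkout(cart: list) -> str:
    # Deployment and release are now two separate decisions.
    if is_enabled("new-checkout-flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout(["book", "pen"]))  # stays on the legacy path until released
```

The key point is that turning the feature on is a configuration change, not a redeploy.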
Then came two architectural changes at once: microservices with globally distributed apps, and data science entering the application delivery process. These phenomena changed the way teams delivered software once again.
Microservices and global distribution revealed the need to isolate changes and the populations impacted by those changes. Roll out a new feature to Australia first and see what adoption looks like. Maybe make some changes prior to rolling out to the EMEA market? Or maybe change your message queueing service for only a small cohort of users.
Either way, development teams and operations teams started to observe how these partial rollouts were being received and adopted.
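As a rough illustration, that kind of cohort targeting can be as simple as a membership check on a user attribute. The region codes and user shape below are assumptions for the example; a real system would use a richer evaluation context.

```python
# Cohort targeting: enable a feature for one region first, then widen.

ROLLOUT_REGIONS = {"AU"}  # start with Australia; later add "EMEA", etc.

def new_feature_enabled(user: dict) -> bool:
    return user.get("region") in ROLLOUT_REGIONS

print(new_feature_enabled({"id": "u1", "region": "AU"}))  # True
print(new_feature_enabled({"id": "u2", "region": "US"}))  # False
```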
Progressive Delivery’s Sphere of Influence: The Blast Radius
Progressive Delivery begins with the premise that the release of features or updates is staged in a way that manages the impact of change. If a new code release is considered the epicenter, each cohort that is exposed represents an increasing blast radius.
To be successful in this delivery model, teams need a system that manages the separation of code deployment from feature releases, as well as control of the release, like a valve or gate that you can slowly open and close. Along with these control points, there is a need for feedback (event) data about how various features are accessed and consumed.
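Here is one minimal way such a valve might look in Python, using deterministic hashing for the percentage gate and a stubbed-out event emitter for the feedback loop. All names are illustrative, and a real system would ship events to an analytics pipeline rather than print them.

```python
import hashlib

# A percentage "valve": hashing buckets each user deterministically into
# 0-99, so raising ROLLOUT_PERCENT slowly opens the gate without users
# flip-flopping between experiences on every request.

ROLLOUT_PERCENT = 5  # open the valve gradually: 5 -> 25 -> 50 -> 100

def bucket(user_id: str, flag_key: str) -> int:
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def in_rollout(user_id: str, flag_key: str) -> bool:
    return bucket(user_id, flag_key) < ROLLOUT_PERCENT

def record_event(user_id: str, flag_key: str, enabled: bool) -> None:
    # The feedback half of the loop: emit event data about what was served.
    print(f"event flag={flag_key} user={user_id} enabled={enabled}")

enabled = in_rollout("user-123", "new-queue-service")
record_event("user-123", "new-queue-service", enabled)
```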
Continuous vs. Progressive Delivery
So what’s the difference between Continuous and Progressive Delivery? For Continuous Delivery teams, Progressive Delivery will feel like a natural progression – as it should. Many of the core benefits, tools, and mechanics remain the same. The primary distinction between the two is Progressive Delivery’s “built-for-failure” mentality.
On the development side, Progressive Delivery’s “built-for-failure” mechanisms are designed for speed and risk mitigation. By initiating all new software development with feature flags, code can be moved into production immediately while it remains dark behind its flag, keeping the risk of exposure near zero. This allows developers to move more quickly and make changes or updates more independently, and it indoctrinates novice developers with flag-first development best practices that stick with them from onboarding through production.
However, Progressive Delivery involves two core changes in the delivery model on the business side:
- Release Progression – Progressively increasing the number of users that are able to see (and are impacted by) new features (e.g., Stage 1: Visible to developers only; Stage 2: Visible to developers and beta users; Stage 3: Visible to more users; Stage n: Visible to everyone).
- Delegation – Progressively delegating the control of the feature to the owner that is most closely responsible for the outcome (e.g., Stage 1: Release owner = Developer; Stage 2: Release owner = Project Manager; Stage 3: Release owner = Marketing; Stage n: Release owner = Customer Success).
Importantly, delegation initially involves both assigning responsibility and requiring manual approval or action. The goal of delegation should be to base every change or release progression on defined criteria, using metrics and event data to automate the transitions.
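A sketch of what that criteria-driven progression could look like follows, with stages, owners, and error-rate thresholds that are purely illustrative rather than a prescribed standard.

```python
# Metric-gated progression: each stage names an audience, an owner, and
# the criteria that must hold before the release advances to the next
# blast radius. All values here are hypothetical.

STAGES = [
    {"audience": "developers",   "owner": "Developer",        "max_error_rate": 0.05},
    {"audience": "beta users",   "owner": "Project Manager",  "max_error_rate": 0.02},
    {"audience": "25% of users", "owner": "Marketing",        "max_error_rate": 0.01},
    {"audience": "everyone",     "owner": "Customer Success", "max_error_rate": 0.01},
]

def next_stage(current: int, observed_error_rate: float) -> int:
    """Advance only when observed metrics satisfy the current stage's criteria."""
    if observed_error_rate <= STAGES[current]["max_error_rate"]:
        return min(current + 1, len(STAGES) - 1)
    return current  # hold the release (a real system might also roll back)

stage = next_stage(0, observed_error_rate=0.01)
print(STAGES[stage]["audience"], "owned by", STAGES[stage]["owner"])
```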
While Continuous Delivery allowed people to fail fast, its tools did not have the inherent safety valves and feature flags to protect the service and the consumer. Those protections had to be added in a bespoke fashion by individual teams, or by multiple teams within a single company. Of course, this led to a mess of overlapping systems that, at best, slowed down the pipeline. At worst, it caused cascading failures.
Progressive Delivery’s “built-for-failure” model integrates feature flags as well as a feature management platform that consolidates all control points into a single interface. These tools can be used by multiple teams across an organization as well as across different organizations entirely. While Continuous Delivery is carried out with or without the use of a feature management platform, Progressive Delivery requires it.
Pictures, or it Didn’t Happen
Teams at IBM, Microsoft and Target have written about the ways “Continuous Delivery ++” has worked for them. But James Governor of RedMonk puts it best in a recent post:
“On the technology side, Kubernetes and Istio bring new management challenges, but also opportunities – service mesh approaches can enable a lot more sophistication in routing of new application functions to particular user communities.
“Continuous Integration/Continuous Delivery has been extremely useful in taking us forward as an industry. CI/CD is the basis of everything good in modern software development. But I do feel there are some new aspects and practices that we don’t currently have a name for.”
It’s exciting to see teams increase the velocity of delivery while also isolating risk based on the readiness of code and features. I look forward to seeing how this software development model can help teams build better software.
Adam Zimman is VP of Product and Platform at LaunchDarkly. He has more than 20 years of experience working in a variety of roles, from software engineering through to technical sales. He has worked at both enterprise and consumer companies such as VMware, EMC, and GitHub.