June 20, 2024
The goals & challenges of designing CI/CD pipelines
This is one section of our complete 12-part guide to CI/CD for databases - get your copy.
In the first blog of this series, we talked about bringing databases into the CI/CD discussion along with application changes. In this part, we are going to take our general knowledge of pipelines, apply the five principles we discussed in blog one, and use that combination to set out a plan.
3 Stages of software delivery pipelines
To begin, let’s agree that software delivery pipelines all generally break down into three stages:
- Creating the needed change
- Validating that the change is “production-ready”
- Delivering the change
As soon as the change is created in the first stage, it represents the potential to achieve the value put forth by the business need that caused it to be created in the first place. That potential value is not realized, however, until the third stage — when it is actually being used by the customers/users of the system. This leaves the second stage, Validation, as a bottleneck to realizing value.
That view is too simplistic.
There are actually bottlenecks throughout the pipeline. Long validation processes are often symptoms as much as they are causes of problems. Either way, our new pipeline structure must focus on ensuring that changes spend no more time than truly necessary in the validation stage.
5 Principles of CI/CD
Next, let’s go a bit deeper into the CI/CD principles mentioned in the introduction. They help establish an interrelated way of thinking about the pipeline in order to ensure that validation work is minimized — regardless of whether the bottleneck in question is a direct cause or symptom of a deficiency elsewhere in the pipeline. Let’s consider them in this context.
“Shift Left” to build quality in
This principle deals with checking the quality and fit of changes as early as possible in the delivery pipeline. The value is obvious: if you can consistently do something correctly the first time, you will be more efficient than someone who has to do it multiple times. You also gain efficiency because you can identify that something is broken in minutes or hours instead of waiting hours or days for feedback from someone else - or discovering it only after the change breaks something downstream.
This principle also enables removal of redundant checks in the ‘middle’ of the pipeline which further speeds up the overall flow.
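In a Liquibase-based pipeline, shifting left can be as simple as running the same checks locally, or in a pre-merge job, that the pipeline will run later. A minimal sketch, assuming the Liquibase CLI is installed and a `liquibase.properties` file points at a disposable development database (all names here are illustrative):

```shell
set -euo pipefail

liquibase validate      # catch malformed or inconsistent changelogs immediately
liquibase update-sql    # preview the SQL that would run, without executing it
liquibase update        # apply to the developer's own database as a smoke test
```

Failing fast here means a bad change never reaches the shared validation stage, which is exactly the feedback-in-minutes loop this principle describes.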
Work with small, atomic changes
A small number of well-defined changes takes less time to:
- Assess for impact
- Troubleshoot if there are problems
- Correct any problems, if necessary
Consider what it takes to figure out which of 100 changes is causing a problem versus troubleshooting when only one change is in flight - and the fix is correspondingly faster to make. A small-batch approach means that the actual task of validating individual changes gets much simpler and carries less overhead.
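In a Liquibase changelog, this principle translates to one logical change per changeset, each with its own rollback. A sketch in Liquibase's formatted SQL syntax (the table, column, and author names are illustrative):

```sql
--liquibase formatted sql

--changeset alice:add-customer-email
ALTER TABLE customer ADD email VARCHAR(255);
--rollback ALTER TABLE customer DROP COLUMN email;

--changeset alice:index-customer-email
CREATE INDEX idx_customer_email ON customer(email);
--rollback DROP INDEX idx_customer_email;
```

If a deployment fails, Liquibase reports the failing changeset by its id, so the problem is isolated to one small, reversible change rather than buried in a monolithic script.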
Drive out the toil of repetitive tasks
This principle is all about using automation intelligently. Automation applies to EVERY repetitive task in the pipeline. It serves as a means to provide consistency and speed while minimizing the opportunity for human error and reducing the need for people to wait for manual handoffs. Automating tedious tasks is key to freeing up people’s time to focus more on:
- Innovations
- Solving complex product problems
- Improving how the team works
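As a concrete sketch, the repetitive validate/preview/apply/record cycle can be scripted once and run by the CI server for every batch. This again assumes the Liquibase CLI and a configured target database; treat it as an outline rather than a finished pipeline:

```shell
set -euo pipefail

liquibase validate           # fail fast on changelog problems
liquibase status --verbose   # record which changesets are pending
liquibase update             # apply pending changesets
liquibase history            # record what is now deployed, for the audit trail
```

Because the same script runs identically every time, it removes the manual handoffs and copy-paste steps that this principle targets.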
Measure and adjust to continuously improve
Automated processes that are moving small, easily tracked batches of well-defined changes are much easier to measure than manual work and hand-offs documented by manual record keeping. That means the team can more quickly identify and remediate flow problems to continuously optimize the pipeline using clean and reliable data.
The whole team owns the outcome
In a continuously improving system, the points of greatest friction will be systematically identified and optimized. That means the points of greatest friction will shift elsewhere in the process and require work by different people. This should not be treated as blame - it is simply a consequence of the fact that once the biggest bottleneck is resolved, the second biggest bottleneck is promoted to first (fixing one leak reveals the next one).
Therefore, the whole team is responsible for:
- Tweaking their parts of the process
- Understanding how those adjustments impact others
- Understanding how adjustments elsewhere impact them
- The health of the overall delivery pipeline
Bringing CI/CD to the database
By combining the goal of low-friction, low-validation pipeline flows with the five principles, we can focus our design efforts and identify a series of questions we must answer to bring CI/CD flow to our database changes:
- How do we bring database changes from multiple creators together to define a batch to be processed?
- How do we empower change creators with a self-service facility to determine whether their database change is acceptable and learn what that means?
- How do we use that self-service facility to evolve our definition of “good enough to test” and therefore the quality of database changes coming into the validation cycle?
- How do we make the validation process itself a rugged and highly tested asset?
- How do we ensure that the infrastructure that underpins our pipeline is trustworthy?
- How do we equip our change creators with the tools they need to create the best database changes?
- How do we provide safety and guardrails to identify when the pipeline itself has a problem?
- How do we track the progress of our database changes through the pipeline?
- How do we handle problems with database changes once they get into the pipeline?
- How do we measure our pipeline effectiveness and efficiency?
This list of questions will shape how we redesign our database change pipeline and will be cumulative as we move from left (change creation) to right (production). So, each section in this guide will address one of these questions beginning with section 2, where we look at defining batches.
Ready to dive into all 12 parts of the complete CI/CD for databases guide? It covers:
- The goals & challenges of designing CI/CD pipelines
- Batching database changes
- Automating database “builds”
- Ensuring quality database changes
- Reliable and predictable database deployments
- Trustworthy database environments
- Sandbox databases
- Database governance for consistency, quality, and security
- Observability for database CI/CD
- Measuring database CI/CD pipeline performance
- Handling and avoiding problems with database changes
- Bring CI/CD to the database with Liquibase