Liquibase Init Containers for Kubernetes Migrations

June 17, 2024

Content updated September 2025

Key Takeaways

  • Kubernetes simplifies app deployment, but database migrations can cause schema conflicts and stuck locks.
  • Liquibase integrates with Kubernetes to track, version, and automate database changes alongside application releases.
  • Using init containers ensures migrations complete before app startup, preventing stuck locks.
  • Liquibase’s locking mechanism (DATABASECHANGELOGLOCK) protects against simultaneous schema updates.
  • Storing changelogs as ConfigMaps and mounting them in init containers is a Kubernetes best practice.

Database migrations in Kubernetes can be tricky. Let's explore how to use Liquibase effectively in containerized environments without the common pitfalls that trip up development teams.

The Kubernetes Migration Challenge

When you deploy applications to Kubernetes, database migrations face unique problems:

  • Race conditions: Multiple pods starting simultaneously can attempt the same migration
  • Stuck locks: Kubernetes kills unresponsive pods, potentially leaving database locks active
  • Schema drift: Rolling updates can leave some instances running against an inconsistent database state

Understanding Liquibase Locks

Liquibase prevents concurrent migrations using a DATABASECHANGELOGLOCK table. When a migration starts, it sets LOCKED = 1. When complete, it sets LOCKED = 0. Other instances wait for the lock to be released.

The problem? Kubernetes' "kill and restart" approach can terminate pods mid-migration, leaving locks stuck at LOCKED = 1 forever. Your next deployment will hang indefinitely waiting for a lock that will never be released.
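
If a deployment does get stranded this way, you don't have to edit the lock table by hand: Liquibase ships list-locks and release-locks commands for exactly this situation. A quick recovery sketch, using the same placeholder connection details as the example below:

# Check whether a lock is held, and by which host
liquibase list-locks \
  --url="jdbc:postgresql://postgres:5432/mydb" \
  --username=myuser --password=mypassword

# Clear a lock left behind by a pod that was killed mid-migration
liquibase release-locks \
  --url="jdbc:postgresql://postgres:5432/mydb" \
  --username=myuser --password=mypassword

If the database isn't reachable from your workstation, you can run the same commands from a temporary pod built on the liquibase/liquibase image.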

The Init Container Solution

The cleanest approach is using Kubernetes init containers. Init containers run before your main application container and must complete successfully before the app starts.

Here's how to set it up:

apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  initContainers:
  - name: liquibase
    image: liquibase/liquibase:latest
    command: ["liquibase", "update", "--changeLogFile=/liquibase/changelog/changelog.xml"]
    env:
    - name: LIQUIBASE_URL
      value: "jdbc:postgresql://postgres:5432/mydb"
    - name: LIQUIBASE_USERNAME
      value: "myuser"
    - name: LIQUIBASE_PASSWORD
      value: "mypassword"
    volumeMounts:
    - name: liquibase-changelog-volume
      mountPath: /liquibase/changelog
  containers:
  - name: my-app
    image: my-app:latest
    env:
    - name: DATABASE_URL
      value: "jdbc:postgresql://postgres:5432/mydb"
    ports:
    - containerPort: 8080
  volumes:
  - name: liquibase-changelog-volume
    configMap:
      name: liquibase-changelog

Managing Your Changelogs

Store your Liquibase changelog in a ConfigMap for easy management:

# Create the ConfigMap from your changelog file
kubectl create configmap liquibase-changelog --from-file=changelog.xml

# Deploy your application
kubectl apply -f my-pod-definition.yaml
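
If you prefer a fully declarative setup, the same ConfigMap can be checked into version control as a manifest with the changelog embedded. The changeset below is a hypothetical placeholder just to show the shape; swap in your real changelog content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: liquibase-changelog
data:
  changelog.xml: |
    <?xml version="1.0" encoding="UTF-8"?>
    <databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
          http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-latest.xsd">
      <changeSet id="1" author="example">
        <createTable tableName="customer">
          <column name="id" type="int" autoIncrement="true">
            <constraints primaryKey="true"/>
          </column>
          <column name="name" type="varchar(255)"/>
        </createTable>
      </changeSet>
    </databaseChangeLog>

Applying this manifest with kubectl apply replaces the kubectl create configmap step above and keeps the changelog versioned alongside the rest of your Kubernetes manifests.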

How This Solves Common Issues

Eliminates stuck locks: If the init container fails, the entire pod fails. No partial states, no stuck locks.

Clear failure visibility: Failed migrations show up as pod initialization failures in your monitoring.

Prevents race conditions: Only one pod's init container can hold the database lock at a time.

Fits Kubernetes patterns: Uses native Kubernetes lifecycle management instead of fighting against it.
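
Concretely, when a migration does fail, the diagnosis is a couple of kubectl commands away (using the pod name from the manifest above):

# The pod reports an Init error instead of starting the app
kubectl get pod my-app-pod

# Read Liquibase's output from the init container
kubectl logs my-app-pod -c liquibase

# Check events such as failed ConfigMap mounts or image pulls
kubectl describe pod my-app-pod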

Pro Tips

  1. Use Secrets for credentials instead of plain environment variables in production (see the sketch after this list)
  2. Monitor init container logs for migration issues during deployments
  3. Test your changelogs in lower environments before production deployments
  4. Keep changelogs in version control alongside your application code
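
For tip 1, here is a minimal sketch assuming a Secret named db-credentials; the name and keys are placeholders you would choose yourself:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: myuser
  password: mypassword

Then, in the init container spec, reference the Secret instead of hardcoding values:

    env:
    - name: LIQUIBASE_COMMAND_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
    - name: LIQUIBASE_COMMAND_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password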

This pattern transforms database migrations from a deployment headache into a reliable, automated process that scales with your Kubernetes workloads.

Frequently Asked Questions

Q1: Why use init containers with Liquibase in Kubernetes?
Init containers ensure schema migrations run before the main application starts, preventing conflicts and stuck locks.

Q2: What causes stuck Liquibase locks in Kubernetes?
Kubernetes often restarts pods quickly. If Liquibase is killed mid-process, locks may remain active, blocking further updates.

Q3: How does Liquibase handle migration locks?
It uses the DATABASECHANGELOGLOCK table, allowing only one process to update the schema at a time.

Q4: Where should I store Liquibase changelogs in Kubernetes?
Use a ConfigMap to mount changelogs as volumes so init containers can access them securely.

Q5: What happens if the init container fails?
The pod will fail to start. You can inspect init container logs to diagnose and fix migration issues before retrying.

Nathan Voxland