Database Schema Migration Strategies

Philip Rehberger Feb 23, 2026 9 min read

Manage schema changes safely. Version migrations, handle rollbacks, and coordinate with application deployments.


Database schema migration tools manage the evolution of your database structure over time. As applications grow and change, tables are added, columns are modified, indexes are created, and relationships shift. Migration tools track these changes as versioned, executable scripts that can be applied consistently across development, staging, and production environments.

Without migration tools, database changes become a coordination nightmare. Developers email SQL scripts around. Someone forgets to apply a change to staging. Production gets ahead of development. Data is lost because someone ran DROP instead of ALTER. Migration tools solve these problems by treating database schema as code: versioned, tested, and deployed like any other part of the application.

Framework-Native Migrations

Most modern frameworks include built-in migration support. Laravel, Rails, Django, and others provide migration systems that integrate with their ORMs and development workflows.

Laravel's migration system uses PHP classes that define up() and down() methods. The up method applies the change; the down method reverses it. Migrations are timestamped files in a dedicated directory, applied in chronological order.

// database/migrations/2025_01_15_000000_create_clients_table.php
return new class extends Migration
{
    public function up(): void
    {
        Schema::create('clients', function (Blueprint $table) {
            $table->id();
            $table->string('company_name');
            $table->string('email')->unique();
            $table->string('phone')->nullable();
            $table->text('address')->nullable();
            $table->enum('status', ['active', 'inactive', 'pending'])
                ->default('pending');
            $table->timestamps();
            $table->softDeletes();

            $table->index('status');
            $table->index('created_at');
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('clients');
    }
};

The framework tracks which migrations have been applied in a migrations table. When you run migrations, it executes only those not yet applied. This ensures each migration runs exactly once per environment.
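In Laravel, that bookkeeping table is deliberately small: by default it records just the migration filename and the batch it ran in. A rough sketch of its shape:

```sql
-- Sketch of Laravel's default migrations tracking table
CREATE TABLE migrations (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    migration VARCHAR(255) NOT NULL,  -- migration filename, without extension
    batch INT NOT NULL                -- which migrate run applied it
);
```

The batch column is what makes migrate:rollback work: a rollback reverses only the migrations from the most recent batch.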

# Run pending migrations
php artisan migrate

# Rollback the last batch
php artisan migrate:rollback

# Drop all tables and re-run all migrations
php artisan migrate:fresh

# Show migration status
php artisan migrate:status

Framework migrations work well for teams standardized on that framework. The schema builder abstraction works across database engines, though you lose access to engine-specific features. Complex migrations can use raw SQL when the builder falls short.
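For example, a partial index is one feature the schema builder cannot express, so a migration can drop to raw SQL for it. A sketch, assuming the clients table above (the index name is illustrative):

```php
// Illustrative: PostgreSQL partial index, beyond the schema builder's vocabulary
public function up(): void
{
    DB::statement(
        'CREATE INDEX idx_clients_active_email ON clients (email) WHERE deleted_at IS NULL'
    );
}

public function down(): void
{
    DB::statement('DROP INDEX IF EXISTS idx_clients_active_email');
}
```

Raw statements like this trade the builder's portability for the engine's full feature set, which is usually the right trade once you've committed to one database.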

SQL-Based Migrations

SQL-based migration tools like Flyway, Liquibase, and golang-migrate use plain SQL files rather than framework-specific code. The approach is framework-agnostic rather than database-agnostic: you write native SQL for your specific database engine, getting full access to its features.

Flyway uses versioned SQL files with a naming convention: files named V1__Create_users_table.sql and V2__Add_email_column.sql execute in version order.

-- V1__Create_clients_table.sql
CREATE TABLE clients (
    id BIGSERIAL PRIMARY KEY,
    company_name VARCHAR(255) NOT NULL,
    email VARCHAR(255) NOT NULL UNIQUE,
    phone VARCHAR(50),
    address TEXT,
    status VARCHAR(20) NOT NULL DEFAULT 'pending'
        CHECK (status IN ('active', 'inactive', 'pending')),
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    deleted_at TIMESTAMP WITH TIME ZONE
);

CREATE INDEX idx_clients_status ON clients(status);
CREATE INDEX idx_clients_created_at ON clients(created_at);
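Flyway's CLI drives these files much the way artisan drives Laravel's migrations, recording applied versions and checksums in a flyway_schema_history table:

```shell
# Apply pending versioned migrations
flyway migrate

# Show which versions have been applied and which are pending
flyway info

# Check applied migrations against the files on disk
flyway validate
```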

SQL migrations excel when you need database-specific features like partial indexes, materialized views, or stored procedures. They're also useful in polyglot environments where not everyone uses the same application framework.

-- V5__Add_client_search_index.sql
-- PostgreSQL-specific full-text search

ALTER TABLE clients ADD COLUMN search_vector tsvector;

CREATE INDEX idx_clients_search ON clients USING GIN(search_vector);

CREATE OR REPLACE FUNCTION clients_search_trigger() RETURNS trigger AS $$
BEGIN
    NEW.search_vector :=
        setweight(to_tsvector('english', COALESCE(NEW.company_name, '')), 'A') ||
        setweight(to_tsvector('english', COALESCE(NEW.email, '')), 'B') ||
        setweight(to_tsvector('english', COALESCE(NEW.address, '')), 'C');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER clients_search_update
    BEFORE INSERT OR UPDATE ON clients
    FOR EACH ROW EXECUTE FUNCTION clients_search_trigger();

Writing Safe Migrations

Migrations that work in development can fail catastrophically in production. Development databases are small and quiet. Production databases are large and active. Operations that complete instantly on 1000 rows might lock the database for hours on 10 million rows.

Safe migrations consider locking behavior. Adding a column with a default value in PostgreSQL (before version 11) locked the table while rewriting every row. In MySQL, some ALTER TABLE operations still lock the table, depending on the version and the operation. Understanding your database's locking behavior prevents surprise outages.

// DANGEROUS on older engines: may lock the table while adding a column with a default
public function up(): void
{
    Schema::table('clients', function (Blueprint $table) {
        $table->boolean('is_verified')->default(false);
    });
}

// SAFER: Add nullable column, then backfill, then add constraint
public function up(): void
{
    // Step 1: Add nullable column (fast, minimal locking)
    Schema::table('clients', function (Blueprint $table) {
        $table->boolean('is_verified')->nullable();
    });
}

// In a separate migration or background job:
// Step 2: Backfill in batches so no single UPDATE holds long locks
public function up(): void
{
    do {
        $ids = DB::table('clients')
            ->whereNull('is_verified')
            ->limit(1000)
            ->pluck('id');

        DB::table('clients')
            ->whereIn('id', $ids)
            ->update(['is_verified' => false]);
    } while ($ids->isNotEmpty());
}

// Step 3: Add the default and the NOT NULL constraint
public function up(): void
{
    Schema::table('clients', function (Blueprint $table) {
        $table->boolean('is_verified')->default(false)->nullable(false)->change();
    });
}

Index creation can also be problematic. Standard index creation blocks writes to the table. PostgreSQL's CREATE INDEX CONCURRENTLY builds indexes without blocking writes, at the cost of more time and transient overhead; note that it cannot run inside a transaction block.

-- DANGEROUS: Blocks writes during index creation
CREATE INDEX idx_clients_email ON clients(email);

-- SAFER: Creates index without blocking writes
CREATE INDEX CONCURRENTLY idx_clients_email ON clients(email);

Dropping columns requires care too. If application code still references the column during deployment, queries fail. The safe sequence is: deploy code that doesn't use the column, then drop it in a subsequent deployment.
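A sketch of that final step, assuming a hypothetical legacy_code column that no deployed release references any longer:

```php
// Runs only after a release that no longer touches legacy_code is live
public function up(): void
{
    Schema::table('clients', function (Blueprint $table) {
        $table->dropColumn('legacy_code');
    });
}

public function down(): void
{
    // The data is gone; restoring the column only restores its shape
    Schema::table('clients', function (Blueprint $table) {
        $table->string('legacy_code')->nullable();
    });
}
```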

Rollback Strategies

Down migrations provide theoretical reversibility, but practical rollbacks are complicated. If a migration transformed data, the reverse transformation might lose information. If the migration ran in production and other migrations followed, rolling back one migration in the middle breaks the sequence.

Some teams don't write down migrations at all, arguing they provide false confidence. When problems occur in production, the solution is usually fixing forward with a new migration rather than rolling back. Down migrations do have value in development for iterating on schema design.

// Not all migrations are reversible
public function up(): void
{
    // Merge first_name and last_name into full_name
    Schema::table('users', function (Blueprint $table) {
        $table->string('full_name')->nullable();
    });

    DB::statement("UPDATE users SET full_name = CONCAT(first_name, ' ', last_name)");

    Schema::table('users', function (Blueprint $table) {
        $table->dropColumn(['first_name', 'last_name']);
    });
}

public function down(): void
{
    // Can't reliably reverse this - where does the first name end?
    throw new Exception('This migration cannot be reversed');
}

For irreversible changes, consider the expand-contract pattern. Expand the schema to support both old and new structures. Migrate data. Contract by removing the old structure. Each step is independently deployable and reversible.
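Applied to the name merge above, expand-contract splits the one irreversible migration into independently deployable steps. A sketch, where each block would be its own migration file:

```php
// Expand: add full_name alongside the old columns (fully reversible)
public function up(): void
{
    Schema::table('users', function (Blueprint $table) {
        $table->string('full_name')->nullable();
    });
}

// Migrate: backfill while application code dual-writes both shapes
public function up(): void
{
    DB::statement(
        "UPDATE users SET full_name = CONCAT(first_name, ' ', last_name) WHERE full_name IS NULL"
    );
}

// Contract: drop the old columns once nothing reads them (the only irreversible step)
public function up(): void
{
    Schema::table('users', function (Blueprint $table) {
        $table->dropColumn(['first_name', 'last_name']);
    });
}
```

Only the contract step loses information, and by the time it runs, nothing depends on what it removes.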

Zero-Downtime Migrations

Production systems often can't tolerate downtime for database changes. Zero-downtime migrations apply changes without interrupting service. This requires careful sequencing and often spans multiple deployments.

The key principle is backward compatibility. Each deployment must work with both the current and previous database schema. New code can handle old schema (missing columns return null). Old code doesn't break on new schema (extra columns are ignored).

Adding a column with zero downtime:

  1. Deploy code that handles the column being absent
  2. Run migration to add the column (nullable, no default)
  3. Deploy code that writes to the new column
  4. Backfill existing rows
  5. Deploy code that requires the new column

Removing a column with zero downtime:

  1. Deploy code that doesn't read the column
  2. Deploy code that doesn't write the column
  3. Run migration to drop the column

// Step 1: Deploy code that tolerates missing column
class Client extends Model
{
    public function getNotificationPreference(): string
    {
        // Handle column not existing yet
        return $this->notification_preference ?? 'email';
    }
}

// Step 2: Migration adds nullable column
public function up(): void
{
    Schema::table('clients', function (Blueprint $table) {
        $table->string('notification_preference')->nullable();
    });
}

// Step 3: Deploy code that writes the column
class Client extends Model
{
    protected static function booted(): void
    {
        static::creating(function (Client $client) {
            $client->notification_preference ??= 'email';
        });
    }
}

// Step 4: Backfill migration
public function up(): void
{
    DB::table('clients')
        ->whereNull('notification_preference')
        ->update(['notification_preference' => 'email']);
}

// Step 5: Enforce the default and NOT NULL constraint in a later migration
public function up(): void
{
    Schema::table('clients', function (Blueprint $table) {
        $table->string('notification_preference')
            ->default('email')
            ->nullable(false)
            ->change();
    });
}

Testing Migrations

Migrations should be tested like other code. Running migrations against a test database catches syntax errors and logical problems before production.

class MigrationTest extends TestCase
{
    public function test_migration_runs_successfully(): void
    {
        // Drop all tables and run every migration from scratch
        $this->artisan('migrate:fresh')
            ->assertSuccessful();

        // Verify expected tables exist
        $this->assertTrue(Schema::hasTable('clients'));
        $this->assertTrue(Schema::hasColumn('clients', 'company_name'));
    }

    public function test_migration_rollback_works(): void
    {
        $this->artisan('migrate:fresh');

        // Rollback should not fail
        $this->artisan('migrate:rollback')
            ->assertSuccessful();
    }
}

Testing migrations with realistic data volumes reveals performance problems. A migration that works on empty tables might lock production for hours. Loading test databases with production-like row counts helps identify these issues.
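A crude way to get there is to seed synthetic rows in bulk before running the migration under test. A sketch, where the row count and column values are placeholders:

```php
// Seed synthetic clients in batches so the test database approaches production scale
$batch = [];
for ($i = 1; $i <= 5_000_000; $i++) {
    $batch[] = [
        'company_name' => "Company {$i}",
        'email' => "client{$i}@example.com",
    ];
    if (count($batch) === 1000) {
        DB::table('clients')->insert($batch);
        $batch = [];
    }
}
if ($batch !== []) {
    DB::table('clients')->insert($batch);
}
```

Batched inserts keep memory bounded and are dramatically faster than one INSERT per row.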

Migration in CI/CD

Integrate migrations into your deployment pipeline. Migrations should run automatically as part of deployment, not manually by humans who might forget or make mistakes.

# GitHub Actions deployment workflow
deploy:
  steps:
    - name: Run migrations
      run: php artisan migrate --force
      env:
        DB_CONNECTION: mysql
        DB_HOST: ${{ secrets.DB_HOST }}
        DB_DATABASE: ${{ secrets.DB_NAME }}
        DB_USERNAME: ${{ secrets.DB_USER }}
        DB_PASSWORD: ${{ secrets.DB_PASS }}

    - name: Deploy application
      run: ./deploy.sh

The --force flag bypasses confirmation prompts in production. The migration runs with the same credentials and connectivity as the application, catching configuration problems early.

Consider running migrations in a separate step that can fail independently of application deployment. If migrations fail, the deployment should stop. If migrations succeed but application deployment fails, you might need to handle the partial state.

Conclusion

Database migration tools transform schema changes from risky manual operations into repeatable, testable, automated processes. Whether using framework-native migrations or SQL-based tools, the benefits are the same: version control, consistency across environments, and safe evolution of database structure.

Write migrations that consider production realities. Large tables, concurrent access, and zero-downtime requirements constrain what's possible in a single migration. Split risky operations into safe steps. Test with realistic data. Integrate migrations into your deployment pipeline.

The discipline of migrations pays dividends throughout application lifetime. Schema changes become routine rather than feared. New environments spin up with correct schemas automatically. The database structure's history is documented in code rather than tribal knowledge.
