Database schema migrations in production environments require careful planning and execution to avoid costly downtime. Many ALTER TABLE operations in MySQL still lock or rebuild tables for extended periods, causing application outages and blocked writes. This tutorial demonstrates how to perform zero-downtime online schema changes on MySQL 8.4 LTS using two powerful tools: gh-ost and pt-online-schema-change.
By the end of this guide, you’ll understand how to safely execute schema migrations with cutover controls, replica lag monitoring, and rollback capabilities on your Singapore VPS, Sydney VPS, or Amsterdam VPS.
Prerequisites
Before starting this tutorial, ensure you have:
- Ubuntu 24.04 LTS VPS with at least 4GB RAM and 20GB storage
- MySQL 8.4 LTS installed and configured (see our MySQL upgrade tutorial)
- Root or sudo access to the server
- MySQL account with REPLICATION CLIENT and REPLICATION SLAVE privileges plus DDL/DML rights on the target schema (the deprecated SUPER privilege is not required)
- Active MySQL replication setup (source-replica or InnoDB Cluster)
- Basic understanding of MySQL administration and schema design
For production environments, we recommend using our highly available MySQL InnoDB Cluster setup to ensure maximum reliability during schema migrations.
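Before installing anything, it is worth confirming that the server version and replication topology match these prerequisites. A quick sanity check from the shell (adjust credentials and hosts for your setup; gh-ost additionally requires row-based binary logging):
# Confirm the server is running MySQL 8.4.x
mysql -u root -p -e "SELECT VERSION();"
# Confirm binary logging is enabled with ROW format (required by gh-ost)
mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN ('log_bin', 'binlog_format');"
# On a replica, confirm replication is running and not lagging
mysql -u root -p -e "SHOW REPLICA STATUS\G" | grep -E "Replica_IO_Running|Replica_SQL_Running|Seconds_Behind_Source"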
Step 1: Install Schema Migration Tools
Install both gh-ost and pt-online-schema-change to have multiple migration strategies available:
Install gh-ost
Download and install GitHub’s gh-ost tool:
# Download gh-ost binary
wget https://github.com/github/gh-ost/releases/download/v1.1.6/gh-ost-binary-linux-20231207144046.tar.gz
# Extract and install
tar -xzf gh-ost-binary-linux-20231207144046.tar.gz
sudo mv gh-ost /usr/local/bin/
sudo chmod +x /usr/local/bin/gh-ost
# Verify installation
gh-ost --version
Install Percona Toolkit
Install Percona Toolkit containing pt-online-schema-change:
# Add Percona repository
wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
sudo apt update
# Install Percona Toolkit
sudo apt install -y percona-toolkit
# Verify installation
pt-online-schema-change --version
Step 2: Prepare Test Environment
Create a test database and table to demonstrate online schema changes:
# Connect to MySQL from the shell
mysql -u root -p
-- Create test database
CREATE DATABASE schema_test;
USE schema_test;
-- Create sample table with data
CREATE TABLE users (
id INT PRIMARY KEY AUTO_INCREMENT,
username VARCHAR(50) NOT NULL,
email VARCHAR(100) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
INDEX idx_username (username)
);
-- Insert sample data
INSERT INTO users (username, email) VALUES
('alice', '[email protected]'),
('bob', '[email protected]'),
('charlie', '[email protected]');
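Three rows will migrate instantly, so it can be useful to pad the table with synthetic data so the online tools have something to copy. A minimal sketch using a recursive CTE (the row count and values are arbitrary test data):
-- Optional: generate 100,000 synthetic rows for more realistic testing
SET SESSION cte_max_recursion_depth = 100001;
INSERT INTO users (username, email)
WITH RECURSIVE seq (n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM seq WHERE n < 100000
)
SELECT CONCAT('user', n), CONCAT('user', n, '@example.com') FROM seq;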
Step 3: Configure MySQL Privileges
Create a dedicated MySQL user for schema migrations with appropriate privileges:
-- Create migration user
CREATE USER 'schema_migrator'@'localhost' IDENTIFIED BY 'secure_password_here';
-- Grant necessary privileges
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'schema_migrator'@'localhost';
GRANT ALTER, CREATE, DELETE, DROP, INDEX, INSERT, LOCK TABLES, SELECT, UPDATE ON schema_test.* TO 'schema_migrator'@'localhost';
-- FLUSH PRIVILEGES is not strictly required after CREATE USER/GRANT, but it does no harm
FLUSH PRIVILEGES;
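Before handing the account to either tool, confirm it can connect and that the grants look right:
# Verify the migration user's access (enter the password when prompted)
mysql -u schema_migrator -p -e "SHOW GRANTS FOR CURRENT_USER();"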
Step 4: Perform Migration with gh-ost
Execute an online schema change using gh-ost to add a new column:
# Basic gh-ost migration with safety controls
gh-ost \
--user="schema_migrator" \
--password="secure_password_here" \
--host="127.0.0.1" \
--database="schema_test" \
--table="users" \
--alter="ADD COLUMN phone VARCHAR(20) DEFAULT NULL AFTER email" \
--execute \
--allow-on-master \
--concurrent-rowcount \
--default-retries=120 \
--heartbeat-interval-millis=100 \
--max-lag-millis=2000 \
--throttle-control-replicas="127.0.0.1:3306" \
--serve-socket-file="/tmp/gh-ost.schema_test.users.sock" \
--postpone-cut-over-flag-file="/tmp/ghost.postpone.flag"
The migration runs in the background. Monitor progress and control cutover timing:
# Monitor migration status
echo "status" | nc -U /tmp/gh-ost.schema_test.users.sock
# When ready for cutover, remove postpone flag
rm -f /tmp/ghost.postpone.flag
# Emergency stop if needed
echo "panic" | nc -U /tmp/gh-ost.schema_test.users.sock
Step 5: Perform Migration with pt-online-schema-change
Execute another schema change using pt-online-schema-change to add an index:
# pt-online-schema-change with replica lag control
pt-online-schema-change \
--user=schema_migrator \
--password=secure_password_here \
--host=127.0.0.1 \
--port=3306 \
--alter="ADD INDEX idx_email (email)" \
D=schema_test,t=users \
--execute \
--max-lag=2 \
--check-interval=1 \
--chunk-size=1000 \
--chunk-time=0.5 \
--progress=percentage,5 \
--print \
--statistics
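If you want to validate an alteration before committing to it, pt-online-schema-change also supports --dry-run, which creates and alters the new table but does not create triggers, copy rows, or swap tables:
# Dry run: validates the ALTER without touching the original table's data
pt-online-schema-change \
--user=schema_migrator \
--password=secure_password_here \
--host=127.0.0.1 \
--port=3306 \
--alter="ADD INDEX idx_email (email)" \
D=schema_test,t=users \
--dry-run \
--print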
Step 6: Advanced Safety Controls
Replica Lag Monitoring
Configure advanced replica lag monitoring for production environments:
# Create replica lag monitoring script
cat > /tmp/check_replica_lag.sh << 'EOF'
#!/bin/bash
max_lag_seconds=5
# SHOW SLAVE STATUS was removed in MySQL 8.4; use SHOW REPLICA STATUS instead
current_lag=$(mysql -u schema_migrator -psecure_password_here -h replica_host -e "SHOW REPLICA STATUS\G" | grep "Seconds_Behind_Source" | awk '{print $2}')
# Treat an empty or NULL value (replication stopped or broken) as excessive lag
if [[ -z "$current_lag" || "$current_lag" == "NULL" || "$current_lag" -gt "$max_lag_seconds" ]]; then
    echo "High replica lag detected: ${current_lag:-unknown}s"
    # Pause gh-ost row copying
    echo "throttle" | nc -U /tmp/gh-ost.schema_test.users.sock
else
    echo "no-throttle" | nc -U /tmp/gh-ost.schema_test.users.sock
fi
EOF
chmod +x /tmp/check_replica_lag.sh
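The check only helps if it runs repeatedly for the duration of the migration. One simple approach (a sketch; the 5-second interval is arbitrary) is to loop until the gh-ost socket disappears:
# Re-check replica lag every 5 seconds while the migration socket exists
while [ -S /tmp/gh-ost.schema_test.users.sock ]; do
    /tmp/check_replica_lag.sh
    sleep 5
done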
Automated Rollback Preparation
Prepare rollback procedures before starting migrations:
# Create rollback script for gh-ost migration
cat > /tmp/rollback_migration.sh << 'EOF'
#!/bin/bash
echo "Initiating emergency rollback..."
echo "panic" | nc -U /tmp/gh-ost.schema_test.users.sock
# Wait for ghost table cleanup
sleep 10
# Verify original table is intact
mysql -u schema_migrator -psecure_password_here -e "DESC schema_test.users;"
echo "Rollback completed. Original table preserved."
EOF
chmod +x /tmp/rollback_migration.sh
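Note that a panic aborts the migration but can leave gh-ost's helper tables behind (the ghost and changelog tables, normally suffixed _gho and _ghc). Check for and remove them before retrying, for example:
# List any leftover gh-ost helper tables after an aborted run
mysql -u schema_migrator -psecure_password_here -e "SHOW TABLES FROM schema_test LIKE '\_users\_%';"
# Drop leftovers once confirmed they belong to the aborted migration
mysql -u schema_migrator -psecure_password_here -e "DROP TABLE IF EXISTS schema_test._users_gho, schema_test._users_ghc;"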
Step 7: Production Migration Workflow
Implement a comprehensive production migration workflow:
# Production-ready gh-ost command with all safety features
gh-ost \
--user="schema_migrator" \
--password="secure_password_here" \
--host="127.0.0.1" \
--database="production_db" \
--table="orders" \
--alter="ADD COLUMN order_status ENUM('pending','processing','shipped','delivered') DEFAULT 'pending'" \
--execute \
--allow-on-master \
--concurrent-rowcount \
--default-retries=300 \
--heartbeat-interval-millis=100 \
--max-lag-millis=1500 \
--throttle-control-replicas="replica1:3306,replica2:3306" \
--throttle-flag-file="/tmp/gh-ost.throttle" \
--postpone-cut-over-flag-file="/tmp/gh-ost.postpone" \
--panic-flag-file="/tmp/gh-ost.panic" \
--serve-socket-file="/tmp/gh-ost.production_db.orders.sock" \
--initially-drop-ghost-table \
--initially-drop-old-table=false \
--cut-over-exponential-backoff \
--exact-rowcount
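With this configuration the migration can be paused, resumed, cut over, or aborted purely through the flag files and interactive socket, without touching the gh-ost process itself:
# Pause row copying (gh-ost throttles while this file exists)
touch /tmp/gh-ost.throttle
# Resume copying
rm -f /tmp/gh-ost.throttle
# Check progress over the interactive socket
echo "status" | nc -U /tmp/gh-ost.production_db.orders.sock
# Allow cut-over when you are ready
rm -f /tmp/gh-ost.postpone
# Abort immediately in an emergency
touch /tmp/gh-ost.panic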
Best Practices
Follow these essential practices for safe online schema migrations:
- Test migrations on staging environments with production-size datasets
- Monitor resource usage during migrations using tools from our VPS benchmarking guide
- Schedule migrations during low-traffic periods
- Implement proper monitoring with our observability stack
- Configure backup systems using our MySQL PITR backup tutorial
- Use dedicated migration users with minimal required privileges
- Set appropriate replica lag thresholds based on your application requirements
- Prepare rollback procedures before starting any migration
Warning: Always test rollback procedures in staging environments before production migrations. Online schema changes can be complex, and having a tested recovery plan is essential.
Conclusion
You’ve successfully learned how to perform zero-downtime online schema changes on MySQL 8.4 LTS using both gh-ost and pt-online-schema-change. These tools provide robust safety mechanisms including cutover controls, replica lag monitoring, and emergency rollback capabilities.
The combination of proper tool selection, safety controls, and monitoring ensures your database schema evolution doesn’t impact application availability. Whether you’re running a single MySQL instance or a complex InnoDB Cluster setup, these techniques provide the reliability needed for production environments.
Ready to deploy your own high-performance MySQL infrastructure? Explore our Singapore VPS, Sydney VPS, or Amsterdam VPS offerings with NVMe storage, automated backups, and advanced networking features perfect for database workloads.