
How to Set Up Point‑in‑Time Recovery Backups for MySQL 8.4 LTS on an Ubuntu 24.04 VPS with MySQL Shell Dump, Binary Logs, and S3‑Compatible Storage (2025 Tutorial)

Setting up comprehensive point-in-time recovery (PITR) for MySQL 8.4 LTS is crucial for production databases. This tutorial demonstrates how to implement a robust backup strategy combining MySQL Shell’s logical dumps with binary log backups to S3-compatible storage, enabling recovery to any specific moment in time.

Introduction

Point-in-time recovery allows you to restore your MySQL database to any specific timestamp, protecting against data corruption, accidental deletions, or application errors. Unlike basic snapshots, PITR combines full logical backups with continuous binary log archiving to provide granular recovery options.

In this tutorial, you’ll learn to configure automated backups using MySQL Shell’s dump utilities, set up binary log streaming to object storage, and perform a complete point-in-time recovery. This approach is well suited to cloud VPS environments where consistent backup automation is essential.

Prerequisites

Before starting, ensure you have:

  • Ubuntu 24.04 LTS VPS with at least 4GB RAM and 40GB storage
  • MySQL 8.4 LTS installed and running
  • MySQL Shell 8.4 installed
  • S3-compatible storage credentials (AWS S3, MinIO, etc.)
  • Root or sudo access to the server
  • Basic knowledge of MySQL administration and binary logs

System requirements: minimum 4GB RAM for MySQL Shell operations, with additional storage for local dump staging.

Step 1: Install MySQL Shell and Dependencies

First, install MySQL Shell 8.4 and required utilities:

# Update system packages
sudo apt update && sudo apt upgrade -y

# Install MySQL Shell
wget https://dev.mysql.com/get/mysql-apt-config_0.8.32-1_all.deb
sudo dpkg -i mysql-apt-config_0.8.32-1_all.deb
sudo apt update
sudo apt install mysql-shell -y

# Install AWS CLI for S3 operations
# (Ubuntu 24.04 no longer ships an "awscli" apt package; use the snap,
# or AWS's official bundled installer)
sudo snap install aws-cli --classic

# Verify MySQL Shell installation
mysqlsh --version

Step 2: Configure MySQL for PITR

Configure MySQL with optimal settings for point-in-time recovery. Edit the MySQL configuration:

sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

Add or modify these settings in the [mysqld] section:

# Binary logging configuration
log-bin = /var/log/mysql/mysql-bin
server-id = 1
binlog-format = ROW
binlog-row-image = FULL
max_binlog_size = 100M
binlog_expire_logs_seconds = 604800  # 7 days; expire_logs_days was removed in MySQL 8.4

# GTID configuration for easier recovery
gtid-mode = ON
enforce-gtid-consistency = ON

# Performance and safety
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
innodb_doublewrite = ON

Restart MySQL to apply changes:

sudo systemctl restart mysql
sudo systemctl status mysql
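
Before continuing, confirm that binary logging and GTID mode are actually active; a quick sanity check:

# Verify binary logging and GTID settings
mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN
  ('log_bin', 'gtid_mode', 'enforce_gtid_consistency', 'binlog_expire_logs_seconds');"

# List the binary logs MySQL is writing
mysql -u root -p -e "SHOW BINARY LOGS;"

The first three variables should report ON, and the expiry window should show 604800.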

Step 3: Create Backup User and Directories

Create a dedicated MySQL user for backup operations with minimal required privileges:

mysql -u root -p

CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'SecureBackupPass2025!';

# Grant privileges required by MySQL Shell's dump utilities
GRANT SELECT, SHOW DATABASES, SHOW VIEW, TRIGGER, LOCK TABLES,
      REPLICATION CLIENT, EVENT, PROCESS
      ON *.* TO 'backup_user'@'localhost';

# Additional privileges for consistent dumps
# (BACKUP_ADMIN enables LOCK INSTANCE FOR BACKUP; SUPER is deprecated in 8.4)
GRANT RELOAD, FLUSH_TABLES, BACKUP_ADMIN ON *.* TO 'backup_user'@'localhost';

FLUSH PRIVILEGES;
EXIT;
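
Before wiring up automation, it's worth confirming the grants took effect and that the backup user can read the binary log list:

# Inspect the backup user's privileges
mysql -u root -p -e "SHOW GRANTS FOR 'backup_user'@'localhost';"

# Confirm the backup user can enumerate binary logs (needs REPLICATION CLIENT)
mysql -u backup_user -p -e "SHOW BINARY LOGS;"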

Create backup directories:

# Create backup staging directories
sudo mkdir -p /opt/mysql-backups/{dumps,binlogs,scripts}
sudo chown mysql:mysql /opt/mysql-backups -R
sudo chmod 750 /opt/mysql-backups -R

Step 4: Configure S3 Credentials

Set up AWS CLI or S3-compatible storage credentials. For standard AWS S3:

# Configure AWS credentials
aws configure
# Enter your Access Key ID, Secret Access Key, Region, and output format

For S3-compatible storage like MinIO, create a custom profile:

# Append a custom profile (>> preserves the default profile written by "aws configure")
cat >> ~/.aws/config << EOF
[profile minio]
region = us-east-1
output = json
s3 =
    endpoint_url = https://your-minio-endpoint.com
    signature_version = s3v4
EOF
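
Before relying on the profile, create the bucket (the name below matches the scripts later in this tutorial; substitute your own) and verify connectivity. Note that honoring endpoint_url from the config file requires a reasonably recent AWS CLI; on older releases, pass --endpoint-url explicitly on each command instead:

# Create the backup bucket (skip if it already exists)
aws s3 mb s3://your-mysql-backups --profile minio

# Verify the profile can list it
aws s3 ls s3://your-mysql-backups --profile minio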

Step 5: Create MySQL Shell Dump Backup Script

Create an automated script for MySQL Shell logical dumps:

sudo nano /opt/mysql-backups/scripts/mysql-dump-backup.sh
#!/bin/bash

# MySQL Shell Dump Backup Script with S3 Upload
set -euo pipefail

# Configuration
MYSQL_USER="backup_user"
MYSQL_PASSWORD="SecureBackupPass2025!"
BACKUP_DIR="/opt/mysql-backups/dumps"
S3_BUCKET="your-mysql-backups"
RETENTION_DAYS=30
DATE=$(date +%Y%m%d_%H%M%S)
DUMP_NAME="mysql_dump_${DATE}"

# Create timestamped dump directory
DUMP_PATH="${BACKUP_DIR}/${DUMP_NAME}"
mkdir -p "$DUMP_PATH"

echo "Starting MySQL Shell dump: $DUMP_NAME"

# Execute MySQL Shell dump with compression and parallel threads
mysqlsh --no-wizard --uri="${MYSQL_USER}:${MYSQL_PASSWORD}@localhost:3306" \
  --js -e "
util.dumpInstance('${DUMP_PATH}', {
  compression: 'gzip',
  threads: 4,
  triggers: true,
  users: false,
  consistent: true,
  // note: dumpInstance has no 'gtids' option; the executed GTID set is
  // recorded automatically in the dump metadata (@.json)
  chunking: true,
  bytesPerChunk: '128M'
})
"

echo "Dump completed. Uploading to S3..."

# Upload to S3 with server-side encryption
aws s3 sync "$DUMP_PATH" "s3://${S3_BUCKET}/dumps/${DUMP_NAME}/" \
  --storage-class STANDARD_IA \
  --sse AES256 \
  --delete

echo "Upload completed. Cleaning local dump..."
rm -rf "$DUMP_PATH"

# Clean old backups from S3
echo "Cleaning backups older than ${RETENTION_DAYS} days..."
# (retention is approximate; keys are removed one at a time)
aws s3api list-objects-v2 --bucket "$S3_BUCKET" --prefix "dumps/" \
  --query "Contents[?LastModified<='$(date -u -d "${RETENTION_DAYS} days ago" --iso-8601=seconds)'].Key" \
  --output text | tr '\t' '\n' | sed '/^None$/d' | \
  xargs -r -I{} aws s3 rm "s3://${S3_BUCKET}/{}"

echo "MySQL dump backup completed: $DUMP_NAME"

Make the script executable:

sudo chmod +x /opt/mysql-backups/scripts/mysql-dump-backup.sh
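
Run the script once by hand before scheduling it, and confirm the dump actually landed in S3:

# Manual test run
sudo /opt/mysql-backups/scripts/mysql-dump-backup.sh

# Verify the upload
aws s3 ls s3://your-mysql-backups/dumps/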

Step 6: Create Binary Log Streaming Script

Create a script to continuously backup binary logs:

sudo nano /opt/mysql-backups/scripts/binlog-backup.sh
#!/bin/bash

# Binary Log Backup Script
set -euo pipefail

# Configuration
MYSQL_USER="backup_user"
MYSQL_PASSWORD="SecureBackupPass2025!"
BINLOG_DIR="/var/log/mysql"
STAGING_DIR="/opt/mysql-backups/binlogs"
S3_BUCKET="your-mysql-backups"
STATE_FILE="/opt/mysql-backups/binlog_state.txt"

# Get list of binary log files
BINLOGS=$(mysql -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" -e "SHOW BINARY LOGS;" --batch --skip-column-names)

# Read last processed binlog
LAST_BINLOG=""
if [[ -f "$STATE_FILE" ]]; then
    LAST_BINLOG=$(cat "$STATE_FILE")
fi

echo "Starting binary log backup process..."
echo "Last processed: $LAST_BINLOG"

# Process each binary log
while IFS=$'\t' read -r binlog_file _; do
    # Skip logs we have already uploaded (binlog names sort lexicographically)
    if [[ -n "$LAST_BINLOG" && ! "$binlog_file" > "$LAST_BINLOG" ]]; then
        continue
    fi

    # Skip the current active binlog (SHOW MASTER STATUS was removed in MySQL 8.4)
    CURRENT_LOG=$(mysql -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" -e "SHOW BINARY LOG STATUS;" \
        --batch --skip-column-names | awk '{print $1}')
    if [[ "$binlog_file" == "$CURRENT_LOG" ]]; then
        echo "Skipping active binlog: $binlog_file"
        continue
    fi

    echo "Processing binlog: $binlog_file"

    # Copy binlog to staging
    cp "${BINLOG_DIR}/${binlog_file}" "${STAGING_DIR}/"

    # Upload to S3
    aws s3 cp "${STAGING_DIR}/${binlog_file}" "s3://${S3_BUCKET}/binlogs/${binlog_file}" \
        --sse AES256

    # Remove local copy and record progress
    rm "${STAGING_DIR}/${binlog_file}"
    echo "$binlog_file" > "$STATE_FILE"
    echo "Completed: $binlog_file"
done <<< "$BINLOGS"

echo "Binary log backup process completed"

Make executable:

sudo chmod +x /opt/mysql-backups/scripts/binlog-backup.sh
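
As with the dump script, do a manual test run. Flushing the logs first guarantees at least one inactive binlog is eligible for upload, and the state file should advance afterwards:

# Rotate so the previous binlog becomes inactive
mysql -u backup_user -p -e "FLUSH BINARY LOGS;"

# Manual test run, then inspect the results
sudo /opt/mysql-backups/scripts/binlog-backup.sh
cat /opt/mysql-backups/binlog_state.txt
aws s3 ls s3://your-mysql-backups/binlogs/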

Step 7: Schedule Automated Backups

Set up cron jobs for automated backup scheduling:

sudo crontab -e

Add these cron entries:

# Daily full dump at 2 AM
0 2 * * * /opt/mysql-backups/scripts/mysql-dump-backup.sh >> /var/log/mysql-backup.log 2>&1

# Binary log backup every 15 minutes
*/15 * * * * /opt/mysql-backups/scripts/binlog-backup.sh >> /var/log/binlog-backup.log 2>&1

# Flush binary logs daily at 1:45 AM (before dump)
45 1 * * * mysql -u backup_user -pSecureBackupPass2025! -e "FLUSH BINARY LOGS;" >> /var/log/mysql-backup.log 2>&1
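
The entries above embed the backup password in the crontab, where it is also visible in process listings. One way to avoid that, sketched here, is mysql_config_editor, which stores obfuscated credentials in ~/.mylogin.cnf:

# Run once as the crontab's user; prompts for the password interactively
mysql_config_editor set --login-path=backup --host=localhost --user=backup_user --password

# The FLUSH entry can then drop the inline password:
# 45 1 * * * mysql --login-path=backup -e "FLUSH BINARY LOGS;" >> /var/log/mysql-backup.log 2>&1

The same approach works for the mysql client calls inside the binlog backup script.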

Step 8: Perform Point-in-Time Recovery

To restore to a specific point in time, follow this procedure:

# 1. Download the latest full backup before your target time
aws s3 sync s3://your-mysql-backups/dumps/mysql_dump_20250115_020000/ ./restore_dump/

# 2. Stop MySQL service
sudo systemctl stop mysql

# 3. Move the existing data directory aside (keep it until recovery is verified!)
sudo mv /var/lib/mysql /var/lib/mysql.backup
sudo mkdir /var/lib/mysql
sudo chown mysql:mysql /var/lib/mysql

# 4. Re-initialize the data directory and start MySQL
# (--initialize-insecure creates root with an empty password; set one once recovery is done)
sudo mysqld --initialize-insecure --user=mysql
sudo systemctl start mysql

# 5. Enable local_infile, which util.loadDump() requires on the target server
mysql -u root -e "SET GLOBAL local_infile = ON;"

# 6. Restore the dump
mysqlsh --no-wizard --uri="root@localhost:3306" --js -e "
util.loadDump('./restore_dump', {
  threads: 4,
  loadIndexes: true,
  ignoreExistingObjects: false,
  resetProgress: true
})
"

# 7. Download required binary logs
aws s3 sync s3://your-mysql-backups/binlogs/ ./binlogs/

# 8. Apply binary logs up to the target time
# (--start-datetime should match the dump's completion time)
mysqlbinlog --start-datetime="2025-01-15 02:00:00" \
            --stop-datetime="2025-01-15 14:30:00" \
            ./binlogs/mysql-bin.000* | \
            mysql -u root -p
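
Datetime boundaries are convenient but approximate. Every MySQL Shell dump records the exact GTID set it contains in its @.json metadata file, and with GTIDs enabled it is safer to replay from that point instead of a wall-clock timestamp. A sketch of how you might use it (the grep pattern is an assumption about the metadata layout; inspect the file yourself first):

# Show the GTID set captured at dump time
grep -o '"gtidExecuted": *"[^"]*"' ./restore_dump/@.json

# Replay everything except transactions already in the dump, up to the target time
mysqlbinlog --exclude-gtids='<gtidExecuted value from above>' \
            --stop-datetime="2025-01-15 14:30:00" \
            ./binlogs/mysql-bin.000* | \
            mysql -u root -p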

Best Practices

Security Considerations:

  • Store backup credentials securely using environment variables or dedicated secret management
  • Enable S3 bucket encryption and appropriate IAM policies
  • Use separate S3 buckets for dumps and binary logs for better organization
  • Implement backup verification by periodically testing restore procedures (see the dry-run sketch below)
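
For the verification point above, util.loadDump() supports a dryRun mode that validates a dump against a server without loading any data; a minimal sketch, assuming a dump has been synced down to ./restore_dump:

# Validate a dump without writing anything
mysqlsh --no-wizard --uri="root@localhost:3306" --js -e "
util.loadDump('./restore_dump', { dryRun: true })
"

A full monthly restore into a scratch instance remains the stronger test; dryRun only proves the dump is readable and compatible.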

Performance Optimization:

  • Adjust threads and bytesPerChunk based on your VPS performance characteristics
  • Use S3 Intelligent Tiering for cost optimization on large backup volumes
  • Monitor backup duration and adjust scheduling to avoid peak traffic periods
  • Consider using S3 Transfer Acceleration for faster uploads from distant regions

Monitoring and Alerting:

  • Set up log monitoring for backup failures using tools from our observability stack guide
  • Implement backup size and timing alerts to detect inconsistencies
  • Test PITR procedures monthly using non-production environments

Conclusion

You've successfully implemented a comprehensive point-in-time recovery solution for MySQL 8.4 LTS using MySQL Shell dumps and binary log streaming. This setup provides both full logical backups and continuous transaction log archiving, enabling recovery to any specific moment in time.

The combination of automated scheduling, S3 integration, and proper retention policies ensures your MySQL data remains protected against various failure scenarios. Regular testing of your recovery procedures is essential to maintain confidence in your backup strategy.

For production environments requiring even higher availability, consider implementing this backup strategy alongside our MySQL InnoDB Cluster guide. When deployed on high-performance infrastructure, this approach provides enterprise-grade data protection suitable for mission-critical applications.

