MySQL Multi-Threaded Backups & Restores


How to do multi-threaded backups and restores for MySQL

So there are probably a lot of people out there running the standard master-slave MySQL setup with a mix of InnoDB and MyISAM databases.

This is not usually a problem, but MySQL does not do multi-threaded dumps or restores, so if you have a high amount of traffic hitting your InnoDB databases it can get ugly when replication breaks to the point where you need to do a full restore.

Now if your database was, say, 15 GB in size, consisting of InnoDB and MyISAM tables in a production environment, this would be brutal: you would need to lock the tables on the master while you're restoring to the slave, and since MySQL does not do multi-threaded restores, this could take 12 hours or more. Keep in mind this is dependent on hardware. To help you gauge your own situation, here are the servers we had when we ran into this issue:

Xeon quad core, 1 TB SATA drives, 18 GB of RAM (master and slave)

Fortunately, there is a solution 🙂

There is a free application called xtrabackup by Percona which does multi-threaded backups and restores of MyISAM and InnoDB combined. In this blog I will explain how to set it up, and what I did to minimize downtime for the business.

What you should consider doing

Drive I/O is a big factor on high-traffic database servers and can seriously impede performance. We built new servers with the same specs, but with SSD drives this time.

Xeon quad core, 1 TB SATA3, 120 GB SSD, 18 GB of RAM

Now this is not necessary; however, if your database traffic is high you should consider SSDs, or even Fibre Channel storage if your setup supports it.

Xtrabackup is free, unless you use MySQL Enterprise, in which case it's $5000/server to license. Honestly, using MySQL Enterprise is, in my opinion, just silly: it's exactly the same except you get support, the same support you could get online or on IRC or any forum, which is probably better. Why pay for something you don't need?

Install and setup

Note: This needs to be installed on both the master and slave database servers, as this process will replace your mysqldump-and-restore method.

  1. rpm -Uhv
  2. yum install percona-xtrabackup.x86_64 (master & slave, both servers)
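Once the package is in, a quick sanity check on both boxes saves debugging later. This is a minimal sketch; the function name is mine, not part of the package:

```shell
# Minimal post-install check: confirms the innobackupex wrapper that ships
# with percona-xtrabackup is on the PATH, and prints its version if so.
check_xtrabackup() {
  if command -v innobackupex >/dev/null 2>&1; then
    innobackupex --version
  else
    echo "innobackupex not found - is percona-xtrabackup installed?"
  fi
}
```

Run `check_xtrabackup` on both the master and the slave before going any further.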

Create a backup

Note: There are a number of ways you can do this. You can have it output to a /tmp directory while it's doing the backup, or you can have it output to stdout and compress into a directory. I will show you both ways.

  1. Since innobackupex, the tool that ships with xtrabackup that we are going to use, looks at /etc/my.cnf for MySQL's data directory, we do not have to define much in our command string. For this example we have no MySQL password set up; if you do, simply add --user=<user> --password=<pass> to the string.

This process took 5 minutes on a 15 GB database with the Xeon quad core, 1 TB SATA3, 120 GB SSD, 18 GB RAM box.
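For reference, the only thing innobackupex really needs from /etc/my.cnf here is the datadir setting. A typical fragment looks like this (the path shown is the stock default; yours may differ):

```ini
[mysqld]
datadir=/var/lib/mysql
```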

2. innobackupex <outputdirectory>
     Eg. innobackupex /test (this command will create a time-stamped directory inside /test; it's a full backup of all databases, InnoDB and MyISAM, uncompressed.)

3. innobackupex --stream=tar ./ | gzip - > /test/test.tar.gz (this command does the same as the above, except it streams to stdout and compresses the full backup into the tar file.)

Note: you also need the -i option to untar it, e.g. tar -ixvf test.tar.gz. Ensure mysql is stopped on any slave before restoring, and don't forget to chown -R mysql:mysql the files after you restore the data to the data directory with the innobackupex --copy-back command.

Note: I have experienced issues getting replication to start after doing a full backup and restore onto a slave with InnoDB and MyISAM using the innobackupex stream compression to gzip: after untarring, for whatever reason, the InnoDB log files had some corruption, which stopped replication on the slave the moment it connected to the master.

If the stream compression doesn't work, do an uncompressed backup as shown above and then rsync the data from your master to the slave, over a GigE switch if possible (ie. rsync -rva --numeric-ids <source> <destination>:/)

Our 15 GB DB compressed down to 3.4 GB.

  1. Now copy the tar file, or the directory that innobackupex created, to the slave server via scp

scp * user@host:  <- (edit accordingly)

Doing a Restore

Note: The beauty of this restore is that it is multi-threaded, utilizing multiple cores instead of just one, and since our server's data directory now sits on SSD, disk I/O is almost nil, increasing performance significantly and reducing load.

  1. On your primary database server, log into mysql and lock the tables.
  2. Now on your slave, restoring all the databases is pretty easy:
      1. innobackupex --copy-back /test/2013-02-03_17-21-52/ (update the path to wherever the innobackupex files are.)

This took 3 minutes to restore a 15 GB DB with InnoDB and MyISAM for us on the
Xeon quad core, 1 TB SATA3, 120 GB SSD, 18 GB of RAM box.
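The whole slave-side sequence can be sketched as one function. This is illustrative only: the function name is mine, the backup path is the example one from this post, the datadir and service name are typical defaults, so adjust all of them to your setup. Also note the master's read lock has to be held by a mysql session that stays open for the whole restore; a one-shot mysql -e "FLUSH TABLES WITH READ LOCK" is useless because the lock is released the moment that client disconnects.

```shell
# Sketch of the slave restore (paths and service name are assumptions).
restore_slave() {
  service mysql stop                                  # never copy-back onto a running server
  innobackupex --copy-back /test/2013-02-03_17-21-52/ # multi-threaded copy into the datadir
  chown -R mysql:mysql /var/lib/mysql                 # files are copied as root; fix ownership
  service mysql start
}
```

Once the slave is caught up, remember to unlock the tables on the master.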

Setting up the backup crons

  1. Now if you were using mysqldump as part of your MySQL backup process, you will need to change it to use the following.
  2. Create a directory on the slave called mysqldumps
  3. Create a file and save it.
    1. Add the following to it.

innobackupex --stream=tar ./ | gzip - > /mysqldumps/innobackup-$(date +"%F")

Note that our backups are stored on the SATA3 drive while the data directory resides on the SSD.

  1. Now add this to your crontab as root; again, change the schedule to run however often you need.
    1. 0 11,23 * * * /fullpath/

Setting up diskspacewatch for the SSD drive.

  1. Since the SSD drive is 120 GB, we need to set up an alert to watch the space threshold. If you do not have the resources to implement a dedicated disk-space monitoring tool, you can write a script that watches the disk space and sends out an email alert when the threshold is reached.
  2. Run df -h on your server and find the partition you want to watch; edit the df /disk2 line in the script to whichever partition that is. The threshold is defined by ( if [ $usep -ge 80 ]; then )
  3. Create a file called diskspacewatch, add the below, and save it.

#!/bin/bash
# Mail an alert when the watched partition crosses the usage threshold.
# Edit this to wherever you want the alerts to go:
ALERT_EMAIL="root"

df /disk2 | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read output;
do
  echo "$output"
  usep=$(echo "$output" | awk '{ print $1 }' | cut -d'%' -f1)
  partition=$(echo "$output" | awk '{ print $2 }')
  if [ "$usep" -ge 80 ]; then
    echo "SSD Disk on slave database server running out of space!! \"$partition ($usep%)\" on $(hostname) as of $(date)" |
      mail -s "Alert: Almost out of disk space $usep%" "$ALERT_EMAIL"
  fi
done

  1. Now set up a cron that runs this script every hour, or however often you want.
    1. 0 * * * * /path/diskspacewatch
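If you want to sanity-check the script's parsing without waiting for a disk to fill, you can feed the same pipeline a canned df line. The numbers below are made up; 85% is over the 80% threshold, so it should trip the alert branch:

```shell
# Fake df output to exercise the threshold logic without a real full disk.
sample='Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sdb1      117220824  99637700  17583124  85% /disk2'

echo "$sample" | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' |
while read output; do
  usep=$(echo "$output" | awk '{ print $1 }' | cut -d'%' -f1)
  partition=$(echo "$output" | awk '{ print $2 }')
  if [ "$usep" -ge 80 ]; then
    echo "ALERT: $partition at $usep%"   # prints: ALERT: /dev/sdb1 at 85%
  fi
done
```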

That's my tutorial on a MySQL multi-threaded backup and restore setup. If you have questions, email