A few notes up front:

- By default, when you run `mysqldump`, it locks the tables. You can change that by passing `--lock-tables=false` as a parameter.
- Run the backup via `cron`.
- Compress the backup with `gzip`.
- You can use the `time` command to find out how long a step took. The backup file was 21 GB in size.
- Use `rsync` instead of `scp`. `scp` gets stalled; `rsync` copies faster. Use the `-P` flag to see progress and get the ability to pause/resume file transfers.

The full backup command:

```
mysqldump --user=XXX --password=XXX --single-transaction --routines --triggers --quick --all-databases > XXX.sql
```
- `--single-transaction` only works with InnoDB and lets you back up data without blocking any applications. The `--single-transaction` option and the `--lock-tables` option are mutually exclusive.
- `--routines` copies stored procedures and functions.
- `--triggers` includes triggers for each dumped table. A trigger is a named database object that is associated with a table, and that activates when a particular event occurs for the table. Some uses for triggers are to perform checks of values to be inserted into a table or to perform calculations on values involved in an update.
- `--quick` forces mysqldump to retrieve rows for a table from the server a row at a time rather than retrieving the entire row set and buffering it in memory before writing it out. Combine the `--single-transaction` option with the `--quick` option.
- When dumping with `--all-databases`, use the `--routines` and `--events` options explicitly.
- The `performance_schema` database is not dumped even with the `--all-databases` option. You can mention it explicitly with the `--databases` option.
- Use the `--single-transaction` or `--skip-lock-tables` option when moving live databases.
- Use the `--no-data` (or `-d`) flag to not dump table data. It'll dump only the `CREATE TABLE` statement for each table (for example, to create an empty copy of the table by loading the dump file):
```
mysqldump --user=XXX --password=XXX --no-data --routines --events XXX > dump-defs.sql
```
| Database Size | Backup Size (.sql) | Compressed Size (.sql.gz) |
|---|---|---|
| 36.56 GB | 21 GB | 4.0 GB |
| 20.62 GB | 11 GB | 2.4 GB |
```
# 36.56 GB input > 21 GB output file
time mysqldump -uroot -p --databases foo > bak_foo.sql
Enter password:

real    21m3.347s
user    5m21.110s
sys     1m26.830s
```
```
# 20.62 GB input > 11 GB output file
time mysqldump -uroot -p bar > bak_bar.sql
Enter password:

real    12m40.238s
user    2m36.310s
sys     0m37.380s
```
`gzip` compresses the original file in place, meaning it won't save the compressed backup `.sql.gz` as a separate file and you lose the original `.sql` file.

```
# 11 GB input file > 2.4 GB
time gzip -9 bak_bar.sql

real    20m30.855s
user    17m8.170s
sys     0m14.760s
```
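If you don't want the `.sql` file and the `.sql.gz` file to ever coexist on disk, a common alternative is to pipe the dump straight through `gzip`. This is a sketch (reusing the `bar` database from the examples above), not a step from the original timings:

```
# dump and compress in one pass; no intermediate .sql file is written
mysqldump -uroot -p --single-transaction --quick bar | gzip > bak_bar.sql.gz
```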
Compressing the 21 GB `.sql` file resulted in a 4 GB `.sql.gz` file.
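Moving the compressed dump to the new server is where `rsync` (with `-P` for progress and resumable transfers, as mentioned earlier) comes in. A sketch with a placeholder host and path:

```
# -a preserves attributes, -v is verbose, -P shows progress and allows resuming
rsync -avP bak_foo.sql.gz user@newserver:/backups/
```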
On the new server, decompress the `.sql.gz` file:

```
time gunzip bar.sql.gz

real    2m4.765s
user    1m22.872s
sys     0m12.850s
```
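If you'd rather keep the compressed copy around after extracting, GNU gzip (1.6 and later) supports a `-k`/`--keep` flag; a side note, not part of the original timings:

```
# extract bar.sql while keeping bar.sql.gz
gunzip -k bar.sql.gz
```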
The database you import should already exist. When using `--databases`, `CREATE DATABASE` and `USE` statements are included in the output before each new database.
```
CREATE DATABASE foo;
```
```
# uncompressed .sql file
mysql -uroot -p DBNAME < BAKFILE.sql

# compressed .sql.gz file
pv mydump.sql.gz | gunzip | mysql -u root -p
```
`pv` lets you monitor the progress of data through a pipe, meaning you’ll see a progress bar!
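The same trick works for the uncompressed import; a small sketch reusing the placeholder names from above:

```
# progress bar for a plain .sql import
pv BAKFILE.sql | mysql -uroot -p DBNAME
```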
Alternatively, you can run the import from the interactive MySQL console:

```
-- Open the console and start the interactive MySQL mode
USE <name_of_your_database>;
SOURCE <path_of_your_.sql>;
```
Another way of moving the databases (plus users and permissions) is to sync the entire MySQL data directory (the default is `/var/lib/mysql`, defined in `/etc/mysql/mysql.conf.d/mysqld.cnf`) to the new server.

You can also find out which directory it is with:
```
SELECT @@datadir;
```
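One caution worth making explicit: copying the data directory out from under a running server can yield an inconsistent copy, so stop MySQL on both machines first. A sketch, assuming a systemd-based install:

```
# stop the server before syncing so the data files on disk are consistent
sudo systemctl stop mysql
```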
```
rsync -vPhaze "ssh -i /root/.ssh/id_rsa -p ${REMOTE_PORT}" ${REMOTE_USER}@${REMOTE_HOST}:/var/lib/mysql/ /var/lib/mysql/ &>> ${LOGFILE}
```
Here’s a bash script for achieving this that also logs the progress. Run this script via cron so that you don’t end up being stuck sitting in front of a terminal.
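A minimal sketch of such a script, wrapping the `rsync` command above; the connection details (`REMOTE_HOST`, `REMOTE_PORT`, `REMOTE_USER`, `LOGFILE`) are placeholders you’d fill in:

```
#!/usr/bin/env bash
# Sync the remote MySQL data directory to this machine and log everything.
# Placeholder values -- fill in your own connection details.
REMOTE_HOST="XXX"
REMOTE_PORT="22"
REMOTE_USER="root"
LOGFILE="/var/log/mysql-sync.log"

echo "[$(date)] starting sync" >> "${LOGFILE}"
rsync -vPhaze "ssh -i /root/.ssh/id_rsa -p ${REMOTE_PORT}" \
  "${REMOTE_USER}@${REMOTE_HOST}:/var/lib/mysql/" /var/lib/mysql/ &>> "${LOGFILE}"
echo "[$(date)] sync finished with exit code $?" >> "${LOGFILE}"
```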
```
crontab -e
```
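For example, a hypothetical entry (the script path is made up; adjust it to wherever you saved the script) that runs the sync nightly at 2 a.m.:

```
# m h dom mon dow  command
0 2 * * * /root/mysql-sync.sh
```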