I have very large databases with tens of thousands of tables, some of which hold up to 5GB of data across tens of millions of rows (I run a popular service)... backing these databases up has always been a headache. With default mysqldump options the server load quickly spirals out of control and everything locks up, affecting my users. And killing the process mid-dump can leave crashed tables and a lot of downtime while they're repaired.
I now use...
mysqldump -u USER -p --single-transaction --quick --lock-tables=false DATABASE | gzip > OUTPUT.gz
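(For completeness: restoring from that gzipped dump is just the reverse pipe, with the same USER and DATABASE placeholders as above.)

gunzip < OUTPUT.gz | mysql -u USER -p DATABASE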
The mysqldump reference at dev.mysql.com even says...
To dump large tables, you should combine the --single-transaction
option with --quick.
It says nothing about this depending on the tables being InnoDB; mine are MyISAM, and it worked beautifully for me. Server load was almost completely unaffected and my service ran like a Rolex during the entire process. If you have large databases and backing them up is affecting your end users... this IS the solution. ;) See the cron sketch below if you want to automate it.
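If you want to run this nightly from cron, here's a rough sketch built around the same command. The /backups path and database name are placeholders for your own setup, and it assumes your credentials live in ~/.my.cnf (a [client] section with user and password) so the password never appears on the command line:

#!/bin/bash
# Nightly dump sketch -- DB and DEST are placeholders, adjust to your setup.
# Credentials are read from ~/.my.cnf, so no -u/-p on the command line.
set -euo pipefail

DB="DATABASE"      # database to dump (placeholder)
DEST="/backups"    # where dumps are kept (placeholder)
STAMP=$(date +%F)  # YYYY-MM-DD, one file per day
OUT="$DEST/$DB-$STAMP.sql.gz"

# Same low-impact flags as above: no table locks, rows streamed one at a time.
mysqldump --single-transaction --quick --lock-tables=false "$DB" | gzip > "$OUT.tmp"

# Only move the file into place if the whole pipeline succeeded (pipefail),
# so a failed dump never clobbers the previous good backup.
mv "$OUT.tmp" "$OUT"

The write-to-temp-then-rename step is just defensive: if mysqldump or gzip dies partway through, yesterday's backup stays intact.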