MySQL Backup and Recovery


Establishing a Backup Policy

mysqldump makes a full backup of all our InnoDB tables in all databases:

shell> mysqldump --single-transaction --all-databases > backup_sunday_1_PM.sql

This backup operation acquires a global read lock on all tables at the beginning of the dump (using FLUSH TABLES WITH READ LOCK). If long updating statements are running when the FLUSH statement is issued, the backup operation may stall until those statements finish. After that, the dump becomes lock-free and does not disturb reads and writes on the tables.

mysqldump also has an option to flush the logs:

shell> mysqldump --single-transaction --flush-logs --master-data=2 \
           --all-databases > backup_sunday_1_PM.sql

After executing this command, the data directory contains a new binary log file, gbichot2-bin.000007, because the --flush-logs option causes the server to flush its logs. The --master-data option causes mysqldump to write binary log information to its output, so the resulting .sql dump file includes these lines:

-- Position to start replication or point-in-time recovery from
-- CHANGE MASTER TO MASTER_LOG_FILE='gbichot2-bin.000007', MASTER_LOG_POS=4;

Because the mysqldump command made a full backup, those lines mean two things:

1. The dump file contains all changes made before any changes written to the gbichot2-bin.000007 binary log file or newer.
2. All data changes logged after the backup are not present in the dump file, but are present in the gbichot2-bin.000007 binary log file or newer.

On Monday at 1 p.m., we can create an incremental backup by flushing the logs to begin a new binary log file. For example, executing a mysqladmin flush-logs command creates gbichot2-bin.000008. All changes made between the Sunday 1 p.m. full backup and Monday 1 p.m. are in the gbichot2-bin.000007 file. This incremental backup is important, so it is a good idea to copy it to a safe place. On Tuesday at 1 p.m., execute another mysqladmin flush-logs command; all changes made between Monday 1 p.m. and Tuesday 1 p.m. are then in the gbichot2-bin.000008 file.

The MySQL binary logs take up disk space. To free up space, purge them:

shell> mysqldump --single-transaction --flush-logs --master-data=2 \
           --all-databases --delete-master-logs > backup_sunday_1_PM.sql

Note: Deleting the MySQL binary logs with mysqldump --delete-master-logs can be dangerous if your server is a replication master, because slave servers might not yet have fully processed the contents of the binary log. To purge binary log files safely, follow this procedure:

1. On each slave server, use SHOW SLAVE STATUS to check which log file it is reading.
2. Obtain a listing of the binary log files on the master server with SHOW BINARY LOGS.
3. Determine the earliest log file among all the slaves. This is the target file. If all the slaves are up to date, this is the last log file on the list.
4. Make a backup of all the log files you are about to delete.
5. Purge all log files up to but not including the target file.

The PURGE BINARY LOGS statement has no effect if the server was not started with the --log-bin option to enable binary logging. Its two forms purge either up to a named log file or up to a date:

PURGE BINARY LOGS TO 'mysql-bin.010';
PURGE BINARY LOGS BEFORE '2008-04-02 22:46:26';

You can also set the expire_logs_days system variable to expire binary log files automatically after a given number of days.
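For example, a minimal sketch of automatic expiration (the seven-day value is an arbitrary illustration; choose a retention period longer than the interval between your full backups):

mysql> SET GLOBAL expire_logs_days = 7;

To make the setting survive a server restart, also place it in the [mysqld] section of the server option file:

[mysqld]
expire_logs_days=7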
Using Backups for Recovery

Now, suppose that we have a catastrophic crash on Wednesday at 8 a.m. that requires recovery from backups. To recover, first we restore the last full backup we have (the one from Sunday 1 p.m.). The full backup file is just a set of SQL statements, so restoring it is very easy:

shell> mysql < backup_sunday_1_PM.sql

At this point, the data is restored to its state as of Sunday 1 p.m. To restore the changes made since then, we must use the incremental backups; that is, the gbichot2-bin.000007 and gbichot2-bin.000008 binary log files. Fetch the files and then process their contents like this:

shell> mysqlbinlog gbichot2-bin.000007 gbichot2-bin.000008 | mysql

We have now recovered the data to its state as of Tuesday 1 p.m., but we are still missing the changes from that date to the date of the crash. To avoid losing them, we would have needed the MySQL server to store its binary logs in a safe location different from the place where it stores its data files, so that the logs were not on the destroyed disk. (That is, we can start the server with a --log-bin option that specifies a location on a physical device different from the one on which the data directory resides. That way, the logs are safe even if the device containing the data directory is lost.) If we had done this, we would have the gbichot2-bin.000009 file (and any subsequent files) at hand, and we could apply them using mysqlbinlog and mysql to restore the most recent data changes with no loss, up to the moment of the crash:

shell> mysqlbinlog gbichot2-bin.000009 ... | mysql

Backup Strategy Summary

1. Always run the MySQL server with the --log-bin option, or even --log-bin=log_name.
2. Make periodic full backups, using the mysqldump command.
3. Make periodic incremental backups by flushing the logs with FLUSH LOGS or mysqladmin flush-logs.

Dumping Data in SQL Format with mysqldump

By default, mysqldump writes information as SQL statements to the standard output:

shell> mysqldump [arguments] > file_name

To dump all databases, invoke mysqldump with the --all-databases option:

shell> mysqldump --all-databases > dump.sql

To dump only specific databases, name them on the command line and use the --databases option:

shell> mysqldump --databases db1 db2 db3 > dump.sql

The --databases option causes all names on the command line to be treated as database names. Without this option, mysqldump treats the first name as a database name and those following as table names.

With --all-databases or --databases, mysqldump writes CREATE DATABASE and USE statements prior to the dump output for each database. This ensures that when the dump file is reloaded, it creates each database if it does not exist and makes it the default database, so database contents are loaded into the same database from which they came. If you want the dump file to force a drop of each database before recreating it, use the --add-drop-database option as well. In this case, mysqldump writes a DROP DATABASE statement preceding each CREATE DATABASE statement.
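For example, a minimal sketch combining these options (db1 and db2 are placeholder database names):

shell> mysqldump --databases db1 db2 --add-drop-database > dump.sql

Reloading this dump file drops and recreates db1 and db2 before their contents are loaded.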
To dump a single database, name it on the command line:

shell> mysqldump --databases test > dump.sql

To dump only specific tables from a database, name them on the command line following the database name:

shell> mysqldump test t1 t3 t7 > dump.sql

Reloading SQL-Format Backups

If the dump file was created by mysqldump with the --all-databases or --databases option, it contains CREATE DATABASE and USE statements, and it is not necessary to specify a default database into which to load the data:

shell> mysql < dump.sql

Alternatively, use a source command from within mysql:

mysql> source dump.sql

If the file is a single-database dump not containing CREATE DATABASE and USE statements, create the database first:

shell> mysqladmin create db1

Then specify the database name when you load the dump file:

shell> mysql db1 < dump.sql

Alternatively, from within mysql, create the database, select it as the default database, and load the dump file:

mysql> CREATE DATABASE IF NOT EXISTS db1;
mysql> USE db1;
mysql> source dump.sql

Dumping Data in Delimited-Text Format with mysqldump

If you invoke mysqldump with the --tab=dir_name option, it uses dir_name as the output directory and dumps tables individually in that directory using two files for each table. The table name is the basename for these files. For a table named t1, the files are named t1.sql and t1.txt. The following command dumps the contents of the db1 database to files in the /tmp directory:

shell> mysqldump --tab=/tmp db1

To enable data files to be written using a different format, mysqldump supports these options:

--fields-terminated-by=str: The string for separating column values (default: tab).
--fields-enclosed-by=char: The character within which to enclose column values (default: no character).
--fields-optionally-enclosed-by=char: The character within which to enclose non-numeric column values (default: no character).
--fields-escaped-by=char: The character for escaping special characters (default: no escaping).
--lines-terminated-by=str: The line-termination string (default: newline).

For example, to use a comma as the field separator, enclose values within double quotes, and terminate lines with carriage return/newline pairs (on Unix, quote the double quote character to protect it from the shell):

shell> mysqldump --tab=/tmp --fields-terminated-by=, \
           --fields-enclosed-by='"' --lines-terminated-by=0x0d0a db1

The server uses SELECT ... INTO OUTFILE to write the files, so you must have the FILE privilege to perform this operation, and an error occurs if a given .txt file already exists. Here is an example of such a statement, producing a file in the comma-separated values (CSV) format:

SELECT a,b,a+b INTO OUTFILE '/tmp/result.txt'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  FROM test_table;
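As a reminder, FILE is a global privilege; granting it might look like this (backup_user is a hypothetical account name):

mysql> GRANT FILE ON *.* TO 'backup_user'@'localhost';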
Reloading Delimited-Text Format Backups

For backups produced with mysqldump --tab, each table is represented in the output directory by an .sql file containing the CREATE TABLE statement for the table and a .txt file containing the table data. Reload the table definition first, then the data:

shell> mysql db1 < t1.sql
shell> mysqlimport db1 t1.txt

An alternative to using mysqlimport to load the data file is to use the LOAD DATA INFILE statement from within the mysql client:

mysql> USE db1;
mysql> LOAD DATA INFILE 't1.txt' INTO TABLE t1;

If you used any data-formatting options with mysqldump when you initially dumped the table, you must use the same options with mysqlimport or LOAD DATA INFILE to ensure proper interpretation of the data file contents:

shell> mysqlimport --fields-terminated-by=, \
           --fields-enclosed-by='"' --lines-terminated-by=0x0d0a db1 t1.txt

Or:

mysql> USE db1;
mysql> LOAD DATA INFILE 't1.txt' INTO TABLE t1
    -> FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    -> LINES TERMINATED BY '\r\n';

Making a Copy of a Database

shell> mysqldump db1 > dump.sql
shell> mysqladmin create db2
shell> mysql db2 < dump.sql

Do not use --databases on the mysqldump command line, because that causes USE db1 to be included in the dump file, which overrides the effect of naming db2 on the mysql command line.

Copying a Database from One Server to Another

On Server 1:

shell> mysqldump --databases db1 > dump.sql

Copy the dump file from Server 1 to Server 2. On Server 2:

shell> mysql < dump.sql

Use of --databases with the mysqldump command line causes the dump file to include CREATE DATABASE and USE statements that create the database if it does not exist and make it the default database for the reloaded data. Alternatively, omit --databases and create the database manually on the destination server. On Server 1:

shell> mysqldump db1 > dump.sql

On Server 2:

shell> mysqladmin create db1
shell> mysql db1 < dump.sql

Dumping Stored Programs

Several options control how mysqldump handles stored programs (stored procedures and functions, triggers, and events):

--events: Dump Event Scheduler events
--routines: Dump stored procedures and functions
--triggers: Dump triggers for tables

The --triggers option is enabled by default, so that when tables are dumped, they are accompanied by any triggers they have. The other options must be specified explicitly to dump the corresponding objects. To disable any of these options explicitly, use its skip form: --skip-events, --skip-routines, or --skip-triggers.

Dumping Table Definitions and Content Separately

For example, to dump table definitions and data separately for the test database, use these commands:

shell> mysqldump --no-data test > dump-defs.sql
shell> mysqldump --no-create-info test > dump-data.sql

For a definition-only dump, add the --routines and --events options to also include stored routine and event definitions:

shell> mysqldump --no-data --routines --events test > dump-defs.sql
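To reload such a pair of dump files later, load the definitions before the data; a minimal sketch:

shell> mysql test < dump-defs.sql
shell> mysql test < dump-data.sql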
Point-in-Time (Incremental) Recovery Using the Binary Log

To see a listing of all binary log files:

mysql> SHOW BINARY LOGS;

To determine the name of the current binary log file:

mysql> SHOW MASTER STATUS;

Executing events from the binary log causes the data modifications they represent to be redone. This enables recovery of data changes for a given span of time:

shell> mysqlbinlog binlog_files | mysql -u root -p

Viewing log contents can be useful when you need to determine event times or positions to select partial log contents prior to executing events. To view events:

shell> mysqlbinlog binlog_files | more

Alternatively, save the output in a file and view the file in a text editor:

shell> mysqlbinlog binlog_files > tmpfile
shell> ... edit tmpfile ...

Saving the output in a file is useful as a preliminary to executing the log contents with certain events removed, such as an accidental DROP DATABASE. You can delete from the file any statements not to be executed before executing its contents. After editing the file, execute the contents as follows:

shell> mysql -u root -p < tmpfile

If you have more than one binary log to execute on the MySQL server, the safe method is to process them all using a single connection to the server. Here is an example that demonstrates what may be unsafe:

shell> mysqlbinlog binlog.000001 | mysql -u root -p # DANGER!!
shell> mysqlbinlog binlog.000002 | mysql -u root -p # DANGER!!

Processing the logs over different connections causes problems if, for example, the first log file contains a CREATE TEMPORARY TABLE statement and the second contains a statement that uses that table: when the first mysql process terminates, the server drops the temporary table, so the second process fails. To avoid problems like this, use a single connection to execute the contents of all binary logs that you want to process. Here is one way to do so:

shell> mysqlbinlog binlog.000001 binlog.000002 | mysql -u root -p

Another approach is to write all the logs to a single file and then process the file:

shell> mysqlbinlog binlog.000001 > /tmp/statements.sql
shell> mysqlbinlog binlog.000002 >> /tmp/statements.sql
shell> mysql -u root -p -e "source /tmp/statements.sql"

Point-in-Time Recovery Using Event Times

As an example, suppose that exactly at 10:00 a.m. on April 20, 2005, an SQL statement was executed that deleted a large table. To restore the table and data, you could restore the previous night's backup and then execute the following command:

shell> mysqlbinlog --stop-datetime="2005-04-20 9:59:59" \
           /var/log/mysql/bin.123456 | mysql -u root -p

This command recovers all of the data up until the date and time given by the --stop-datetime option. If you did not detect the erroneous SQL statement until hours later, you will probably also want to recover the activity that occurred afterward. To do this, run mysqlbinlog again with a start date and time:

shell> mysqlbinlog --start-datetime="2005-04-20 10:01:00" \
           /var/log/mysql/bin.123456 | mysql -u root -p

Together, the two mysqlbinlog commands restore everything up until one second before 10:00 a.m. and everything from 10:01 a.m. on. To use this method of point-in-time recovery, you should examine the log to be sure of the exact times to specify for the commands. To display the log file contents without executing them, use this command:

shell> mysqlbinlog /var/log/mysql/bin.123456 > /tmp/mysql_restore.sql

Excluding specific changes by specifying times for mysqlbinlog does not work well if multiple statements executed at the same time as the one to be excluded.

Point-in-Time Recovery Using Event Positions

Using positions may enable you to be more precise about which part of the log to recover, especially if many transactions occurred around the same time as a damaging SQL statement. To determine the position numbers, extract the relevant span of the log to a file:

shell> mysqlbinlog --start-datetime="2005-04-20 9:55:00" \
           --stop-datetime="2005-04-20 10:05:00" \
           /var/log/mysql/bin.123456 > /tmp/mysql_restore.sql

Open this file with a text editor and look for the statement that you do not want to repeat. Determine the positions in the binary log for stopping and resuming the recovery and make note of them. Positions are labeled as log_pos followed by a number. After restoring the previous backup file, use the position numbers to process the binary log file:

shell> mysqlbinlog --stop-position=368312 /var/log/mysql/bin.123456 \
           | mysql -u root -p
shell> mysqlbinlog --start-position=368315 /var/log/mysql/bin.123456 \
           | mysql -u root -p

The first command recovers all the transactions up until the stop position given. The second command recovers all transactions from the starting position given until the end of the binary log.
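If the saved file is large, one way to locate the damaging statement and the position labels near it is with grep (this sketch assumes the statement was a DROP TABLE):

shell> grep -n -B 5 'DROP TABLE' /tmp/mysql_restore.sql

The -B 5 option prints the five lines preceding each match, which normally include the position labels for the event.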
MyISAM Table Maintenance and Crash Recovery

MyISAM table maintenance can also be done using SQL statements that perform operations similar to what myisamchk can do:

To check MyISAM tables, use CHECK TABLE.
To repair MyISAM tables, use REPAIR TABLE.
To optimize MyISAM tables, use OPTIMIZE TABLE.
To analyze MyISAM tables, use ANALYZE TABLE.

How to Check MyISAM Tables for Errors

To check a MyISAM table, use the following commands:

myisamchk tbl_name          (default check)
myisamchk -m tbl_name       (medium check: check index entries, then read all rows)
myisamchk -e tbl_name       (extended check: complete and thorough check of all data)
myisamchk -e -i tbl_name    (as above, plus statistical information)

In most cases, a simple myisamchk command with no arguments other than the table name is sufficient to check a table.

How to Repair MyISAM Tables

Symptoms of corrupted tables include queries that abort unexpectedly and observable errors such as these:

tbl_name.frm is locked against change
Can't find file tbl_name.MYI (Errcode: nnn)
Unexpected end of file
Record file is crashed
Got error nnn from table handler

To get more information about the error, run perror nnn:

shell> perror 126 127 132 134 135 136 141 144 145
MySQL error code 126 = Index file is crashed
MySQL error code 127 = Record-file is crashed
MySQL error code 132 = Old database file
MySQL error code 134 = Record was already deleted (or record file crashed)
MySQL error code 135 = No more room in record file
MySQL error code 136 = No more room in index file
MySQL error code 141 = Duplicate unique key or constraint on write or update
MySQL error code 144 = Table is crashed and last repair failed
MySQL error code 145 = Table was marked as crashed and should be repaired

Note that error 135 (no more room in record file) and error 136 (no more room in index file) are not errors that can be fixed by a simple repair. In this case, you must use ALTER TABLE to increase the MAX_ROWS and AVG_ROW_LENGTH table option values:

ALTER TABLE tbl_name MAX_ROWS=xxx AVG_ROW_LENGTH=yyy;

If you are going to repair a table from the command line, you must first stop the mysqld server.

Stage 1: Checking your tables

Run myisamchk *.MYI, or myisamchk -e *.MYI if you have more time. Use the -s (silent) option to suppress unnecessary information. If the mysqld server is stopped, you should use the --update-state option to tell myisamchk to mark the table as "checked."

You have to repair only those tables for which myisamchk announces an error. For such tables, proceed to Stage 2. If you get unexpected errors when checking (such as out of memory errors), or if myisamchk crashes, go to Stage 3.

Stage 2: Easy safe repair

First, try myisamchk -r -q tbl_name (-r -q means "quick recovery mode"). This attempts to repair the index file without touching the data file. If the data file contains everything that it should and the delete links point at the correct locations within the data file, this should work, and the table is fixed. Start repairing the next table. Otherwise, use the following procedure:

1. Make a backup of the data file before continuing.
2. Use myisamchk -r tbl_name (-r means "recovery mode"). This removes incorrect rows and deleted rows from the data file and reconstructs the index file.
3. If the preceding step fails, use myisamchk --safe-recover tbl_name. Safe recovery mode uses an old recovery method that handles a few cases that regular recovery mode does not (but is slower).

Note: If you want a repair operation to go much faster, set the values of the sort_buffer_size and key_buffer_size variables each to about 25% of your available memory when running myisamchk.
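For example, a minimal sketch of such an invocation on a machine with about 1GB of free memory (the 256M values are illustrative; on some releases the first variable is named myisam_sort_buffer_size):

shell> myisamchk --sort_buffer_size=256M --key_buffer_size=256M -r tbl_name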
If you get unexpected errors when repairing (such as out of memory errors), or if myisamchk crashes, go to Stage 3.

Stage 3: Difficult repair

You should reach this stage only if the first 16KB block in the index file is destroyed or contains incorrect information, or if the index file is missing. In this case, it is necessary to create a new index file. Do so as follows:

1. Move the data file to a safe place.
2. Use the table description file to create new (empty) data and index files:

shell> mysql db_name
mysql> SET autocommit=1;
mysql> TRUNCATE TABLE tbl_name;
mysql> quit

3. Copy the old data file back onto the newly created data file. (Do not just move the old file back onto the new file. You want to retain a copy in case something goes wrong.)

Important: If you are using replication, you should stop it prior to performing the above procedure, because it involves file system operations, and these are not logged by MySQL.

Go back to Stage 2. myisamchk -r -q should work. (This should not be an endless loop.) You can also use the REPAIR TABLE tbl_name USE_FRM SQL statement, which performs the whole procedure automatically. There is also no possibility of unwanted interaction between a utility and the server, because the server does all the work when you use REPAIR TABLE.

Stage 4: Very difficult repair

You should reach this stage only if the .frm description file has also crashed. That should never happen, because the description file is not changed after the table is created.

1. Restore the description file from a backup and go back to Stage 3. You can also restore the index file and go back to Stage 2. In the latter case, you should start with myisamchk -r.
2. If you do not have a backup but know exactly how the table was created, create a copy of the table in another database. Remove the new data file, and then move the .frm description and .MYI index files from the other database to your crashed database. This gives you new description and index files, but leaves the .MYD data file alone. Go back to Stage 2 and attempt to reconstruct the index file.

MyISAM Table Optimization

To coalesce fragmented rows and eliminate wasted space that results from deleting or updating rows, run myisamchk in recovery mode:

shell> myisamchk -r tbl_name

You can optimize a table in the same way by using the OPTIMIZE TABLE SQL statement. OPTIMIZE TABLE does a table repair and a key analysis, and also sorts the index tree so that key lookups are faster. myisamchk has a number of other options that you can use to improve the performance of a table:

--analyze or -a: Perform key distribution analysis. This improves join performance by enabling the join optimizer to better choose the order in which to join the tables and which indexes it should use.
--sort-index or -S: Sort the index blocks. This optimizes seeks and makes table scans that use indexes faster.
--sort-records=index_num or -R index_num: Sort data rows according to a given index. This makes your data much more localized and may speed up range-based SELECT and ORDER BY operations that use this index.

Setting Up a MyISAM Table Maintenance Schedule

1. It is a good idea to perform table checks on a regular basis rather than waiting for problems to occur. One way to check and repair MyISAM tables is with the CHECK TABLE and REPAIR TABLE statements.
2. Another way to check tables is to use myisamchk. For maintenance purposes, you can use myisamchk -s. The -s option causes myisamchk to run in silent mode, printing messages only when errors occur. A cron-based sketch of such a schedule appears below.
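For example, a minimal crontab sketch that checks all MyISAM tables early every Sunday morning (the binary and data directory paths are placeholders for your installation; --fast restricts the check to tables that were not closed properly):

35 0 * * 0 /usr/bin/myisamchk --fast --silent /var/lib/mysql/*/*.MYI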