From: ANT_THOMAS 8 Jul 2021 09:21
To: ALL 1 of 7
I've got a MySQL server with multiple databases. Total data is around 8GB.

For quite a while now I've been running a nightly mysqldump to create a full backup.
A bit excessive, and I probably should have looked at incremental/differential backups. Current backup folder is at 1.4TB... oops!
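For reference, the nightly job is essentially just a cron'd full dump, something along these lines (user, paths and flags here are illustrative, not my exact script):

```shell
# Illustrative only - user and paths are placeholders.
# Runs from cron every night; each run writes a complete dump,
# which is why the backup folder has ballooned to 1.4TB.
mysqldump --user=backup --password --all-databases \
  | gzip > /backups/mysql-$(date +%F).sql.gz
```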

Anyway, said server is having some system drive issues (where the database lives) and I really need to migrate over to my new server that's been sat there just running some big storage for around 9 months.

The system drive issues have happened whilst running the mysqldump backup process so I've stopped that for the last couple of nights.

I've got a full backup from a few days ago which I can populate the new server with, but want to then populate it with the missing stuff from the recent few days.

Is this possible? Is it a trivial task? I don't want to run mysqldump and do a full db dump. Should I be asking this on stackexchange?

I've had a look at Master/Slave setups, and what I've read requires a full dump to sync, after which they run side-by-side, whereas my slave would be a few days behind the master. Do they just sync up?
From: koswix 12 Jul 2021 23:52
To: ANT_THOMAS 2 of 7
These system drive issues... have you tried running the backup with the bathroom light off?
From: ANT_THOMAS 13 Jul 2021 08:48
To: koswix 3 of 7
Nightly backups m8. Lights off!
From: Peter (BOUGHTONP) 13 Jul 2021 13:47
To: ANT_THOMAS 4 of 7
I don't remember seeing this thread before, but it was "2 new of 3" so maybe I was just half asleep before.

Anyhow, I've only had limited exposure to database master/slave stuff in the past, and that wasn't with MySQL, so no guarantees this is accurate. But I think for the master/slave scenario to work, you'd need to have had the main DB running as a master before the divergence started, so that it was recording transaction logs (of inserts/updates/deletes) which could then be re-played on top of the old dumped data to get the slave in sync.
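To put that concretely: binary logging is what records that replay-able history, and it has to be enabled in the server config before the fact. A fragment along these lines (paths and values illustrative):

```ini
# /etc/mysql/my.cnf - illustrative fragment, not from an actual setup
[mysqld]
server-id        = 1                              # master needs a unique id
log_bin          = /var/log/mysql/mysql-bin.log   # enables binary logging
expire_logs_days = 7                              # how long to keep the replay history
```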

Without that, you would need some other way to identify what the new data is and generate the queries just for that new data. (If it's an insert-only database with timestamps on all records, it would be simple, but if data is being modified/deleted and doesn't have an audit trail, it might equate to doing a full backup anyway.)
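For that insert-only case, mysqldump's `--where` option can pull just the newer rows; a rough sketch, with database, table and column names and the cutoff date all made up:

```shell
# Dump only rows added since the last full backup, table by table,
# and replay them on the new server. Assumes an insert-only table
# with a created_at timestamp column - illustrative names throughout.
mysqldump --no-create-info \
  --where="created_at >= '2021-07-05'" mydb orders \
  | mysql --host=new-server mydb
```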

However, if your issue with creating a new full backup is the dump process itself, you don't need to worry. Assuming you have access to the data files (FRM/IBD/etc) and are in a position to shut down the MySQL server, I'm fairly sure you can basically just copy the relevant files and clone the entire database with them.

Yep, see the "Cold Backup" section in the MySQL docs - that's for InnoDB tables. If you're using MyISAM and/or other storage engines there may be other steps involved, so it's worth checking around the docs for more information - searching there reveals a bunch of other relevant pages, and some seem to include example commands. (Also, if you're not on v8 you'll want to alter the version dropdown in case anything has changed.)
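A minimal sketch of that cold-copy approach (paths and service name here are Debian/Ubuntu defaults and may differ on your install):

```shell
# Cold backup: stop the server so the data files are consistent,
# copy the whole data directory, then restart. InnoDB needs the
# ibdata*/ib_logfile* files as well as the per-table .frm/.ibd files,
# so copy the directory wholesale rather than cherry-picking.
sudo systemctl stop mysql
sudo cp -a /var/lib/mysql /mnt/backup/mysql-cold-$(date +%F)
sudo systemctl start mysql
```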

EDITED: 13 Jul 2021 13:48 by BOUGHTONP
From: ANT_THOMAS 13 Jul 2021 15:40
To: Peter (BOUGHTONP) 5 of 7
I think the process of copying the actual DB files as a cold backup may have the same effect on the system drive as a full mysqldump.

Worth a try though for sure.
From: Peter (BOUGHTONP) 13 Jul 2021 23:11
To: ANT_THOMAS 6 of 7
Well the cold backup is only an OS/disk-level read of binary data files.

The mysqldump involves the MySQL application reading the same data (not necessarily contiguously), and converting it to a text file of SQL insert statements (which is subsequently zipped, right?) so possibly that process involves temporary files, swapping, etc.

What sort of drive is it and what are the issues it's having?

From: Matt 14 Jul 2021 16:58
To: ANT_THOMAS 7 of 7
Never tried the differential approach.

But, you should be able to copy the data between two MySQL instances using just mysqldump and the mysql client by piping the output of the former into the latter. This way it will use the native protocol over the wire and should be quicker than writing to a file, moving/copying it across the network and re-importing it.

If you configure both connections using mysql_config_editor you won't even need all the command line arguments, just enough to tell mysqldump and mysql client which connections to use from the config:
mysqldump --login-path=local database | mysql --login-path=remote database
The new server will have to allow connections from remote IPs, which it doesn't by default (it listens on 127.0.0.1 only, but you can set bind-address to 0.0.0.0 to listen for all connections).
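The one-off setup for those login paths looks something like this (host names and user are placeholders), plus the bind-address change on the new server:

```shell
# Create the two named connections once; each prompts for the password
# and stores it obfuscated in ~/.mylogin.cnf.
mysql_config_editor set --login-path=local  --host=localhost      --user=root --password
mysql_config_editor set --login-path=remote --host=new-server.lan --user=root --password

# On the new server, allow remote connections by editing my.cnf:
#   [mysqld]
#   bind-address = 0.0.0.0
```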