Everyone knows that they should make regular backups. It also seems that everyone knows that everyone else rarely does so. As human beings, we tend to get a little sloppy about things that aren’t strictly necessary or immediately needed.

Jamie Zawinski has a pretty good article about backups here, and you should read it.

Of course, everyone’s situation is different. I have a large RAID array, which meant that backing up to a single external disk wasn’t possible. I also tend to be a little extra paranoid about my data, so I had the following requirements:

  • Physically Redundant Storage: A copy of the backup must reside in two physical locations, so that if one burns to the ground, all of the data is safe at another.
  • Intensive Integrity Checking: It’s not good enough to just write changes to a backup disk and otherwise let it sit spinning. There must be a way to frequently verify all of the data on the backup disk, to ensure that it’s still a good backup when the time comes.
  • Ease of Use: An automated process that starts backups without supervision and reports success or failure afterwards.

Problem #1: Physically Redundant Storage

A company called Diskology makes a great product called the Disk Jockey (“DJ”). The DJ lets you connect two SATA disks and create quick on-the-fly mirrors or stripes, and it also serves as a basic SATA disk dock. The version I picked up has USB and eSATA connectors. For backups, I connect two disks of equal size to the DJ and select “mirror” mode, and the DJ appears to the OS as a single disk. (For example, two 2TB disks connected to the DJ in “mirror” mode show up to the OS as one 2TB disk.)

Whatever writes you make to the DJ will be written to both disks in mirrored mode. Whatever reads you do from the DJ will be read from one. This has some interesting implications that you should be aware of, and I talk about them in greater detail here.

The result of all of this is that I keep one side of each mirror off-site at work. Every day I bring that disk home from work and attach it to the DJ along with the other side of the mirror, which stays at home. When I head to work in the morning, I do the opposite.

The disks can function independently of the Disk Jockey. This is important because if something happens, I need to be able to restore data with just one disk. There’s no special RAID metadata embedded on the disks; it’s a true mirror.

Problem #2: Intensive Integrity Checking

The problem with most “set and forget” backup regimes is that you might need some obscure piece of data from the disk down the road, only to find that the section of the disk where that data lives has long since gone bad. You don’t know it’s gone bad because you’ve never tried to read it (in the case of data that rarely changes). The solution is to read the entirety of your backup disk during every backup cycle, and report failures immediately.

The default behaviour of rsync is to check only the modification time and file size; if they match, it doesn’t read the file on the backup disk at all. Many other backup solutions operate in a similar fashion. If you’re doing ZFS sends and only 100K of data has changed since your last backup, only 100K of data is written to the disk. You would never know if something in all that old data has gone bad.
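For illustration, here’s what that difference looks like with rsync itself. The paths below are placeholders, and the --checksum run takes far longer because it reads every file on both sides:

    # Default quick check: only size and modification time are compared,
    # so unchanged files on the backup disk are never read back.
    rsync -a /array/photos/ /mnt/backup/photos/

    # Force whole-file checksums: every file is read on both the source
    # and the backup disk, and any copy that differs is re-transferred.
    rsync -a --checksum /array/photos/ /mnt/backup/photos/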

I chose to solve this problem by running a ZFS scrub on the backup disk after every backup. A scrub re-reads the entire volume and verifies its checksums, which is a time-consuming process, but worth it in the name of making sure my backups are still readable and intact.
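A minimal sketch of that step, assuming the backup disk is imported as a ZFS pool named backup (the pool name is a placeholder):

    # Re-read every block in the pool and verify it against its checksum.
    zpool scrub backup

    # Afterwards, check for read or checksum errors (and scrub progress).
    zpool status backup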

An alternative would be to simply blow away the backup volume and do a complete backup on every run. There’s a big problem with that approach, though: if something goes wrong partway through the run, you’re left with an incomplete backup and no old copy to fall back on. The checksumming method ensures that the data on the backup volume is never erased ahead of time.

Problem #3: Ease of Use

So the backup procedure I have is now very simple:

After work, I attach the drives to the DJ. A backup script runs automatically overnight. In the morning, I detach the drives, keep one at home, and bring the other in to work. The script does all of the heavy lifting: it splits my array into easily manageable chunks. It also has a --info flag that lets me quickly see the status of all of my backups:

   Last Backup                  Used Free UUID
M  Tue Sep  2 20:00:04 PDT 2014 1.3T  66G 18e61a61-6502-6510-8086-0065d1917f97
S  Wed Sep  3 20:00:05 PDT 2014 1.4T 487G 0cfdeff4-6502-6510-8086-145408f4e658
Tb Sun Sep  7 20:00:06 PDT 2014 2.4T 374G c54d93c9-6502-6510-8086-7395b84d22d7
Z  Mon Sep  8 18:00:04 PDT 2014 627G 291G 57df37aa-6502-6510-8086-a9ac378f85d5
Ta Tue Sep  9 18:00:04 PDT 2014 2.2T 519G 45ff4122-6502-6510-8086-cf0a1f0ea6d7
Y  Tue Sep 16 18:00:14 PDT 2014 1.6T 295G 385afa54-6502-6510-8086-82120fc9d546
G  Wed Sep 17 18:00:03 PDT 2014 1.6T 264G 44d010a8-6502-6510-8086-f1032299ef49
A  Mon Sep 22 18:00:04 PDT 2014 1.5T 393G 08ece419-6502-6510-8086-59d4cb0e617c
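
The script itself is tailored to my array, but the idea behind the --info display is simple enough: record a timestamp on each backup disk after a successful run, and pair it with the disk’s usage figures. A rough sketch of that idea (the mount points and marker files are placeholders, not the actual script):

    #!/bin/bash
    # For each attached backup disk, print its label, the time of its last
    # successful backup, its used/free space, and its filesystem UUID.
    for mnt in /mnt/backup-*; do
        label=$(cat "$mnt/.label")          # e.g. "M", "S", "Ta"
        last=$(cat "$mnt/.last-backup")     # written at the end of a good run
        usage=$(df -h --output=used,avail "$mnt" | tail -n 1)
        uuid=$(findmnt -n -o UUID "$mnt")
        printf '%-2s %s %s %s\n' "$label" "$last" "$usage" "$uuid"
    done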

The backup runs every day at 6:00pm (formerly 8:00pm; I had to move it earlier because the backups were running too long for me to pick the disk up before work). I find that an hour between quitting time and backup start time is enough to connect the drives to the DJ. If I miss a backup day, it’s not such a big deal; as you can see in this example, the oldest backup is about three weeks old.
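The scheduling is nothing more than cron; something along these lines, where the script path is a placeholder:

    # Run the backup script at 6:00pm every day and log its output.
    0 18 * * * /usr/local/bin/run-backup >> /var/log/run-backup.log 2>&1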

Each letter on the far left represents a logical collection of files on the array. The “T” collection is so large that it needs to span two 3TB disk pairs. Each disk contains a simple ext4 volume, so if the worst happens, recovery is as simple as mounting it from virtually any Linux rescue boot image and doing a single rsync to get the contents back.
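In a worst-case recovery, that boils down to something like the following (the device name, mount point, and destination are placeholders):

    # Mount the surviving backup disk read-only from a rescue environment.
    mkdir -p /mnt/restore
    mount -o ro /dev/sdb1 /mnt/restore

    # Copy the contents back onto the rebuilt array.
    rsync -a /mnt/restore/ /array/photos/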

If something goes wrong with a backup, it will be flagged in the status display.

The downside to all of this is that it’s possible for data to go a long time without a backup (in this example, with eight pairs of backup disks, it takes roughly two weeks’ worth of working days before wheeling around to the first disk pair again). That’s a risk I’m willing to take.

Ultimately it’s up to you how you craft your backup solution, but good ones all fit the same general mold: redundant, verified, and easy to use.