Backups and extundelete

Data recovery is fun, but backups are cooler.

The story

From the beginning of time, I had access to the family NAS, mostly run by my Dad. It has pretty good backups: a two-drive RAID 1 array, a backup drive, and a backup drive in a different location. But it wasn't encrypted (until recently), so I decided to make my own NAS. Because, you know, a $35 Raspberry Pi is so much better than dedicated hardware 🙂.

Anyways, I got three old 2 TB USB hard drives, and version 1 of my NAS used two encrypted drives: one for storage, and one for backups. I had a cronjob running a very fancy script, something along the lines of:

rsync -avz /mnt/secure/shared /mnt/backup/shared

I’ll give you a minute to process the sheer complexity of that breathtaking script…

Ok, so at first, the cronjob ran once per day. Then, after about a month, I started thinking that if a drive failed, I could lose an entire day's work, so I set the cronjob to run every hour. But eventually, even that wasn't enough for me. So, I created a RAID 1 array from both drives, encrypted the resulting volume, and started using that as my NAS (after copying all the files back).
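If you want to recreate that setup, here's a rough sketch. The cron line mirrors my rsync script above; the device names (/dev/sda1, /dev/sdb1, /dev/md0) are placeholders for your own drives, so treat this as a sketch, not gospel:

# Hourly backup, added with crontab -e
0 * * * * rsync -a /mnt/secure/shared /mnt/backup/shared

# Build the RAID 1 array from both drives
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Encrypt the resulting volume with LUKS, open it, and put a filesystem on it
sudo cryptsetup luksFormat /dev/md0
sudo cryptsetup open /dev/md0 secure
sudo mkfs.ext4 /dev/mapper/secure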

Time passes…

Eventually, I started worrying that even that wasn't enough (that's a lot of using that word 🙂). Yes, I had redundancy, but still no backup. If I deleted the wrong file, it was gone forever. So, I decided to use my third drive and create a script that backs up to it once a day (and, once I have the money, buy another 2 TB drive and keep weekly backups in another location). And now, here's the mistake I made:

The third drive was already in use: I had copied all the files from an old Windows computer onto it, because Windows likes deleting itself every day (and I accidentally broke it through Linux by deleting some critical files). I decided to copy those files onto my NAS, and then use the drive as a backup. I used rsync so I could see the progress, and after a few minutes went by, I realized I was wasting time. There was no reason to copy the pagefile and 99% of everything else; I only needed the Users folder. So, I ran trusty rm -rf to delete the copy on the NAS, planning to start over with just the Users folder.

And, wow, it seemed to have copied the majority of the files already; maybe I shouldn't have stopped it. And then it hit me: I was deleting the original copy!! Then, out of anger or just because I was on a roll of stupidity, I deleted the rest. Great, data recovery time!

extundelete

I've used a ton of data recovery tools in the past, and extundelete is the best tool for the job of undeleting specific files or folders. So, I ignored all the warnings and ran it on the mounted drive. Then I unmounted and tried again. It started recovering files, but I cancelled it, because there's no reason to recover everything if I'm just going to copy one folder and then delete the rest. So, here's the command I ran, in case you do something as stupid as me:

sudo extundelete --restore-directory Windows/Users /dev/sdc1

And it worked, mostly. I'm sure some things are missing, and I later learned (from somewhere on the internet) that if I had used an absolute file path, I might have been able to recover more.
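For reference, here's roughly what the full workflow looks like. The device name is from my setup (yours will differ), and keep in mind extundelete only works on ext3/ext4:

# Unmount first; extundelete needs the filesystem offline
sudo umount /dev/sdc1

# Recover one directory (the path is relative to the filesystem root);
# recovered files land in ./RECOVERED_FILES by default
sudo extundelete --restore-directory Windows/Users /dev/sdc1

# Or recover everything and sort it out afterwards
sudo extundelete --restore-all /dev/sdc1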

eCryptfs

Since my NAS is encrypted, having the backup in plaintext just won't cut it. So, of all the options available, I decided to use eCryptfs after an entire 5 minutes of research. The main reason is that I can still use the drive for other things and only have one folder encrypted. It also makes it much easier to read the data on other Linux machines, as I don't need to mess with LUKS (what my NAS uses). Anyways, I highly recommend you follow this tutorial on how to set it up.

Here’s the short version:

Install eCryptfs:

sudo apt install ecryptfs-utils

Initialize the directory, or subsequently mount it, by passing the on-disk directory and the mount point (they can be the same place):

sudo mount -t ecryptfs /mnt/backup/shared /mnt/backup/shared

Then, you'll be prompted for a password. If you are creating an encrypted directory, enter whatever you want the password to be. Then, for simplicity, you can just leave the rest at the defaults, because you will be asked all the questions each time you mount the directory. Here are the settings I use:

Select cipher: 
 1) aes: blocksize = 16; min keysize = 16; max keysize = 32
 2) blowfish: blocksize = 8; min keysize = 16; max keysize = 56
 3) des3_ede: blocksize = 8; min keysize = 24; max keysize = 24
 4) twofish: blocksize = 16; min keysize = 16; max keysize = 32
 5) cast6: blocksize = 16; min keysize = 16; max keysize = 32
 6) cast5: blocksize = 8; min keysize = 5; max keysize = 16
Selection [aes]: 
Select key bytes: 
 1) 16
 2) 32
 3) 24
Selection [16]: 32
Enable plaintext passthrough (y/n) [n]: 
Enable filename encryption (y/n) [n]: y
Filename Encryption Key (FNEK) Signature [66775dc389d1d1ed]: 
Attempting to mount with the following options:
  ecryptfs_unlink_sigs
  ecryptfs_fnek_sig=66775dc389d1d1ed
  ecryptfs_key_bytes=32
  ecryptfs_cipher=aes
  ecryptfs_sig=66775dc389d1d1ed

I replaced the signatures in the output above with values from openssl rand, just to be on the safe side. I chose to encrypt filenames, but that's not really necessary (unless you name your files illegal_stuff and password_for_google_abc123.txt). Also, if you're paranoid that the government can crack AES, use twofish.
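Answering all those prompts gets old if a cron script is doing the mounting, so you can pass the answers as mount options instead. This is only a sketch: the signatures are the placeholder values from above, and /root/.ecryptfs-pass is a hypothetical file containing a line like passphrase_passwd=yourpassword (keep it chmod 600):

sudo mount -t ecryptfs /mnt/backup/shared /mnt/backup/shared -o key=passphrase:passphrase_passwd_file=/root/.ecryptfs-pass,no_sig_cache,ecryptfs_unlink_sigs,ecryptfs_fnek_sig=66775dc389d1d1ed,ecryptfs_key_bytes=32,ecryptfs_cipher=aes,ecryptfs_sig=66775dc389d1d1ed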

Backups, backups, backups

The main thing you should learn from this post and/or story is that you should always have at least one backup. Redundancy is good to have, but it cannot replace a backup; it pretty much only protects you from drive failures, nothing else. For example, if you have a NAS with two drives in RAID 1 sitting on a desk, and you accidentally push it off, all your data is gone, redundancy and all. This is why it's important to have a backup in a separate location, be it in a vault or in the cloud. If you decide to have your backup in the cloud, be sure to encrypt it before it leaves your machine. The last thing you want is some FBI agent seeing all those free movies you have 🙂 (that's an example, please don't pirate movies).
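If you go the cloud route, here's a minimal sketch of encrypting before upload (assuming gpg; this isn't part of my actual setup, so pick whatever tool you trust):

# Bundle the share and symmetrically encrypt it before it goes anywhere
tar -czf - /mnt/secure/shared | gpg --symmetric --cipher-algo AES256 -o shared-backup.tar.gz.gpg

# To get it back: gpg -d shared-backup.tar.gz.gpg | tar -xz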
