
Rsync and Restic Backup Scripts

Thank you for visiting! If you are viewing this repo on GitHub or GitLab, please note that this is just a mirror. Please visit the originating repo for any comments, issues, pull requests, etc. You can sign in there with your GitHub or GitLab account via OAuth2.


Disclaimer: As with anything to do with your data, you should read and understand what this script does before applying it. I am not responsible for any mishaps. Even if you only replace the variables and run it as I do, me saying "it works on my machine" should not be sufficient. It wouldn't be for me.


A script to perform incremental backups using rsync and restic. Why both? Isn't that redundant? Well, I wanted some of my backups to be more quickly retrievable, like an on-prem "cloud" storage. Rsync fits the bill for that. But I also wanted to back up and compress larger swaths of data for longer-term storage. And that's where Restic comes in.

This script assumes you are running Linux and have at least basic working knowledge of it and bash scripting. It "works on my machine," which is currently running Pop!_OS 22.04 LTS.


Installation, Prep, and Configuration

  1. Install rsync, restic, and dependencies for the script:
sudo apt install rsync restic moreutils # moreutils installs the `ts` command for timestamping the logs

  2. Create the directory that rsync will back up to:
mkdir /path/to/dir/to/backup/to

  3. Copy the rsync manifest template to the script's root directory, rename it as you like, and then fill it out. This will allow the --include-from option to only back up what you want. There are some comments in the template, but the gist of it is that the file is read in order. The initial include, + */, includes the $RSYNC_SOURCE variable from the script and all directories within it, recursively. The following lines are where you specify the directories and files you explicitly want to back up. The final line, - *, excludes everything that wasn't explicitly included before it. This allows you to choose a higher directory, say $HOME, but pick and choose what you want within it instead of rsyncing the whole thing. The script also includes the --prune-empty-dirs option, which prevents it from syncing all the empty directories along the path to what you actually want at the end of it. An example manifest follows.
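
For reference, a filled-out manifest might look something like this (the paths here are hypothetical examples, anchored to $RSYNC_SOURCE; the template in the repo is the place to start):

# keep every directory so rsync can descend into the tree
+ */
# the things you explicitly want backed up
+ /Documents/***
+ /Pictures/Family/***
+ /.config/someapp/***
# exclude everything that wasn't included above
- *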

  4. Ensure restic is up to date, in case the version from your repos is behind:
sudo restic self-update

  5. Initialize the restic "repo" (what restic calls the backup destination):
restic init --repo /path/to/repo

Create your repo password when prompted. Do not lose this, as you will otherwise be unable to access your backups. I would suggest a password manager. And possibly a physical copy stored in a very safe place.


  6. Verify your repo is at least version 2, in order to support compression:
restic -p $REPO_PASSWORD -r /path/to/repo cat config

If it isn't, you may need to revisit step 4 and figure out why your install isn't up to date. Then recreate the repo (you can just delete the faulty repo directory to get rid of it).


  7. Run the first restic backup. This will take a while, depending on how much data you have; 250 GB took me about an hour and a half. Edit, remove, or add to the tags as desired. Tags can be shared between repos in various combinations, and they can be used to search for, query, and prune your backups from their various sources. The --exclude-caches option will exclude directories containing the CACHEDIR.TAG file, which isn't all caches, but it's a happy medium between not excluding any and having to script/search them all out. Pay attention to the lack of trailing slashes.

Note: if you ever run this command as sudo, whether in your terminal, as a cronjob, or any other way, you must always run it and other commands against that repo as sudo. So make your choice now.

restic backup --verbose --compression max \
  -p $REPO_PASSWORD \
  -r /path/to/repo \
  --tag $TAG1 --tag $TAG2 \
  --exclude-caches \
  /path/to/source

  8. Verify your backup by first fetching the snapshot ID:
restic -p $REPO_PASSWORD -r /path/to/repo snapshots

Then list the files within to verify everything is there:

restic ls -p $REPO_PASSWORD -r /path/to/repo --long $SNAPSHOT_ID

Then compare the backup size to the size of the source. This retrieves the uncompressed size of the snapshot, so it won't align perfectly with the source, but it should give you an idea.

restic -p $REPO_PASSWORD -r /path/to/repo stats $SNAPSHOT_ID
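
For a rough source-side number to compare against, du is sufficient:

du -sh /path/to/source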

And finally, check the integrity of the repo:

restic -p $REPO_PASSWORD -r /path/to/repo check

  9. Copy the restic password template to the script's root directory, rename it as you like, and replace all text within it with just the password. Then secure the file:
sudo chmod 600 /path/to/restic/password/.file

  10. Copy the restic excludes template to the script's root directory, rename it as you like, and replace the /path/to/restic/password/.file line with the path to your restic password file. You can also add any other excludes you would like; an example is shown below.
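
As a rough illustration (the first entry is the placeholder path from the previous step; the rest are made-up examples), the excludes file is one pattern per line, with # for comments:

# keep the repo password out of the backups themselves
/path/to/restic/password/.file
# other things you may not want to keep (hypothetical)
/home/user/.cache
/home/user/Downloads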

  11. Copy the script template to the script's root directory, rename it as you like, and then fill out the variables the comments call out. Pay attention to where leading/trailing slashes are omitted; that is on purpose. I find it's best to use absolute paths, that way if you ever move the script to a different directory, it won't break. A few notes and definitions above and beyond the comments in the script:

    a. The script dumps a log of its output into a directory of your choosing (the first variable in the script). There's a directory in the script's root directory for that, but feel free to put the logs wherever you like.

    b. The script includes variables and code for both a second rsync and a second restic source/destination. You can add more or remove them as you like. Just note that each rsync really should have a separate source, destination, and manifest, while restic can have multiple sources backing up to the same repo, which also increases the benefit from its deduplication. You can also mix and match tags (though I would advise against using the exact same set of tags on two different sources). And though you can use the same password for each source, maybe don't?

    c. By default, rsync backs up incrementally but does not track version history. The script gets around this by putting each new backup into its own dated directory, hardlinking to the inodes of files that have already been backed up, and only copying new files. The --delete option in this case simply means a file removed at the source isn't backed up again, rather than being deleted at the destination. A "latest" folder is also created, both for the script to check against and for ease of finding the latest backup (see the sketch after these notes). This leads us to...

    d. The rsync script also allows for days of retention, after which older backup directories are deleted. And, thanks to hardlinking, files that were initially backed up in an older directory are not deleted if they are hardlinked in any subsequent backup. The $RSYNC_RETENTION_DAYS variable is calculated thusly: # of days wanted (e.g. 7) + the latest directory (1) + 1. So in this case, to keep 7 days' worth of versioning, you would use 9 for this variable.

    e. The rsync script includes a hacky fix for an issue I ran into rsyncing to an NFS destination. After backing up to the new directory as desired and updating the latest hardlink, the timestamps of both would change to the most recent date, always with a time of 21:20. I have no idea why. And that would mess with the retention if I ran the backup multiple times in a day, as they would all have the same timestamp. So, in between updating the latest hardlink and running the retention policy, the script runs a touch on a timestamp.fix file within the $RSYNC_DEST_PATH, which fixes the timestamps. If you aren't backing up to an NFS destination, you likely don't need this. And if you know why this is happening, please let me know, or clone the repo, patch it, and open a pull request so that your fix can be tested and included.

    f. Pay attention to the restic tags in the script. When the script runs the forget and prune commands, it runs them against the entire repo, so you want to ensure the tags in that command match the backups you actually want it to affect. I would suggest, after running the initial backup in step 7 and getting the script ready, running it once and then repeating the verification steps from step 8, just to be sure you have it right. If you have multiple sources going to the same repo, do the same for each. You can also perform dry runs of removal policies (and of backups too, btw) to sanity-check yourself before accidentally nuking your repo; see the example after these notes, and the disclaimer at the start of this README.

    g. Regarding the compression level of the restic backup, you can choose off, auto, or max. I ran a super scientific single run of each on my backup source and got the following results:

    Raw Data: 255 GB to back up
    All levels of compression also deduplicate files
    Compression Off: 249.5 GB = 97.8% of the original size
    Compression Auto: 208.4 GB = 81.7% of the original size
    Compression Max: 206.1 GB = 80.8% of the original size

    I did not note the time it took, but I want to say it was about an hour when set to off, and about an hour and a half for both auto and max. I personally leave it at the default of auto, as max didn't make much of a difference. But you can decide as you like for your own backups.
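
To make notes a through e more concrete, here is a minimal sketch of the dated-directory approach, not the actual script: the variable values are made up, the "latest" reference is a plain symlink here, and the timestamp.fix touch from note e is shown where it would sit.

#!/usr/bin/env bash
# Illustrative sketch only; variable values below are hypothetical.
LOG_DIR="/path/to/script/dir/backup-logs"
RSYNC_SOURCE="/home/user"
RSYNC_DEST_PATH="/mnt/backups/rsync"
RSYNC_MANIFEST="/path/to/script/dir/manifest.txt"
RSYNC_RETENTION_DAYS=9   # 7 days wanted + the latest directory (1) + 1 (note d)
TODAY="$(date +%Y-%m-%d)"

# Note a: timestamp every line of output with ts (from moreutils) and log it.
exec > >(ts '[%Y-%m-%d %H:%M:%S]' >> "$LOG_DIR/rsync-$TODAY.log") 2>&1

# Notes c/d: back up into a fresh dated directory, hardlinking files that are
# unchanged since the previous backup so they take no additional space.
rsync -a --delete --prune-empty-dirs \
  --include-from="$RSYNC_MANIFEST" \
  --link-dest="$RSYNC_DEST_PATH/latest" \
  "$RSYNC_SOURCE/" "$RSYNC_DEST_PATH/$TODAY/"

# Point "latest" at the newest backup (a symlink in this sketch).
ln -sfn "$RSYNC_DEST_PATH/$TODAY" "$RSYNC_DEST_PATH/latest"

# Note e: the NFS timestamp workaround goes between updating "latest" and retention.
touch "$RSYNC_DEST_PATH/timestamp.fix"

# Note d: drop dated directories older than the retention window.
find "$RSYNC_DEST_PATH" -maxdepth 1 -type d -name '????-??-??' \
  -mtime +"$RSYNC_RETENTION_DAYS" -exec rm -rf {} +

And for note f, a dry run of the forget/prune step is a cheap way to confirm the tags only match what you intend before letting it loose on the repo. The keep policy below is a placeholder; use whatever retention you actually want:

restic forget --dry-run \
  -p $REPO_PASSWORD \
  -r /path/to/repo \
  --tag $TAG1 --tag $TAG2 \
  --keep-daily 7 --keep-weekly 4 \
  --prune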


  12. Make the script executable:
sudo chmod u+x /path/to/script.sh

  13. At this point, you can decide how you will run this script going forward: just whenever you remember to (not very reliable), with reminders you set (more reliable), or automated in some way (best), like a crontab:
crontab -e

Paste something like the following to the end of your crontab:

0 0 * * * cd /path/to/script/dir/ && ./script.sh

You can avoid having crontab cd into the script's directory if you place the script somewhere in your $PATH. If you do, I would suggest copying the script you just edited into that folder; that way you can fiddle with and test the original without messing with your production script, then replace the prod copy once you have any tweaks figured out.
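
For example, assuming /usr/local/bin is on your $PATH and backup.sh is whatever name you chose:

sudo install -m 755 /path/to/script/dir/script.sh /usr/local/bin/backup.sh

The crontab entry then simplifies to:

0 0 * * * /usr/local/bin/backup.sh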


Sources, Inspiration, and Further Reading