diff --git a/README.md b/README.md
new file mode 100644
index 0000000..c3eb9be
--- /dev/null
+++ b/README.md
@@ -0,0 +1,168 @@
# zfsbk utils

This is a minimalistic utility to manage backups for systems using the
outstanding [ZFS](http://en.wikipedia.org/wiki/ZFS) filesystem.

It relies on ZFS snapshots to provide:

* local backups – they help you recover files from earlier points in time.
* remote backups – they help you recover whole datasets after system failures.


## Snapshot management

You run `zfssnap.sh` to take snapshots, typically from cron:

    # in /etc/crontab:
    # take an hourly snapshot, keep 24 of them
    @hourly root /usr/local/sbin/zfssnap.sh hour 24

This will create, every hour, a new ZFS snapshot tagged 'hour':

    # ls /.zfs/snapshot/
    zbk-hour-20140318-140000
    zbk-hour-20140318-150000
    zbk-hour-20140318-160000

Only 24 of these snapshots will be kept (see Snapshot rotation below). Each
snapshot name follows the format `zbk-[tag]-[date]-[time]`, where the date is
`YYYYMMDD` and the time is `hhmmss`.

### Snapshot groups

Each snapshot is tagged:

    # create a snapshot tagged 'foobar'. Maintain 10 at all times
    zfssnap.sh foobar 10

Snapshots with the same tag make a **snapshot group**. For example, the
`foobar` group will hold up to 10 members at all times.

Multiple groups can exist; just take snapshots with different tags:

    # take 'hourly' snaps (run twice)
    zfssnap.sh hourly 10
    # take 'daily' snaps (run four times)
    zfssnap.sh daily 10
    # ls /.zfs/snapshot
    zbk-hourly-20140318-140001
    zbk-hourly-20140318-140003
    zbk-daily-20140318-140110
    zbk-daily-20140318-140111
    zbk-daily-20140318-140112
    zbk-daily-20140318-140114

Neither `zfssnap.sh` nor ZFS puts a limit on the number of snaps you can
maintain. The tool was tested with over 200. Bear in mind that these are shell
scripts, so inherent limits on argument length could get in your way.

I recommend staying under 50 snaps per group and 200 snaps total.

### Snapshot rotation

`zfssnap.sh` takes a new snapshot every time it's run. When the number of
existing snapshots exceeds the given limit, the oldest snapshot of that group
(tag) is removed, so only the requested number are kept:

    # take snap xyz, then keep only the last 2 from the xyz group
    zfssnap.sh xyz 2

This bounds the number of snapshots in the `xyz` group to 2. To remove all
snaps in a group, simply pass `0` as the limit:

    # remove all snaps of group xyz
    zfssnap.sh xyz 0

### Recovering files (local backup)

Lost a file? Find it under:

    # list the content of michele's home at 2pm (1400)
    ls /.zfs/snapshot/zbk-hour-20140318-140000/home/michele

Notice that you must look for the `/.zfs` directory at the root of the dataset
that actually holds the file:

    # list the content of michele's home, if /home is on zroot/home
    ls /home/.zfs/snapshot/zbk-hour-20140318-140000/michele

### Full snapshot management cron example

    # take 15-minute backups for the last hour
    */15 * * * * root /usr/local/sbin/zfssnap.sh qrt 4
    # take hourly backups for the last 6 hours
    1 * * * * root /usr/local/sbin/zfssnap.sh hourly 6
    # take 6-hour backups for the last day
    1 */6 * * * root /usr/local/sbin/zfssnap.sh 6hr 4
    # take daily backups for the last week
    1 1 * * * root /usr/local/sbin/zfssnap.sh day 7
    # take weekly backups for the last 2 months
    1 1 * * 1 root /usr/local/sbin/zfssnap.sh week 8
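Once the crontab above has been in place for a while, you can verify that each
group holds the expected number of snapshots with standard ZFS tooling. This is
a minimal sketch, using the pool name `zroot` and the `day` tag from the
example above:

    # list the snapshots of the 'day' group, oldest first
    zfs list -t snapshot -o name,creation -s creation -r zroot | grep 'zbk-day-'
    # count them; this should never exceed the limit passed to zfssnap.sh (7 here)
    zfs list -H -t snapshot -o name -r zroot | grep -c 'zbk-day-'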
### Excluding datasets from backup

`zfssnap.sh` takes a recursive backup of the `zroot` pool. If you do not intend
to maintain backups for certain datasets, you can exclude them with the
`EXCLUDES` and `EXTRA_EXCLUDES` **environment variables**:

    # exclude only these datasets
    EXCLUDES="/mydataset/foobar"
    # exclude these datasets in addition to the default exclusions
    EXTRA_EXCLUDES="/mydataset/foobar"

Notice that these are **dataset names**, not mountpoints! If dataset
`zroot/foo` is at mountpoint `/bar`, specify `/foo` here.

The following datasets, common on FreeBSD systems, are excluded by default:

* `/usr/ports`
* `/usr/src`
* `/backups`

If you do not want these excluded, pass an empty `EXCLUDES` environment variable.


## Generating remote backups

The `zfsbk.sh` script lets you generate backups and upload them to a remote location.

The following takes a snapshot with tag `mybk` and serializes it to a dump file
under `/backups`:

    # generate a ZFS streaming package, save it to the /backups folder
    /usr/local/sbin/zfsbk.sh mybk
    ls /backups
    zbk-mybk-20140318-061900.dump


## Incremental backups

Pass a number to `zfsbk.sh` and it will create incremental packages:

    # 1. make a full replication if this is the first snap in the group
    # 2. otherwise make an incremental replication wrt the latest snap in the group
    # 3. reset the snap group after 1+9 runs have been made
    /usr/local/sbin/zfsbk.sh mybk 10

Incremental packages are named after their snapshot endpoints:

    ls /backups
    zbk-mybk-20140318-140000--zbk-mybk-20140318-150000.dump

If the given number is 1, `zfsbk.sh` sends a full replication package on every
run.


## Uploading backups remotely

`zfsbk.sh` can upload each replication package right after generating it, at the
end of the run.

Pass the destination coordinates with the `UPLOAD_PATH` environment variable.
Currently, `rsync://` and `scp://` destinations are supported:

    # take a snap, generate the backup, upload it to the remote server
    UPLOAD_PATH="rsync://user@backup.server.com::server12/" /usr/local/sbin/zfsbk.sh mybk 10

`zfsbk.sh` relies on `zfssnap.sh` to take the snapshot to back up. Therefore, you
can exclude datasets from its backups by passing the same `EXCLUDES` or
`EXTRA_EXCLUDES` variables:

    # take a selective backup
    EXTRA_EXCLUDES="/jails/test.dom.com" /usr/local/sbin/zfsbk.sh mybk 1
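
## Restoring from a backup package

zfsbk utils does not ship a restore script; recovering a dataset from the
generated packages relies on standard `zfs receive`. The sketch below is only
an illustration for a single dataset, with made-up dataset and file names; if
your packages are recursive replication streams, the exact flags may differ:

    # restore the full package into a dataset that does not exist yet
    zfs receive backup/restored < /backups/zbk-mybk-20140318-140000.dump
    # then apply each incremental package in chronological order;
    # -F first rolls the target back to its most recent snapshot
    zfs receive -F backup/restored < /backups/zbk-mybk-20140318-140000--zbk-mybk-20140318-150000.dump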