I personally use the following backup strategy:
- Set up an encrypted ZFS storage box on the network (e.g. TrueNAS - in my case it is Proxmox)
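With OpenZFS native encryption, an encrypted dataset is a one-liner; the pool and dataset names below are placeholders, not the author's layout:
# create an encrypted dataset; ZFS prompts for the passphrase and never stores it in cleartext
zfs create -o encryption=on -o keyformat=passphrase tank/backup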
- Enable zfs-auto-snapshot for 15-minute snapshots with automatic rotation (keep 24 hourly snapshots, daily snapshots, etc.)
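zfs-auto-snapshot works through the cron jobs it installs plus a per-dataset ZFS property, so datasets can opt in or out individually; the dataset names here are examples:
# take automatic snapshots of everything under tank
zfs set com.sun:auto-snapshot=true tank
# but skip a scratch dataset whose contents are disposable
zfs set com.sun:auto-snapshot=false tank/scratch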
- NEVER (!) type the passwords of users permitted on the ZFS storage on any client that could be affected by ransomware
- Provide a user-authenticated Samba share to store all important data - try to avoid storing data locally
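Creating such a user on the server might look like this (the user name is a placeholder; the Samba password gets typed on the server only, in line with the rule above):
# create an OS account without a login shell, then give it a Samba password
sudo useradd -M -s /usr/sbin/nologin backupuser
sudo smbpasswd -a backupuser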
- Sync the ZFS snapshots to an external USB drive every night (I use a Shelly plug flashed with Tasmota and an external USB enclosure to power the devices off when they are not needed):
# example variable values (adjust pool and snapshot names to your setup)
SRC_POOL="tank"
DST_POOL="backup"
NEW_SNAP_NAME="backup-$(date +%Y-%m-%d_%H%M)"
NEW_POOL_SNAP="$SRC_POOL@$NEW_SNAP_NAME"
# create current snapshot
zfs snapshot -r "$NEW_POOL_SNAP"
# first backup: full replication stream; --raw keeps the data encrypted in transit and at rest
zfs send --raw -R "$SRC_POOL@$NEW_SNAP_NAME" | pv | zfs recv -Fdu "$DST_POOL"
# incremental backup: everything between the last replicated snapshot and the new one
zfs send --raw -RI "$BACKUP_FROM_SNAPSHOT" "$BACKUP_UNTIL_SNAPSHOT" | pv | zfs recv -Fdu "$DST_POOL"
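Because the USB pool is powered off between runs, the nightly job has to bring it online first and detach it cleanly afterwards; a minimal sketch around the commands above:
# power on the plug, then attach the backup pool without mounting its datasets
zpool import -N "$DST_POOL"
# ... run the send/recv commands from above ...
# flush everything and detach the pool so the plug can safely cut power
zpool export "$DST_POOL"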
- On Windows and macOS, back up the OS to an external drive
- Use restic to keep an additional copy of the local files and folders somewhere else
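A minimal restic sketch for that extra copy (the repository location and folders are illustrative, not the author's setup):
# one-time: create an encrypted repository on a second disk
restic init --repo /mnt/secondary/restic-repo
# recurring: back up the local folders into it
restic -r /mnt/secondary/restic-repo backup ~/Documents ~/Pictures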
- Use a Blu-ray burner to back up the most important stuff as a restic repository or an encrypted archive (very important documents, the best photo collections of your family, the KeePass database, etc.) and store it at another location
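One way to produce such an encrypted archive and burn it; the tool choice (tar + gpg, growisofs from dvd+rw-tools) is a suggestion, not necessarily the author's:
# pack the important files and encrypt the archive with a passphrase
tar -czf - ~/Documents/important | gpg --symmetric --output important.tar.gz.gpg
# burn the encrypted archive to disc (/dev/sr0 is a typical burner device)
growisofs -Z /dev/sr0 -R -J important.tar.gz.gpg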
- If cloud storage is affordable for the amount of data you have, consider using restic to store your stuff in the cloud
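restic talks to several cloud backends directly; an S3 example with placeholder bucket and credentials:
# credentials for the S3 backend (placeholders)
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
# initialize once, then back up as usual
restic -r s3:s3.amazonaws.com/my-backup-bucket init
restic -r s3:s3.amazonaws.com/my-backup-bucket backup ~/Documents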
- From time to time, restore a specific file from the backup and check that it worked, and try a full system restore (onto a spare hard disk).
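Such a spot check could look like this (repository path, snapshot selection, and file path are illustrative):
# restore one file from the latest snapshot into a scratch directory and inspect it
restic -r /mnt/secondary/restic-repo restore latest --target /tmp/restore-test --include /home/user/Documents/taxes.pdf
# verify the repository's integrity while you are at it
restic -r /mnt/secondary/restic-repo check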
This may sound like overkill, but ransomware is a pretty bad thing these days, even if you think you are not one of its targets.
Regarding the backup schedule: companies sometimes need frequent backups because of their RPOs and RTOs (recovery point and recovery time objectives), for example if they operate in a highly regulated industry. If someone can tolerate losing two hours of data, they need a backup every two hours; if they can tolerate losing eight hours (a working day), why not back up daily?
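Translating an RPO into a schedule is mostly a cron question; for a two-hour RPO, something like this would do (the script path is hypothetical):
# crontab entry: run the backup job at the top of every second hour
0 */2 * * * /usr/local/bin/run-backup.sh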
Regarding rotation: everything depends on the backup solution. If it provides immutable backups, the data as a whole cannot be silently corrupted, so the faster someone notices a mistake, the faster they can restore from a copy. IDF helps more with the storage question, i.e. not overloading it (deduplication and compression are also worth mentioning here).
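As a concrete rotation example, restic expresses retention and pruning as a single policy (the numbers are arbitrary):
# keep 24 hourly, 7 daily, 4 weekly, and 12 monthly snapshots, then reclaim space
restic -r /mnt/secondary/restic-repo forget --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune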
1. How long should you keep backups? Is the content of your backup covered by privacy laws that require you not to have copies of it after a certain period of time? Is there a point where the content of your backup is so old that it is the logical equivalent of not having made a backup in the first place?
2. How much does your backup process cost? If it costs more to back up a system than it would cost you to lose it, then you've got the backup process wrong (interestingly, this can be affected by economies of scale).
3. What do you need to restore a backup? Does your system require bespoke hardware that might have been lost in whatever disaster you're trying to recover from?
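For point 1, a retention cap can be enforced mechanically; with restic, for example (the one-year window is only an illustration of a legally mandated limit):
# keep only snapshots from the last year, delete and prune everything older
restic -r /mnt/secondary/restic-repo forget --keep-within 1y --prune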
…but I never delete, because the more copies of the same thing there are, the more likely it is to survive. If I do in fact need something, the time spent searching for it is far shorter than a tedious backup procedure.
In addition, if I have to recreate something, version 2 will be better, because I keep getting better at the things I do.
But that is me, not you. Good luck.
The other, normal backups are usually managed by someone else; most of the time he just handles the hardware.
His backups are tested by experience.