Ideally, we want to minimise the amount of data that needs to be uploaded to our B2 buckets, because we are charged per GB each month and upload bandwidth is typically our biggest bottleneck. We can't leverage Veeam's built-in WAN accelerators, since there is no compute available at the object storage (B2) side. Veeam currently does not have native integration with object storage, so we need to rely on a 'cloud gateway' device such as Synology CloudSync.

A few constraints shape the design:

- Both Synology CloudSync and Backblaze B2 offer no data deduplication, meaning that if we write several full Veeam backup files to our Synology CloudSync backup repository, each full backup file will be uploaded, in full. This is in contrast to other solutions such as Microsoft StorSimple, which leverages a 'volume-container' global block-level dedupe: even if Veeam sends multiple full backup files to the repository, StorSimple only uploads changed/unique blocks.
- The Synology CloudSync package is not aware of when the Veeam backup files are being created or modified, which can result in files being uploaded before Veeam has finished with them.
- Veeam features such as Storage-Level Corruption Guard and defragmenting/compacting the full backup file should be avoided, as both result in a new full backup file being created.

With the above in mind, we need to decide whether to configure a backup job (primary backup target) or a backup copy job (secondary backup target).

With a backup job, reverse incremental should not be used: each run would create a new full backup file as well as a new reverse incremental file for the previous restore point, meaning lots of new files that would need to be synced, which is exactly what we are trying to avoid. Avoiding unnecessary full backup files helps minimise our storage bill and, importantly, eases congestion on the WAN link. Ideally, we should configure a forward incremental backup job with a retention period sufficient to meet our requirements while minimising the number of active fulls created. We could use synthetic or active fulls for the periodic fulls; I'd just use active fulls to keep things simple.

To get started, we create a new Veeam backup job and name it appropriately. We add the VMs that we want to protect into the backup job, then select the Veeam backup repository that we created earlier and define the number of restore points that we require. In this instance, I've selected 28 restore points to keep on disk, which enables us to meet our four-week (imaginary) retention requirement.

Under 'Advanced Settings', we ensure that 'Incremental' is selected and, to make sure it's not a forever-incremental job, we specify an active full to be created on the first Saturday of each month. I picked once a month to minimise the amount of data that needs to be stored in B2 and, more importantly, uploaded over our WAN link. I picked Saturday because the full backup file is going to be the largest file in the chain, which gives it more time to be uploaded over the course of the weekend.

I've also disabled all Storage-Level Corruption Guard checks and ensured that defragmentation and compaction are not enabled; both operations result in a new full backup file being created, which means more data that needs to be uploaded, which is what we are trying to limit. Under the next tab, Storage, we've enabled all the data reduction options available to us in an effort to reduce our bandwidth requirements.
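The bandwidth argument for forward incremental over reverse incremental can be put into rough numbers. Below is a minimal sketch; the file sizes, daily change rate, and run schedule are illustrative assumptions, not measurements from any real environment:

```python
# Rough model of monthly upload volume for the two Veeam backup modes discussed
# above, behind a file-level sync tool with no block dedupe (CloudSync -> B2).

FULL_GB = 500        # assumed size of one full backup file
INC_GB = 25          # assumed daily incremental (~5% change rate)
RUNS_PER_MONTH = 30  # assumed one backup run per day

def forward_incremental_upload(active_fulls=1):
    """Forward incremental: small daily increments plus the occasional active full."""
    return active_fulls * FULL_GB + (RUNS_PER_MONTH - active_fulls) * INC_GB

def reverse_incremental_upload():
    """Reverse incremental: every run rewrites the full backup file, so the sync
    tool re-uploads the whole file, plus the new reverse incremental created
    for the previous restore point."""
    return RUNS_PER_MONTH * (FULL_GB + INC_GB)

print(forward_incremental_upload())   # 1225 GB uploaded per month
print(reverse_incremental_upload())   # 15750 GB uploaded per month
```

Even with generous assumptions, rewriting the full file every run inflates the upload volume by more than an order of magnitude, which is exactly why the job avoids reverse incremental.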
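The 28-restore-point figure follows directly from the retention window: with one backup per day (an assumption carried over from the job design above), four weeks of retention needs 28 points on disk. A trivial helper makes the relationship explicit:

```python
# Restore points required to cover a retention window, assuming a fixed
# number of backup runs per day (defaults to the once-daily schedule above).

def restore_points_needed(retention_weeks, runs_per_day=1):
    """Number of restore points to keep so the oldest spans the full window."""
    return retention_weeks * 7 * runs_per_day

print(restore_points_needed(4))  # 28, matching the value set in the job
```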
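Since B2 charges per GB stored each month, the chain size translates directly into the bill. A back-of-envelope sketch, where the per-GB price and chain sizes are assumed placeholders (check Backblaze's current pricing page before relying on any figure):

```python
# Back-of-envelope B2 storage bill for the backup chain kept on disk.

PRICE_PER_GB_MONTH = 0.005  # assumed USD per GB-month, NOT a quoted price

def monthly_storage_cost(stored_gb, price=PRICE_PER_GB_MONTH):
    """Monthly storage charge for the data resident in the bucket."""
    return stored_gb * price

# Example chain: two 500 GB fulls on disk during the retention overlap,
# plus 28 daily 25 GB incrementals (all sizes assumed).
stored_gb = 2 * 500 + 28 * 25   # 1700 GB
print(f"${monthly_storage_cost(stored_gb):.2f}/month")
```

The takeaway is that each extra full backup file retained adds its whole size to the bill every month, which is why the job limits active fulls to one per month.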