Veeam, Cloud Backup and the insanity of periodic full backups

I have been seeing a lot of attention given to very low-cost storage providers like Wasabi and Backblaze, as well as public cloud providers such as AWS, as targets for Veeam offsite backups. What I never see mentioned is that these solutions require a full backup to be performed periodically, over the internet.

Think about that for a second. A cloud backup solution that requires you to perform full backups regularly? That is, quite frankly, INSANE!

Organizations are creating ever more data, but the hours in the day remain fixed. Even 10 years ago, industry analysts like Forrester Research were advising clients to avoid backup products that require full backups (see report here) and instead recommended “incremental-forever” backup technology.

Full backups across the internet might be just fine if you are a small business with a few hundred gigabytes of data, but even modest-sized companies are going to find it difficult to efficiently back up even a few terabytes of data on a regular basis.

Veeam best practices say you should not have an incremental chain longer than 30 restore points. That could be 30 days of backups if you back up once per day, so at minimum a full backup should be made once every 30 days – and even then, the worst-case scenario is the loss of 29 days of backups if the incremental chain is corrupted. If backing up every 4 hours, a full backup should be performed every 5 days.
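The chain-length arithmetic above can be sketched in a few lines (a minimal illustration of the best-practice figures just cited, not any Veeam tooling):

```python
# Days between required full backups, given Veeam's best-practice
# limit of 30 restore points per incremental chain.
MAX_CHAIN_POINTS = 30

def days_between_fulls(backups_per_day: int) -> float:
    """Days until the incremental chain reaches the 30-point limit."""
    return MAX_CHAIN_POINTS / backups_per_day

print(days_between_fulls(1))  # one backup per day   -> full every 30.0 days
print(days_between_fulls(6))  # every 4 hours (6/day) -> full every 5.0 days
```

The more frequently you take restore points, the more often a full must land on the wire.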

Full backups over the internet are problematic because they can take days to run and consume large amounts of bandwidth. Companies often pay to increase their internet bandwidth to help resolve the issue, which erodes the cost effectiveness of the low-cost storage! As data volumes increase, the bandwidth will also need to be increased. Moreover, while the full backup is running, no other offsite backups are occurring. So if a full backup takes 3 or 4 days, you are going 3 or 4 days without an offsite backup.
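To see why a full over the internet can take days, a rough transfer-time estimate helps (the 5 TB size and 100 Mbps uplink below are hypothetical figures for illustration, not from any specific deployment):

```python
# Rough time to push a full backup offsite at a given upload rate,
# ignoring compression, deduplication, and protocol overhead.
def transfer_days(backup_tb: float, uplink_mbps: float) -> float:
    bits = backup_tb * 1e12 * 8            # decimal TB -> bits
    seconds = bits / (uplink_mbps * 1e6)   # at the sustained uplink rate
    return seconds / 86400                 # seconds -> days

# A 5 TB full over a 100 Mbps uplink ties up the link for roughly 4.6 days.
print(round(transfer_days(5, 100), 1))
```

Real-world throughput is usually lower than the nominal link speed, so these estimates are optimistic.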

To solve this issue you need compute on the cloud side to enable “synthetic full backups”, a process by which Veeam rebuilds the full backup on the cloud side without having to perform an actual full backup over the wire. In this configuration you can enable a true “incremental-forever” backup method that never requires a periodic full backup after the first one. This can be achieved in AWS and Azure, where compute resources such as a Windows server can process the synthetic full, but then there is the added cost of those compute resources, and the client has another server to update, monitor, and manage. Low-cost storage providers such as Wasabi and Backblaze offer no compute resources, so there is no option for this on their platforms. You might be saving on storage costs, but you are incurring other costs, such as increased bandwidth requirements and delays to additional offsite backups.

Veeam Cloud Connect service providers eliminate many of these issues, as these are specialized services focused on Veeam cloud backups. The service provider performs the synthetic full backup on their side, which avoids the periodic full over the internet. Additionally, a Veeam Cloud Connect service provider will also have Veeam expertise and generally better service level agreements around backup and DR services.

Summary:  By leveraging a Veeam Cloud Connect Service provider you get true incremental-forever backups, minimize bandwidth usage, and get better service levels.

Interested in learning how Managecast can help your business with its cloud backup and disaster recovery solutions? Fill out this form for more information!


What is VEEAM?


I’ve heard of it, but what is it?

You may have heard of Veeam and have a vague idea of what it does. Veeam is a software company founded in 2006 with a great growth story. In 2008, with 10 employees, Veeam released Veeam Backup and Replication – a software application to back up and copy virtual machines on VMware’s increasingly popular virtualization platform. Today, as of 2018, Veeam has 2000 employees and revenue of nearly a billion dollars. The name Veeam has become synonymous with easy and reliable VM backup and restore, and the product is widely deployed in companies of all sizes. Veeam has also extended its backup capabilities to Microsoft Hyper-V as well as to physical Windows and Linux servers, and has extended into the cloud with Veeam Backup for Office 365. Thousands of companies around the world trust their data protection to the simplicity and affordability of Veeam software.

Backup and Replication – What’s the difference?

Veeam Backup and Replication, the flagship product, can be thought of as two related but separate functions: “backup” and “replication”. Backup is what you might expect, in that Veeam offers backup capabilities, including the ability to see your data back in time weeks, months, or years.

However, from a disaster recovery perspective, you need to restore data and be back up and running as quickly as possible, and you are most concerned with restoring the most recent data. Unfortunately, restoring from backups can take many hours or even days, depending on how quickly the data can be copied back. For this reason, using backup alone for disaster recovery is not ideal.

Replication solves the recovery time problem by keeping your most recent data copied to another location, ready to be “turned on”. For example, you can have a file server at location A that is fully copied to location B. If location A becomes unavailable, you just need to “turn on” the server at location B, and it is current as of the last replication. No restore process is needed, and recovery can usually be measured in minutes. Regular replications are applied to the target in a way that never requires a full restore, and by avoiding a restore process the replicated server is available quickly.

Other Notable things about VEEAM

Veeam’s monitoring and management tools, Veeam Monitor and Veeam Reporter, were combined and renamed Veeam ONE, first released in 2010.

Veeam developed and released the free VM copy tool called FastSCP in 2007, which was a precursor to the Veeam Backup and Replication software.

In 2014 Veeam started the VeeamON annual conference for all things Veeam.

In 2016 Veeam made it into the Gartner Magic Quadrant for Enterprise Backup.

2016 also marked the year in which Veeam delivered backup for Office 365.




Backup VS Disaster Recovery (DR)


Several years ago, your main, and usually only, option for disaster recovery was to take backup media and perform a restore onto new hardware. This means copying a large amount of data from backup media back to a server. We call this “Backup Recovery” or “Standard Recovery”. It’s the process most people are used to, and it was the standard method for decades.

As data volumes have exponentially expanded, combined with an “always on” culture of today’s society, the old means of recovery are no longer adequate for a lot of businesses.

Unfortunately, it is sometimes at the worst time, during an actual disaster event, that an organization fully realizes its current backup solution is no longer meeting its disaster recovery requirements. When management learns it takes 48 hours just to restore (copy) the data, the Recovery Time Objective (RTO) is suddenly much more clearly appreciated! Many times after these events companies update their RTO policies and implement changes. Obviously it would be better to consider this situation and enact changes prior to the disaster.

For organizations that need quicker recovery times, the option is to implement some form of “replication”, in which all of the production data is replicated to another system for DR purposes. The replication could be local and/or offsite to a service provider. The key to replication is that you are keeping a copy of your entire server in another place, so that if your primary server is compromised in some way, you can “turn on” the DR server and be back up and running within minutes, current as of the last replication. This typically allows the organization to resume operations much more quickly, since the data did not need to be restored but was already in a state to be used almost immediately. Replication is really the only way to minimize recovery time, since it eliminates the restore process.

On the other hand, backup still serves a valuable purpose that is not well addressed by disaster recovery (replication). In a disaster situation you are typically most interested in the most recent version of the data, not data from months or years ago. Organizations with retention policies, however, may want months or even years of past versions of backup data. DR does not address this need; backup does, by retrieving data as it existed in the past. For instance, a user may delete a file but not notice for months. Backup would be used instead of DR for this older data, since DR is focused on the most recent version of data. Typically no one wants to do a disaster recovery with 6-month-old data!

Backup is also usually much easier to use for granular restores of a few files or directories. DR takes an all-or-nothing approach: in a DR situation you are typically restoring entire servers. Backup makes granular recovery easier, and for this reason backup is typically utilized more often than DR.

There is often the need for both backup and DR solutions. Veeam Backup and Replication, for instance, gives you the capability of choosing backup or replication (DR), or you can elect to do both. If you cannot afford to do both, then you will need to weigh the pros and cons of backup and DR and decide which one best meets your requirements. For example, if you need quick recovery but do not need long-term retention, it may be possible to use DR as a form of backup. On the other hand, if you do not require quick recovery but need long-term retention, then backup-only may be best. However, if you need both quick recovery and long-term retention, then you need both backup AND disaster recovery.

The table below contrasts some of the differences between Backup and Disaster Recovery.

                    Backup                       Disaster Recovery (Replication)
Recovery time       Hours to days                Minutes
Retention           Weeks, months, or years      Most recent data only
Granularity         Individual files/folders     Entire servers (all or nothing)


Related Articles:

What is Disaster Recovery as a Service?

The differences between Disaster Recovery, Backup and Business Continuity

Is Cloud Backup Adequate Protection from Ransomware?

I have heard this question many times, so I understand the concern of conscientious IT admins who want to know if they are truly protected from a ransomware infection when using cloud backup. After all, backups are the last line of defense against ransomware and other malware. I understand there are some who really insist on the “air gap” that tape or other portable media provides. Feel free to augment your data protection methods and implement some level of offsite backup to air-gapped media once a month, taken offsite. However, even doing offsite media rotation once per week carries a cost of its own. Portable media introduces lots of other downsides – getting your backups offsite every day, tracking the media, storing the media, encrypting the media, and so on.


We at Managecast address this concern for an “air gap”: we optionally provide the ability to export backups to air-gap media, generally once per month. This allows efficient offsite cloud backup with the added ability to air gap the backups and securely store them offsite.


However, most of our clients do not implement the air-gap option and strictly use cloud backup with no definable “air gap”. Yet it is also true that we field a lot of restore requests around ransomware infection.


Ransomware is our #1 reason for restores in the last 24 months!


So should our clients worry if not using the air gap option?


The reality is that it is next to impossible for ransomware to infect your offsite backups, and we have never seen ransomware leap to a service provider. Let’s say you are backing up to us and get hit with ransomware. It infects all of your machines and data (knock on wood). Then the backup runs and we back up all the infected data. True enough, we will faithfully back it up, but here is what would happen:


Because your data got encrypted, there is a massive data change, so we usually see the backups running for a long, long time (internet bandwidth is usually limited), and 9 times out of 10 we will notice this and stop the backups. Yes, whatever data was backed up could be infected, and that infected data is stored on the service provider’s storage.


So does this mean that because infected data was backed up, it infects the previously backed-up offsite data? No. Your current backup is just an incremental point-in-time backup. There is nothing stopping you from restoring from a previous backup unless you have an unusually short retention policy. It is possible that with a very short retention policy of, say, 2-3 days, your incremental backups could end up overwriting your good data, but it’s rare for clients to have a retention policy that short, and 14 days of backups is usually the minimum. So you would have 14 days to notice you had ransomware. If this isn’t long enough, consider a longer retention policy.
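The retention arithmetic is worth making explicit (a simple sketch; the 14-day figure is the usual minimum mentioned above):

```python
# With daily incrementals, the oldest restore point ages out as new ones
# arrive, so your retention length is also your window to notice ransomware.
def days_to_notice(retention_points: int, backups_per_day: int = 1) -> float:
    return retention_points / backups_per_day

print(days_to_notice(14))  # 14 daily restore points -> 14.0 days to notice
print(days_to_notice(3))   # a 3-day policy leaves a dangerously small window
```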


The reason the current backup data does not “infect” the offsite backup data is because it is encrypted at the source and transmitted and stored in encrypted format. The ransomware is encrypted and would have no way to execute on the service provider side, and your past backups would be protected.


To colorfully illustrate the point, I tell people to consider an experiment in which they take the worst ransomware they can find and ZIP it up in a password-protected ZIP file (make sure it’s a strong password!). Then email that file (without the password) to every person in your company and see if the ransomware spreads. The answer is no, because without the password to decrypt the ZIP file, there is no way to access the ransomware, and it has no way to run or infect anything else.


So, again, I know some people really want an “air gap”, but you are doing so to protect yourself from a largely non-existent threat while exposing yourself to the very real downsides of portable media! Is it really worth it? If an air gap is really needed, I would consider using an air-gap method of backup in addition to automated offsite cloud backup, or leverage the optional Managecast monthly air-gap backup.


In summary, I can tell you that for Managecast the #1 reason for DR restores in the past 2 years has been ransomware infections. There has not been one instance in which those infections affected the offsite backups. We had a client that got hit three times in one year with ransomware, and we had to restore them each time! Cloud backup is a proven, safe, and robust protection against ransomware and other malware.

What is Disaster Recovery as a Service (DRaaS)?

Until the last few years, companies wanting quick fail-over to a remote site faced huge costs, time, and complexity. Consequently, only very large companies with deep pockets could afford to implement offsite disaster recovery.

Today, technology and the internet have enabled highly cost-effective ways to provide DR services to organizations that traditionally could not afford such capabilities. At the same time, today’s consumers expect the services they use to always be available, and this drives many companies to look at implementing DR fail-over so business is not interrupted.

The cost of an organization hosting its own disaster recovery site can be prohibitive, not only in terms of money but also time and effort. Costs include hosting remote sites, managing servers, managing applications, monitoring the backups and replication, and regular testing. However, it’s possible to offset these costs by utilizing a service provider offering Disaster Recovery as a Service, or DRaaS.

DRaaS is a way for organizations to utilize service providers, like Managecast, who provide protection for virtual servers in a cloud environment by offering infrastructure, software, and management for DR solutions.


Organizations utilizing DRaaS replicate their data continuously or periodically, depending on their desired Recovery Point Objective (RPO), to the service provider. Then, in a DR event, the organization can fail over all or some of its environment by simply powering on its VMs in the service provider’s cloud-DR infrastructure and continuing to operate.

The organization accesses failed-over replicas through predefined methods. In the event of a partial failover of only some of the organization’s servers, the local network can be extended to the cloud-DR environment, allowing users to access the servers as if they were still hosted locally. Alternatively, in a full failover event, an organization’s servers can be accessed remotely, e.g. through a web console, VPN, or remote desktop services. Service providers can also provide new public IPs to minimize downtime for public-facing applications.

An example of a web console used for failover and testing DR.

If, after the fail-over has been performed, the organization is able to get its local infrastructure back up and running, it can also fail back to production (depending on the DR solution). Failing back means replicating any changes made in the DR environment during the fail-over back to the production side.


After replicating to the service provider, it is necessary to perform regular DR testing to make sure things go smoothly in a real DR situation. Most DRaaS providers will allow organizations to perform their own testing, which lets them set their own test criteria.

Testing can be as simple as logging into the service provider’s web console, powering on a VM, and verifying application or service functionality.


While not all service providers charge for DRaaS the same way, a common model is based on usage per hour, meaning that the organization is charged only for what it uses.


In some cases the DRaaS provider will offer additional management of the replication process. This can include monitoring the replication, alerting the organization of any potential issues, and providing fully-managed service solutions.

While an organization may view DR as an additional cost, for DRaaS service providers backup and replication is their sole focus. By using a service provider for DRaaS, organizations gain access to that expertise and can leverage it for any DR needs.


Check Your Veeam Cloud Connect Repository Storage Usage

Here are the steps to check your Managecast Veeam Cloud Connect repository usage through the Veeam Backup and Replication console:

      1. Open the Veeam console and select the ‘BACKUP INFRASTRUCTURE’ tab on the bottom left.
      2. Then, select the ‘Backup Repositories’ group on the top left.
      3. A list of existing repositories will be displayed, select the cloud repository. It will typically be named ‘Managecast Cloud Repository’ and the repository type will be labeled Cloud.
      4. Right click the ‘Managecast Cloud Repository’ and choose ‘Properties.’


      5. The window that is displayed will show the settings for the selected cloud connect repository, including: Capacity, Used space, and Free space.



Avoid Memory Issues with Veeam and ReFS

There have been reports of issues that users have run into after incorporating ReFS repositories with Veeam Backup and Replication. Here’s how to best avoid running into them:

According to reports, users are running into issues on Windows Server 2016 machines using drives formatted with ReFS while running Veeam synthetic operations. So far this issue has been primarily reported by users who formatted their ReFS volumes with a 4K block size, which is the default when formatting a new volume. Veeam representatives have recommended a 64K block size as the primary way to avoid this issue.

Check the current allocation unit size of ReFS volumes using:

fsutil fsinfo refsinfo <volume pathname>

Microsoft has also released a patch (KB4013429) along with a corresponding knowledge base article regarding this issue. The fix includes a patch plus registry changes specific to the issue with ReFS. The patch adds the option to create and fine-tune the following registry parameters to tweak ReFS memory consumption:

RefsEnableLargeWorkingSetTrim | DWORD
RefsNumberOfChunksToTrim      | DWORD
RefsEnableInlineTrim          | DWORD

The reason for the errors during the synthetic operations performed during Veeam Backups to ReFS repositories is that Veeam synthetic operations are very I/O intensive. Users have uncovered an issue with the ReFS file system where metadata stored in memory is not released properly. This causes the utilized memory on the system to balloon and can eventually lock up the OS.

(Image: Windows Sysinternals RAMMap)

Using the Windows Sysinternals tool RAMMap, users can monitor memory usage during synthetic fulls. This helps determine whether the metafile is growing and whether there is a potential memory issue.

Finally, suggestions for avoiding running into this error:

  • Choose a 64K block size when formatting new ReFS volumes for a Veeam repository; avoid 4K block sizes for now. If you are currently using an ReFS volume with a 4K block size, consider migrating the repository to a new volume with a 64K block size. This post may assist you.

  • Use more than the minimum recommended memory for Veeam repositories, that is, 4GB plus up to 4GB for each concurrently running job.

  • If you are currently using ReFS with 4K blocks and/or are running into issues with your Veeam repository locking up during synthetic operations, apply the patch and the corresponding registry changes. If these do not resolve the issues, try adding more memory to the Veeam repository server.
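The memory rule of thumb in the list above is easy to compute (a sketch of the stated guideline, not an official sizing tool):

```python
# Repository RAM sizing: 4 GB base plus up to 4 GB per concurrently
# running job, per the recommendation above.
def recommended_ram_gb(concurrent_jobs: int) -> int:
    return 4 + 4 * concurrent_jobs

print(recommended_ram_gb(4))  # 4 concurrent jobs -> plan for 20 GB
```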

64K block sizes are already widely recommended as a best practice for Veeam repositories, considering how Veeam works with large files. The issue is that Windows sets the default allocation unit size at 4K, so users may skip past changing it when formatting new volumes. Hopefully future releases of Veeam will be able to detect and warn users against 4K block sizes during the creation of ReFS repositories.

Update 11/3/17: Some users with larger amounts of backup data report issues even when using a 64K block size.

More current updates of Windows Server 2016 now include additional registry settings to curb some of the issues that users have continued to report. As of the time of this update, it has been suggested that those experiencing these issues set these decimal values for the following registry keys:

HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\RefsEnableLargeWorkingSetTrim = 1
HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\RefsNumberOfChunksToTrim = 32
HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\RefsDisableCachedPins = 1
HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\RefsProcessedDeleteQueueEntryCountThreshold = 512

Some Veeam technicians/architects have also suggested that, for backup jobs whose retention requirements are 100 days/restore points or less, synthetic fulls be avoided unless specifically necessary.

E.g., for a retention policy going back a month, aim for 30 daily incremental restore points rather than 7 daily and 3 weekly restore points.

This will still provide the benefits of fast cloning from ReFS but avoid the issue of synthetic full merges potentially locking up the storage during the full backup file merge processes.


More info: Veeam Forums; VeeamLive; Microsoft KB4016173; Microsoft KB4035951

Do you need to backup Office 365 data?

Yes, here is why.

Many people take backup for granted when using Office 365.  They think Microsoft must surely be thoroughly protecting the data on Office 365 – I don’t have to worry about it! Think again.

Office 365 lacks the daily backup and archiving capability that may be needed after data is automatically removed and deleted from the recycle bin, or if a user intentionally deletes their data!

Let’s take a look at a few facts:

Deletions – Maybe it’s accidental, maybe it’s purposeful, maybe it’s malicious, but users delete data. Office 365 has a default retention of 14 days for deleted items, which can be expanded to 30. After that, the data is gone. Intentional deletions (and deleting from the recycle bin) are unrecoverable. Also, if a user account is deleted, the data is gone.

Ransomware/Malware – While Microsoft does have anti-malware protections in place, it does not guarantee that a user does not corrupt their data if infected by malware/ransomware. Recovery from this scenario could be very painful and time consuming if just using built-in data protection measures, and ultimately there is no guarantee of recovery.

Liability – Microsoft contracts have strict limits on liability. In the case of Office 365 the liability is limited to $5000 total. It would cost more to walk into court, so effectively this is the same as no liability of your data.

Compliance – If you have strict compliance requirements (like keeping backups for 7, 10 or more years) then Microsoft is not providing what you require for data protection. Even private companies without legal obligations often have lengthy retention policies.  Even requiring more than 1 month of history may exceed what Microsoft is providing you.

Industry Analysts Recommend Backup – Organizations such as Gartner, Forrester, ESG, and others recommend that clients review their own backup data retention requirements and determine if additional Office 365 backup solutions are needed to meet their objectives.


Ensure you can always easily recover your critical Office 365 emails, files, and SharePoint sites by using a 3rd-party backup tool to back up Office 365 data.





Veeam and AWS Storage

Amazon’s cloud storage offerings, S3 and Glacier, are top contenders when it comes to storing data offsite, and Veeam is a behemoth when it comes to VM backup. You can use the two together, but should you? The answer: well, it depends.

Integrating a Veeam Backup and Replication configuration with Amazon storage is done through an AWS Storage Gateway, an appliance that sits between your Veeam server and the AWS cloud. You can connect this gateway to the Veeam server as a file server, direct-attached storage, or a virtual tape library.

With the Amazon storage gateway as a file server or direct-attached storage, you can back up directly from Veeam to the storage gateway. You can perform incremental backups this way, but to avoid the corruption possible with long backup chains, periodic fulls will need to be taken. Synthetic operations are possible in this configuration; however, without a proxy on the AWS side, any time they are performed Veeam must read the full backup and incrementals stored in AWS storage, effectively downloading them from the Amazon servers, performing the synthetic operation locally, and re-uploading the new synthetic data. This causes synthetic operations to take incredibly long to complete and is not recommended.

The other option is the virtual tape library (VTL), which presents the storage gateway to Veeam as a tape server. This lets you use Veeam’s tape backup jobs to create virtual tape backups on AWS storage. While Veeam tape backups allow for incremental backups, this method also requires periodic fulls any time a new tape is required, which may happen frequently depending on backup size and retention, as the maximum tape size for the AWS storage gateway is 2.5TB. For restores requiring tapes that have already been archived or removed, it can take up to 24 hours (and an additional cost) for a tape to be retrieved and made available again in the tape gateway.
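Since each virtual tape tops out at 2.5TB, the number of tapes a full backup consumes is easy to estimate (an illustrative sketch; the 8 TB figure is hypothetical):

```python
import math

# Virtual tapes needed on the AWS tape gateway for one full backup,
# given the 2.5 TB maximum virtual tape size.
TAPE_SIZE_TB = 2.5

def tapes_needed(full_backup_tb: float) -> int:
    return math.ceil(full_backup_tb / TAPE_SIZE_TB)

print(tapes_needed(8))    # an 8 TB full spans 4 virtual tapes
print(tapes_needed(2.0))  # small fulls still consume a whole tape
```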

Alternatively, Veeam has an offsite backup method built into its distribution that comes in the form of Veeam Cloud Connect. Cloud Connect partners are third party Veeam service providers running Veeam cloud gateways and repositories who supply you with the necessary storage and compute tailored to performing offsite Veeam backups.

End users enter their service provider’s address and login credentials and are provided with a ‘Cloud Repository’ in the Veeam console. They can send backups, backup copies, etc. to the cloud repository.

Instead of having to perform periodic full backups, tenants can perform forever-incremental backups with periodic synthetic fulls. Because the service provider is also running Veeam and has the compute required for synthetic operations, synthetic fulls and merges can be performed locally at the remote site, significantly reducing backup windows.

With a pre-configured Cloud Connect appliance available from Veeam, Azure may be a better option than AWS. This lets you set up and manage your own Cloud Connect server in the Azure cloud to point offsite backups to. However, this adds another layer of complexity and management, as well as costs: not only Azure storage but also hourly compute and additional Veeam Cloud Connect licensing. It is certainly more feasible to look for an existing Cloud Connect partner who already manages their own offsite Cloud Connect server. A Veeam Cloud Connect partner can provide the specialized backup service, offering true incremental-forever backups without the complexity of creating your own service.

As far as using AWS for offsite backup with Veeam, it is certainly doable. However, because full backups are required regularly, it can only be recommended for those with less than 1 TB of data to back up, those with low-frequency backups who can afford extremely large backup windows, or those with a lot of bandwidth available to the Amazon storage servers.

Otherwise it may be more feasible to go with a Cloud Connect provider who can offer nightly incremental backups with regular synthetic fulls, significantly reducing backup times because they can perform the necessary synthetic operations on the remote repositories. It’s not only more time-effective, but also often more cost-effective, and simpler from a management perspective.


Migrating and Seeding Veeam Backups to ReFS Storage

In order to see any benefits from Veeam’s integration with the Windows Server 2016 ReFS filesystem, all fulls and incrementals must have been created on the new filesystem. This means that just moving data over won’t immediately give you the benefits of fast cloning and spaceless fulls. There are a few extra steps that need to be taken first:

View this post to see some of the benefits that come along with using Veeam and ReFS.

Update: Make sure to use a 64K block size when formatting Veeam repository volumes, to avoid the issues that arise with a 4K block size on ReFS. Read this post for more information.

Migrating Existing Backups to ReFS Volumes

The first thing to note is that only new full and incremental backups created on ReFS benefit from the fast-cloning and spaceless-full technology in Windows Server 2016. After you move your data to the new ReFS volume, you will not see any performance or storage benefits until an active or synthetic full has been created, all old incremental restore points have been merged or deleted, and every new incremental has been created on the new volume.

This can prove troublesome if you are coming from deduplicating storage where all previous fulls and incrementals were deduplicated. Moving them to the new volume rehydrates all of the data, which would most likely blow up your storage consumption.
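The rehydration effect is simple arithmetic; this hypothetical helper just multiplies the stored size by the appliance's reported deduplication ratio:

```python
def rehydrated_gb(stored_gb: float, dedup_ratio: float) -> float:
    """Logical size the data expands to when copied from a deduplicating
    appliance to a non-deduplicating volume such as ReFS.

    stored_gb:   physical size reported on the dedup appliance
    dedup_ratio: the appliance's deduplication ratio (e.g. 10 for 10:1)
    """
    return stored_gb * dedup_ratio

# 2 TB stored at a 10:1 ratio rehydrates to ~20 TB on the ReFS volume.
print(rehydrated_gb(2000, 10))   # 20000.0
```

This is why migrating a long deduplicated backup history wholesale is rarely feasible.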

Since you may not have enough storage for the fully rehydrated size of all of your fulls, archived fulls, and incrementals, it is usually better to keep those archived backups on the deduplicating storage (or write them off to tape) and migrate only the newest full (.vbk), its incrementals (.vib), and the backup metadata file (.vbm) to the new ReFS volume.

Keep in mind that you will not see the benefits of ReFS until the most recent full and all of its incrementals have been created on the new ReFS volume. This means you will need storage for at least two full backups plus all incrementals. We recommend scheduling GFS retention to create a full as soon as possible so that the oldest archived full on the new volume can be deleted to free up as much space as possible.
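As a rough sketch of the headroom needed during migration (the function name and the simple additive model are my own illustration, not a figure from Veeam documentation):

```python
def migration_headroom_gb(full_gb: float, n_incrementals: int,
                          avg_incremental_gb: float) -> float:
    """Rough storage estimate during the migration window: the migrated
    full plus the newly created full (which is not spaceless relative to
    the imported chain) plus the incrementals in the chain."""
    return 2 * full_gb + n_incrementals * avg_incremental_gb

# e.g. a 500 GB full with 14 incrementals averaging 20 GB each needs
# about 2*500 + 14*20 = 1280 GB of headroom until the old full is deleted.
print(migration_headroom_gb(500, 14, 20))   # 1280.0
```

Once the old archived full is removed, steady-state consumption drops back toward a single full plus its chain.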

Once the synthetic full backup and the new incrementals have all been created on the new storage, you can delete the oldest archive point from the ReFS volume, and all new backups will benefit from the new ReFS filesystem.

Seeding Offsite Backup Copy to ReFS Volumes

Seeding a backup to ReFS still decreases the initial WAN utilization by avoiding the need to ever perform a full backup over the internet. However, even after seeding, all backups must have been created on the volume they were imported to before ReFS helps. This means you will not see the fast-cloning and spaceless-full benefits until a new full and its incrementals have been created on the ReFS volume.

We have been successful with the following process when seeding backups to ReFS. (Keep in mind that you will temporarily need storage for two full backups and two incremental restore points.)

  1. First perform an initial full backup copy (seed backup) to an external drive at the primary location.
  2. Once the backup is complete, ship the drive to the secondary location and import the .vbm and .vbk files into the target repository.
  3. Rescan the new repository from the primary site and those new backups should be added to the configuration.
  4. From the job used to create the seed, edit the target repository and point it to the new repository at the secondary site.
  5. Use ‘Map Backup’ to choose the newly imported backup from the repository.
  6. We have been successful forcing the backup copy job to perform a GFS synthetic full using the following method:
    1. In the job settings, change the number of restore points to 2 (the lowest value the job will accept).
    2. Enable GFS retention and schedule the weekly backup to occur during the run after the next backup.
      • For example, if your backups run nightly and today is Wednesday, let the incremental backup run Wednesday night, then schedule the weekly backup for Thursday night's run. The Wednesday run must happen first to create an incremental so that the job hits its limit of 2 restore points. Then on Thursday, because the job has reached its retention limit and the synthetic full is scheduled, it should create the GFS restore point.
  7. Once the GFS synthetic full has been created, you can delete the archived full ‘…_W.vbk’ to free up storage. (You can keep it, but this archived full will not benefit from ReFS spaceless fulls and will consume storage until retention deletes it.)
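The retention trick in step 6 can be sketched as a toy simulation. This models the idea only; the function and its states are a simplification of my own, not Veeam's actual retention engine:

```python
def simulate(runs):
    """runs: scheduled nightly runs, each 'incremental' or 'gfs' (the run
    carrying the weekly GFS flag). Returns what each run produces."""
    chain, events = 1, []          # chain starts at 1: the imported seed
    for run in runs:
        if run == "gfs" and chain >= 2:
            events.append("synthetic full created")
            chain = 1              # the new full starts a fresh chain
        else:
            chain += 1             # normal incremental extends the chain
            events.append("incremental")
    return events

# Wednesday's incremental fills the 2-point chain; Thursday's GFS-flagged
# run then triggers the synthetic full.
print(simulate(["incremental", "gfs"]))
```

The key point the model captures: the GFS synthetic full only fires once the job has already hit its 2-restore-point limit, which is why the extra incremental run must come first.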

Now you can change the retention of the backup job to whatever you would like and any new backups will benefit from the new ReFS filesystem.
