What is Disaster Recovery as a Service (DRaaS)?

Until the last few years, companies wanting quick fail-over to a remote site faced enormous cost, time, and complexity. Consequently, only very large companies with deep pockets could afford to implement offsite disaster recovery.

Today, technology and the internet have enabled highly cost-effective ways to deliver DR services to organizations that traditionally could not afford such capabilities. At the same time, today's consumers expect the services they use to always be available, which drives many companies to look at implementing DR fail-over so business is not interrupted.

Hosting your own disaster recovery site can be prohibitively expensive, not only in money but also in time and effort. Costs include hosting remote sites, managing servers, managing applications, monitoring the backups and replication, and regular testing. However, it's possible to offset these costs by utilizing a service provider offering Disaster Recovery as a Service, or DRaaS.

DRaaS is a way for organizations to utilize service providers, like Managecast, that protect virtual servers in a cloud environment by offering the infrastructure, software, and management for DR solutions.

Failover

Organizations utilizing DRaaS replicate their data to the service provider continuously or periodically, depending on their desired Recovery Point Objective (RPO). Then, in a DR event, the organization can fail over all or part of its environment by simply powering on its VMs in the service provider's cloud-DR infrastructure and continuing to operate.

The organization accesses the failed-over replicas through predefined methods. In a partial failover of only some of the organization's servers, the local network can be extended to the cloud-DR environment, allowing users to access the servers as if they were still hosted locally. In a full failover, the organization's servers can be accessed remotely, e.g. through a web console, VPN, or remote desktop services. Service providers can also provide new public IPs to minimize downtime for public-facing applications.

An example of a web console used for failover and testing DR.

If, after the fail-over, the organization is able to get its local infrastructure back up and running, it can also fail back to production, depending on the DR solution. Failing back means replicating any changes made in the DR environment during the fail-over back to the production side.

Testing

After replicating to the service provider, regular DR testing is necessary to make sure things go smoothly in a real DR situation. Most DRaaS providers allow organizations to perform their own testing and define their own test criteria.

Testing can be as simple as logging into the service provider's web console, powering on a VM, and verifying application or service functionality.

Costs

While not all service providers charge for DRaaS the same way, a common model is billing based on usage per hour, meaning the organization is charged only for what it uses.

Management

In some cases the DRaaS provider will offer additional management of the replication process. This can include monitoring the replication, alerting the organization to any potential issues, and providing fully-managed service solutions.

While an organization may view DR as an additional cost, backup and replication are a DRaaS provider's sole focus. By using a service provider for DRaaS, organizations gain access to that expertise and can leverage it for any DR needs.

Check Your Veeam Cloud Connect Repository Storage Usage

Here are the steps to check your Managecast Veeam Cloud Connect repository usage through the Veeam Backup and Replication console:

    1. Open the Veeam console and select the ‘BACKUP INFRASTRUCTURE’ tab on the bottom left.
    2. Then, select the ‘Backup Repositories’ group on the top left.
    3. A list of existing repositories will be displayed; select the cloud repository. It will typically be named ‘Managecast Cloud Repository’ and the repository type will be labeled Cloud.
    4. Right-click the ‘Managecast Cloud Repository’ and choose ‘Properties.’
    5. The window that is displayed will show the settings for the selected cloud connect repository, including: Capacity, Used space, and Free space.
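If you prefer a scripted check, something like the following sketch may work. This is an assumption-laden example rather than a documented procedure: the Veeam 9.x cmdlets ship as a PSSnapin, and the exact property names on the cloud resource objects vary between Veeam versions, so the sketch dumps all properties rather than assuming specific quota fields.

# Hedged sketch: list the cloud repository resources assigned by the service provider.
# Inspect the output with Get-Member to find the capacity/used-space fields in your version.
Add-PSSnapin VeeamPSSnapin
Get-VBRCloudProvider | ForEach-Object { $_.Resources | Format-List * }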

Avoid Memory Issues with Veeam and ReFS

There have been reports of issues after incorporating ReFS repositories with Veeam Backup and Replication. Here's how best to avoid running into them:

According to these reports, users run into the problem on Windows Server 2016 servers using drives formatted with ReFS while Veeam synthetic operations are running. So far the issue has primarily been reported by users who formatted their ReFS volumes with a 4K block size, which is the default when formatting a new volume. Veeam has recommended using a 64K block size as the primary way to avoid the issue.

Check the current allocation unit size of ReFS volumes using:

fsutil fsinfo refsinfo <volume pathname>
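If you are formatting a new repository volume, PowerShell can set the recommended 64K allocation unit size directly. A minimal sketch, assuming drive letter E: and a label of your choosing (note this wipes the volume):

# Format a new Veeam repository volume as ReFS with a 64K allocation unit size.
# The drive letter and label below are placeholders; adjust for your environment.
Format-Volume -DriveLetter E -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel 'VeeamRepo'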

Microsoft has also released a patch (KB4013429) along with a corresponding knowledge base article regarding this issue. The fix consists of the patch plus registry changes specific to ReFS. The patch adds the option to create and fine-tune the following registry parameters to tweak ReFS memory consumption:

RefsEnableLargeWorkingSetTrim | DWORD
RefsNumberOfChunksToTrim      | DWORD
RefsEnableInlineTrim          | DWORD
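As a rough sketch of applying one of these after installing the patch (the registry path shown is the commonly cited location for the ReFS parameters; confirm it and the recommended values against the Microsoft KB before applying, and note that a reboot is required):

# Create one of the ReFS trim parameters enabled by KB4013429.
# Value 1 simply enables the trim behavior; see the KB for tuning guidance.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem'
New-ItemProperty -Path $key -Name 'RefsEnableLargeWorkingSetTrim' -PropertyType DWord -Value 1 -Force
# The other two parameters can be created the same way if needed.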

The errors occur during synthetic operations because Veeam synthetic operations are very I/O intensive, and users have uncovered an issue in the ReFS file system where metadata stored in memory is not released properly. This causes the system's utilized memory to balloon and can eventually lock up the OS.

(Screenshot: Windows Sysinternals RAMMap)

Using the Windows Sysinternals tool RAMMap, users can monitor memory usage during synthetic fulls. This helps determine whether the Metafile category is growing and whether a memory problem is developing.
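RAMMap itself is GUI-only, but as a rough scripted proxy (our assumption, not a Veeam or Microsoft recommendation) you can watch available memory while a synthetic full runs; a steady downward trend that never recovers may point to the ReFS metadata issue:

# Sample available memory once a minute for an hour during a synthetic full.
Get-Counter -Counter '\Memory\Available MBytes' -SampleInterval 60 -MaxSamples 60 |
    ForEach-Object { $_.CounterSamples[0].CookedValue }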

Finally, some suggestions for avoiding this error:

  • Choose a 64K block size when formatting new ReFS volumes for Veeam repositories; avoid a 4K block size for now. If you are currently using an ReFS volume with a 4K block size, consider migrating the repository to a new volume with a 64K block size. This post may assist you.
  • Use more than the minimum recommended memory for Veeam repositories, that is, 4GB plus up to 4GB for each concurrently running job.
  • If you are currently using ReFS with 4K blocks and/or are running into issues with your Veeam repository locking up during synthetic operations, apply the patch plus the corresponding registry changes. If these do not resolve the issues, try adding more memory to the Veeam repository server.

64K block sizes are already widely recommended as a best practice for Veeam repositories, given how Veeam works with large files. The problem is that Windows sets the default allocation unit size to 4K, so users may skip past changing it when formatting new volumes. Hopefully future releases of Veeam will be able to detect and warn users against 4K block sizes during the creation of ReFS repositories.

More Info: Veeam Forums; VeeamLive; Microsoft Support

Veeam and AWS Storage

Amazon's cloud storage offerings, S3 and Glacier, are top contenders for storing data offsite, and Veeam is a behemoth in VM backup. You can use the two together, but should you? The answer: it depends.

Integrating a Veeam Backup and Replication configuration with Amazon storage is done through an AWS storage gateway, an appliance that sits between your Veeam server and the AWS cloud. You can connect this gateway to the Veeam server as a file server, direct-attached storage, or a virtual tape library.

With the Amazon storage gateway as a file server or direct-attached storage, you can back up directly from Veeam to the storage gateway. You can perform incremental backups this way, but to avoid the corruption possible with long backup chains, periodic fulls will need to be taken. Synthetic operations are possible in this configuration; however, without a proxy on the AWS side, any time they are performed Veeam must read the full backup and incrementals stored in AWS, effectively downloading them from the Amazon servers, performing the synthetic operation locally, and re-uploading the new synthetic data. This makes synthetic operations take incredibly long to complete and is not recommended.

The other option is the virtual tape library (VTL), which presents the storage gateway to Veeam as a tape server. This lets you use Veeam's tape backup jobs to create virtual tape backups on AWS storage. While Veeam tape backups allow for incremental backups, this method also requires a periodic full any time a new tape is required, which may happen frequently depending on backup size and retention, since the maximum tape size for the AWS storage gateway is 2.5TB. For restores requiring tapes that have already been archived or removed, it can take up to 24 hours, for a cost, for a tape to be retrieved and made available again in the tape gateway.

Alternatively, Veeam has an offsite backup method built into its distribution in the form of Veeam Cloud Connect. Cloud Connect partners are third-party Veeam service providers running Veeam cloud gateways and repositories who supply the storage and compute tailored to performing offsite Veeam backups.

End users enter their service provider's address and login information and are provided with a ‘Cloud Repository’ in the Veeam console. They can then send backups, backup copies, etc. to the cloud repository.

Instead of having to perform periodic full backups, tenants can run forever-forward incremental backups with periodic synthetic fulls. Because the service provider is also running Veeam and has the compute required for synthetic operations, synthetic fulls and merges can be performed locally at the remote site, significantly reducing backup windows.

With a pre-configured Cloud Connect appliance available from Veeam, Azure may be a better option than AWS. It lets you set up and manage your own Cloud Connect server in the Azure cloud to point offsite backups to. However, this adds another layer of complexity and management, plus costs for Azure storage, hourly compute, and additional Veeam Cloud Connect licensing. It is certainly more feasible to look for an existing Cloud Connect partner who already manages their own offsite Cloud Connect server. A Veeam Cloud Connect partner can provide the specialized backup service, offering true incremental-forever backups without the complexity of creating your own.

As far as using AWS for offsite backup with Veeam, it is certainly doable. However, because full backups are regularly required, it can only be recommended for those with less than 1 TB of data to back up, those with low-frequency backups who can afford extremely large backup windows, or those with a lot of bandwidth available to the Amazon storage servers.

Otherwise it may be more feasible to go with a Cloud Connect provider who can offer nightly incremental backups with regular synthetic fulls, significantly reducing backup times because the necessary synthetic operations can be performed on the remote repositories. It's not only more time-effective, but often more cost-effective and simpler from a management perspective.

Migrating and Seeding Veeam Backups to ReFS Storage

In order to see any benefit from Veeam's integration with the Windows Server 2016 ReFS filesystem, all fulls and incrementals must have been created on the new filesystem. This means that just moving data over won't immediately get you fast cloning and spaceless fulls. A few extra steps need to be taken first:

View this post to see some of the benefits that come along with using Veeam and ReFS.

Update: Make sure to use 64K Block Size when formatting the Veeam repository volumes to avoid issues with 4K Block Size and ReFS. Read this post for more information.

Migrating Existing Backups to ReFS Volumes

The first thing to note is that only new full and incremental backups created on ReFS will benefit from the fast cloning and spaceless full technology that comes with ReFS on Windows Server 2016. Once you move your data to the new ReFS volume, you won't see any performance or storage benefits until an active or synthetic full has been created, all old incremental restore points have been merged or deleted, and all new increments have been created on the new volume.

This can prove troublesome if you are coming from deduplicating storage where all previous fulls and incrementals were deduplicated. Moving them to the new volume would rehydrate all of the data and most likely overwhelm your storage.

As you may not have enough storage for the fully rehydrated size of all of your fulls, archived fulls, and incrementals, it is probably best to keep those archived backups on the deduplicating storage, or write them off to tape, and migrate only the newest full (.vbk), incrementals (.vib), and backup metadata (.vbm) to the new ReFS volume.

Keep in mind that you won't see the benefits of ReFS until the most recent full and all incrementals have been created on the new ReFS volume. This means you will need storage for at least 2 full backups plus all incrementals. We recommend scheduling the GFS retention to create a full as soon as possible so that the oldest archived full on the new volume can be deleted to free up as much space as possible.

Once the synthetic full backup and the new incrementals have all been created on the new storage, you can delete the oldest archive point from the ReFS volume, and all new backups will see the benefits of the new ReFS filesystem.

Seeding Offsite Backup Copy to ReFS Volumes

Seeding a backup to ReFS still has the benefit of decreasing the initial WAN utilization by preventing the need to ever send a full backup over the internet. However, with ReFS, even after seeding, all backups must have been created on the volume they were imported to. This means you won't see the ReFS benefits until a new full and new incrementals have been created on that volume.

We have been successful using the following process when seeding backups to ReFS (keep in mind that you will temporarily need storage for 2 full backups and 2 incremental restore points):

  1. First perform an initial full backup copy (seed backup) to an external drive at the primary location.
  2. Once the backup is complete, ship the drive to the secondary location and import the .vbm and .vbk files into the target repository.
  3. Rescan the new repository from the primary site; the new backups should be added to the configuration.
  4. In the job used to create the seed, edit the target repository and point it to the new repository at the secondary site.
  5. Use ‘Map Backup’ to choose the newly imported backup from the repository.
  6. We have been successful forcing the backup copy job to perform a GFS synthetic full using the following method:
    1. In the job settings, change the number of restore points to 2 (the lowest it will allow).
    2. Enable GFS retention and schedule the weekly backup to occur during the backup following the next backup.
      • For example, if your backups run nightly and today is Wednesday, allow the incremental backup to run Wednesday night, then schedule the weekly backup to happen during Thursday night's backup. The backup needs to run Wednesday first to create an incremental so that the job hits its limit of 2 restore points. Then on Thursday, because the job has hit its retention and the synthetic is scheduled, it should create the GFS restore point.
  7. Once the GFS synthetic full has been created, you can delete the archived full (‘…_W.vbk’) to free up storage. (You can keep it, but this archived full will not benefit from ReFS spaceless fulls and will utilize your storage until it is deleted by retention.)

Now you can change the retention of the backup job to whatever you would like and any new backups will benefit from the new ReFS filesystem.

Veeam 9.5 Issues Seeding Backups

After upgrading to Veeam 9.5 we had a customer needing to seed new backups to our cloud repository. We created a new backup copy job and backed up to a temporary seed repository pointing to an external drive. Once the backup completed the drive was shipped back to us and we imported the data to our cloud repository and re-scanned it on the customer side.

After mapping the job to the imported data we ran the job. At this point it should have continued from the already backed-up data and started an incremental backup. Instead, it was performing a full backup and creating duplicate entries for the VMs in the backup data.

Veeam 9.5 Update 1 had been released one week prior to this incident, and our policy is to wait at least 30 days before applying new releases. After reading through the fixes in this update we were unable to verify that it would resolve our errors; however, Veeam lists the update as non-breaking, and after some confirmation with Veeam support we applied it. We then started the process of re-importing the data.

Instead of removing all of the data and then re-importing it from the seed drive, we were able to re-import just the seeded .vbm file, leave the already imported .vbk file, and re-scan from the customer side. Veeam showed 1 backup as ‘updated’ during the re-scan.

Once the update was applied and the backup was re-imported, the backup continued incremental backup from the seed data as expected.

Veeam 9.5 ReFS Integration

Veeam v9.5 was recently released, and with it came a large number of improvements and added features, most notably the seamless integration of Microsoft Server 2016's new ReFS v3.1 filesystem. Veeam's integration with this version of ReFS adds fast cloning and spaceless full technology, meaning merges and synthetic operations require fewer resources and less time, and synthetic full backups take up significantly less storage.

Veeam 9.5 ReFS Benefits

Veeam 9.5 integration with Windows server 2016 ReFS comes with 2 significant benefits to synthetic and merge operations: Fast Cloning and Spaceless Full Technology.

Both of these rely on Veeam v9.5's integration with ReFS block cloning, which allows Veeam to copy data blocks within files, or even from one file to another, very quickly. When data is copied, the file system does not create new copies of existing data blocks; instead it creates pointers to the locations of the existing data.

–Fast Cloning with ReFS means new full backups do not need to copy existing data; they use pointers to existing backup data instead, significantly reducing the time required to perform synthetic operations, which are normally very resource intensive.

–Space-less Full Backups are possible because new synthetic backups are primarily made up of pointers to existing full backup data, significantly reducing the space required to store the backups.

One downside of space-less fulls compared to a deduplicating storage device is that there is no global deduplication. Space-less fulls with ReFS only reduce storage usage between copies of the same full backup file. Even so, the space savings are tremendous:

(Screenshot: repository storage usage after the move to ReFS)

In the picture above, we moved a customer with close to 1TB native backup size from an NTFS backup repository to a ReFS repository. After a month of weekly and monthly GFS backup copies, the utilized storage is less than half the native file size. The customer already had full backups, and as those older GFS restore points are removed by retention and replaced with ReFS spaceless fulls, the utilized storage will continue to decrease.

–Encryption is also possible with ReFS space-less fulls. One of the downsides of deduplication is that backup files can't be encrypted without losing the benefits of deduplication. Because space-less fulls and encryption are both transparent to Veeam, it can encrypt the data while still providing the space-saving benefits of space-less full backups on ReFS.

Adding ReFS Volumes as Veeam Repositories

In order to see the benefits of ReFS with Veeam, repositories will need to be attached to a Windows Server 2016 server and formatted as ReFS. If you had previously added a Windows Server 2016 ReFS volume as a repository, it will need to be re-added after upgrading to Veeam v9.5 in order for Veeam to recognize it as an appropriate ReFS volume and enable the new features.

Important: Veeam's fast cloning and spaceless full technology only supports ReFS volumes created on Windows Server 2016; volumes formatted as ReFS on Windows Server 2012 will not see the benefits because Server 2012 uses an older version of ReFS.

Any restore points created prior to the v9.5 upgrade will not see the new benefits. To utilize fast cloning and spaceless fulls, all full and incremental backups involved in synthetic operations must have been created using Veeam v9.5 with a Windows Server 2016 ReFS repository. This means the benefits will not apply when you first copy older backup data into the new ReFS repository. For existing backup or backup copy chains to begin seeing these benefits, either an active or synthetic full (including backup copy synthetic GFS) will need to be performed. The next time a synthetic operation runs, the [fast clone] tag will be displayed next to it in the job activity logs, along with a corresponding increase in the speed of the operation.

Update: Make sure to use 64K Block Size when formatting the Veeam repository volumes to avoid issues with 4K Block Size and ReFS. Read this post for more information.

Issues Running Backups or Rescanning Repository After Veeam Upgrade

Just recently, right after upgrading to Veeam 9.5, we ran into an error with one of our customers that appeared whenever backups started to run and whenever we tried to rescan the Veeam repository. The error messages were:

Warning Failed to synchronize Backup Repository Details: Failed to synchronize 2 backups Failed to synchronize 2 backups
Warning Failed to import backup path E:\PATH\TO\BACKUP\FILES\BackupJobName.vbm Details: Incorrect file system item link received: BackupJobName

Based on the errors, it looked like there were issues with the database entries for the backup job mentioned. As a troubleshooting step we tried removing the backup files from the GUI under Backup & Replication>Backups by right-clicking the backup and selecting ‘Remove from Configuration.’ However, this gave us the same error in a popup dialogue:

Incorrect file system item link received: BackupJobName

After opening a ticket with Veeam, they informed us that this is a known issue caused by unexpected backup entries in the VeeamBackup SQL database. Specifically, the problem backup jobs were listed in the database with a zero entry for the job ID or repository ID, preventing Veeam from locating the backup files.

Because the next steps involve making changes to the Veeam SQL database, it's best to back it up first. Here's a knowledge base article from Veeam describing the suggested methods for backing up the database.

To determine whether there are any errant backup entries in the SQL database, run the following query:

SELECT TOP 1000 [id]
 ,[job_id]
 ,[job_name]
 ,[repository_id]
 FROM [VeeamBackup].[dbo].[Backup.Model.Backups]

Under ‘repository_id’ you should see one or more backup jobs showing ‘00000000-0000-0000-0000-000000000000’ or ‘NULL’ as the ID for the job or repository. Any entries with this issue will need to be removed from the database to resolve the error.
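If you prefer to run the check from PowerShell, a sketch like the following should work. The instance name is an assumption; the SQL Server instance name bundled with Veeam varies by version, and the command requires the SqlServer (or older SQLPS) module for Invoke-Sqlcmd:

# Find backup entries with a zeroed or NULL repository_id.
Invoke-Sqlcmd -ServerInstance 'localhost\VEEAMSQL2012' -Database 'VeeamBackup' -Query @"
SELECT [id], [job_id], [job_name], [repository_id]
FROM [dbo].[Backup.Model.Backups]
WHERE [repository_id] = '00000000-0000-0000-0000-000000000000' OR [repository_id] IS NULL
"@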


After backing up the SQL database run the following query for each job with ‘00000000-0000-0000-0000-000000000000’ as the repository_id:

EXEC [dbo].[Backup.Model.DeleteBackupChildEntities] 'REPLACE_WITH_ID'
EXEC [dbo].[Backup.Model.DeleteBackup] 'REPLACE_WITH_ID'

Replace 'REPLACE_WITH_ID' with the ‘job_id’ of any backups showing the unexpected ‘repository_id’ found with the previous query. After that, the issues with the local backup server were resolved.

However, we were still seeing errors when trying to connect the backup server to our cloud connect repository to do backup copies. We were still getting the following errors:

Warning Failed to synchronize Backup Repository Details: Failed to synchronize 2 backups Failed to synchronize 2 backups
Warning Failed to import backup path E:\PATH\TO\BACKUP\FILES\BackupJobName.vbm Details: Incorrect file system item link received: BackupJobName

To resolve this we had to remove the entries for the problem jobs from the cloud connect server's database. If you use a cloud connect service provider, you will need to have them make these changes to their SQL database.

We had 2 VMs that were producing the ‘Incorrect file system item link received: JobName’ error, so we had to remove any entries for those jobs from the SQL database.

We ran the following query to get the Job ID of both jobs mentioned in the errors:

SELECT TOP 1000 [id]
 ,[job_id]
 ,[job_name]
 FROM [VeeamBackup].[dbo].[Backup.Model.Backups]

Then we ran the same delete queries as before using the new job_ids:

EXEC [dbo].[Backup.Model.DeleteBackupChildEntities] 'REPLACE_WITH_ID'
EXEC [dbo].[Backup.Model.DeleteBackup] 'REPLACE_WITH_ID'

After those entries were deleted we were able to rescan the repository.

Lastly, once we rescanned our cloud repository and imported the existing backups we started getting the following error message:

Warning Failed to import backup path E:\PATH\TO\BACKUP\FILES\BackupJobName.vbm Details: Path E:\PATH\TO\BACKUP\FILES\BackupJobName2016-11-21T220000.vib is absolute and cannot be added to other path

This error indicates a problem where the backup chain isn't accurately resolving the location of the GFS restore points. To resolve it, we had to manually remove the imported backups from the cloud connect server by going to Backup & Replication>Backups, selecting the backup job, and choosing ‘Remove from Configuration,’ making sure to check ‘Include archived full backups.’ After the backups had been removed from the cloud connect repository, we were able to manually rescan the repository on the local backup server and the backup files were imported again successfully.

Update: After deleting these affected rows using the commands above you may get the following error message:

Unable to map tenant cache ID 'ID-GOES-HERE' to service provider side ID 'ID-GOES-HERE' because it is already mapped to ID 'ID-GOES-HERE'

If you see this error, a solution can be found in this Veeam KB article. Essentially, the data for these deleted rows still exists in the Veeam DB cache table and needs to be removed. To do this, run the following query on the VeeamBackup database:

delete from [cachedobjectsidmapping]

This will clear the DB cache table. To re-populate it, rescan the service provider and the Cloud Repository.

Veeam Backups with Long Term Retention

One of the features of Veeam Backup and Replication is the ability not only to perform an incremental backup job to local disk but then to schedule a job that copies those increments to another location. These backup copy jobs are forever-forward incremental, meaning that after the first full backup is created, only incremental data from that point forward is copied. Once the incremental chain hits the set limit of restore points, each time a new restore point is copied the oldest restore point is automatically merged synthetically into the full backup file. The default number of incremental restore points is 7, but this number can be increased. However, the longer the incremental chain, the more data will be lost if the chain is broken by a corrupt restore point.

Managecast increases this to 14 daily incrementals by default so that 2 weeks of daily increments are ready for restore.

For longer retention, Veeam recommends using a backup copy job's built-in GFS retention to keep X number of weekly, monthly, quarterly, and yearly full generations of the backup files. This means the backup copy job will create a copy of the most recent full backup file and archive it according to the retention policy. Because these GFS restore points are individual full backup files, they do not rely on the incremental chain and will not be lost if the incremental chain is broken.

Unfortunately, this method can consume a large amount of storage very quickly, especially for long-term retention. For example, if a backup copy job is set to keep 7 incremental restore points, 4 weekly, and 12 monthly backups, it would need enough storage for the 7 incrementals and the current full backup file, as well as 15 additional full backup files.

A useful tool to figure out how much storage is needed is this restore point simulator.

One option to cut down on the amount of storage used is deduplication on the target backup repository. Because the GFS restore points, or full backup copies, are copies of the same backup files, they deduplicate extremely well. Keep in mind that if the current full backup file is deduplicated, it will severely slow down the merge of the oldest incremental restore point into the current full backup file. To get around this, Managecast only deduplicates files older than 7 days, meaning the GFS restore points are only deduplicated a week after they are copied from the current full. This way the backup repository stores the daily incrementals, the current full backup file, and all of the deduplicated GFS full restore points, taking up a little more than 2 times the full backup size plus the size of the incrementals.
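As a back-of-the-envelope illustration of the savings (all figures here are assumptions for the sake of arithmetic, not measured values):

# Rough storage estimate for the retention example above.
$fullTB       = 1.0    # size of one full backup file, in TB (assumed)
$incTB        = 0.05   # size of one daily incremental (assumed)
$incrementals = 14     # daily restore points kept
$gfsFulls     = 16     # 4 weekly + 12 monthly GFS restore points (assumed)
$dedupSavings = 0.95   # assumed space saved on week-old, deduplicated GFS fulls

$noDedup   = $fullTB + ($gfsFulls * $fullTB) + ($incrementals * $incTB)
$withDedup = $fullTB + ($gfsFulls * $fullTB * (1 - $dedupSavings)) + ($incrementals * $incTB)
"Without dedup: {0:N2} TB; with dedup on the GFS fulls: {1:N2} TB" -f $noDedup, $withDedup
# Prints roughly 17.70 TB versus 2.50 TB, in line with the 'a little more than
# 2x the full backup size plus incrementals' figure above.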

Another important point: deduplication will not work on backup files that are encrypted. Because encryption changes the individual files, deduplication no longer sees the different backup files as similar data and cannot reduce them. Depending on how much storage the backups take up, this forces a choice between encrypting the backup files at the cost of an expensive long-term retention policy, or keeping the long retention policy and being able to deduplicate the full backup files.

In summary, VEEAM is a great product, but for long-term retention requirements it can really explode the size of backup storage required. Managecast will be reviewing the new VEEAM v9.5 in combination with Windows 2016 and the advanced ReFS file system to see if new de-duplication efficiency (with combined encryption) will help solve these issues. Stay tuned!

Update: Check out our post on Veeam 9.5 ReFS Integration for longer term retention!


Managecast is now a proud member of the VEEAM Cloud Connect Partner Program

(Image: VEEAM Cloud & Service Provider Silver partner badge)

Managecast is a featured partner on VEEAM's list of service providers offering VEEAM Cloud Connect services. VEEAM Cloud Connect enables you to quickly and efficiently get your VEEAM backups offsite, safe and secure, so you can always recover your data no matter what!

Our services powered by VEEAM allow for:

  • Full integration with VEEAM
  • Secure, encrypted backup, replication, and restore
  • Full-site failover
  • Full and partial failback
  • Quick site failover testing
  • Fast recovery

Managecast is offering 30-day, no-obligation, free trials, and enabling your existing VEEAM installation could not be easier. Get your existing VEEAM backups offsite using the familiar VEEAM management console. Managecast can also provide updated VEEAM software and licensing if required.

Our Cloud-based disaster recovery and offsite backup powered by VEEAM can now be easily used to provide offsite disaster recovery capabilities for your organization. Contact us for a free trial.
