Backup vs. Disaster Recovery (DR)

Recovery

Several years ago, your main, and usually only, option for disaster recovery was to take backup media and perform a restore onto new hardware. This meant copying a large amount of data from backup media back to a server. We call this “Backup Recovery” or “Standard Recovery.” It’s the process most people are used to, and it has been the standard method for decades.

As data volumes have expanded exponentially and today’s “always on” culture has taken hold, the old means of recovery are no longer adequate for many businesses.

Unfortunately, it is sometimes at the worst time, during an actual disaster event, that a business fully realizes its current backup solution no longer meets its disaster recovery requirements. When management learns it will take 48 hours just to restore (copy) the data, the Recovery Time Objective (RTO) is suddenly appreciated much more clearly! Many companies update their RTO policies and implement changes after such an event; obviously, it would be better to consider the situation and make those changes before the disaster.
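As a rough back-of-the-envelope illustration (the data size and throughput below are assumed numbers, not figures from any particular environment), restore time is essentially data size divided by sustained restore throughput:

# Rough restore-time estimate (hypothetical numbers, PowerShell)
$dataTB        = 10     # total data to restore, in TB
$throughputMBs = 60     # assumed sustained restore throughput, in MB/s

$hours = ($dataTB * 1TB) / ($throughputMBs * 1MB) / 3600
"{0:N1} hours to restore {1} TB at {2} MB/s" -f $hours, $dataTB, $throughputMBs
# => roughly 48.5 hours of raw copy time, before any application recovery work

Even modest data sets can translate into days of copy time on typical hardware, which is exactly the kind of math worth doing before an RTO is put in writing.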

For organizations that need quicker recovery times, the option is to implement some form of “replication,” in which all of the production data is replicated to another system for DR purposes. The replication could be local and/or offsite to a service provider. The key to replication is that you are keeping a copy of your entire server in another place, so that if your primary server is compromised in some way, you can “turn on” the DR server and be back up and running within minutes, current as of the last replication cycle. This typically allows the organization to resume operations much more quickly, since the data does not need to be restored; it is already in a state to be used almost immediately. Replication is really the only way to minimize recovery time, since it eliminates the restore process.

On the other hand, backup still serves a valuable purpose that is not well addressed by disaster recovery (replication). In a disaster situation you are typically most interested in the most recent version of data, not the data from months or years ago. Organizations with retention policies may want months, or even years, of past versions of backup data. DR does not address this need; backup, however, is used for retrieving data that may have changed in the past. For instance, a user may delete a file but not notice for months. Backup would be used instead of DR for this older data, since DR is focused on the most recent version of data. Typically no one wants to do a disaster recovery with 6-month-old data!

Backup is also usually much easier to use for granular restores of a few files or directories. DR is focused on an all-or-nothing approach; in a DR situation you are typically restoring entire servers. For this reason, backup is typically utilized more often than DR.

There is often a need for both backup and DR solutions. Veeam Backup and Replication, for instance, gives you the capability of choosing backup or replication (DR), or you can elect to do both. If you cannot afford to do both, then you will need to weigh the pros and cons of backup and DR and decide which one best meets your requirements. For example, if you need quick recovery but do not need long-term retention, it may be possible to use DR as a form of backup. On the other hand, if you do not require quick recovery but need long-term retention, then backup-only may be best. However, if you need both quick recovery and long-term retention, then you need both backup AND disaster recovery.

The table below contrasts some of the differences between Backup and Disaster Recovery.

[Image: Backup vs. Disaster Recovery comparison table]

Related Articles:

What is Disaster Recovery as a Service?

The differences between Disaster Recovery, Backup and Business Continuity

Is Cloud Backup Adequate Protection from Ransomware?

I have heard this question many times, so I understand the concern of conscientious IT admins who want to know if they are truly protected from a ransomware infection when using cloud backup. After all, backups are the last line of defense against ransomware and other malware. I understand there are some who really insist on the “air gap” that tape or other portable media provides. Feel free to augment your data protection methods and export backups to air-gapped media once a month, then take that media offsite. However, even doing offsite media rotation once per week carries a cost of its own. Portable media introduces lots of other downsides, like trying to get your backups offsite every day, tracking media, storing media, encrypting media, and so on.

At Managecast we address this concern for an “air gap” by optionally providing the ability to export backups to air-gapped media, generally once per month. This allows efficient offsite cloud backup with the added ability to air gap the backups and store them securely offsite.

However, most of our clients do not implement the air gap option and strictly use cloud backup with no definable “air gap.” Yet it is also true that we field a lot of restore requests related to ransomware infections.

Ransomware is our #1 reason for restores in the last 24 months!

So should our clients worry if not using the air gap option?

The reality is that it is next to impossible for ransomware to infect your offsite backups, and we have never seen ransomware leap to a service provider. Let’s say you are backing up to us and get hit with ransomware. It infects all of your machines and data (knock on wood). Then the backup runs and we back up all the infected data. True enough, we will faithfully back it up, but here is what would happen:

Because the encryption causes a massive data change, we usually see the backups running for a long, long time (internet bandwidth is usually limited), and 9 times out of 10 we will notice this and stop the backups. Yes, whatever data was backed up could be infected, and that infected data is stored on the service provider's storage.

So does this mean that because infected data was backed up, it infects the previously backed up offsite data? No. Your current backup is just an incremental, point-in-time backup. There is nothing stopping you from restoring from a previous backup unless you have an unusually short retention policy. If you had a retention policy of, say, 2-3 days, your incremental backups could end up overwriting your good data, but it’s rare for clients to have a retention policy that short; 14 days of backups is usually the minimum. That gives you 14 days to notice you have ransomware. If that isn’t long enough, consider a longer retention policy.

The reason the current backup data does not “infect” the offsite backup data is that it is encrypted at the source and transmitted and stored in encrypted format. The ransomware is encrypted and would have no way to execute on the service provider side, and your past backups would be protected.

To colorfully illustrate the point, I tell people to consider an experiment in which they take the worst ransomware they can find and ZIP it up in a password-protected ZIP file (make sure it’s a strong password!). Then email that file (without the password) to every person in your company and see if the ransomware spreads. The answer is no, because without the password to decrypt the ZIP file, there is no way to access the ransomware, and it has no way to run or infect anything else.

So, again, I know some people really want an “air gap,” but in doing so you are protecting yourself from a practically non-existent threat while exposing yourself to lots of other downsides of portable media that are real threats! Is it really worth it? If an air gap is truly needed, I would consider using an air-gap method of backup in addition to automated offsite cloud backup, or leverage the optional monthly Managecast air-gap backup.

In summary, I can tell you that for Managecast the #1 reason for DR restores in the past two years has been ransomware infections. There has not been one instance in which those infections affected the offsite backups. We had a client that got hit three times in one year with ransomware, and we restored them each time! Cloud backup is proven, safe, and robust protection against ransomware and other malware.

What is Disaster Recovery as a Service (DRaaS)?

Until the last few years, companies wanting quick failover to a remote site faced huge costs, time commitments, and complexity. Consequently, only very large companies with deep pockets could afford to implement offsite disaster recovery.

Today, technology and the internet have enabled highly cost-effective methods of providing DR services to organizations that traditionally could not afford such capabilities. At the same time, today’s consumers expect the services they use to always be available, which drives many companies to look at implementing DR failover so business is not interrupted.

Hosting your own disaster recovery site can be prohibitively expensive, not only in terms of money but also time and effort. Costs include hosting remote sites, managing servers, managing applications, monitoring the backups and replication, and regular testing. However, it’s possible to offset these costs by utilizing a service provider offering Disaster Recovery as a Service, or DRaaS.

DRaaS is a way for organizations to utilize service providers, like Managecast, who provide protection for virtual servers in a cloud environment by offering infrastructure, software, and management for DR solutions.

Failover

Organizations utilizing DRaaS replicate their data to the service provider continuously or periodically, depending on their desired Recovery Point Objective (RPO). Then, in a DR event, the organization can fail over all or part of its environment by simply powering on its VMs in the service provider’s cloud DR infrastructure and continuing to operate.

Organizations access the failed-over replicas through predefined methods. In the event of a partial failover of only some of the organization’s servers, the local network can be extended to the cloud DR environment, allowing access to the servers as if they were still hosted locally. Alternatively, in a full failover event, an organization’s servers can be accessed remotely, e.g., through a web console, VPN, or remote desktop services. Service providers can also provide new public IPs to minimize downtime for public-facing applications.

[Image: An example of a web console used for failover and testing DR.]

If, after the failover has been performed, the organization is able to get its local infrastructure back up and running, then depending on the DR solution it can also fail back to production. Failing back means replicating any changes made in the DR environment during the failover back to the production side.

Testing

After replicating to the service provider, it will be necessary to perform regular DR testing to make sure things go smoothly in a real DR situation. Most DRaaS providers will allow organizations to perform their own testing and set their own test criteria.

Testing can be as simple as logging into the service provider’s web console, powering on a VM, and verifying application or service functionality.

Costs

While not all service providers charge for DRaaS the same way, a common model is based on usage per hour, meaning the organization is charged only for what it uses.

Management

In some cases the DRaaS provider will offer additional management of the replication process. This can include monitoring the replication, alerting the organization to any potential issues, and providing fully managed service solutions.

While an organization may view DR as an additional cost, backup and replication are a DRaaS provider’s sole focus. By using a service provider for DRaaS, organizations gain access to that expertise and can leverage it for any DR need.

Check Your Veeam Cloud Connect Repository Storage Usage

Here are the steps to check your Managecast Veeam Cloud Connect repository usage through the Veeam Backup and Replication console:

    1. Open the Veeam console and select the ‘BACKUP INFRASTRUCTURE’ tab on the bottom left.
    2. Then, select the ‘Backup Repositories’ group on the top left.
    3. A list of existing repositories will be displayed; select the cloud repository. It will typically be named ‘Managecast Cloud Repository’, and the repository type will be labeled Cloud.
    4. Right-click the ‘Managecast Cloud Repository’ and choose ‘Properties.’
    5. The window that is displayed will show the settings for the selected Cloud Connect repository, including Capacity, Used space, and Free space.

    [Screenshots: Veeam Cloud Connect repository list and repository properties showing Capacity, Used space, and Free space.]

Avoid Memory Issues with Veeam and ReFS

There have been reports of issues after incorporating ReFS repositories with Veeam Backup and Replication. Here’s how best to avoid them:

According to reports, users are running into issues on Windows Server 2016 servers that use ReFS-formatted drives when running Veeam synthetic operations. So far the issue has been reported primarily by users who formatted their ReFS volumes with a 4K block size, which is the default when formatting a new volume. Veeam has recommended a 64K block size as the primary way to avoid this issue.

Check the current allocation unit size of ReFS volumes using:

fsutil fsinfo refsinfo <volume pathname>
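For example, here is a minimal PowerShell sketch (the drive letters and volume label are assumptions for illustration, not values from the article) that checks an existing volume and then formats a new repository volume with a 64K allocation unit size:

# Check an existing ReFS volume (assumed here to be D:); the cluster (allocation unit) size
# is reported in the output (4096 = 4K, 65536 = 64K).
fsutil fsinfo refsinfo D:

# Format a new repository volume (assumed here to be E:) as ReFS with a 64K allocation unit size.
# WARNING: Format-Volume erases the volume, so double-check the drive letter first.
Format-Volume -DriveLetter E -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "VeeamRepo"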

Microsoft has also released a patch (KB4013429) along with a corresponding knowledge base article regarding this issue. The fix includes the patch plus registry changes specific to the ReFS issue. The patch adds the option to create and fine-tune the following registry parameters to tweak ReFS memory consumption:

RefsEnableLargeWorkingSetTrim | DWORD
RefsNumberOfChunksToTrim      | DWORD
RefsEnableInlineTrim          | DWORD

The errors occur because the synthetic operations Veeam performs during backups to ReFS repositories are very I/O intensive. Users have uncovered an issue in the ReFS file system where metadata stored in memory is not released properly. This causes the system’s memory utilization to balloon and can eventually lock up the OS.

[Image: Windows Sysinternals RAMMap]

Using the Windows Sysinternals tool RAMMap, users can monitor memory usage during synthetic fulls. This helps determine whether the metafile is growing and whether a memory issue is developing.

Finally, here are some suggestions for avoiding this error:

  • Choose a 64K block size when formatting new ReFS volumes for a Veeam repository; avoid 4K block sizes for now. If you are currently using a ReFS volume with a 4K block size, consider migrating the repository to a new volume with a 64K block size. This post may assist you.

  • Use more than the minimum recommended memory for Veeam repositories, that is, 4GB plus up to 4GB for each concurrently running job (see the quick sizing illustration after this list).
  • If you are currently using ReFS with 4K blocks and/or are running into issues with your Veeam repository locking up during synthetic operations, apply the patch and the corresponding registry changes. If these do not resolve the issues, try adding more memory to the Veeam repository server.
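As a quick sizing illustration (the job count is just an assumption): under that guideline, a repository server running four concurrent jobs would want roughly 4 GB + (4 × 4 GB) = 20 GB of RAM, and erring on the high side is cheap insurance given the ReFS memory behavior described above.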

64K block sizes are already widely recommended as a best practice for Veeam repositories, given how Veeam works with large files. The issue is that Windows sets the default allocation unit size to 4K, so users may skip past changing it when formatting new volumes. Hopefully, future releases of Veeam will be able to detect 4K block sizes and warn users during the creation of ReFS repositories.

Update 11/3/17: Some users with larger amounts of backup data report issues even when using a 64K block size.

More recent updates of Windows Server 2016 include additional registry settings to curb some of the issues that users have continued to report. As of the time of this update, it has been suggested that those experiencing these issues set the following registry keys to these decimal values:

HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\RefsEnableLargeWorkingSetTrim = 1
HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\RefsNumberOfChunksToTrim = 32
HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\RefsDisableCachedPins = 1
HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\RefsProcessedDeleteQueueEntryCountThreshold = 512
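Below is a minimal PowerShell sketch for applying those values; the scripting approach is my own assumption rather than something taken from the Microsoft or Veeam articles. Run it from an elevated prompt, and note that a reboot of the repository server is generally needed before the new values take effect.

$fsKey = 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem'

# Suggested decimal values from above.
$refsSettings = @{
    RefsEnableLargeWorkingSetTrim               = 1
    RefsNumberOfChunksToTrim                    = 32
    RefsDisableCachedPins                       = 1
    RefsProcessedDeleteQueueEntryCountThreshold = 512
}

foreach ($name in $refsSettings.Keys) {
    # -Force creates the value if it is missing, or overwrites it if it already exists.
    New-ItemProperty -Path $fsKey -Name $name -Value $refsSettings[$name] -PropertyType DWord -Force | Out-Null
}
# Reboot the repository server (in a maintenance window) for the changes to take effect.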

Some Veeam technicians and architects have also suggested, for backup jobs whose retention requirements are 100 days/restore points or less, avoiding synthetic fulls unless they are specifically necessary.

E.g., for a retention policy going back a month, aim for 30 daily incremental restore points rather than 7 daily and 3 weekly restore points.

This still provides the fast-cloning benefits of ReFS but avoids the issue of synthetic full merges potentially locking up the storage during the full backup file merge process.

 

More Info: Veeam Forums; VeeamLive; Microsoft KB4016173; Microsoft KB4035951

Veeam and AWS Storage

Amazon’s cloud storage offerings, S3 and Glacier, are top contenders when it comes to storing data offsite, and Veeam is a behemoth when it comes to VM backup. You can use the two together, but should you? The answer: well, it depends.

Integrating a Veeam Backup and Replication configuration with Amazon storage is done through an AWS Storage Gateway, an appliance that sits between your Veeam server and the AWS cloud. You can connect this gateway to the Veeam server as a file server, direct-attached storage, or a virtual tape library.

With the Amazon storage gateway configured as a file server or direct-attached storage, you can back up directly from Veeam to the gateway. You can perform incremental backups this way, but to avoid the corruption risk that comes with long backup chains, periodic fulls will need to be taken. Synthetic operations are possible in this configuration; however, without a proxy on the AWS side, any time they are performed Veeam must read the full backup and incrementals stored in AWS, effectively downloading them from Amazon’s servers, performing the synthetic operation locally, and re-uploading the new synthetic data. This causes synthetic operations to take an incredibly long time to complete and is not recommended.

The other option is the virtual tape library (VTL), which allows you to present the storage gateway to Veeam as a tape server. This lets you use Veeam’s tape backup jobs to create virtual tape backups on AWS storage. While Veeam tape backups allow for incrementals, this method also requires a periodic full to be taken any time a new tape is required, which may end up happening frequently, depending on backup size and retention, since the maximum tape size for the AWS storage gateway is 2.5TB. For restores requiring tapes that have already been archived or removed, it can take up to 24 hours (for a fee) for a tape to be retrieved and made available again in the tape gateway.

Alternatively, Veeam has an offsite backup method built into its distribution in the form of Veeam Cloud Connect. Cloud Connect partners are third-party Veeam service providers running Veeam cloud gateways and repositories who supply you with the storage and compute tailored to performing offsite Veeam backups.

End users enter their service provider’s address and login credentials and are provided with a ‘Cloud Repository’ in the Veeam console. They can then send backups, backup copies, etc. to the cloud repository.

Instead of having to perform periodic full backups, tenants can perform forever-incremental backups with periodic synthetic fulls. Because the service provider is also running Veeam and has the compute required for synthetic operations, synthetic fulls and merges can be performed locally at the remote site, significantly reducing backup windows.

With a pre-configured Cloud Connect appliance available from Veeam, Azure may be a better option than AWS. This lets you set up and manage your own Cloud Connect server in the Azure cloud to point offsite backups to. However, this adds another layer of complexity and management, as well as costs for Azure storage, hourly compute, and additional Veeam Cloud Connect licensing. It is often more practical to look for an existing Cloud Connect partner who already manages their own offsite Cloud Connect infrastructure. A Veeam Cloud Connect partner can provide the specialized backup service, offering true incremental-forever backups without the complexity of building your own.

Using AWS for offsite backup with Veeam is certainly doable. However, because full backups are required regularly, it can only be recommended for those with less than 1 TB of data to back up, those with low-frequency backups who can afford extremely large backup windows, or those with a lot of bandwidth available to the Amazon storage servers.

Otherwise, it may be more practical to go with a Cloud Connect provider who can offer nightly incremental backups with regular synthetic fulls, significantly reducing backup times because the necessary synthetic operations are performed on the remote repositories. It’s not only more time-effective, but also often more cost-effective and simpler from a management perspective.

Migrating and Seeding Veeam Backups to ReFS Storage

In order to see any benefits from Veeam’s integration with the Windows Server 2016 ReFS filesystem, all fulls and incrementals must have been created on the new filesystem. This means that simply moving data over won’t immediately give you the benefits of fast cloning and spaceless fulls; there are a few extra steps that need to be taken first.

View this post to see some of the benefits that come along with using Veeam and ReFS.

Update: Make sure to use a 64K block size when formatting Veeam repository volumes to avoid the issues with 4K block sizes and ReFS. Read this post for more information.

Migrating Existing Backups to ReFS Volumes

The first thing to note is that only new full and incremental backups created on ReFS will benefit from the fast-cloning and spaceless-full technology that comes with ReFS on Windows Server 2016. This means that once you move your data over to the new ReFS volume, you won’t begin seeing any performance or storage benefits until an active or synthetic full has been created, all old incremental restore points have been merged or deleted, and all new increments are created on the new volume.

This can prove troublesome if you are coming from deduplicating storage where all previous fulls and incrementals were deduplicated. Moving them to the new volume rehydrates all of that data, which would most likely blow up your storage consumption.

Since you may not have enough storage for the fully rehydrated size of all of your fulls, archived fulls, and incrementals, it would most likely benefit you to keep those archived backups on the deduplicating storage, or write them off to tape, and migrate only the newest full (.vbk), its incrementals (.vib), and the backup metadata file (.vbm) to the new ReFS volume.

Keep in mind that you won’t see the benefits of ReFS until the most recent full and incrementals have been created on the new ReFS volume. This means you will need storage for at least two full backups plus all incrementals. We recommend scheduling GFS retention to create a full as soon as possible so that the oldest archived full on the new volume can be deleted, freeing up as much space as possible.

Once the synthetic full backup and the new incrementals have all been created on the new storage, you can delete the oldest archive point from the ReFS volume, and all new backups will see the benefits of the new ReFS filesystem.

Seeding Offsite Backup Copy to ReFS Volumes

Seeding a backup to ReFS still has the benefit of decreasing the initial WAN utilization by preventing the need to ever do a full backup over the internet. However, with ReFS, even after seeding, all backups still have to have been created on the volume they were imported to. This means that you won’t see the ReFS benefits until a new synthetic full and new incrementals have been created on that volume.

We have been successful using the following process when seeding backups to ReFS (keep in mind that you will temporarily need storage for two full backups and two incremental restore points):

  1. First, perform an initial full backup copy (seed backup) to an external drive at the primary location.
  2. Once the backup is complete, ship the drive to the secondary location and import the .vbm and .vbk files into the target repository.
  3. Rescan the new repository from the primary site; the new backups should be added to the configuration.
  4. Edit the job used to create the seed and point its target repository to the new repository at the secondary site.
  5. Use ‘Map Backup’ to choose the newly imported backup from the repository.
  6. Using the following method, we have been successful in forcing the backup copy job to perform a GFS synthetic full:
    1. In the job settings, change the number of restore points to 2 (the lowest value it will allow).
    2. Enable GFS retention and schedule the weekly backup to occur during the backup following the next backup.
      • For example, if your backups run nightly and today is Wednesday, allow the incremental backup to run Wednesday night, then schedule the weekly backup to happen during Thursday night’s backup. The backup needs to run Wednesday first in order to create an incremental restore point so that the job hits its limit of 2 restore points. Then on Thursday, because the job has hit its retention limit and the synthetic is scheduled, it should create the GFS restore point.
  7. Once the GFS synthetic full has been created, you can delete the archived full ‘…_W.vbk’ to free up storage. (You can keep it, but this archived full will not benefit from ReFS spaceless fulls and will consume your storage until it is removed by retention.)

Now you can change the retention of the backup job to whatever you would like and any new backups will benefit from the new ReFS filesystem.

Veeam 9.5 Issues Seeding Backups

After upgrading to Veeam 9.5, we had a customer who needed to seed new backups to our cloud repository. We created a new backup copy job and backed up to a temporary seed repository pointing to an external drive. Once the backup completed, the drive was shipped back to us; we imported the data into our cloud repository and rescanned it on the customer side.

After mapping the job to the imported data, we ran the job. At this point it should have continued from the already backed-up data and started an incremental backup. Instead, it was performing a full backup and creating duplicate entries for the VMs in the backup data.

Veeam 9.5 Update 1 was released one week prior to this incident, and our policy is to wait at least 30 days before applying new releases. After reading through the fixes in the update, we were unable to verify that it would resolve our errors; however, Veeam lists the update as non-breaking, and after some confirmation with Veeam support we applied it. We then started the process of re-importing the data.

Instead of removing all of the data and re-importing it from the seed drive, we were able to re-import just the seeded .vbm file, leave the already imported .vbk file in place, and then rescan from the customer side. Veeam showed 1 backup as ‘updated’ during the rescan.

Once the update was applied and the backup was re-imported, the job continued with incremental backups from the seed data as expected.

Considering a low-cost cloud backup solution?

Ouch, Carbonite is not having a good day. I see some people choose these low-cost cloud backup providers without realizing they are not the same as enterprise-class backup providers like Managecast. It would seem you get what you pay for.

Carbonite Forces Password Reset After Password Reuse Attack!

https://www.databreaches.net/carbonite-forces-password-reset-after-password-reuse-attack/

 

Top 5 Cloud Backup Myths

Being a cloud backup managed service provider, we run into common myths surrounding cloud backup. We hope we can dispel some of the more pervasive, and incorrect, perceptions of cloud backup.

 

1. “Cloud backup is not secure.”

One of the biggest concerns with cloud backup is the security and privacy of data. This is understandable in today’s world, where data breach headlines are everywhere. Ironically, with the right enterprise-grade cloud backup solution, your data is most likely many times more secure than with more traditional backup methods. When we encounter folks who think cloud backup is not secure, it is usually quickly apparent that their existing backup solution is far less secure. Traditional, customer-managed backup systems struggle with getting data offsite quickly and securely, and with managing media rotations, encryption, and other best practices that are not strictly adhered to. Security is a top priority for a cloud backup service provider, where these issues are handled as a matter of course.

Summary: Cloud backup providers have gone to great lengths to make sure their managed services are extremely secure. We offer highly encrypted services (AES 256-bit, FIPS 140-2 certified encryption) with the client in control of the encryption key. In other words, we do NOT store the encryption key unless you ask us to, and therefore we do not have access to customer data.

2. “Restoring from the cloud takes too long.”

Most enterprise cloud backup systems have the option to store data locally as well as keep multiple copies offsite. 99% of the time, recoveries are made from local storage at LAN speed; it is rare that restoring data from the cloud is required.

In the rare event of a site disaster, in which the local backup has been compromised or destroyed, most business-class cloud backup service providers will provide the ability to ship your data on portable media (fully encrypted) within 24-48 hours. If that sounds like a long time, consider whether you will also have the spare equipment on hand to restore your data to. Some cloud backup service providers will also provide the ability to spin up recovered servers in the cloud for quick recovery.

Summary: Restoring massive amounts of backup data from the cloud is rare. Cloud backup service providers have a number of alternative methods to provide for quick recovery.

3. “Too much data to back up.”

While this statement is sometimes true, it rarely is. Backup administrators are used to legacy backup systems in which a full backup is made daily or weekly, so they assume full backups are required for cloud backup. In fact, repeatedly sending full backups to the cloud is not good practice and is impractical in most circumstances.

A business-class cloud backup system will support “incremental forever,” which means that after the first full backup, only incremental backups are made. Incremental backups send only the data that has changed (at the block level) since the last backup. This drastically reduces the amount of data that needs to be backed up.

In addition, the first full backup is typically written to portable media such as a USB drive and shipped (encrypted) to the data center instead of being sent over the internet. This avoids a large initial transfer of data over the internet.

A general rule of thumb we provide to clients is that for every 1TB of customer data you need 1 T-1 (or 1.55Mbps). A 20Mbps internet connection could support a 12TB environment.
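As a quick sanity check of that rule of thumb (a rough sketch only; real-world requirements depend on change rate and backup windows), the arithmetic works out like this:

# Rule of thumb: roughly 1.55 Mbps of upload bandwidth per TB of protected data
$linkMbps    = 20
$mbpsPerTB   = 1.55
$supportedTB = [math]::Floor($linkMbps / $mbpsPerTB)
"A $linkMbps Mbps link supports roughly $supportedTB TB of protected data"
# => roughly 12 TB, which matches the example above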

[Related: Use Archiving to reduce cloud backup costs]

Summary: Having too much data is rarely a concern for the right cloud backup solution.

4. “Incremental forever means hundreds of restores.”

Related to #3, people think “incremental forever” means lots of little restores. They think that if they have a year’s worth of backups, they will have to restore the first backup and then 364 other backups. This could not be further from the truth. Incremental backup software has the intelligence built in to assemble the data to any point in time. Restoring data to any point in time can easily be accomplished with just a few mouse clicks and a SINGLE restore operation.

Summary: Incremental forever does NOT mean many restores. Rather, a restore to any point in time can be made in a single operation.

5. “Too costly.”

Nothing is more costly than losing your business or data. Our pricing is based on the size of the backups, not the number of devices or servers being backed up. The storage size is also measured after deduplication and compression, which lowers costs.

Older archived data can be stored at lower cost as well, which allows you to align the cost of the backup with the value of the data. In many cases we can drastically reduce costs by moving older data to lower-cost tiers of backup storage.

In addition to the backups themselves, you receive expert management, monitoring, and support services from the service provider. For many clients utilizing an “unmanaged service,” backups go without being properly monitored and tested. Our services provide full expert support and monitoring at a much lower cost to you, without the worry of losing data.

Summary:  When you look at all aspects of backup and recovery, the costs can be easily justified.
