Do you need to backup Office 365 data?

Yes, here is why.

Many people take backup for granted when using Office 365. They assume Microsoft is surely protecting their Office 365 data thoroughly, so they don't have to worry about it. Think again.

Office 365 lacks the daily backup and archiving capability you may need once data has aged out and been permanently removed from the recycle bin, or when a user intentionally deletes their data.

Let’s take a look at a few facts:

Deletions – Maybe it's accidental, maybe it's purposeful, maybe it's malicious, but users delete data. Office 365 has a default retention of 14 days, which can be extended to 30. After that, the data is gone. Intentional deletions (and deleting from the recycle bin) are unrecoverable. Also, if a user account is deleted, its data is gone.

Ransomware/Malware – While Microsoft does have anti-malware protections in place, it does not guarantee that a user's data will not be corrupted by a malware or ransomware infection. Recovery from this scenario could be very painful and time consuming using only the built-in data protection measures, and ultimately there is no guarantee of recovery.

Liability – Microsoft contracts have strict limits on liability. In the case of Office 365 the liability is limited to $5,000 total. It would cost more just to walk into court, so effectively this is the same as no liability for your data.

Compliance – If you have strict compliance requirements (like keeping backups for 7, 10 or more years), then Microsoft is not providing the data protection you require. Even private companies without legal obligations often have lengthy retention policies. Requiring more than one month of history may already exceed what Microsoft provides.

Industry Analysts Recommend Backup – Organizations such as Gartner, Forrester, ESG and others recommend that clients review their own backup data retention requirements and determine whether additional Office 365 backup solutions are needed to meet their objectives.

Recommendation:

Ensure you can always easily recover your critical Office 365 emails, files and SharePoint sites by using a third-party backup tool to back up Office 365 data.

VEEAM 9.5 ReFS Integration

Veeam v9.5 was recently released, and with it came a large number of improvements and added features, most notably seamless integration with Windows Server 2016's new ReFS v3.1 file system. Veeam's integration with this version of ReFS adds fast cloning and spaceless full technology, meaning merges and synthetic operations require fewer resources and less time, and synthetic full backups take up significantly less storage.

Veeam 9.5 ReFS Benefits

Veeam 9.5 integration with Windows Server 2016 ReFS brings two significant benefits to synthetic and merge operations: Fast Cloning and Spaceless Full Technology.

Both of these rely on Veeam v9.5's integration with ReFS, which allows Veeam to use ReFS block cloning. Block cloning lets Veeam copy data blocks within files, or even from one file to another, very quickly. When data is copied, the file system does not create new copies of existing data blocks; instead, it creates pointers to the locations of the existing data.

–Fast Cloning with ReFS means that new full backups do not need to copy existing data and instead use pointers to existing backup data, significantly reducing the time required to perform synthetic operations, which can normally be very resource intensive.

–Space-less Full Backups are possible because new synthetic full backups are primarily made up of pointers to existing full backup data, significantly reducing the space required to store the backups.

One of the downsides of space-less fulls compared to a deduplicating storage device is that there is no global deduplication. Space-less fulls with ReFS only reduce storage usage between copies of the same full backup file. Even so, the space savings are tremendous:

[Screenshot: backup repository storage utilization after moving to ReFS]

In the screenshot above, we moved a customer with close to 1TB native backup size from an NTFS backup repository to a ReFS repository. After a month of weekly and monthly GFS backup copies, the utilized storage is less than half of the native file size. The customer already had full backups, and as those older GFS restore points are removed by retention and replaced with ReFS spaceless fulls, the utilized storage will continue to decrease.

–Encryption is also possible with ReFS space-less fulls. One of the downsides of deduplication is that backup files can't be encrypted without losing the benefits of deduplication. Because space-less fulls on ReFS work with Veeam's encryption, Veeam is able to encrypt the data while still providing the space-saving benefits of space-less full backups.

Adding ReFS Volumes as Veeam Repositories

To see the benefits of ReFS with Veeam, older repositories will need to be attached to a Windows Server 2016 server and formatted as ReFS. If you had previously added a Windows Server 2016 ReFS volume as a repository, it will need to be re-added after upgrading to Veeam v9.5 in order for Veeam to recognize it as an appropriate ReFS volume and enable the new features.

Important: Veeam's fast cloning and spaceless full technology only supports ReFS volumes created on Windows Server 2016; volumes formatted as ReFS on Windows Server 2012 will not see the benefits because Server 2012 uses an older version of ReFS.

Any restore points created prior to the v9.5 upgrade will not see the new benefits. To utilize fast cloning and spaceless fulls, all full and incremental backups involved in synthetic operations must have been created using Veeam v9.5 with a Windows Server 2016 ReFS repository. This means the benefits of fast cloning and spaceless fulls will not apply when you first copy older backup data into the new ReFS repository. Therefore, for existing backup or backup copy chains to begin seeing these benefits, either an active or synthetic full (including a backup copy synthetic GFS) will need to be performed. The next time a synthetic operation runs, the [fast clone] tag will be displayed next to the synthetic operation in the job activity logs, along with a corresponding increase in the speed of the operation.

Issues Running Backups or Rescanning Repository After Veeam Upgrade

Just recently, right after upgrading to Veeam 9.5, we ran into an error with one of our customers that would appear whenever backups started to run and when we tried to rescan the Veeam repository. The error messages were:

Warning Failed to synchronize Backup Repository Details: Failed to synchronize 2 backups Failed to synchronize 2 backups
Warning Failed to import backup path E:\PATH\TO\BACKUP\FILES\BackupJobName.vbm Details: Incorrect file system item link received: BackupJobName

Based on the errors, it looked like there were issues with the database entries for the backup job mentioned in the error. As a troubleshooting step we tried removing the backup files from the GUI under Backup & Replication > Backups by right-clicking the backup and selecting 'Remove from Configuration.' However, this gave us the same error in a pop-up dialog:

Incorrect file system item link received: BackupJobName

After opening a ticket with Veeam, they informed us that this is a known issue caused by unexpected backup entries in the VeeamBackup SQL database. Specifically, the problem backup jobs were listed in the database with a zero or NULL entry for the job ID or repository ID, which prevented Veeam from locating the backup files.

Because the next steps involve making changes to the Veeam SQL database, it's best to back the database up first. Veeam has a knowledge base article that covers the suggested methods for backing it up.

To determine whether there are any errant backup entries in the SQL database, run the following query:

-- List backups with their job and repository IDs from the Veeam configuration database
SELECT TOP 1000 [id]
 ,[job_id]
 ,[job_name]
 ,[repository_id]
 FROM [VeeamBackup].[dbo].[Backup.Model.Backups]

Under 'job_id' or 'repository_id' you should see one or more backups showing '00000000-0000-0000-0000-000000000000' or 'NULL'. Any entries with this issue will need to be removed from the database in order to resolve the error. A filtered version of the same query, shown below, makes the problem rows easier to spot.
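This is a minimal sketch based on the table and values above; it simply limits the output to rows with an all-zero or NULL repository_id:

-- Show only backups whose repository_id is all zeros or NULL
SELECT [id]
 ,[job_id]
 ,[job_name]
 ,[repository_id]
 FROM [VeeamBackup].[dbo].[Backup.Model.Backups]
 WHERE [repository_id] = '00000000-0000-0000-0000-000000000000'
 OR [repository_id] IS NULL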

It's best practice to save a backup of the SQL database before making any changes. If you're unsure how to back up the SQL database, follow Veeam's knowledge base article.

After backing up the SQL database run the following query for each job with ‘00000000-0000-0000-0000-000000000000’ as the repository_id:

EXEC [dbo].[Backup.Model.DeleteBackupChildEntities] 'REPLACE_WITH_ID'
EXEC [dbo].[Backup.Model.DeleteBackup] 'REPLACE_WITH_ID'

Replace 'REPLACE_WITH_ID' with the 'job_id' of any backups that have the unexpected 'repository_id' found with the previous query. If several backups are affected, the sketch below can generate the statements for review instead of typing each ID by hand.
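This is a hedged sketch, not an official Veeam procedure: it follows the substitution described above (using the 'job_id') and only builds the EXEC statements as text so they can be reviewed before anything is executed.

-- Generate (but do not run) the cleanup statements for each affected backup
-- Review the output carefully before executing any of it
SELECT 'EXEC [dbo].[Backup.Model.DeleteBackupChildEntities] ''' + CONVERT(varchar(36), [job_id]) + '''' + CHAR(13) + CHAR(10)
 + 'EXEC [dbo].[Backup.Model.DeleteBackup] ''' + CONVERT(varchar(36), [job_id]) + ''''
 AS [statements_to_review]
 FROM [VeeamBackup].[dbo].[Backup.Model.Backups]
 WHERE [repository_id] = '00000000-0000-0000-0000-000000000000'
 OR [repository_id] IS NULL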
After that, the issues with the local backup server were resolved.

However, we were still seeing errors when trying to connect the backup server to our cloud connect repository to do backup copies:

Warning Failed to synchronize Backup Repository Details: Failed to synchronize 2 backups Failed to synchronize 2 backups
Warning Failed to import backup path E:\PATH\TO\BACKUP\FILES\BackupJobName.vbm Details: Incorrect file system item link received: BackupJobName

To resolve this, we had to remove the entries for the problem jobs from the cloud connect server's database. If you use a cloud connect service provider, you will need to have them make these changes to their SQL database.

We had two VMs that were giving the 'Incorrect file system item link received: JobName' error, so we had to remove any entries for those jobs from the SQL database.

We ran the following query to get the Job ID of both jobs mentioned in the errors:

-- List backups with their job IDs on the cloud connect server
SELECT TOP 1000 [id]
 ,[job_id]
 ,[job_name]
 FROM [VeeamBackup].[dbo].[Backup.Model.Backups]
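If the Backups table is large, a filtered variant narrows the output to the jobs named in the errors. This is a sketch; 'BackupJobName' is the placeholder job name from the error messages above, not a real value:

-- Filtered lookup for the specific job named in the error message
SELECT [id]
 ,[job_id]
 ,[job_name]
 FROM [VeeamBackup].[dbo].[Backup.Model.Backups]
 WHERE [job_name] = 'BackupJobName'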

Then we ran the same delete query as before using the new job_ids:

EXEC [dbo].[Backup.Model.DeleteBackupChildEntities] 'REPLACE_WITH_ID'
EXEC [dbo].[Backup.Model.DeleteBackup] 'REPLACE_WITH_ID'

After those entries were deleted we were able to rescan the repository.

Lastly, once we rescanned our cloud repository and imported the existing backups, we started getting the following error message:

Warning Failed to import backup path E:\PATH\TO\BACKUP\FILES\BackupJobName.vbm Details: Path E:\PATH\TO\BACKUP\FILES\BackupJobName2016-11-21T220000.vib is absolute and cannot be added to other path

This error indicates that the backup chain isn't accurately resolving the location of the GFS restore points. To resolve it, we manually removed the imported backups from the cloud connect server by going to Backup & Replication > Backups, selecting the backup job, and choosing 'Remove from Configuration,' making sure to check 'Include archived full backups.' After the backups had been removed from the cloud connect repository, we were able to manually rescan the repository on the local backup server and the backup files were imported again successfully.

Update: After deleting these affected rows using the commands above, you may get the following error message:

Unable to map tenant cache ID 'ID-GOES-HERE' to service provider side ID 'ID-GOES-HERE' because it is already mapped to ID 'ID-GOES-HERE'

If you do see this error, a solution can be found in this Veeam KB article. Essentially, the data for these deleted rows still exists in the Veeam DB cache table and needs to be removed. To remove it, run the following query on the VeeamBackup database:

delete from [cachedobjectsidmapping]

This clears the DB cache table. To re-populate it, you will need to rescan the service provider and the cloud repository.
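As a sanity check, you can inspect the table before clearing it (or confirm it is empty afterward). This is a minimal sketch run against the same VeeamBackup database as the delete statement above:

-- Quick look at the cloud tenant ID mapping cache before/after clearing it
SELECT TOP 100 *
 FROM [cachedobjectsidmapping]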

Veeam Backups with Long Term Retention

One of the features of Veeam Backup & Replication is the ability not only to perform an incremental backup job to local disk but also to schedule a job that copies those increments to another location. These backup copy jobs are forever forward incremental, meaning that after the first full backup is created, only incremental data from that point forward is copied. Once the incremental chain hits the set limit of restore points, each time a new restore point is copied, the oldest restore point is automatically merged synthetically into the full backup file. The default number of incremental restore points is 7, but this number can be increased. However, the longer the incremental chain is, the more data will be lost if the chain is broken by a corrupt restore point.

Managecast increases this to 14 daily incrementals by default so that two weeks of daily increments are ready for restore.

For longer retention, Veeam recommends using a backup copy job's built-in GFS retention to keep a set number of weekly, monthly, quarterly, and yearly full generations of the backup files. This means the backup copy job will create a copy of the most recent full backup file and archive it according to the retention policy. Because these GFS restore points are individual full backup files, they do not rely on the incremental chain and will not be lost if the incremental chain is broken.

Unfortunately, this method can consume a large amount of storage very quickly, especially for long-term retention. For example, if a backup copy job is set to keep 7 incremental restore points, 4 weekly, and 12 monthly backups, it would need enough storage for the 7 incrementals and the current full backup file, as well as 15 additional full backup files.

A useful tool to figure out how much storage is needed is this restore point simulator.

One option to cut down on the amount of storage used is to enable deduplication on the target backup repository. Because the GFS restore points, or full backup copies, are copies of the same backup files, they deduplicate extremely well. Keep in mind that if the current full backup file is deduplicated, it will severely slow down the merge of the oldest incremental restore point into the current full backup file. To get around this, Managecast only deduplicates files older than 7 days, meaning the GFS restore points are only deduplicated a week after they are copied from the current full. This way the backup repository stores the daily incrementals, the current full backup file, and all of the deduplicated GFS full restore points, which takes up a little more than two times the full backup size plus the size of the incrementals.

Another important thing to note is that deduplication will not work on backup files that are encrypted. Because encryption changes the contents of each file, deduplication no longer sees the different backup files as similar data and cannot deduplicate them. Depending on how much storage the backups consume, this forces a choice between encrypting the backup files and accepting a higher-cost long-term retention policy, or keeping the long retention policy and retaining the ability to deduplicate the full backup files.

In summary, VEEAM is a great product, but long-term retention requirements can really explode the amount of backup storage required. Managecast will be reviewing the new VEEAM v9.5 in combination with Windows Server 2016 and the advanced ReFS file system to see if the new deduplication efficiency (combined with encryption) will help solve these issues. Stay tuned!

Update: Check out our post on Veeam 9.5 ReFS Integration for longer term retention!

Considering a low cost cloud backup solution?

Ouch, Carbonite is not having a good day. I see some people choose these low-cost cloud backup providers without realizing they are not the same as enterprise-class backup providers like Managecast. It would seem you get what you pay for.

Carbonite Forces Password Reset After Password Reuse Attack!

https://www.databreaches.net/carbonite-forces-password-reset-after-password-reuse-attack/

Top 5 Cloud Backup Myths

Being a cloud backup managed service provider, we run into common myths surrounding cloud backup. We hope we can dispel some of the more pervasive, and incorrect, perceptions of cloud backup.

1. Cloud backup is “Not secure.”

One of the biggest concerns with cloud backup is the security and privacy of data. This is understandable in today's world, where data breach headlines are everywhere. Ironically, with the right enterprise-grade cloud backup solution, your data is most likely many times more secure than with traditional backup methods. When we encounter folks who think the cloud is not secure, it is usually quickly apparent that their existing backup solution is far less secure. Traditional, customer-managed backup systems struggle to get data offsite quickly and securely, and media rotation, encryption, and other best practices are often not strictly followed. For a cloud backup service provider, security is a top priority and these issues are handled as a matter of course.

Summary: Cloud backup providers have gone to great lengths to make sure their managed services are extremely secure. We offer highly encrypted services (AES 256-bit, FIPS 140-2 certified encryption) with the client in control of the encryption key. In other words, we do NOT store the encryption key unless you request us to, and therefore we do not have access to customer data.

2. “Restoring from the cloud takes too long.”

Most enterprise cloud backup systems have the option to store data locally as well as keeping multiple copies offsite. 99% of the time, recoveries are made from local storage at LAN speed. It is rare that restoring data from the cloud is required.

In the rare event of a site disaster, in which the local backup has been compromised or destroyed, most business-class cloud backup service providers will provide the ability to ship your data on portable media (fully encrypted) within 24-48 hours. If that sounds like a long time, consider whether you will also have the spare equipment on hand to restore your data to. Some cloud backup service providers will also provide the ability to spin up recovered servers in the cloud for quick recovery.

Summary: Restoring massive amounts of backup data from the cloud is rare. Cloud backup service providers have a number of alternative methods to provide for quick recovery.

3. “Too much data to back up.”

While this statement is occasionally true, it rarely is. Backup administrators are used to legacy backup systems in which a full backup is made daily or weekly, and they assume full backups are also required for cloud backup. In fact, repeatedly sending full backups to the cloud is not good practice and is impractical in most circumstances.

A business-class cloud backup system will support “incremental forever,” which means that after the first full backup, only incremental backups are made. Incremental backups send only the data (at the block level) that has changed since the previous backup. This drastically reduces the amount of data that needs to be backed up.

In addition, the first full backup is typically performed onto portable media such as a USB drive and shipped (encrypted) to the data center instead of being sent over the internet. This avoids a large transfer of data over the internet.

A general rule of thumb we provide to clients is that for every 1TB of protected data you need about 1 T-1 (1.55Mbps) of bandwidth. Sustained around the clock, 1.55Mbps moves roughly 16GB per day, which is typically enough to cover the compressed, deduplicated daily change of a 1TB environment. By that ratio, a 20Mbps internet connection could support a roughly 12TB environment.

[Related: Use Archiving to reduce cloud backup costs]

Summary: Having too much data is rarely a concern for the right cloud backup solution.

4. “Incremental forever means hundreds of restores.”

Related to #3, people think “incremental forever” means lots of little restores. They think that if they have a year's worth of backups, they will have to restore the first backup and then 364 other backups. This could not be further from the truth. Incremental backup software has the intelligence built in to assemble the data to any point in time. Restoring data to any point in time can easily be accomplished with just a few mouse clicks and a SINGLE restore operation.

Summary: Incremental forever does NOT mean many restores. Rather, a restore to any point in time can be made in a single operation.

5. “Too costly.”

Nothing is more costly than losing your business or your data. Our pricing is based on the size of the backups, not the number of devices/servers being backed up. The storage size is also measured after deduplication and compression, which lowers costs.

Older archived data can be stored at lower cost as well, which allows you to align the cost of the backup with the value of the data. In many cases we can drastically reduce costs by moving older data to lower-cost tiers of backup storage.

In addition to the backups, you receive expert management, monitoring, and support services from the service provider. With an “unmanaged service,” many backups go without being properly monitored, tested, and restored. Our services provide full expert support and monitoring at a much lower cost, without the worry of losing any data.

Summary: When you look at all aspects of backup and recovery, the costs are easily justified.


Managecast is now a proud member of the VEEAM Cloud Connect Partner Program

Managecast is a featured partner on VEEAM's list of service providers offering VEEAM Cloud Connect services. VEEAM Cloud Connect enables you to quickly and efficiently get your VEEAM backups offsite, safe and secure, so you can always recover your data no matter what!

Our services powered by VEEAM allow for:

  • Full integration with VEEAM
  • Secure, encrypted backup, replication, and restore
  • Full-site failover
  • Full and partial failback
  • Quick site failover testing
  • Fast recovery

Managecast is offering 30-day, no-obligation free trials, and enabling your existing VEEAM installation could not be easier. Get your existing VEEAM backups offsite using the familiar VEEAM management console. Managecast can also provide updated VEEAM software and licensing if required.

Our cloud-based disaster recovery and offsite backup services powered by VEEAM can now easily provide offsite disaster recovery capabilities for your organization. Contact us for a free trial.


Tape is not dead, and why I finally bought a tape library

Being the “Cloud Backup Guy,” I've made a living off replacing tape. Tape is that legacy media, right? It's true that for most small to medium businesses, tape is hard to manage, expensive to rotate offsite, and has virtually been replaced by disk-to-disk (or disk-to-disk-to-cloud) technologies. However, I am finally willing to say tape definitely has its place.

Related article: Is Tape Dead?

Given that I have been so anti-tape for many years, I thought it was news worth sharing when I finally decided that tape had its place. Don't get me wrong: I've had nearly 30 years of IT consulting experience, and in the old days I used nothing but tape, as it was the only real option for data protection. I've also had my share of bad experiences with tape (mostly the old 4mm and 8mm drives and tapes). I hated the stuff and never wanted to rely on it. Like many seasoned IT professionals, I have plenty of tape backup nightmares to tell. When I got into the cloud backup business, that passion for disliking tape really helped me convince folks not to use it.

Now don't get me wrong, I think for most SMBs tape is dead. However, as your data volume grows, and I am talking 50TB+ of data, you cannot ignore the efficiency and cost effectiveness of good old tape. Tape has also come a long, long way over the years. Gone are the days of 4mm and 8mm DAT tapes. LTO, the clear tape standard for the modern era, now boasts LTO-7 with a native capacity of 6TB (15TB compressed) per tape cartridge. LTO offers a reliable and cost-effective way to store huge quantities of data at a much lower cost than disk storage technology.

What brought about this decision to finally embrace tape?

The decision to choose tape became apparent as we were gobbling up more and more disk space for cloud backups. Our growth rate has been significant, and trying to keep up with backup growth meant buying more and more disk. It's not just the cost of the disk we had to buy, but the rack space, power, cooling, and other costs associated with hundreds of spinning disks, plus the cost of replicating the data to another data center with even more spinning disks! A significant segment of our backup storage was consumed by long-term archival storage of older data, which continued to grow rapidly as data aged.

Related: Archiving – Align the value of your data with the cost to protect it

Our cloud backup solution allows tiering of the data so that older, less frequently used data can be pushed to longer-term archival storage. Once I weighed the cost of buying even more disk against the cost of a tape solution to store the ever-growing mountain of archive data, it became a no-brainer. Tape was the clear winner in that scenario.

Allow me to stress that I am not a proponent of tape except for the largest of companies, or others who require long-term archival of a large amount of data. It still introduces manual labor to swap and store tapes, take them offsite, and so on. For near- and medium-term data, we still keep everything stored on disk for quick and easy access. However, for long-term archival data, we are using tape and love the stuff. The nice thing is that our customers still don't have to worry about using tape, as we manage everything for them.


The requested operation could not be completed due to a file system limitation (Asigra)

While trying to back up an Exchange database using Asigra, we were seeing the message “The requested operation could not be completed due to a file system limitation” after about four hours of backing up. This was an Exchange database backup (non-VSS), and it was copying the database to the DS-Client buffer. The Exchange database was 1TB+. The DS-Client was running on Windows 8.1.

The message:

The requested operation could not be completed due to a file system limitation  (d:\buffer\buf\366\1\Microsoft Information Store\database1\database1.edb)

Solution:

By default, NTFS is formatted with small file record segments, which can trigger this error when very large, heavily fragmented files (such as a 1TB+ Exchange database copy in the DS-Client buffer) are written. We had to reformat the buffer drive on the DS-Client with large file record segments (the /L switch) using the command:

format d: /fs:ntfs /L /Q

After making this change we no longer experienced the error message and backups completed successfully.

Why You Need a Backup/Disaster Recovery MSP

From the one-person office to the largest enterprise and everywhere in between, every company and individual has information that needs to be managed, backed up, and stored. Many corporations and small companies are turning to an MSP (Managed Service Provider) to handle their backup and DR (disaster recovery) needs. This raises a question for businesses of any size: why do YOU need a backup/disaster recovery MSP?

The short answer is simple: expertise. Experts agree that DR planning requires “complex preparation and flawless execution.” That is difficult for an individual or company to execute without the proper MSP. It is the MSP's responsibility to handle customer needs and monitor DR plans to minimize RTOs (recovery time objectives). We are all aware that disasters happen around us, whether a natural disaster or a service outage. It is the MSP's job to make sure that companies are prepared. Protecting company data is a vital task in today's digital world.

For most companies without an MSP, backing up data and disaster recovery can easily be neglected. The process of backup and disaster recovery is usually a part-time job within the company; there is seldom an individual whose sole task is data backup and recovery. Regardless of size, companies need full-time monitoring and planning for their backup and disaster recovery needs.

Other projects at work can easily sidetrack backups. Without an MSP, people tend to neglect the company's backups in favor of the projects they are focused on at the office, and the task gets put on the back burner. With the help of an MSP, backups are never neglected.

On some occasions, a company may designate the person with the least experience to manage backups. They may underestimate the importance of consistently monitoring their data, so they pass the task down to “the new guy/girl.” Without the proper expertise provided by an MSP, the company could be at risk of losing data.

A part-time backup administrator is rarely an expert. It is crucial that backup and disaster recovery planning and monitoring are handled by a true expert for the sake of the company's data. Lack of knowledge leads to inefficient or problematic backups, and restores are rarely tested unless they are handled by an MSP.

How Managecast fixes these issues:

Managecast Technologies covers your company's backup and disaster recovery needs. We provide enterprise-class, industrial-grade backup to businesses of all sizes. Managecast uses top software providers and partners, including Asigra, Veeam, and Zerto, to ensure that data is fully monitored and stored. We provide the expertise needed for proper DR planning and business continuity. Instead of putting your company's backup/disaster recovery plan on the side, turn to the experts at Managecast Technologies. We assist with all aspects of setting up the backups, from retention rules and schedules to how best to protect the data in the most cost-effective manner.