Issues Running Backups or Rescanning Repository After Veeam Upgrade

Just recently, right after upgrading to Veeam 9.5, we ran into an error with one of our customers that would show up whenever backups ran and whenever we tried to rescan the Veeam repository. The error messages were:

Warning Failed to synchronize Backup Repository Details: Failed to synchronize 2 backups Failed to synchronize 2 backups
Warning Failed to import backup path E:\PATH\TO\BACKUP\FILES\BackupJobName.vbm Details: Incorrect file system item link received: BackupJobName

Based on the errors, it looked like there was an issue with the database entries for the backup job mentioned in the error. As a troubleshooting step we tried removing the backup files from the GUI under Backup & Replication > Backups by right-clicking the backup and selecting 'Remove from Configuration.' However, this gave us the same error in a popup dialog:

Incorrect file system item link received: BackupJobName

After opening a ticket with Veeam, they informed us that this is a known issue caused by unexpected backup entries in the VeeamBackup SQL database. Namely, the problem backup jobs were listed in the database with an all-zero job ID or repository ID, which prevented Veeam from locating the backup files.

Because the next steps involve making changes to the VeeamBackup SQL database, it's best to back it up first. Veeam has a knowledge base article covering the suggested methods for backing up the database.

To determine whether there are any errant backup entries in the SQL database, run the following query:

SELECT TOP 1000 [id]
 ,[job_id]
 ,[job_name]
 ,[repository_id]
 FROM [VeeamBackup].[dbo].[Backup.Model.Backups]

Under 'repository_id' you should see one or more backups showing '00000000-0000-0000-0000-000000000000' or 'NULL' as the job or repository ID. Any entries with this issue will need to be removed from the database in order to resolve the error.
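If you'd rather see only the problem rows, the same query can be filtered on those values (this is just a convenience variant of the query above, not an official Veeam query):

SELECT [id]
 ,[job_id]
 ,[job_name]
 ,[repository_id]
 FROM [VeeamBackup].[dbo].[Backup.Model.Backups]
 WHERE [repository_id] = '00000000-0000-0000-0000-000000000000'
 OR [repository_id] IS NULL
 OR [job_id] = '00000000-0000-0000-0000-000000000000'
 OR [job_id] IS NULL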

It's best practice to save a backup of the SQL database before making any changes; if you're unsure how, follow Veeam's knowledge base article.

After backing up the SQL database, run the following queries for each backup showing '00000000-0000-0000-0000-000000000000' as the repository_id:

EXEC [dbo].[Backup.Model.DeleteBackupChildEntities] 'REPLACE_WITH_ID'
EXEC [dbo].[Backup.Model.DeleteBackup] 'REPLACE_WITH_ID'

Replace 'REPLACE_WITH_ID' with the 'job_id' of each backup that showed the unexpected 'repository_id' in the previous query.
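
To double-check that no problem rows remain, you can re-run the filtered query above, or a quick count like this one (again, just a convenience check, not part of Veeam's instructions); it should return 0:

SELECT COUNT(*) AS remaining_problem_backups
 FROM [VeeamBackup].[dbo].[Backup.Model.Backups]
 WHERE [repository_id] = '00000000-0000-0000-0000-000000000000'
 OR [repository_id] IS NULL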
After that, the issue with the local backup server was resolved.

However, we were still seeing errors when trying to connect the backup server to our cloud connect repository for backup copies. We were getting the following errors:

Warning Failed to synchronize Backup Repository Details: Failed to synchronize 2 backups Failed to synchronize 2 backups
Warning Failed to import backup path E:\PATH\TO\BACKUP\FILES\BackupJobName.vbm Details: Incorrect file system item link received: BackupJobName

To resolve this, we had to remove the entries for the problem jobs from the cloud connect server's database. If you use a cloud connect service provider, you will need to have them make these changes to their SQL database.

We had 2 VMs that were giving the 'Incorrect file system item link received: JobName' error, so we had to remove any entries for those jobs from the SQL database.

We ran the following query to get the Job ID of both jobs mentioned in the errors:

SELECT TOP 1000 [id]
 ,[job_id]
 ,[job_name]
 FROM [VeeamBackup].[dbo].[Backup.Model.Backups]
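
If you already know the job names from the error messages, you can also filter on them directly (the job names below are placeholders, substitute your own):

SELECT [id]
 ,[job_id]
 ,[job_name]
 FROM [VeeamBackup].[dbo].[Backup.Model.Backups]
 WHERE [job_name] IN ('BackupJobName1', 'BackupJobName2')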

Then we ran the same delete queries as before using the new job_id values:

EXEC [dbo].[Backup.Model.DeleteBackupChildEntities] 'REPLACE_WITH_ID'
EXEC [dbo].[Backup.Model.DeleteBackup] 'REPLACE_WITH_ID'

After those entries were deleted we were able to rescan the repository.

Lastly, once we rescanned our cloud repository and imported the existing backups, we started getting the following error message:

Warning Failed to import backup path E:\PATH\TO\BACKUP\FILES\BackupJobName.vbm Details: Path E:\PATH\TO\BACKUP\FILES\BackupJobName2016-11-21T220000.vib is absolute and cannot be added to other path

This error indicates that the backup chain isn't accurately resolving the location of the GFS restore points. To resolve it, we manually removed the imported backups from the cloud connect server by going to Backup & Replication > Backups, selecting the backup job, and choosing 'Remove from Configuration,' making sure to check 'Include archived full backups.' After the backups had been removed from the cloud connect repository, we were able to manually rescan the repository on the local backup server, and the backup files were imported again successfully.

Update: After deleting the affected rows using the commands above, you may get the following error message:

Unable to map tenant cache ID 'ID-GOES-HERE' to service provider side ID 'ID-GOES-HERE' because it is already mapped to ID 'ID-GOES-HERE'

If you do see this error, a solution can be found in this Veeam KB article. Essentially, the data for the deleted rows still exists in the Veeam DB cache table and needs to be removed. To do this, run the following query on the VeeamBackup database:

delete from [cachedobjectsidmapping]

This clears the contents of the cache table. To re-populate it, rescan the service provider and the cloud repository.
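
If you are curious what the cache table holds before you clear it, you can take a quick look first (purely optional, and not part of Veeam's fix):

SELECT TOP 100 *
 FROM [VeeamBackup].[dbo].[cachedobjectsidmapping]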

Veeam Backups with Long Term Retention

One of the features of Veeam Backup and Replication is the ability to not only perform an incremental backup job to local disk but also schedule a job to copy those increments to another location. These backup copy jobs are forever forward incremental, meaning that after the first full backup is created, only incremental data from that point forward is copied. Once the incremental chain hits the set limit of restore points, each time a new restore point is copied the oldest restore point is automatically merged (synthetically) into the full backup file. The default number of incremental restore points is 7, but this number can be increased. However, the longer the incremental chain is, the more data will be lost if the chain is broken by a corrupt restore point.

Managecast increases this to 14 daily incrementals by default so that 2 weeks of daily increments are ready for restore.

For longer retention, Veeam recommends using a backup copy job's built-in GFS retention to keep a set number of weekly, monthly, quarterly, and yearly full generations of the backup files. This means that the backup copy job will create a copy of the most recent full backup file and archive it according to the retention policy. Because these GFS restore points are individual full backup files, they do not rely on the incremental chain and will not be lost if the incremental chain is broken.

Unfortunately, this method can consume a large amount of storage very quickly, especially when it comes to long-term retention. For example, if a backup copy job is set to keep 7 incremental restore points, 4 weekly, and 12 monthly backups, it would need enough storage for the 7 incrementals and the current full backup file, as well as 15 additional full backup files.

A useful tool to figure out how much storage is needed is this restore point simulator.

One option to cut down on the amount of storage used is deduplication on the target backup repository. Because the GFS restore points are copies of the same full backup files, they deduplicate extremely well. Keep in mind that if the current full backup file is deduplicated, it will severely slow down the merge of the oldest incremental restore point into the current full backup file. To get around this, Managecast only deduplicates files older than 7 days, so the GFS restore points are only deduplicated a week after they are copied from the current full. This way the backup repository stores the daily incrementals, the current full backup file, and all of the deduplicated GFS full restore points, which takes up a little more than 2 times the full backup size plus the size of the incrementals.

Another important thing to note is that deduplication will not work on the backup files if they are encrypted. Because encryption changes the contents of the files, deduplication no longer sees the different backup files as similar data and cannot deduplicate them. Depending on how much storage the backups take up, this means choosing between encrypting the backup files and accepting a higher-cost long-term retention policy, or keeping the long retention policy and being able to deduplicate the full backup files.

In summary, VEEAM is a great product, but for long-term retention requirements it can really explode the size of backup storage required. Managecast will be reviewing the new VEEAM v9.5 in combination with Windows 2016 and the advanced ReFS file system to see if new de-duplication efficiency (with combined encryption) will help solve these issues. Stay tuned!

Update: Check out our post on Veeam 9.5 ReFS Integration for longer term retention!

 

Considering a low cost cloud backup solution?

Ouch, Carbonite is not having a good day.  I see some people choose these low cost cloud backup providers without realizing they are not the same as enterprise-class backup providers like Managecast. It would seem you get what you pay for.

Carbonite Forces Password Reset After Password Reuse Attack!

https://www.databreaches.net/carbonite-forces-password-reset-after-password-reuse-attack/

 

Top 5 Cloud Backup Myths

As a cloud backup managed service provider, we run into common myths surrounding cloud backup. We hope we can dispel some of these more pervasive, and incorrect, perceptions of cloud backup.

 

1. Cloud backup is “Not secure.”

One of the biggest concerns with cloud backup is the security and privacy of data. This is understandable in today's world where data breach headlines are everywhere. Ironically, your data is most likely many times more secure with an enterprise-grade cloud backup solution than with more traditional backup methods. When we encounter folks who think cloud is not secure, it is usually quickly apparent that their existing backup solution is far less secure. Traditional, customer-managed backup systems struggle with getting data offsite quickly and securely, and with managing media rotations, encryption, and other best practices that are not strictly adhered to. Security is a top priority for a cloud backup service provider, where these issues are handled as a matter of course.

Summary: Cloud backup providers have gone to great lengths to make sure their managed services are extremely secure. We offer highly encrypted services (AES 256-bit, FIPS 140-2 certified encryption) with the client in control of the encryption key. In other words, we do NOT store the encryption key unless you request us to, and therefore we do not have access to customer data.

2. “Restoring from the cloud takes too long.”

Most enterprise cloud backup systems have the option to store data locally as well as keep multiple copies offsite. 99% of the time, recoveries are made from local storage at LAN speed. It is rare that restoring data from the cloud is required.

In the rare event of a site disaster, in which the local backup has been compromised or destroyed, most business-class cloud backup service providers will provide the ability to ship your data on portable media (fully encrypted) within 24-48 hours. If that sounds like a long time, consider whether you will also have the spare equipment on hand to restore your data to. Some cloud backup service providers will also provide the ability to spin up recovered servers in the cloud for quick recovery.

Summary: Restoring massive amounts of backup data from the cloud is rare. Cloud backup service providers have a number of alternative methods to provide for quick recovery.

3. “Too much data to back up.”

While this statement is occasionally true, it rarely is. Backup administrators are used to legacy backup systems in which a full backup is made daily or weekly, and they assume full backups are also required for cloud backup. In reality, repeated full backups to the cloud are not a good practice and are impractical in most circumstances.

A business-class cloud backup system will support “incremental forever,” which means that after the first full backup only incremental backups are made. Incremental backups only send the data (at the block level) that has changed since the previous backup. This drastically reduces the amount of data that needs to be sent.

In addition, the first full backup is typically performed onto portable media such as a USB drive and shipped (encrypted) to the data center instead of being sent over the internet. This avoids pushing a large initial transfer of data over the internet.

A general rule of thumb we provide to clients is that for every 1TB of customer data you need about 1 T-1 (1.55Mbps) of bandwidth. A 20Mbps internet connection, for example, could support roughly a 12TB environment.

[Related: Use Archiving to reduce cloud backup costs]

Summary: Having too much data is rarely a concern for the right cloud backup solution.

4. “Incremental forever means hundreds of restores.”

Related to #3, people think “incremental forever” means lots of little restores. They think that if they have a year's worth of backups they will have to restore the first backup and then 364 other backups. This could not be further from the truth. Incremental backup software has the intelligence built in to assemble the data to any point in time. Restoring data to any point in time can easily be accomplished with just a few mouse clicks and a SINGLE restore operation.

Summary: Incremental forever does NOT mean many restores. Rather, a restore operation, to any point in time, can be made in one step.

5. “Too costly.”

Nothing is more costly than losing your business or data. Our solution is priced on the size of the backups, not the number of devices/servers being backed up. The storage size is also measured after de-duplication and compression, which lowers costs.

Older archived data can be stored at lower cost as well, which allows you to align the cost of the backup with the value of the data. In many cases we can drastically reduce costs by moving older data to lower-cost tiers of backup storage.

In addition to the backups, you are receiving expert management, monitoring, and support services from the service provider. With an “unmanaged service,” many backups go without being properly monitored, tested, and restored. Our services provide full expert support and monitoring at a much lower cost to you, without the worry of losing any data.

Summary:  When you look at all aspects of backup and recovery, the costs can be easily justified.


Managecast is now a proud member of the VEEAM Cloud Connect Partner Program

 


Managecast is a featured partner on VEEAM's list of service providers that offer the VEEAM Cloud Connect service. VEEAM Cloud Connect enables you to quickly and efficiently get your VEEAM backups offsite, safe and secure, so you can always recover your data no matter what!

Our services powered by VEEAM allow for:

  • Full integration with VEEAM
  • Secure, encrypted backup, replication, and restore
  • Full-site failover
  • Full and partial failback
  • Quick site failover testing
  • Fast recovery

Managecast is offering 30-day, no-obligation free trials, and enabling your existing VEEAM installation could not be easier. Get your existing VEEAM backups offsite using the familiar VEEAM management console. Managecast can also provide updated VEEAM software and licensing if required.

Our Cloud-based disaster recovery and offsite backup powered by VEEAM can now be easily used to provide offsite disaster recovery capabilities for your organization. Contact us for a free trial.


Tape is not dead, and why I finally bought a tape library

Being the “Cloud Backup Guy,” I've made a living off replacing tape. Tape is that legacy media, right? It's true that for most small to medium businesses, tape is hard to manage, expensive to rotate offsite, and has virtually been replaced by disk-to-disk (or disk-to-disk-to-cloud) technologies. However, I am finally willing to say tape definitely has its place.

Related article: Is Tape Dead?

Given that I have been so anti-tape for many years, I thought it was worth sharing when I finally decided that tape had its place. Don't get me wrong: I've had nearly 30 years of IT consulting experience, and in the old days I used nothing but tape, as it was the only real option for data protection. I've also had my share of bad experiences with tape (mostly the old 4mm and 8mm drives and tapes). I hated the stuff and never wanted to rely on it. Like many seasoned IT professionals, I have my share of tape backup nightmares to tell. When I got into the cloud backup business, the passion I had for disliking tape really helped me convince folks not to use it.

Now don't get me wrong, I think for most SMBs tape is dead. However, as your data volume grows, and I am talking 50TB+ of data, you cannot ignore the efficiency and cost effectiveness of good old tape. Tape has also come a long, long way over the years. Gone are the days of 4mm and 8mm DAT tapes. LTO, the clear tape standard of the modern era, is now at LTO-7, with a native capacity of 6TB+ (15TB compressed) per cartridge. LTO offers a reliable and cost-effective way to store huge quantities of data at a much lower cost than disk storage technology.

What brought about this decision to finally embrace tape?

The decision to choose tape became apparent as we were gobbling up more and more disk space for cloud backups. Our growth rate has been significant, and keeping up with backup growth meant buying more and more disk. It's not just the cost of the disk itself, but the rack space, power, cooling, and other costs associated with hundreds of spinning disks, plus the cost of replicating the data to another data center with even more spinning disks! A significant segment of our backup storage was consumed by long-term archival storage of older data, which continues to grow rapidly as data ages.

Related: Archiving – Align the value of your data with the cost to protect it

Our cloud backup solution allows tiering of the data so that older, less frequently used data can be pushed to longer-term archival storage. Once I weighed the cost of buying even more disk against the cost of a tape solution to store the ever-growing mountain of archive data, it became a no-brainer. Tape was the clear winner in that scenario.

Allow me to stress that I am not a proponent of tape except for the largest of companies or those who require long-term archiving of a large amount of data. It still introduces manual labor to swap and store tapes, take them offsite, and so on. For near- and medium-term data, we still keep everything stored on disk for quick and easy access. However, for long-term archival data we are using tape and love the stuff. The nice thing is that our customers still don't have to worry about using tape, as we manage everything for them.


The requested operation could not be completed due to a file system limitation (Asigra)

While trying to back up an Exchange database using Asigra, we were seeing the message “The requested operation could not be completed due to a file system limitation” after about 4 hours of backing up. This was an Exchange database backup (non-VSS), and it was copying the database to the DS-Client buffer. The Exchange database was 1TB+. The DS-Client was running on Windows 8.1.

The message:

The requested operation could not be completed due to a file system limitation  (d:\buffer\buf\366\1\Microsoft Information Store\database1\database1.edb)

Solution:

This error comes from a default NTFS limitation: very large, heavily fragmented files can run out of room in the file record that tracks their extents. To work around it, we had to reformat the buffer drive on the DS-Client with large file record segments enabled, using the command:

format d: /fs:ntfs /L /Q

After making this change we no longer experienced the error message and backups completed successfully.

Why You Need a Backup/Disaster Recovery MSP

From the one-person office to the largest enterprise and anywhere in between, every company and individual has information that needs to be managed, backed up, and stored. Many corporations and small companies are turning to an MSP (Managed Service Provider) to handle their backup and DR (Disaster Recovery) needs. This raises a question for businesses of any size: why do YOU need a backup/disaster recovery MSP?

The short answer is simple: expertise. The experts agree that DR planning requires “complex preparation and flawless execution.” This may not be achievable for an individual or company without the proper MSP. It is the MSP's responsibility to handle customer needs and monitor DR plans to minimize recovery time objectives (RTOs). We are all aware that disasters happen around us, whether it is a natural disaster or a service outage. It is the MSP's job to make sure that companies are prepared. Protecting company data is a vital task in today's digital world.

For most companies without an MSP, backing up data and disaster recovery can easily be neglected. The process of backup and disaster recovery is usually a part-time job within the company; there is seldom an individual whose sole task is data backup and recovery, even in an enterprise. Regardless of size, companies need full-time monitoring and planning for their backup/disaster recovery needs.

Other projects at work can easily sidetrack backups. Without an MSP, many people tend to neglect the company's backups in favor of other projects they are focused on at the office; the task gets put on the back burner for other company needs. With the help of an MSP, backups are never neglected.

In some cases, a company may assign backups to the person with the least experience. They may underestimate the importance of consistent monitoring of their data and pass the task down to “the new guy/girl.” Without the proper expertise provided by an MSP, the company could be at risk of losing data.

A part-time backup administrator is rarely an expert. It is crucial that backup and disaster recovery planning and monitoring are handled by a true expert for the sake of the company's data. Lack of knowledge leads to inefficient or problematic backups. Also, restores are rarely tested unless they are handled by an MSP.

How Managecast fixes these issues:

Managecast Technologies covers your company's backup and disaster recovery needs. We provide enterprise-class, industrial-grade backup to businesses of all sizes. Managecast uses top software providers and partners including Asigra, Veeam, and Zerto to ensure that data is fully monitored and protected, and we provide the expertise to execute proper DR planning and business continuity. Instead of leaving your company's backup/disaster recovery plan on the side, turn to the experts at Managecast Technologies. We assist with all aspects of setting up the backups, from retention rules and schedules to how to best protect the data in the most cost-effective manner.

Is Backup Tape Dead?

I just had someone contact me to ask whether I thought backup tape is dead.

Maybe 6 years ago I would have enthusiastically said “Yes!”, and I did say so many times. However, after spending the last 6 years dedicated to cloud backup and immersed in the backup industry, my views on tape have evolved.

Instead of asking “Is tape dead?”, the proper question is “Has the use of tape changed?” While tape is far from dead and very much alive, its use has substantially changed over the past 5 to 15 years. In the past, tape was the go-to medium for backups of all types. However, disk has certainly displaced a lot of tape when it comes to near-line storage of recently created backup data. Many modern backup environments consist of disk-to-disk backup, with backup data written to tape after some period of time for longer-term storage and archive.

Disk storage costs significantly more than tape storage, but for near-term backup data the advantages of disk outweigh the cost penalty. For long-term archiving of older data, where quick access is not needed, tape is the clear winner.

[Read about aligning the cost of data protection vs the value of the data]

In my experience, many SMBs have shifted to a disk-to-disk-to-cloud solution with no tape. So, in the SMB space one could argue that tape has largely died (or at least diminished greatly). However, at the enterprise level, or for organizations that require long-term retention of backup data, there is no better alternative than storing large amounts of data on tape, and this will probably remain the case for the next 10 years or beyond. So, no, tape is not dead, but its use has changed.

Asigra reporting “cannot allocate memory” during seed import

We have DS-Systems running on Linux. We attach the Windows seed backups to a Windows 7/8.1 machine and then use CIFS to mount the Windows share on Linux. The command we use on Linux to mount the Windows share is:

mount -t cifs //<ipaddress of windows machine>/<sharename> /mnt/seed -o username=administrator,password=xxxxxx

We were importing some large backup sets with millions of files and started noticing “cannot allocate memory” errors during the seed import process. When the import completed, it would indicate that not all files had been imported.

At first we thought this was an Asigra issue, but after much troubleshooting we found this was an issue with the Windows machine we were using and was related to using the CIFS protocol with Linux.

A sample link to the issue we were seeing is: http://linuxtecsun.blogspot.ca/2014/12/cifs-failed-to-allocate-memory.html

That link suggests making the following changes on the Windows machine:

regedit:

HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache (set to 1)

HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size (set to 3)

Alternatively, start Command Prompt in Admin Mode and execute the following:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v "LargeSystemCache" /t REG_DWORD /d 1 /f

reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v "Size" /t REG_DWORD /d 3 /f

Do one of the following for the settings to take effect:

Restart Windows

Restart the Server service via services.msc, or from the Command Prompt run 'net stop lanmanserver' followed by 'net start lanmanserver' (the Server service may restart automatically after being stopped).

After we made these changes the memory errors were resolved!