Asigra reporting “cannot allocate memory” during seed import

We run DS-Systems on Linux. We attach the Windows seed backups to a Windows 7/8.1 machine and then use CIFS to mount the Windows share on Linux. The command we use on Linux to mount the Windows share is:

mount -t cifs //<ipaddress of windows machine>/<sharename> /mnt/seed -o username=administrator,password=xxxxxx

We were importing some large backup sets with millions of files and started noticing “cannot allocate memory” errors during the seed import process. When the import would complete it would indicate that not all files were imported.

At first we thought this was an Asigra issue, but after much troubleshooting we found this was an issue with the Windows machine we were using and was related to using the CIFS protocol with Linux.

A sample link to the issue we were seeing is:

That link recommends making the following changes on the Windows machine:


HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache (set to 1)

HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size (set to 3)

Alternatively, start Command Prompt in Admin Mode and execute the following:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v "LargeSystemCache" /t REG_DWORD /d 1 /f

reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v "Size" /t REG_DWORD /d 3 /f

Do one of the following for the settings to take effect:

Restart Windows

Restart the Server service via services.msc, or from the Command Prompt run "net stop lanmanserver" followed by "net start lanmanserver". Note that the Server service may automatically restart after stopping it.

After we made these changes the memory errors were resolved!

Asigra slow seed import

We recently discovered that Asigra DS-System v13.0.0.5 seems to have a serious problem with importing seed backups. This problem exposed itself as we attempted to import 5.5TB of seed data. We then performed additional testing by backing up a small Windows 2008 server. The seed backup was a little under 3GB. On v13.0.0.5 the seed import took 55 minutes. On the same infrastructure, the same server seed backup imported into a v12.2.1 DS-System in less than 3 minutes.

We are also seeing the error “cannot allocate memory” during the seed import process, even though we have plenty of free RAM and disk space.

We have notified Asigra and they are attempting to reproduce the problem.

Update 12/4/2015

In testing, and working with Asigra, we have found that if you create the seed backup without the metadata encryption option, the seed import completes at an acceptable speed.

Update 12/8/2015

Asigra released DS-System v13.0.0.10 to address this issue. Testing shows it does indeed solve the speed issue. Thanks Asigra!

Zerto backup fails unexpectedly

We had a recent issue with Zerto backups that took some time to remedy. There was a combination of issues that exposed the problem, and here is a run down of what happened.

We had a customer with about 2TB of VMs replicating via Zerto. We wanted to provide backup copies using the Zerto backup capability. Keep in mind Zerto is primarily a disaster recovery product, not a backup product (read more about that here: Zerto Backup Overview). The replication piece worked flawlessly, but we were trying to create longer-term backups of virtual machines using Zerto’s backup mechanism, which is distinct from Zerto replication.

Zerto performs a backup by writing all of the VMs within a VPG to a disk target. It’s a full copy, not an incremental, so it’s a large backup every time it runs, especially for a VPG holding a lot of VMs. We originally used a 1 Gigabit network to transfer this data, but quickly learned we needed to upgrade to 10 Gigabit to accommodate these frequent large transfers.

However, we found that most of the time the backup would randomly fail. The failure message was:

“Backup Protection Group ‘VPG Name’. Failure. Failed: Either a user or the system aborted the job.”

To try to resolve the issue we opened several support cases with Zerto, upgraded from version 3.5 to v4, implemented 10 Gigabit networking, and put the backup repository directly on the Zerto Manager server.

After opening several cases with Zerto, we finally had a Zerto support engineer thoroughly review the Zerto logs. They found frequent disconnection events. With this information we explored the site-to-site VPN configuration and found minor mismatches in the IPsec configuration on each side of the VPN, which were causing very brief disconnections. These disconnections were causing the backup to fail. Lesson learned: it’s important to ensure the VPN endpoints are configured identically. We use VMware vShield to establish the VPN connections, and vShield doesn’t provide much flexibility to change VPN settings, so we had to change the customer’s VPN configuration to match the vShield configuration.

Even though we seemed to have solved the issue by fixing the VPN settings, we asked Zerto if there was any way to make sure the backup process ran even if there was a connection problem. They shared with us a tidbit of information that has enabled us to achieve 100% backup success:

There is a tweak that can be implemented on the ZVM which allows the backup to continue in the event of a disconnection, but there’s a drawback: the ZVMs will remain disconnected until the backup completes. As of now, there’s no way to both let the backup continue and let the ZVMs reconnect. For this customer, it was an acceptable risk that replication would stop for a window of time in order to make a good backup. In our case we ran the backup on Sunday, when RPO wasn’t as critical, and even then replication only halts if there is a disconnection between the sites, which became even more rare once we fixed the VPN configuration.

The tweak:

  1. On the Recovery (target) ZVM, open the file C:\Program Files (x86)\Zerto\Zerto Virtual Replication\tweaks.txt (it may be on another drive, depending on the install).
  2. In that file, insert the following string (on a new line if the file is not empty):
     t_skipClearBlockingLine = 1
  3. Save and close the file, then restart the Zerto Virtual Manager and Zerto Virtual Backup Appliance services.

Now, when you run a backup, either scheduled or manual, any ZVM <-> ZVM disconnection events should not cause the backup to stop.

I hope this helps someone else!

Zerto Backup Overview

Zerto is primarily a disaster recovery solution that relies on a relatively short-term journal, which retains data for a maximum of 5 days (at great expense in disk storage). Many Zerto installations use only a 4-hour journal to minimize the storage needed for the journal. Zerto is a great disaster recovery solution, but not as great as a backup solution. Many customers will augment Zerto with a backup solution for long-term retention of past data.

Long-term retention is the ability to go back to previous versions of data, which is often needed for compliance reasons. Think about the ability to go back weeks, months, and even years to past versions of data. Even if not driven by compliance, the need to go back in time to view past versions of data is very useful in situations such as:

  • Cryptolocker-type ransomware corrupts your data, and the corrupted data is replicated to the DR site.
  • Legal discovery – for example, reviewing email systems as they were months or even years ago.
  • Inadvertent overwriting of critical data, such as a report that is updated quarterly. Clicking “Save” instead of “Save As” is a good example of how this can happen.
  • Unexpected deletion of data that takes time to recognize.

For reference and further clarification, check out the differences between disaster recovery, backup and business continuity.

Even though Zerto is primarily a disaster recovery product, it does have some backup functions.

Zerto backup functionality involves making an entire copy of all of the VMs within a VPG. We sometimes break up VPGs with the goal of facilitating efficient backups. One big VPG results in one big backup, which can take many hours (or days) to complete. Since it’s an entire copy of the VPG, it can take a significant amount of time and storage space to store the copy. Each backup is a full backup; no incremental/differential backup capability currently exists within Zerto.

It is also advisable to write the backups to a location that supports deduplication, such as Windows Server 2012. It still takes time to write the backup, but deduplication will dramatically lower the required storage footprint for backing up Zerto VPGs. Without deduplication on the backup storage you will see a large amount of storage consumed by each full backup of the VPGs.
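To put some rough numbers on this, here is a back-of-the-envelope sketch. The VPG size, retention count, and dedup ratio below are illustrative assumptions, not measurements from any real environment:

```shell
# Hypothetical numbers for illustration only: a 2 TB VPG backed up weekly,
# retaining 12 full copies, with an assumed 10x dedup ratio since the
# repeated full backups are largely identical.
vpg_tb=2
fulls_retained=12
dedup_ratio=10

# Raw footprint: every backup is a complete copy of the VPG
raw_tb=$(( vpg_tb * fulls_retained ))
echo "Without dedup: ${raw_tb} TB consumed by ${fulls_retained} full backups"

# Deduplicated footprint under the assumed ratio
awk -v raw="$raw_tb" -v r="$dedup_ratio" \
    'BEGIN { printf "With ~%dx dedup: about %.1f TB\n", r, raw / r }'
```

Even at a modest dedup ratio the difference is dramatic, which is why a dedup-capable repository is worth the setup effort for full-copy backups like these.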

Zerto supports the typical grandfather-father-son backup scheme with daily, weekly, and monthly backups for 1 year. Zerto currently does not support backups past 1 year, so even with Zerto backups, the long-term retention of data is not as good as with products designed to be backup products. However, Zerto really shines as a disaster recovery tool when you need quick access to the latest version of your servers. Its backup capabilities will get better with time.

The difference between disaster recovery, backup and business continuity

I sometimes see words like backup and disaster recovery used interchangeably, and sometimes in the wrong context. I see customers asking for a DR solution when they need both backup and DR. Some in the industry refer to the combination as BDR (Backup & Disaster Recovery).

So what’s the difference? Why should you care?

Disaster Recovery

Disaster recovery is about restoring critical IT functions quickly after a disaster. Disasters range from something small, like a critical server failing, to natural disasters like fire, flooding, tornadoes, and hurricanes, to man-made disasters such as construction accidents, theft, sabotage, and chemical spills that may render your entire site unusable. The idea of DR is to restore critical IT services as quickly as possible after a disaster. Obviously this can encompass much more than just data recovery; a comprehensive DR plan might include alternate sites, spare hardware, etc.

Backup

Backup, on the other hand, can include the ability to perform a rapid recovery – yes, disaster recovery – but it can also provide access to the past history of your backed-up data. That is the big distinction between backup and disaster recovery. There are some really great disaster recovery products that provide a very quick recovery to a very recent copy of a server in the case of disaster, but were never designed to provide data from even 2 weeks ago, much less 6 months or years ago.

In addition to restoring the most recent version of a file, backup allows access to older, past versions of files. Older versions allow you to recover from data loss that occurred in the past but is noticed in the present. A Cryptolocker-type ransomware infection is a good example: the latest backups may contain infected files, and a restore is required from before the infection. It’s also very easy to bring up a monthly report in Word and select “Save” instead of “Save As”, overwriting the original document. Without keeping copies of past versions, we could potentially lose valuable data.

Some organizations are mandated by law to keep copies of their older data as well. Think medical providers who need to keep past patient data for years.

Backup data can be current data used for DR, but it’s also the past versions of data and being able to reproduce data as it was back to a certain point in time can be of enormous value.

Business Continuity

Business Continuity is generally defined as the process by which an organization can continue essential business functions despite a disaster. A comprehensive business continuity plan is far more than just restoring servers and data, and often includes things that are not IT related at all.

The business operation needs of every organization can be different. Some business are highly dependent on phone service to take calls from customers for instance, while some businesses require specialized equipment that is not easily replaced, or replaced quickly. Who are the critical employees and what functions do they perform? Where will employees work if the office is unavailable? There are many, many questions to ask in order to create an effective business continuity plan, and data recovery is only one of many areas of concern.

Asigra BLM Archiving – Align the value of your data with the cost to protect it

Years ago, we treated all data as equal. All data originated on one type of storage and stayed there until it was deleted. We now understand that not all data is created equal. Some types of data are more important than others, or accessed more frequently than others. Backup Lifecycle Management (BLM) is the concept that data is created on one storage tier, then migrated to less expensive storage tiers as it ages.

Asigra Backup Tiers


For example:

Data that is 2 minutes old is highly valued.
Data that is 2 months old may be of interest but is not as highly valued.
Data that is 2 years old may be needed for records but it is not critical to the daily functioning of the company.

DS-System – Primary Storage – Business-Critical Operational Data

Business-critical operational data comprises the files, databases, email systems, etc., that are needed for operations on a day-to-day basis. All data that is critical to business operations should be stored in the DS-System tier.

BLM Archiver – Policy based retention of older data

Large file servers or other large repositories of potentially older data can be moved to BLM. Cost savings are the primary benefit: automatic retention policies move aged data into the lower-cost tier. BLM Archiver can also be leveraged to store past generations of data while keeping the most recent version in the business-critical DS-System tier.

Managecast will help analyze your data to determine the protection method that best suits your recovery requirements and budget. There are many options to protect business data by strategically identifying its value and aligning the cost to protect it.

BLM Cloud Storage – For Low-Cost, Rarely Retrieved Files

Typically for files older than 1 year, BLM Cloud Storage is a method to cost effectively protect large data sets that are still needed for reference, compliance, and infrequent restores.

Files older than a specified age can be selected to move to long-term cloud storage. They are generally grouped in large chunks, from 250GB up to multiple terabytes, and then copied to long-term archive on disk.

Customers can utilize Amazon S3 cloud storage or Managecast Enterprise Cloud Storage.

Veeam v 8 Certificate Error When Upgrading (Authentication failed because the remote party has closed the stream)

We were setting up Veeam Cloud Connect infrastructure in order to provide Veeam cloud backup, which many of our customers were asking us to provide. Everything was going well with the installation, and we started out with a self-signed certificate for testing. We then applied a certificate from a well-known Certificate Authority and it still worked fine. We then got a notification from Veeam about an available update (v8 Update 3). It is important to be on the same version or higher as clients, so we went to update right away.

Upon updating to Update 3, Clients could no longer connect, getting the following error when trying to connect:

“Error: Authentication failed because the remote party has closed the stream”

This was happening immediately on connecting, and Veeam wouldn’t allow editing of the cloud repository to continue because it did not have a certificate.

First, we tried reapplying the certificate on the service provider side. Although it completed successfully, the clients were still getting the same error.

We then tried creating a new self-signed certificate, and that didn’t work either.

We thought maybe the client had to be on the same version, so we upgraded a client to Update 3 and still got the same error.

Before updating Veeam, we had taken snapshots of all the Veeam components (Backup and Replication server, Cloud Gateway, WAN Accelerator, and Repository). We reverted to before the upgrade, and the clients could connect whether the certificate was self-signed or from the Certificate Authority.

There were previous updates available, so we tried with Update 2 as well. Same results.

At this point, we opened a support ticket with Veeam and uploaded logs from every component and from the client side. After they inspected the logs, they had us try installing Update 2b and send them logs from before and after the upgrade. We still had the same results!

After they inspected those logs they sent me a process to try that ultimately worked.

They had us first apply a self-signed certificate on the base installation of version 8 and then upgrade to Update 2b; if the self-signed certificate still worked, we were to then apply the one from the Certificate Authority from the .pfx file.

It worked!

We still had Update 3 to get to, so I took another set of snapshots and upgraded to Update 3, and everything stayed working.

Hopefully this can save you some time, as I didn’t see this error documented in Veeam’s KB articles or documentation about installing certificates.

Reducing backup cost with Asigra


Data growth continues at an explosive pace. More files are created every day, and files often get much larger over time as well. Compliance and other government mandates also require longer retention of past data, which means backup data can grow even more.

Fortunately we have a lot of options when it comes to managing the size of offsite backups. This article will help you keep your backups to a reasonable size to ensure you are protecting your valuable data in the most cost effective way possible.

This information assumes you are using Asigra Cloud Backup, but it may also benefit you if using another capacity based backup system.

Not only will fixing inefficiencies help lower your costs for offsite backup, it can also speed up the time it takes to make backups and reduce bandwidth usage!

Identify largest backup sets

The largest backup sets are typically the ones that need the most attention from a space-consumption standpoint; it does not do much good to focus our energy on small backup sets.

Run a backup set report from DS-Client (Reports menu, Backup Sets, Print or Print Preview) to determine which backup sets are consuming the most space.

Identify and make a list of the top backup sets based on the “Stored Size” column. Sometimes it’s one large backup set, sometimes the top 3. A client with 100 backup sets may have a larger number of top backup sets than a customer with 3 backup sets, but generally the top 20% of your backups will consume 80% of the total space (the good ole 80/20 rule!).
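The top-consumers review can be sketched in the shell. Assume, hypothetically, that the backup set report has been exported to a simple CSV of set name and stored size in GB (the DS-Client doesn't require this step; the sample data below is made up purely for illustration):

```shell
# Made-up sample export: "set-name,stored-size-GB"
cat > /tmp/backup-sets.csv <<'EOF'
fileserver01,1200
sqlserver01,800
exchange01,350
utilityserver01,40
printserver01,10
EOF

# Sort by stored size (largest first) and print each set with its
# cumulative share of the total, to spot the 80/20 break point.
sort -t, -k2 -rn /tmp/backup-sets.csv |
awk -F, '{ sets[NR] = $1; size[NR] = $2; total += $2 }
     END { for (i = 1; i <= NR; i++) {
             cum += size[i]
             printf "%-16s %6d GB  %5.1f%% cumulative\n", sets[i], size[i], 100 * cum / total
           } }'
```

In this made-up sample, the top two sets already account for over 80% of the stored data, so those are the ones worth reviewing first.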

Review backup logs on the largest backup sets

So, how did the largest backup sets become the largest? Large backup sets can grow larger than expected, especially over time, because data is being backed up that is not needed. Examples include (but are not limited to):

  • Large SQL or other database dumps
  • Reorganizing file system data that is then detected as new backup data. For example, copying data from old directories to new directories.
  • Daily backup of antivirus signature updates (and old updates never removed)
  • Users copying large amounts of data to the network (family pictures, videos, backups of PC’s)
  • Backups made onto the server being backed up – nothing like backing up backups!

A great way to detect unneeded data is to review the detailed backup logs and review the list of files being backed up. Perform the following:

  1. From the DS-Client, click Logs, Activity Logs
  2. Change the From field date to several months ago to see the past activity
  3. Change the Activity field to “Backup” to only see the backup activity
  4. Change the Node/Set to the largest backup set to see only that activity
  5. Click on Find to see all backups for that specific node
  6. Once you see the list of backups, click on the “Transferred Amount” column twice to put the largest transfers at the top of the list. The list will then show the largest backup transfers over the given period, with the largest at the top.
  7. Select the largest backup (based on transfer size) in the list – this should be listed at the very top. Take note of how much data got transferred, the length of the backup and how many files got backed up.
  8. With the largest backup selected, click on the “Detailed log” button.
  9. On the next screen, click on Find to see all of the files that got backed up in that session.
  10. Click on the Size column to put the largest files on top
  11. Review the path and filename that got backed up and verify this is data that needed to be backed up.

Note: If there are no extra-large backups over a given range of dates, consider reviewing “average” backups and looking for data that doesn’t need to be backed up.

Managecast is highly experienced in reviewing backups. You may always choose to engage Managecast to get assistance with these items as we perform these functions frequently and know what to look for. However, you know your data best, so your intimate knowledge of your environment can also be valuable in determining if you are backing up data efficiently.

Review all backup sets and ask yourself “Does all of this data REALLY need to be offsite?”

For instance, antivirus or other utility servers may be important, but replaceable and do not contain any unique data that needs to be protected offsite. By eliminating offsite backups of these types of backup sets you may be able to reduce the total offsite data.

Consider using “local-ONLY” backup sets for data that needs to be backed up, but not critical enough to justify the cost of off-site backups. However, this may impact your recovery time objective in a significant disaster so make sure you know the pros and cons!

Some other things to check:

  1. Are you backing up Anti-virus definitions/software? Does this data need to be offsite or can you use local-only backups?
  2. Are recycle bin and temp files/folders being backed up? They can probably be excluded.
  3. Consider excluding the “System Volume Information” folder at the root of each disk drive being backed up. This is unneeded data.
  4. Are you excluding certain file extensions such as *.tmp, *.bak, *.mp3, and other file extensions that may represent non-critical data?
  5. Check the retention rules:
    1. Do all of the backup sets have a retention policy assigned?
    2. Is a schedule set to run the retention?
    3. How are you handling removal of deleted files from the backups? Check the retention for handling of removing deleted files from backups.
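As an illustration of item 4 above (the actual exclusions are configured inside the DS-Client, not with shell commands, and the file names and sizes here are made up), you can gauge how much space candidate extensions occupy on a share before excluding them:

```shell
# Create a few sample files purely for illustration
mkdir -p /tmp/share-sample
head -c 1024 /dev/zero > /tmp/share-sample/report.bak
head -c 2048 /dev/zero > /tmp/share-sample/song.mp3
head -c 512  /dev/zero > /tmp/share-sample/budget.xlsx

# Total bytes matching the candidate exclusion extensions (*.tmp, *.bak, *.mp3)
find /tmp/share-sample -type f \
     \( -name '*.tmp' -o -name '*.bak' -o -name '*.mp3' \) \
     -printf '%s\n' | awk '{ total += $1 } END { print total " bytes excludable" }'
```

Running a quick sizing pass like this before changing exclusions helps confirm the change is worth making, and gives you a rough figure for the expected reduction in stored size.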

Are you fully leveraging archive storage?

Asigra provides for 4 different storage tiers to allow you to align the value of your data with the cost to protect it. Archiving can dramatically lower cloud backup storage costs.

Operating system data and applications can be replaced, so by using local-only backup for this type of data you can lower your overall costs.

In addition, older, static, rarely used data can be archived to dramatically reduce costs. To learn more about archiving and the different backup storage tiers, click here.