Considering a low cost cloud backup solution?

Ouch, Carbonite is not having a good day. I see people choose these low-cost cloud backup providers without realizing they are not the same as an enterprise-class backup provider like Managecast. It would seem you get what you pay for.

Carbonite Forces Password Reset After Password Reuse Attack!

https://www.databreaches.net/carbonite-forces-password-reset-after-password-reuse-attack/

 

Top 5 Cloud Backup Myths

Being a cloud backup managed service provider, we run into common myths surrounding cloud backup. We hope we can dispel some of the more pervasive, and incorrect, perceptions of cloud backup.

 

1. Cloud backup is “Not secure.”

One of the biggest concerns with cloud backup is the security and privacy of data. This is understandable in today’s world, where data breach headlines are everywhere. Ironically, with the right enterprise-grade cloud backup solution, your data is likely many times more secure than with more traditional backup methods. When we encounter folks who think the cloud is not secure, it is usually quickly apparent that their existing backup solution is far less secure. Traditional, customer-managed backup systems struggle to get data offsite quickly and securely, and best practices such as media rotation and encryption are often not strictly followed. For a cloud backup service provider, security is a top priority, and these issues are handled as part of the service.

Summary: Cloud backup providers have gone to great lengths to make sure their managed services are extremely secure. We offer highly encrypted services (AES 256-bit, FIPS 140-2 certified encryption) with the client in control of the encryption key. In other words, we do NOT store the encryption key unless you request us to, and therefore we do not have access to customer data.
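
To make the client-held-key point concrete, here is a minimal sketch, in Python with the third-party cryptography package, of client-side AES-256 encryption before upload. It illustrates the general approach only; it is not the actual encryption code used by Asigra or Veeam, and the function names are hypothetical.

# Illustrative only: the backup client encrypts with a key only the customer
# holds, so the provider stores ciphertext it cannot read.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt a backup block with AES-256-GCM using a customer-held key."""
    nonce = os.urandom(12)                    # unique per encrypted block
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext                 # this is all the provider ever sees

def decrypt_after_download(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)     # generated and kept by the customer
blob = encrypt_for_upload(b"payroll.db contents", key)
assert decrypt_after_download(blob, key) == b"payroll.db contents"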

2. “Restoring from the cloud takes too long.”

Most enterprise cloud backup systems have the option to store data locally as well as keep multiple copies offsite. 99% of the time, recoveries are made from local storage at LAN speed. It is rare that restoring data from the cloud is required.

In the rare event of a site disaster, in which the local backup has been compromised or destroyed, most business-class cloud backup service providers can ship your data on portable media (fully encrypted) within 24-48 hours. If that sounds like a long time, consider whether you will also have the spare equipment on hand to restore your data to. Some cloud backup service providers can also spin up recovered servers in the cloud for quick recovery.

Summary: Restoring massive amounts of backup data from the cloud is rare. Cloud backup service providers have a number of alternative methods to provide for quick recovery.

3. “Too much data to back up.”

While this statement is occasionally true, it rarely is. Backup administrators are used to legacy backup systems in which a full backup is made daily or weekly, so they assume full backups are required for cloud backup. In fact, repeatedly sending full backups to the cloud is not good practice and is impractical in most circumstances.

A business-class cloud backup system will support “incremental forever,” which means that after the first full backup, only incremental backups are made. Incremental backups send only the data that has changed (at the block level) since the previous backup. This drastically reduces the amount of data that needs to be sent.
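
As a rough illustration of why the incrementals stay small, the sketch below (Python, with hypothetical names and block size, not any vendor’s actual code) hashes fixed-size blocks and keeps only the blocks whose hashes differ from the previous backup.

# Simplified block-level change detection for an incremental-forever scheme.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks; an arbitrary example size

def changed_blocks(path, previous_hashes):
    """Return {block_index: data} for blocks that differ from the last backup."""
    changed = {}
    with open(path, "rb") as f:
        index = 0
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if previous_hashes.get(index) != digest:
                changed[index] = block        # only this data goes offsite
            previous_hashes[index] = digest   # becomes the baseline for the next run
            index += 1
    return changed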

In addition, the first full backup is typically written to portable media such as a USB drive and shipped (encrypted) to the data center rather than sent over the internet, which avoids a very large initial transfer.

A general rule of thumb we give clients is that for every 1TB of protected data you need about one T-1 (1.55Mbps) of bandwidth. By that measure, a 20Mbps internet connection could support a roughly 12TB environment.
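
For readers who want to check the arithmetic behind that 12TB figure, here it is as a trivial Python snippet using the 1.55Mbps-per-TB ratio quoted above:

MBPS_PER_TB = 1.55            # rule of thumb: ~one T-1 of bandwidth per TB protected

def supportable_tb(link_mbps):
    return link_mbps / MBPS_PER_TB

print(supportable_tb(20))     # ~12.9, so a 20Mbps link supports roughly a 12TB environment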

[Related: Use Archiving to reduce cloud backup costs]

Summary: Having too much data is rarely a concern for the right cloud backup solution.

4. “Incremental forever means hundreds of restores.”

Related to #3, people think “incremental forever” means lots of little restores. They think that if they have a year’s worth of backups, they will have to restore the first backup and then 364 other backups. This could not be further from the truth. Incremental backup software has the intelligence built in to assemble the data to any point in time. Restoring data to any point in time can easily be accomplished with just a few mouse clicks and a SINGLE restore operation.
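
Conceptually, the restore engine overlays each incremental’s changed blocks on top of the full backup, stopping at the requested date, so the user runs a single restore. Here is a hedged sketch of that idea (Python, hypothetical names, not any vendor’s actual code):

def blocks_at_point_in_time(full_backup, incrementals, restore_date):
    """Assemble the complete block map for restore_date in one pass.

    full_backup:  {block_index: data} from the initial full backup
    incrementals: [(backup_date, {block_index: data}), ...] oldest to newest
    """
    view = dict(full_backup)
    for backup_date, changed in incrementals:
        if backup_date > restore_date:
            break
        view.update(changed)                  # overlay the blocks that changed
    return view                               # one restore operation, not 364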

Summary: Incremental forever does NOT mean many restores. Rather, a restore to any point in time can be made in a single operation.

5. “Too costly.”

Nothing is more costly than losing your business or data. Our solution is priced on the size of the backups, not the number of devices/servers being backed up. The storage size is measured after de-duplication and compression, which further lowers costs.

Older archived data can be stored at lower cost as well, which allows you to align the cost of the backup with the value of the data. In many cases, we can drastically reduce costs by moving older data to lower-cost tiers of backup storage.
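
As a purely illustrative example of how tiering changes the bill, the Python snippet below compares keeping everything on a primary tier versus moving older data to an archive tier. The per-GB prices are made-up placeholders, not Managecast’s actual rates.

PRIMARY_PER_GB = 0.10                    # hypothetical monthly rate, standard tier
ARCHIVE_PER_GB = 0.02                    # hypothetical monthly rate, archive tier

total_gb, archive_gb = 10_000, 6_000     # e.g. 10TB protected, 6TB of it is older data

all_primary = total_gb * PRIMARY_PER_GB
tiered = (total_gb - archive_gb) * PRIMARY_PER_GB + archive_gb * ARCHIVE_PER_GB
print(all_primary, tiered)               # 1000.0 vs 520.0, roughly half the monthly cost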

In addition to the backups themselves, you receive expert management, monitoring, and support services from the service provider. With an “unmanaged” service, backups often go without being properly monitored, tested, or restored. Our services provide full expert support and monitoring at a much lower overall cost, without the worry of losing data.

Summary:  When you look at all aspects of backup and recovery, the costs can be easily justified.


Managecast is now a proud member of the VEEAM Cloud Connect Partner Program

 


Managecast is a featured partner on VEEAM’s list of service providers that offer the VEEAM Cloud Connect services. VEEAM Cloud Connect enables you to quickly and efficiently get your VEEAM backups offsite, safe and secure, so you can always recover your data no matter what!

Our services powered by VEEAM allow for:

  • Full integration with VEEAM
  • Secure, encrypted backup, replication, and restore
  • Full-site failover
  • Full and partial failback
  • Quick site failover testing
  • Fast recovery

Managecast is offering 30-day, no-obligation free trials, and enabling your existing VEEAM installation could not be easier. Get your existing VEEAM backups offsite using the familiar VEEAM management console. Managecast can also provide updated VEEAM software and licensing if required.

Our cloud-based disaster recovery and offsite backup powered by VEEAM can now easily provide offsite disaster recovery capabilities for your organization. Contact us for a free trial.


Tape is not dead, and why I finally bought a tape library

Being the “Cloud Backup Guy” I’ve made a living off replacing tape. Tape is that legacy media, right? It’s true that for most small to medium businesses, tape is hard to manage, expensive to rotate offsite, and has virtually been replaced by disk-to-disk (or disk-to-disk-to-cloud) technologies. However, I am finally willing to say tape definitely has its place.

Related article: Is Tape Dead?

Given that I have been so anti-tape for many years, I thought it was news worth sharing when I finally decided that tape had its place. Don’t get me wrong. I’ve had nearly 30 years of IT consulting experience. In the old days I used nothing but tape, as it was the only real option for data protection. I’ve also had my share of bad experiences with tape (mostly the old 4mm and 8mm drives and tapes). I hated the stuff and never wanted to rely on it. Like many seasoned IT professionals, I have plenty of tape backup nightmares to tell. When I got into the cloud backup business, that passion for disliking tape really helped me convince folks not to use it.

Now don’t get me wrong: I think for most SMBs tape is dead. However, as your data volume grows, and I am talking 50TB+ of data, you cannot ignore the efficiency and cost effectiveness of good old tape. Tape has also come a long, long way over the years. Gone are the days of 4mm and 8mm tapes. LTO, the clear tape standard for the modern era, now boasts LTO-7, with a native capacity of 6TB+ (15TB compressed) per tape cartridge. LTO offers a reliable and cost-effective way to store huge quantities of data at a much lower cost than disk storage technology.

What brought about this decision to finally embrace tape?

The decision to choose tape became apparent as we were gobbling up more and more disk space for cloud backups. Our growth rate has been significant, and keeping up with backup growth meant buying more and more disk. It’s not just the cost of the disk we had to buy, but the rack space, power, cooling, and other costs associated with hundreds of spinning disks, plus the cost of replicating the data to another data center with more spinning disks! A significant segment of our backup storage was consumed by long-term archival storage of older data, which continued to grow rapidly as data aged.

Related: Archiving – Align the value of your data with the cost to protect it

Our cloud backup solution allows tiering of the data so that older, less frequently used data can be pushed to longer-term archival storage. Once I weighed the cost of buying even more disk against the cost of a tape solution to store the ever-growing mountain of archive data, it became a no-brainer. Tape was the clear winner in that type of scenario.

Allow me to stress that I am not a proponent of tape except for the largest of companies, or for others who require long-term archiving of a large amount of data. Tape still introduces manual labor to swap and store tapes, take them offsite, and so on. For near- and medium-term data, we still keep everything stored on disk for quick and easy access. However, for long-term archival data, we are using tape and love the stuff. The nice thing is that our customers still don’t have to worry about using tape, as we manage everything for them.


The requested operation could not be completed due to a file system limitation (Asigra)

While trying to back up an Exchange database using Asigra, we were seeing the message “The requested operation could not be completed due to a file system limitation” after about 4 hours of backing up. This was an Exchange database backup (non-VSS), and it was copying the database to the DS-Client buffer. The Exchange database was 1TB+. The DS-Client was running on Windows 8.1.

The message:

The requested operation could not be completed due to a file system limitation  (d:\buffer\buf\366\1\Microsoft Information Store\database1\database1.edb)

Solution:

There is a default limitation in how NTFS volumes are formatted that can be hit when writing very large files: the standard format uses small file record segments, and the /L switch formats the volume with large file record segments, which removes this limitation. We had to reformat the buffer drive on the DS-Client using the command:

format d: /fs:ntfs /L /Q

After making this change we no longer experienced the error message and backups completed successfully.

Why You Need a Backup/Disaster Recovery MSP

From the one-person office to the largest enterprise, and anywhere in between, every company and individual has information that needs to be managed, backed up, and stored. Most corporations and small companies are turning to an MSP (Managed Service Provider) to handle their backup and DR (Disaster Recovery) needs. This raises a question for businesses of any size: Why do YOU need a backup/disaster recovery MSP?

The summed-up response to the question is simple: expertise. The experts agree that DR planning requires “complex preparation and flawless execution.” This may not be achievable by an individual or company without the right MSP. It is the MSP’s responsibility to handle customer needs and monitor DR plans to minimize RTOs (recovery time objectives). We are all aware that disasters happen around us, whether a natural disaster or a service outage. It is the MSP’s job to make sure companies are prepared. Protecting company data is a vital task in today’s digital world.

For most companies without an MSP, data backup and disaster recovery can easily be neglected. The process of backup and disaster recovery is usually a part-time job within the company. For instance, there is seldom an individual whose sole task is data backup and recovery, even in an enterprise. Regardless of size, companies need full-time monitoring and planning for their backup/disaster recovery needs.

Other projects at work can easily sidetrack backups. Without an MSP, people tend to neglect the company’s backups in favor of whatever projects they are focused on at the office, and the task gets put on the back burner for other company needs. With the help of an MSP, backups will never be neglected.

On some occasions, a company may designate the person with the least experience to manage backups. They may underestimate the importance of consistently monitoring their data, so they pass the task down to “the new guy/girl.” Without the proper expertise provided by an MSP, the company could be at risk of losing data.

An in-house backup administrator is rarely a backup expert. It is crucial that backup and disaster recovery planning and monitoring are handled by a true expert for the sake of the company’s data. A lack of knowledge leads to inefficient or problematic backups. Also, system restores are rarely practiced unless they are handled by an MSP.

How Managecast fixes these issues:

Managecast Technologies covers the company’s backup and disaster recovery needs. We provide enterprise-class, industrial-grade backup to businesses of all sizes. Managecast uses top software providers and partners, including Asigra, Veeam, and Zerto, to ensure that data is fully monitored and stored. We provide the expertise to execute proper DR planning and business continuity. Instead of putting your company’s backup/disaster recovery plan to the side, turn to the experts at Managecast Technologies to fix these issues. We assist with all aspects of backup setup, from retention rules and schedules to how best to protect the data in the most cost-effective manner.

Is Backup Tape Dead?

I just had someone contact me to ask whether I thought backup tape is dead.

Maybe six years ago I would have enthusiastically said “Yes!”, and I did so many times. However, after spending the last six years dedicated to cloud backup and immersed in the backup industry, my views on tape have evolved.

Instead of asking “Is tape dead?”, the proper question is “Has the use of tape changed?”. While tape is far from dead and very much alive, its use has substantially changed over the past 10 to 15 years. In the past, tape was the go-to medium for backups of all types. However, disk has certainly displaced a lot of tape when it comes to near-line backup storage of recently created data. Many modern backup environments consist of disk-to-disk backup, with backup data written to tape after some period of time for longer-term storage and archive.

Disk storage costs significantly more than tape storage, but for near-term backup data the advantages of disk outweigh the cost penalty. For long-term archiving of older data, where quick access is not needed, tape is the clear winner.

[Read about aligning the cost of data protection vs the value of the data]

In my experience, many SMBs have shifted to a disk-to-disk-to-cloud solution with no tape. So, in the SMB space, one could argue that tape has largely died (or at least diminished greatly). However, for enterprises and other organizations that require long-term retention of backup data, there is no better option than tape for storing large amounts of data, and this will probably remain the case for the next 10 years or beyond. So, no, tape is not dead, but its use has changed.

Asigra reporting “cannot allocate memory” during seed import

We have DS-Systems running on Linux; we attach the Windows seed backups to a Windows 7/8.1 machine and then use CIFS to mount the Windows share on Linux. The command we use on Linux to mount the Windows share is:

mount -t cifs //<ipaddress of windows machine>/<sharename> -o username=administrator,password=xxxxxx /mnt/seed

We were importing some large backup sets containing millions of files and started noticing “cannot allocate memory” errors during the seed import process. When the import completed, it indicated that not all files had been imported.

At first we thought this was an Asigra issue, but after much troubleshooting we found this was an issue with the Windows machine we were using and was related to using the CIFS protocol with Linux.

A sample link to the issue we were seeing is: http://linuxtecsun.blogspot.ca/2014/12/cifs-failed-to-allocate-memory.html

That link indicates to make the following changes on the Windows machine:

regedit:

HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache (set to 1)

HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size (set to 3)

Alternatively, start Command Prompt in Admin Mode and execute the following:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v "LargeSystemCache" /t REG_DWORD /d 1 /f

reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v "Size" /t REG_DWORD /d 3 /f

Do one of the following for the settings to take effect:

  • Restart Windows
  • Restart the Server service via services.msc, or from the Command Prompt run 'net stop lanmanserver' and then 'net start lanmanserver' (the Server service may restart automatically after being stopped)

After we made these changes the memory errors were resolved!

Asigra slow seed import

We recently discovered that Asigra DS-System v13.0.0.5 seems to have a serious problem with importing seed backups. This problem exposed itself as we attempted to import 5.5TB of seed data. We then performed additional testing by backing up a small Windows 2008 server. The seed backup was a little under 3GB. On v13.0.0.5 the seed import took 55 minutes. On the same infrastructure, the same server seed backup imported into a v12.2.1 DS-System in less than 3 minutes.

In addition, we are seeing the error “cannot allocate memory” during the seed import process, even though we have plenty of free RAM and disk space.

We have notified Asigra and they are attempting to reproduce the problem.

Update 12/4/2015

In testing, and working with Asigra, we have found that if you create the seed backup without using the metadata encryption option then the seed import speed is acceptable and imports quickly.

Update 12/8/2015

Asigra released DS-System v13.0.0.10 to address this issue. Testing shows it does indeed solve the speed issue. Thanks Asigra!

Zerto backup fails unexpectedly

We had a recent issue with Zerto backups that took some time to remedy. A combination of issues exposed the problem, and here is a rundown of what happened.

We had a customer with about 2TB of VMs replicating via Zerto. We wanted to provide backup copies using the Zerto backup capability. Keep in mind Zerto is primarily a disaster recovery product and not a backup product (read more about that here: Zerto Backup Overview). The replication piece worked flawlessly, but we were trying to create longer-term backups of virtual machines using Zerto’s backup mechanism, which is different from Zerto replication.

Zerto performs a backup by writing all of the VMs within a VPG to a disk target. It’s a full copy, not an incremental, so it’s a large backup every time it runs, especially for a VPG holding a lot of VMs. We originally used a 1 Gigabit network to transfer this data, but quickly learned we needed to upgrade to 10 Gigabit to accommodate these frequent large transfers.
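
For a sense of scale, here is the rough line-rate math (in Python) that drove the 10 Gigabit upgrade for a ~2TB VPG, using theoretical throughput only and ignoring protocol overhead and storage bottlenecks:

def transfer_hours(data_tb, link_gbps):
    return (data_tb * 8 * 1000) / link_gbps / 3600   # TB -> gigabits -> hours at line rate

print(round(transfer_hours(2, 1), 1))    # ~4.4 hours per full backup on 1 Gigabit
print(round(transfer_hours(2, 10), 1))   # ~0.4 hours on 10 Gigabit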

However, we found that most of the time the backup would randomly fail. The failure message was:

“Backup Protection Group ‘VPG Name’. Failure. Failed: Either a user or the system aborted the job.”

To resolve the issue, we opened several support cases with Zerto, upgraded from version 3.5 to v4, implemented 10 Gigabit networking, and put the backup repository directly on the Zerto Manager server.

After opening several cases with Zerto, we finally had a Zerto support engineer thoroughly review the Zerto logs. They found frequent disconnection events. With this information we examined the site-to-site VPN configuration and found minor mismatches in the IPSEC settings on each side of the VPN, which were causing very brief disconnections. Those disconnections were causing the backup to fail. Lesson learned: it’s important to ensure the VPN endpoints are configured 100% the same. We use VMware vShield to establish the VPN connections, and vShield doesn’t provide a lot of flexibility to change VPN settings, so we had to change the customer’s VPN configuration to match the vShield configuration.

Even though we seemed to have solved the issue by fixing the VPN settings, we asked Zerto if there was any way to make sure the backup process ran even if there was a connection problem. They shared with us a tidbit of information that has enabled us to achieve 100% backup success:

There is a tweak that can be implemented on the ZVM which will allow the backup to continue in the event of a disconnection, but there is a drawback: the ZVMs will remain disconnected until the backup completes. As of now, there is no way to both let the backup continue and let the ZVMs reconnect. So there is a trade-off, but for this customer it was acceptable to risk a window of time during which replication would stop in order to get a good backup. In our case we made the backup on Sunday, when RPO wasn’t as critical, and even then replication only halts if there is a disconnection between the sites, which became even rarer once we fixed the VPN configuration.

The tweak:

1. On the Recovery (target) ZVM, open the file C:\Program Files (x86)\Zerto\Zerto Virtual Replication\tweaks.txt (it may be on another drive, depending on the install).
2. In that file, insert the following string (on a new line if the file is not empty):
t_skipClearBlockingLine = 1
3. Save and close the file, then restart the Zerto Virtual Manager and Zerto Virtual Backup Appliance services.

Now, when you run a backup, either scheduled or manual, any ZVM <-> ZVM disconnection events should not cause the backup to stop.

I hope this helps someone else!