Andy Jonak

Rethink your Backups and Storage


For as long as we’ve been collecting, using, and processing data, there have been solutions to store and back it up. Data gets created, that data is valuable, and so it needs to be protected and backed up. The roots of that go back many decades. Your data is your crown jewels, the value of your company. Think of what would happen if your data were no longer available. How many firms would go out of business without their data? The majority of them. The challenge is that firms have been using the same methods to back up and store data for a very long time. Backups have been siloed, separated by department or segregated by technology. Bring specific workloads and applications into play, and it gets even more complicated. It’s time for firms to rethink backup.


As I mentioned above, backup has a long and storied history. For a time, data was stored on punch cards (remember those?), then tape, then that miraculous invention called the hard drive. Firms set up complete backup infrastructures to protect their data, which makes sense and is necessary. You had your servers and storage, you backed it all up to tape, and you used one of the “traditional” backup solutions such as TSM, Avamar, NetBackup, or Backup Exec. First it was full backups; then incremental backups came along, copying only what had changed since the last run. The goal was to make copies of your data so that if something happened, you could restore it and still have your data. But the backup was never what was important; the restore was. Think of how many times you’ve heard of firms suffering a disaster or data loss and needing to recover, but they couldn’t. Many times the restore simply didn’t work.
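To make the full-versus-incremental distinction concrete, here’s a minimal sketch in Python. It’s my own illustration, not how TSM or any of the products above work internally, and the /data and /backup paths are placeholders:

```python
import shutil
import time
from pathlib import Path

def backup(source: Path, dest: Path, last_run: float | None = None) -> int:
    """Copy files from source to dest.

    If last_run is None, do a full backup; otherwise do an incremental
    backup, copying only files modified since last_run.
    """
    copied = 0
    for f in source.rglob("*"):
        if not f.is_file():
            continue
        # Incremental pass: skip files unchanged since the previous backup.
        if last_run is not None and f.stat().st_mtime <= last_run:
            continue
        target = dest / f.relative_to(source)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)
        copied += 1
    return copied

# Full backup first, then an incremental that only picks up changes.
start = time.time()
backup(Path("/data"), Path("/backup/full"))
backup(Path("/data"), Path("/backup/incr-1"), last_run=start)
```

The restore side, of course, is the part that matters: a full plus its chain of incrementals must all be readable, which is exactly why so many restores failed.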


These backup solutions required a lot of infrastructure and people to run them. Backup was, and in many cases still is, a separate solution, independent of the rest of the IT infrastructure. Many of these solutions were big, complex, and expensive, and they needed specialized people to run them.


Fast forward, and a few things happened. Virtualization came along, and now entire workloads could be saved, not just the data. Take your VMs as VMDK files and save them. Restores became easier, much quicker, and no longer tied to specific hardware. You didn’t need to rebuild your systems; you just reinstalled your hypervisor, reloaded your VMs, and you were good to go.


Right about this time, SAN-based central storage became more affordable. SAN storage was always faster than tape, but also a lot more expensive. As it became more affordable, many firms invested in it and realized it could be used for backup as well. So much so that manufacturers declared tape dead. Well, don’t tell that to IBM, whose tape business continues to grow to this day and is extremely profitable.


Many firms ditched tape in favor of disk-based backups. However, disks fail and don’t have the longevity of tape, so many implemented disk-based backups and then created a secondary copy to tape just in case. Many also ran (and continue to run) hot/hot, hot/warm, and hot/cold sites to protect data even further: a production site, a DR site, and then a backup to a third site as a tertiary solution.
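That disk-first, copy-everywhere pattern is simple to express. A rough sketch, with made-up mount points standing in for the DR site and the tape staging area:

```python
import shutil
from pathlib import Path

# Hypothetical targets: fast local disk, a DR-site mount, and a
# tape-staging area that a separate job later writes out to tape.
PRIMARY = Path("/backup/disk")
SECONDARY = Path("/mnt/dr-site/backup")
TERTIARY = Path("/mnt/tape-staging")

def protect(archive: Path) -> None:
    """Land the backup on fast disk first, then fan the same copy out
    to the secondary and tertiary targets, so a single failure never
    leaves you with only one copy."""
    for target in (PRIMARY, SECONDARY, TERTIARY):
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy2(archive, target / archive.name)

protect(Path("nightly.tar.gz"))
```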


Then providers such as Commvault and Veeam released much more straightforward, easy-to-use backup solutions for both physical and virtual environments. Less training, easier to use, and less infrastructure needed to run them: a big step. Appliance versions of traditional backup solutions followed, such as StoreSERVER, which bundled products like TSM to make them simple and easy to run. We also needed to categorize data by importance, so HSM (hierarchical storage management) became viable, placing data where it needs to go based on category, importance, or frequency of use. Mission-critical workloads got the fastest storage; rarely used data got pushed to tape.
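The core idea of HSM fits in a few lines. A hedged sketch, with tier names and age thresholds invented purely for illustration; real HSM products use far richer policies:

```python
import time
from pathlib import Path

DAY = 86400  # seconds

def pick_tier(path: Path) -> str:
    """Classify a file into a storage tier by how recently it was accessed.
    (Note: atime is unreliable on noatime mounts; shown for illustration.)"""
    age_days = (time.time() - path.stat().st_atime) / DAY
    if age_days < 7:
        return "tier1-flash"   # hot: mission-critical, fastest storage
    if age_days < 90:
        return "tier2-disk"    # warm: cheaper secondary storage
    return "tier3-tape"        # cold: rarely used, pushed to tape

for f in Path("/data").rglob("*"):
    if f.is_file():
        print(f, "->", pick_tier(f))
```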


Then cloud solutions came along, such as Datto, EVault, and AWS with its Glacier service. These were great solutions that took your on-prem data and workloads and backed them up into the cloud. Your data was protected and offsite, which is critical. This also pushed firms toward secondary storage: expensive, fast storage for mission-critical applications and cheaper, slower storage for those that are not. Secondary storage saved firms a lot of money on storage investment, which, as we all know, is expensive.
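As a concrete taste of cloud backup, here’s a minimal sketch using AWS’s boto3 library to push a backup archive into S3’s Glacier storage class; the bucket and file names are placeholders:

```python
import boto3

# Placeholder names; substitute your own bucket and archive.
BUCKET = "example-backup-bucket"
ARCHIVE = "nightly-backup.tar.gz"

s3 = boto3.client("s3")

# Upload the archive directly into the Glacier storage class:
# cheap, offsite, and durable, at the cost of slow retrieval.
s3.upload_file(
    ARCHIVE,
    BUCKET,
    f"backups/{ARCHIVE}",
    ExtraArgs={"StorageClass": "GLACIER"},
)
```

The trade-off is the restore again: Glacier retrievals take hours, which is exactly why it suits secondary copies rather than mission-critical data.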


So where are we now? Well, we have more data than ever, and it’s in lots of different locations—in multi-cloud environments. Data is in many sites, used by many users and applications, and its importance frequently changes.


In between all of this, Hyperconverged Infrastructure (HCI) came along, giving you an appliance approach to infrastructure. HCI initially found its niche in VDI and then in general compute infrastructure, but it was inevitable that firms would take the HCI approach and apply it to storage and backup. Firms such as Cohesity and Rubrik did just that, and HCSS (Hyperconverged Secondary Storage) was born.


The approach with HCSS is elegant and straightforward. Continue to use your fast storage for your mission-critical applications, irrespective of where they sit, be it on-prem or in the cloud. For the applications and workloads that are not mission critical but are still important, use less expensive storage that is “applianced” yet includes mission-critical features such as DR, replication, snapshots, compression, and deduplication, while being much cheaper than tier 1 storage. It’s also much easier to use, and it will manage all of your important, non-mission-critical data seamlessly, no matter where it sits, on-prem or in the cloud. In the firms we work with that are adopting it, we see that ease of use firsthand: they no longer require dedicated storage administrators to manage it. That’s a big deal.
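Deduplication, one of those features, deserves a quick illustration. A minimal sketch of the idea, using fixed-size chunks keyed by their hash; real systems use variable-size chunking and persistent chunk stores:

```python
import hashlib
from pathlib import Path

CHUNK_SIZE = 4096             # fixed-size chunks, for simplicity
store: dict[str, bytes] = {}  # chunk hash -> chunk data

def dedupe(path: Path) -> list[str]:
    """Split a file into chunks and store each unique chunk only once.

    Returns the ordered list of chunk hashes needed to rebuild the file."""
    recipe = []
    with path.open("rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # stored once, however often it repeats
            recipe.append(digest)
    return recipe

recipe = dedupe(Path("example.bin"))
print(f"{len(recipe)} chunks, {len(store)} unique stored")
```

Because repeated chunks are stored once, backing up many similar VMs or nightly copies of the same data consumes a fraction of the raw capacity, which is a big part of why this tier can be so much cheaper.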


One of the biggest things I believe this has done is change firms’ thinking from just backing up data to thinking about how to use data to improve the business. That’s a big switch. When backup is easy to perform and is just part of the solution, what’s next? How that data is used. That’s innovation, and it’s exciting.


These changes in storage and backups are significant in their approach and results, and I firmly believe it’s a great thing. I am intrigued by where this is all going. We see and work with lots of firms that are adopting this approach towards secondary storage and, I have to tell you, the acceptance and use have been overwhelmingly positive. It’s made it easier to manage data—no matter where it sits—and less expensive. However, it’s not the expense that has been driving this; it’s been ease of use and the management capabilities. Firms and people want to make things easier.


The irony is that most firms we know would pay more to simplify backup, yet these new solutions are less expensive than the previous ones. It’s a win-win: cheaper and simpler.


Firms don’t want to be in the business of managing their data; they want to be using their data to drive business and innovation. Yes, data must be protected, that’s a given, but it’s not where firms have wanted to focus; it’s where they have had to focus. These new solutions help in that way and are a big step forward. Just as virtualization and cloud reduced the need for certain infrastructure personnel, these new solutions will change those dynamics as well, and while many won’t like it, it’s a good thing. Innovation always is.


So it’s time for firms to rethink their backups and storage. HCSS, and how it’s changing backups and storage, is worth a look. The downside is minimal, but the upside can help transform how you manage and use your data, helping you innovate, grow, and expand your business. That’s something we all want.

Andy
