A Brief History Of Point-In-Time Backup and Recovery

Backing up data sounds like a fairly easy process. But anyone who’s ever had to shop around for backup software has probably been surprised by the wide range of features and methodologies offered by different providers.

In order to get a better understanding of how modern backup solutions work, you’ll need to go back in time and review the evolution of data protection.

In the early days of computing, backing up was relatively simple: just copy all of the data to a secondary storage device (usually tape, because it was cheap), then truck it off-site.

Of course, as technology evolved, so did the threats that businesses had to face. Soon, companies needed to worry about things like corrupted data, overwritten files and viruses. And if you copy corrupted data into your backups, the backups themselves become corrupted, making a clean restore impossible.

Another major threat came from the fact that tape backups would sometimes fail during the recovery process, causing the business to lose all of its data.

In order to fix this, companies started protecting their files using a rotating series of “grandfather” backups. Every day, for 7 days, they would use a new tape to perform the backup. Then, after 7 days, they would record the latest backup over the oldest tape, and keep rotating the tapes from there.

In the event of data corruption or backup failure, this allowed them to go back in time up to 7 days and recover their critical business data. Today, these historical copies are more commonly known as “point-in-time versions”.
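
A minimal sketch of that rotation in Python; the tape labels and day numbering here are purely illustrative:

```python
# A minimal sketch of a 7-day tape rotation; tape labels are illustrative.
# Each day's backup overwrites the oldest tape, so the library always
# holds the last seven point-in-time versions.

TAPES = ["tape-1", "tape-2", "tape-3", "tape-4",
         "tape-5", "tape-6", "tape-7"]

def tape_for_day(day_number: int) -> str:
    """Return which tape to overwrite on a given day (0-indexed)."""
    return TAPES[day_number % len(TAPES)]

# Days 0-6 fill the seven tapes; day 7 overwrites tape-1, and so on.
for day in range(9):
    print(f"Day {day}: write full backup to {tape_for_day(day)}")
```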

But this approach also had its limitations. As computers began to produce more data, backups would also start to take longer. In some cases, the time required to back up would exceed the time between the company’s closing and opening hours.

Also, much of this work was redundant. On any given day, less than 1% of the total data might have changed, yet all of the unchanged files still needed to be copied.

That’s when software companies began offering backup systems with “incremental” backup policies.

With these systems, the IT manager would perform a single full backup once a month, and then copy only the files that had changed since that monthly master copy.

If you ever needed to recover, you would simply load the master backup from the beginning of the month, and combine it with the latest changes from the incremental backups.
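
Here’s a rough Python sketch of that full-plus-incremental model, treating a backup set as a simple dictionary of files; all the names and data are hypothetical:

```python
# A rough model of monthly-full + daily-incremental backups, treating a
# backup set as a dict of {path: contents}. All names/data are hypothetical.

def take_incremental(current: dict, last_backed_up: dict) -> dict:
    """Copy only the files that are new or changed since the last backup."""
    return {path: data for path, data in current.items()
            if last_backed_up.get(path) != data}

def restore(full: dict, incrementals: list) -> dict:
    """Rebuild the latest state: load the monthly master copy, then
    replay each incremental backup in chronological order."""
    state = dict(full)
    for inc in incrementals:
        state.update(inc)
    return state

full = {"a.txt": "v1", "b.txt": "v1"}                          # monthly master
day1 = take_incremental({"a.txt": "v2", "b.txt": "v1"}, full)
day2 = take_incremental({"a.txt": "v2", "b.txt": "v3"}, {**full, **day1})

assert restore(full, [day1, day2]) == {"a.txt": "v2", "b.txt": "v3"}
```

(A real implementation would also track deletions; this sketch ignores them.)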

As companies got access to faster and more reliable network connections, many began to see this as an opportunity to back up their data remotely. Instead of paying 5 employees to back up 5 different offices, a single IT administrator could protect all of the remote sites from a single server.

But once again, size became a problem.

Although these incremental uploads were smaller than the full daily backups of the past, they could still sometimes cause the network to slow down during transfer. And the monthly full backups were simply too large to even contemplate sending over a network connection.

Once again, new technologies were developed to solve these 2 problems.

In order to eliminate the need for full monthly uploads, backup companies found ways to “recompile” an up-to-date full version in software, by merging the original full backup with the incrementals that followed it (the result is often called a “synthetic full”). This way, you’d only ever need to perform a single full backup, when you first install the software. This is often called “incremental forever” technology, since you’ll never need to perform a full backup again.
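
One way to picture that “recompiling” step is as a server-side merge of the original full backup with every incremental since. A minimal sketch, with illustrative names:

```python
# Sketch of "incremental forever": the server periodically folds the
# stored incrementals into a new "synthetic full", so the client never
# has to upload a complete copy again. Names are illustrative.

def synthesize_full(base_full: dict, incrementals: list) -> dict:
    """Merge the base with its incrementals into a fresh full backup,
    entirely on the server side - no new upload required."""
    merged = dict(base_full)
    for inc in incrementals:
        merged.update(inc)
    return merged

base = {"a.txt": "v1"}
incrementals = [{"a.txt": "v2"}, {"b.txt": "v1"}]

base = synthesize_full(base, incrementals)   # becomes the new baseline
incrementals.clear()                         # old incrementals can be pruned
print(base)                                  # {'a.txt': 'v2', 'b.txt': 'v1'}
```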

And in order to further minimize the size of the incremental uploads, backup companies found a way to analyze individual files and isolate only the changed portion of each file.

For example: If you’d changed a single slide in a PowerPoint presentation, it would only upload that one slide instead of the entire file.

These were called “block-level” incremental uploads.
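
A sketch of the idea: split each file into fixed-size blocks, fingerprint each block, and upload only the blocks whose fingerprints changed. The 4KB block size and SHA-256 hashing here are assumptions for illustration, not any particular vendor’s design:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative; real products tune or vary block size

def block_hashes(data: bytes) -> list:
    """Split a file into fixed-size blocks and fingerprint each one."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes) -> dict:
    """Return only the block indices (and their bytes) that differ."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return {i: new[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
            for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h}

old = b"A" * 8192 + b"B" * 4096
new = b"A" * 8192 + b"C" * 4096          # only the final block changed
print(sorted(changed_blocks(old, new)))  # [2] -> one 4KB upload, not 12KB
```

Comparing blocks at fixed offsets misses the case where bytes are inserted mid-file and everything after them shifts; rsync-style rolling checksums and variable-size chunking were developed to handle exactly that.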

Without all of these innovative technologies working together, businesses would not be able to safely operate at the rapid pace that they do today… and critical data loss incidents would certainly be an everyday occurrence.

Consider that a typical business user today might have about 5GB of critical business data on their local hard drive. In order to get 30 days’ worth of protection using a traditional backup methodology, they would have to copy over 150GB of data EACH MONTH!

But thanks to new developments such as “incremental forever” and “block-level” backups, you would only need to upload about 0.5GB to get the same result. That’s the equivalent of just 2 or 3 YouTube videos per day.
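
As a back-of-the-envelope check of those figures (the ~0.3% daily change rate is an assumption chosen to match the numbers above):

```python
# Back-of-the-envelope check of the figures above. The ~0.3% daily
# change rate is an assumption chosen to match the article's numbers.

full_size_gb = 5              # critical data per user
days = 30
daily_change_rate = 0.003     # assumed fraction of data changed per day

traditional = full_size_gb * days                          # a full copy daily
incremental_forever = full_size_gb * daily_change_rate * days

print(f"Traditional:         {traditional:.0f} GB/month")           # 150 GB
print(f"Incremental forever: {incremental_forever:.2f} GB/month")   # 0.45 GB
```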

A Guest Post by Storagepipe Solutions

Storagepipe Solutions has been a pioneer in the online backup and backup software industry since 2001.
