I was recently asked whether a company should invest in brick-level backup for Exchange 2010. This request came as quite a surprise because I hadn’t run into anyone who was interested in this kind of technology for a number of years. Curiosity got the better of me so I agreed to have a look into the issue.
The first order of business is to understand why the requirement for brick-level backup might exist. Generally it’s because a company wants to be able to take selective backups of one or more mailboxes rather than a complete database. I could understand why this would be the case in the days when we used the NT Backup utility or third-party products to back up to slow tapes. Even though the databases were much smaller than those often encountered with Exchange 2010 or Exchange 2013, backups could be excruciatingly slow. In addition, tapes failed, usually right at the end of a backup or, even worse, in such a way that an administrator might not notice the problem.
Neither Exchange 2010 nor Exchange 2013 supports tape backups, at least not using Windows Server Backup. Databases are now backed up to disk using Windows Volume Shadow Copy Service (VSS)-based utilities, and operations proceed much faster than before. Indeed, many commentators have made the case that databases protected by sufficient copies (at least three, preferably four) to maintain adequate redundancy in Database Availability Groups (DAGs) do not require a traditional backup regime. It’s undeniable that company policy might require backups to be taken for purposes such as long-term offsite storage, but the case can be made that the advent of the DAG has created a whole new environment within which to consider how and when to take backups.
Does the ability to take faster and less frequent backups eliminate the need for brick-level backups? Well, another thing to factor into the equation is the small matter of support. Microsoft’s policy is pretty straightforward.
“There are several third-party backup programs that can back up and restore individual mailboxes, rather than whole databases. Because those backup solutions do not follow Microsoft backup guidelines and technology, they are not directly supported.”
Microsoft is obviously not in favor of brick-level backup solutions. The reason for this stance might well be that Microsoft doesn’t provide any supported method of performing a brick-level backup or, possibly more critically, a restore operation that extracts information from a brick-level backup and merges the data seamlessly into a user’s mailbox. It’s easy enough to restore a complete mailbox from a full database backup, but much more complex to consider all of the issues that crop up when restoring data into a mailbox that already contains some of the same information. For example, how do you fix up a calendar meeting request so that it accurately reflects all of the attendee responses? How do you handle conflicts when data in the mailbox differs from that in the backup? And so on.
Because Microsoft doesn’t provide software vendors with supported methods to perform brick-level backups and restores, vendors have to come up with all sorts of innovative methods to access Exchange databases. For example, although I have no hands-on experience of the product, the online description of iBackup for Exchange indicates that it uses the Export-Mailbox and Import-Mailbox cmdlets (superseded by New-MailboxExportRequest and New-MailboxImportRequest from Exchange 2010 SP1 onwards) to write to and read from intermediate PSTs. The iBackup documentation goes on to say that a brick-level backup “is not a method for the complete backup or recovery of the Exchange database.” Quite so.
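To make that PST-based approach concrete, here’s a rough sketch of what export and import look like in the Exchange Management Shell. The mailbox name, file share path, and role assignee below are hypothetical placeholders, not details drawn from the iBackup product:

```powershell
# The account running these cmdlets needs the Mailbox Import Export role
# (not granted to anyone by default):
New-ManagementRoleAssignment -Role "Mailbox Import Export" -User "Administrator"

# Export a single mailbox to a PST on a network share (Exchange 2010 SP1 onwards);
# the FilePath must be a UNC path that the Exchange Trusted Subsystem can write to
New-MailboxExportRequest -Mailbox "Kim Akers" -FilePath "\\FS01\PST\KimAkers.pst"

# Later, bring content back from the PST into the mailbox
New-MailboxImportRequest -Mailbox "Kim Akers" -FilePath "\\FS01\PST\KimAkers.pst"

# Monitor progress of outstanding requests
Get-MailboxExportRequest | Get-MailboxExportRequestStatistics
```

Requests are processed asynchronously by the Mailbox Replication Service, which is one reason the approach doesn’t scale well across hundreds of mailboxes.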
I emailed the PR contact for iBackup to ask them what methods they endorse and received no response. In any case, although using cmdlets is certainly a supported method to access database content, I suspect that it would be horribly inefficient and slow if you had to process more than a handful of mailboxes. Indeed, as noted by Acronis, brick-level backups can be 20-30 times slower than a full database backup.
It’s also worth asking whether the desired functionality can be achieved using standard product features. For example, if all you need to do is take a backup of a single mailbox, why couldn’t you use the New-MailboxExportRequest cmdlet to export everything to one or more PSTs? And if you need to restore content from a database, perhaps you can use Exchange’s ability to mount a recovery database using a backup copy and then recover the data from it with the New-MailboxRestoreRequest cmdlet. It seems that this is what many vendors do, albeit with a nice wrapper around the cmdlets.
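As a sketch of that standard-feature route (the database name, server, paths, and mailbox below are all hypothetical), the recovery database flow looks something like this:

```powershell
# Create a recovery database pointing at the location where the backup
# copy of the database files has been restored
New-MailboxDatabase -Recovery -Name "RDB01" -Server "EX01" `
  -EdbFilePath "D:\Recovery\DB01.edb" -LogFolderPath "D:\Recovery\Logs"

# Mount the recovery database once the restored files are in a
# clean-shutdown state (run ESEUtil against them first if necessary)
Mount-Database "RDB01"

# Merge content from the backup copy of the mailbox into the live mailbox
New-MailboxRestoreRequest -SourceDatabase "RDB01" `
  -SourceStoreMailbox "Kim Akers" -TargetMailbox "Kim Akers"

# Monitor the restore
Get-MailboxRestoreRequest
```

The nice thing about this method is that the merge logic (including duplicate detection) is Exchange’s own, so the result is fully supported.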
Don’t get me wrong – I think that there are good solutions to examine in this space and innovation does exist. For example, Veeam Software offers an interesting utility called Veeam Explorer for Exchange, part of its Backup & Replication suite. You can use this tool to open Exchange databases and recover information, from individual messages to complete mailboxes. A 30-day free license is available from Veeam to allow you to test the software in your own environment and make sure that it meshes with your operational processes and whatever regulatory regime your company might operate under. As with any utility that can open and extract information from user mailboxes, you also need to pay attention to privacy and ensure that access is controlled at all times. Veeam is run by Ratmir Timashev, who previously ran Aelita Software before selling it to Quest (subsequently sold to Dell), so he knows the Exchange market. I am more positive about buying software from companies led by people who have a strong track record in the industry.
The company that asked the original question eventually decided to stick with a standard backup regime. I don’t pretend that brick-level backup is a bad concept. In fact, I’m sure that it has its place and can add value in the right circumstances. It’s just that it didn’t satisfy the requirements in this case, with Microsoft’s lack of support being the deciding factor. But I think the salient fact that most of what the company wanted to do could be accomplished using standard product features had an influence too. This just goes to show that a solution designed to solve a problem in one environment isn’t necessarily as valid or as useful given current technology. Hasn’t this always been the case?
Follow Tony’s ramblings on Twitter.