A year with a Fitbit Flex


I have been using a Fitbit Flex for a year. Well, I should say that I have used two Fitbit Flexes in the last year, mostly because I lost one while collecting a hire car soon after I started using this handy little device to track my activity levels.

I lost my Flex because the strap fastener is truly horrible. It is all too easy for it to come undone and allow the Flex to fall off, especially when you take off a coat or sweater. I’m sure that it’s not beyond the wit of men (or women) to come up with a better fastener. I also don’t think much of the way that the Flex measures sleep (time awake, time asleep, and so on), as the need to tap the device to tell it that you plan to go to sleep and tap again to tell it that you’ve woken up seems clumsy. I am not at my best just after waking up and often forget to tell the Flex that I am in fact awake. It then happily registers blissful sleep while I am working at my PC. Which might just be true.

But that ends the complaints, which is unusual when I discuss an electronic gadget (apart from the Bose QC20 in-ear noise cancelling earphones, which I think are just great, especially when travelling). Generally speaking, I like the Flex very much and it has succeeded in getting me to exercise more in the past year.

IT people have a horrible habit of being sedentary. It goes with the territory of sitting down to work with computers and when you’re sitting down, it’s all too easy to stay nice and comfortable, sipping your favorite beverage, and the hours trickle by. The best thing about devices like the Flex is the prompts they provide to get up and do something.

And the data too. Data is very important to IT people and even though the Flex is very much an entry-level activity monitor, it collects enough data to create and measure activity against targets and to accumulate statistics over time. Seeing the miles and kilometers mount up over the weeks provides the necessary motivation to get up and do something energetic daily.

Or maybe it’s the five LEDs that the Flex illuminates as progress is recorded towards the daily target. Doing enough to make all five lights flash as the target is attained becomes a goal in itself and an encouragement to take some more exercise.

All of the data gathered by the Flex is synchronized up to your Fitbit.com account and can be reviewed through a web dashboard. Here I find that I have walked a total of 2,805.63 km in the last year, or some 3,705,520 steps, which seems a lot. And that proves the benefit of being motivated by a little flashing wristband. I reckon that I probably walk 2.5 km/day just doing bits and pieces, so the additional distance walked because I have been motivated to meet the daily target is some 1,892 km.

When I walk, I typically use a pedometer running on my Nokia Lumia 1020. Because it’s GPS-based, its data is more accurate in terms of distance walked. The Flex operates on the basis of steps, an imprecise way to measure distance. You can input data from external sources into the Fitbit.com portal to update your statistics.

Overall, I can’t complain. I’ve enjoyed the walks and dropped 4 kg in weight. Now I am ready to use a more sophisticated activity monitor. Perhaps the Microsoft Band or even the new Fitbit Surge? Time to consider my options.

Follow Tony @12Knocksinna

Posted in Technology | Tagged | 2 Comments

Using the Exchange 2013 cmdlet extension agent to populate mailbox settings


For space reasons, this text was cut out of my Exchange 2013 Inside Out: Mailbox and High Availability book. I only found it again recently, so here it is…

Many properties can be set to control exactly how a mailbox functions, but some are more important than others. All mailbox properties can be manipulated through the Exchange Management Shell (EMS) and the most critical are exposed through the Exchange Admin Center (EAC). However, it’s easy to forget to update or set a property. Automation comes to the rescue in the form of Exchange’s cmdlet extension agents, a feature that first appeared in Exchange 2010. One of the standard agents is called the scripting agent and its purpose is to support the automation of common tasks such as mailbox creation. The most common use of the scripting agent is to update the properties of new mailboxes after they are created.

For example, if we create a new mailbox using EMS or EAC, its language and regional settings are not updated and the user will be prompted to provide these settings the first time that they access the mailbox with Outlook Web App. The scripting agent gives us an easy way to ensure that default language and regional settings are applied to new mailboxes and so avoid the need for the user to become involved in the process. If the default settings are not correct, the user can select new values through Outlook Web App options.
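The same settings can also be checked or applied for an existing mailbox directly with EMS. A quick sketch (the mailbox name is hypothetical):

```powershell
# Review the current language and regional settings for a mailbox
Get-MailboxRegionalConfiguration -Identity "Kim.Akers"

# Apply default values manually for that mailbox
Set-MailboxRegionalConfiguration -Identity "Kim.Akers" -Language "en-US" -DateFormat "dd-MMM-yy"
```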

The scripting agent is disabled by default. The agent uses an XML configuration file stored in the <install directory>\V15\Bin\CmdletExtensionAgents folder to understand what processing it must perform and when it is invoked. Exchange provides a sample configuration file called ScriptingAgentConfig.xml.sample that you can edit to add your instructions. The sample file contains a number of interesting examples with which you can experiment, but our purposes require only a very simple file that can be created with any text editor and named ScriptingAgentConfig.xml.
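On a mailbox server, the file can be created from the sample with a couple of lines of EMS (a sketch; the $env:ExchangeInstallPath variable is set by Exchange setup on the server):

```powershell
# Create the active configuration file from Microsoft's sample
cd "$env:ExchangeInstallPath\Bin\CmdletExtensionAgents"
Copy-Item .\ScriptingAgentConfig.xml.sample .\ScriptingAgentConfig.xml
# Then edit ScriptingAgentConfig.xml with any text editor
```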

The example ScriptingAgentConfig.xml shown below tells the scripting agent:

  • It should be invoked whenever the New-Mailbox or Enable-Mailbox cmdlets are run by any process. These cmdlets are used to create a new mailbox or enable a mailbox for an existing Active Directory account.
  • The specified code is invoked when the cmdlets have completed processing (the “OnComplete” event).
  • The “Name” parameter is to be retrieved from the provisioning handler (the framework that surrounds the scripting agent). The name is the identifier for the object being processed, in this case, a mailbox.
  • Three cmdlets are to be run. Set-Mailbox is used to set a default language value of “en-US”; Set-MailboxRegionalConfiguration sets the appropriate date and time formats; and Set-MailboxCalendarConfiguration sets the start of the working day.

<?xml version="1.0" encoding="utf-8" ?>
<Configuration version="1.0">
  <Feature Name="Mailboxes" Cmdlets="New-Mailbox, Enable-Mailbox">
    <ApiCall Name="OnComplete">
      if($succeeded) {
        $Name = $ProvisioningHandler.UserSpecifiedParameters["Name"]
        Set-Mailbox $Name -Languages "en-US"
        Set-MailboxRegionalConfiguration $Name -DateFormat "dd-MMM-yy" -TimeZone "Pacific Standard Time"
        Set-MailboxCalendarConfiguration $Name -WorkingHoursStartTime "08:30:00"
      }
    </ApiCall>
  </Feature>
</Configuration>

To enable the scripting agent so that it will process the code in its configuration file, we run the Enable-CmdletExtensionAgent cmdlet:

Enable-CmdletExtensionAgent "Scripting Agent"

This is an organization-wide setting, so it is obviously important to have the same configuration file in place on every mailbox server so that the same behavior happens throughout the organization. You do not have to put the file on CAS servers because Exchange 2013 only runs EMS cmdlets to execute operations such as mailbox creation on mailbox servers. After the configuration files are deployed and the scripting agent is enabled, Exchange will faithfully execute the specified commands to automate the finalization of mailbox settings. It’s also important that you test that everything continues to work as you deploy new cumulative updates for Exchange 2013.
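One way to push the edited file out is to loop through the mailbox servers with EMS (a sketch; the administrative share path assumes a default installation directory):

```powershell
# Copy the configuration file to every mailbox server in the organization
$target = "C$\Program Files\Microsoft\Exchange Server\V15\Bin\CmdletExtensionAgents"
Get-MailboxServer | ForEach-Object {
    Copy-Item .\ScriptingAgentConfig.xml -Destination "\\$($_.Name)\$target\"
}
```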

Because the scripting agent executes code without administrator intervention (and probably without any knowledge on the part of some administrators), it’s obviously important to make sure that the code works as intended before deployment. After all, it would be embarrassing to see an error each time one of your co-workers attempted to create a new mailbox. Errors in the scripting agent configuration file might prevent EAC or EMS from being able to complete an operation. Code should be developed and thoroughly debugged on a test system before it is deployed into production. Thereafter, checking the configuration file should be part of the “must check” list whenever a new version or update for Exchange appears, as there’s no guarantee that the syntax of a cmdlet will not change in the future.

When you’re developing the configuration file, you’ll probably make some mistakes or want to test different versions of code. To force Exchange to pick up a new version of the configuration file, disable and then enable the scripting agent.

Disable-CmdletExtensionAgent "Scripting Agent"; Enable-CmdletExtensionAgent "Scripting Agent"

All of this goes to prove that there is a mass of interesting things to be discovered if you poke around under the surface of Exchange. And in this case, the interesting thing might just save you some time…

The cmdlet extension agent is not available in Exchange Online. This is logical because the last thing you probably want in a massive multi-tenant environment is to have administrators playing around (in the most professional sense) with code attached to standard cmdlets. Perhaps Microsoft will provide an alternate method in the future to accomplish the same functionality for Office 365, but it’s not there now.


Posted in Email, Exchange, Exchange 2013 | Tagged , , | 1 Comment

Latest Office 365 SLA results prove robust performance


A recent question raised the issue of how well Microsoft has done against their vaunted 99.9% availability goal for Office 365. It’s a good question to ask because clearly all cloud services need to keep on proving that they can truly deliver a highly reliable and robust service if they are to have any chance of convincing on-premises customers to change platforms.

As it happens, Microsoft began to publish quarterly SLA results in 2013 as part of the information available through the Office 365 Trust Center, so I headed over there to have a look-see only to discover that the data was stale. I knew that various bits of Office 365 had experienced outages during 2014, such as the June 24 Azure Active Directory failure that caused a seven-hour outage for a subset of North American tenants, but a couple of well-publicized outages didn’t seem to be a reason for Microsoft to stop publishing SLA results.

A call to Microsoft established that they had redone the Office 365 Trust Center and I was looking at an obsolete page, which is a nice way of saying that Microsoft published the data but on a different page. I guess it’s the nature of web site maintenance that it is easy to leave stagnant artefacts in place. Companies don’t like this because the bad data reflects poorly on them, but it’s easy to see how this might happen given the complex nature of large web sites.

In any case, the real situation is that Microsoft reports quarterly performance for Office 365 on a worldwide basis against the 99.9% SLA on the Office 365 Trust Center. The most recent numbers are outlined in the table below:

Quarter      SLA outcome
Q1 2013      99.94%
Q2 2013      99.97%
Q3 2013      99.96%
Q4 2013      99.98%
Q1 2014      99.99%
Q2 2014      99.95%
Q3 2014      99.98%
Q4 2014      TBD

The results for Q4 2014 are still being tabulated and should be available around February 1, 2015.

The drop in the Q2 2014 performance shows how an outage impacts a quarterly result. However, Office 365 is a global infrastructure and while the June 24 outage lasted seven hours, it was restricted to part of a single region, so the impact is less than you might imagine because these results are calculated on a worldwide basis.
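A back-of-envelope calculation illustrates the dilution effect (the 5% figure for the affected user population is purely an assumption for illustration; Microsoft doesn’t publish that number):

```powershell
# A 99.9% SLA across a 91-day quarter allows roughly 2.2 hours of downtime
$quarterHours = 91 * 24                        # 2,184 hours in the quarter
$allowedHours = $quarterHours * 0.001          # about 2.18 hours at 99.9%

# A 7-hour outage that affects 5% of users (assumed) counts as just
# 0.35 user-weighted hours when measured on a worldwide basis
$weightedOutage = 7 * 0.05
$uptime = (1 - ($weightedOutage / $quarterHours)) * 100   # about 99.98%
```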

Even with the caveat that all cloud SLAs need to be treated with caution because they exclude problems caused by poor Internet connectivity or any site-specific issues, the numbers reported by Microsoft are pretty impressive and prove that Office 365 has matured into a very capable platform.

Google guarantees the same SLA for Google Apps and makes a dashboard available for all to see the current status of its apps. However, I can’t find any sign that Google publishes quarterly SLA results in the way that Microsoft does, so an easy comparison between the two platforms isn’t possible. I covered the lack of visibility for quarterly SLA data last May and Google does not appear to have done much to improve the situation since.


Posted in Cloud, Email, Office 365 | Tagged , , , | 4 Comments

Selecting the right compliance framework to use with Exchange


Exchange 2010 upped Microsoft’s game when it came to the out-of-the-box compliance features available in the server. Exchange 2013 builds on that foundation to refine matters through features such as in-place holds and integration with SharePoint 2013; increased integration is available within Office 365 as Exchange and SharePoint Online share features like Data Loss Prevention. However, compelling and cost-effective as the Microsoft offering undoubtedly is, a market still exists for third-party compliance products.

I am frequently asked to recommend one product over another. Invariably, I decline to do so on the basis that a simple recommendation cannot take into account all of the factors that drive a very complex subject. Instead of giving the questioner a one-product answer, I prefer to propose a framework that a company can use to figure out what’s best for them.

The framework I use is composed of some simple but profound questions. Here’s how I look at the various products:

  1. Cost. In other words, how much will it cost to run one solution over another?  Exchange is the big winner here because of the way that its functionality is integrated directly into the server and clients, but loses a little because of the need to deploy the latest software in order to make full use of its compliance features (for example, don’t expect to use Data Loss Prevention or site mailboxes unless you deploy Outlook 2013).
  2. Support. Overall, this is another positive point for Exchange, if only because it is much easier to deploy and support the single integrated solution rather than different software from different vendors. However, it might be that a company already has significant in-house expertise with another compliance product that offsets the Exchange advantage – or that cost-effective and expert support is available locally. This last point is really important as no product can meet business requirements if expertise is not available to support its deployment.
  3. Coverage. In other words, what material can be captured and preserved? Exchange is great for email and anything else that can be stuffed inside a mailbox; it is less impressive when other sources of important content are considered such as SharePoint, Lotus Notes, other email servers, web sites, file shares, and other databases. Exchange 2013 is better because of its tie-in with SharePoint 2013, but the problem of needing the latest versions rears its ugly head again. The advantage currently lies with third-party vendors because they have used their years of working in the field to steadily expand coverage of different repositories.
  4. Legal needs. These are dictated by the legal department and differ from company to company and geography to geography. The likely categories to be considered include immutability (no one can interfere with information that is held on systems), discovery (how information is retrieved and who can perform discovery), and preservation (how software can preserve information required to meet regulatory or legal requirements). New features have to be regarded with some caution until you’re sure how they fit into the compliance framework. Office 365 Groups provide a good example. These groups use special document libraries (in SharePoint Online) to hold files of interest to group members, but there’s no way to apply retention policies or other controls to the files nor have auditing or reporting facilities yet been made available.
  5. Expertise or industry focus. Some ISVs have been working in the compliance space for many years and have accumulated a huge amount of expertise in how information should be handled, specifically in particular industries. Their software is probably designed to handle common industry scenarios and expertise in how to exploit the software is likely to be more available than if you try to adapt general-purpose software to meet your needs. All in all, if you work in a regulated industry, your best option might be to use a company that specializes in that industry.
  6. Existing technology infrastructure. If a company uses a product to archive or capture information from other sources and that product supports Exchange, then the best option might be to leverage what’s already in place to incorporate Exchange. Given the current strength of Office 365, another factor to consider is how your choice functions in the cloud. Remember, buying add-on software is only the start of the journey. The software has to be managed and maintained over the long term too and that’s where the majority of expense might lie.

Looking from an Exchange-centric view, Microsoft is in a strong position. The current versions of Exchange support databases that are so large and so well protected that the idea of keeping everything online is viable. This doesn’t mean that stubbing, a technique that has served businesses well even if it makes Microsoft unhappy (see the “Ask Perry” video chat for their view on the matter), will go away anytime soon because the simple fact is that if a product solves a business problem then its technical implementation will be a secondary concern.  And anyway, ISVs like Symantec are advancing the state of their art too with developments such as the Enterprise Vault cloud service, similar in many respects to Microsoft’s Exchange Online Archiving service.

Whatever choice you make, keep an eye on the future and make sure that you don’t paint yourself into a corner. This means that the software you choose should be able to export and import data with maximum fidelity. The platform you select now might not be the one you want to use in five years.

The bottom line is that no simple answer exists for compliance. Put two lawyers in a room and ask them to define what compliance means for a company and you’ll wait a long time for an answer. The same is true of technologists… I guess.


Posted in Email, Exchange, Office 365 | Tagged , , , , , , | 4 Comments

Gmail as Gaeilge


I was charmed to learn yesterday that Google has added Irish (Gaeilge) to the list of supported languages for Gmail (the local news report is here). Not that I speak much Irish, even after having it drummed into my skull for 14 years at primary and secondary school (teaching techniques were primitive in the 1960s and 70s). But a small bit of grá (love) for the language lingers like it does for many other Irish people, even if we are only ever moved to attempt to use the language when the need arises to confuse foreigners (and ourselves, if the truth be known), mostly in bars and similar settings abroad. So off I went to select Irish as the language of choice for Gmail…

Selecting Irish as the preferred language for Gmail

After saving the selection, Gmail reinitialized and I was back in my school days, figuring out what all the translated terms meant. As you can see, it’s only the dates and text strings in the user interface that are translated – no attempt is made to translate the content of messages as they arrive, but perhaps that’s the next trick for Google. Some interesting things can be learned from the translation. For instance, what do you think this means?

Iontach, níl turscar ar bith anseo!

Well, we discover here that “turscar” is the Irish for “spam” because the equivalent in English is “Hurray. No spam found here!”

Gmail as Gaeilge

I’ve been down this path before. Microsoft launched an Irish language pack for Windows XP in 2005. I downloaded it and upgraded the computer that our kids used at the time (no one had individual laptops or iPads then) and said nothing. Cries of anguish erupted when they found that Windows had been given an Irish makeover!  It’s actually very interesting to see how long it takes to locate common user interface components when new labels are applied. After a few days the cries of pain were too much to endure and I removed the Windows language pack to revert to English.

A Windows 7 language pack is available for Irish and one is also listed in the set available for Windows 8.1. Somehow I don’t think I will repeat my previous experiment, but I will leave Gmail in Irish to see how I get on, at least for a little while.


Posted in Cloud, Email | Tagged , , , , | Leave a comment

A brief history of Exchange Time Management


It might just be me, but the sounds of bitter complaints by users whose calendars have been thrown into confusion by some combination of user error, Exchange server bug, and client mess-up appear to have quietened recently. At least, my mailbox is less full of sad tales of corrupted meetings or failed appointments. Things must be improving.

Office systems that include time management functionality have been around for a very long time and we have enjoyed the vicissitudes of their malfunctions for most of that time. Digital Equipment Corporation’s ALL-IN-1 Office Automation system offered scheduling features in 1982. Its V2.0 release was updated at the behest of the White House in 1984 to allow for minute-by-minute appointments for President Reagan and his senior staff.

Thirty years ago we had a much simpler environment to manage. Few people had email and all used wired terminals to connect to their accounts on timeshared computers. It was much easier to code the basics of time management such as setting up a meeting, sending out notices of that meeting, and handling the responses.

The first versions of Exchange used Schedule+ calendaring to maintain backward compatibility with Microsoft Mail. Once the first versions of Outlook (1997) came along to replace the original “Exchange Viewer” client, Exchange provided native time management. We were still in the wired era and the new challenge was interoperability. Not only between different email and calendaring systems, but even between different versions of Exchange. The different views of what an appointment or meeting was across various email servers contributed to some angst. The introduction of the iCalendar format has helped to solve the interoperability issue between different electronic calendars, but only relatively recently.

User expectations of time management were also growing as features such as delegate access were added. Multiple access to a single calendar is absolutely a wonderful idea from a user perspective but it creates many challenges, especially when you factor in different clients. Performance is also problematic when clients have to open multiple calendars. The worst example I have encountered is an administrator who was expected to manage the calendars of sixteen managers. Suffice to say that she had plenty of opportunities to brew coffee as Outlook struggled to cope with the load.

Mobile clients have a natural affinity to calendars. If you are on the road, you need to know what is in your calendar, and a mobile client that has no access to calendar is hamstrung. Early BlackBerry devices certainly could get to user calendars and cheerfully proceeded afterwards to wreak havoc on calendar items. We’re still seeing corrupt calendar items (“bad items”) dropped by the Mailbox Replication Service when mailboxes are moved on Exchange 2010 and 2013 servers. To be fair to RIM, some of the protocols used by Exchange were either difficult to understand (like MAPI) or badly documented. Microsoft now publishes full specifications on the different protocols and interfaces used by Exchange and that seems to have helped.

Telling someone about a protocol is one thing. Expecting them to use it properly is another. ActiveSync (EAS) is the best example. The EAS protocol documents are available to developers to help them work out how mobile clients should interoperate with Exchange. But a long history exists of mobile clients messing up user calendars, most notably the splendid efforts of several versions of Apple iOS to impose its will on the server, as in the infamous “calendar hijacking” issue in late 2012.

iPhone and iPad devices account for a hefty percentage of the mobile devices found in Exchange deployments, a fact that possibly influenced Microsoft’s decision to buy Acompli and its excellent iOS client over Thanksgiving. Apple and Microsoft have worked hard to improve the client-server communications between iOS and Exchange and apart from a recent hiccup with iOS 8.1, the updates have generally been better. Because of the work done on the server, the improvement is generally felt across the spectrum of mobile clients. Sure, bumps will happen along the road, but it is true that mobile clients are much better today at dealing with complex calendaring operations than they were in the past.

Thirty years of computer-based time management has taught us a lot. In terms of Exchange, I think you need to keep three basic principles in mind as you design or execute deployments in order to minimize the risk of things going wrong. These are:

  • The newer the client, the better. Generally speaking, the most recent versions of Outlook and mobile clients are better at dealing with all aspects of calendaring.
  • The newer the server, the better. Microsoft has poured effort into bullet-proofing Exchange so that malfunctioning clients can’t impact the server. Most of this work has happened since Exchange 2010 SP2. You benefit by using Exchange 2010 SP3 (with the latest roll-up update). Exchange 2013 and Exchange Online also include code to keep the demands of clients under control.
  • Be sensible in user placement. The basic idea is to keep users who work together on the same server. If someone needs to manage calendars, all the mailboxes for those calendars should be on the same server. You create risk once you distribute operations. For instance, having an administrator located on Exchange Online attempting to manage multiple on-premises calendars is a recipe for disaster. Sure, it will probably work. But note that word…

Going back to ALL-IN-1, even though we had hard-wired terminals connected to central computers, we had problems with time management at that point too. I got a support call to go to the White House once but never made it. All because I was an “alien.”  Oh well…


Posted in Cloud, Email, Exchange, Exchange 2013, Office 365 | Tagged , , , , , , , , | 3 Comments

Exchange Unwashed Digest – November 2014


The normal stream of new features, updates, and bugs flowed across the “Exchange Unwashed” desk during November. Some of the bugs were puzzling, others were infuriating, but everything was interesting – at least to me.

A clash between S/MIME and transport rules (Nov 27): Data Loss Prevention (DLP) was a big new feature in Exchange 2013 that has now been extended to SharePoint Online. DLP depends on special forms of transport rules to ensure that messages containing sensitive data don’t get outside an organization. As it happens, S/MIME messages might stop DLP working and so cause messages to be blocked. I never realized that this might happen until it did…

OWA functionality gap widens between Office 365 and Exchange 2013 (Nov 25): If you’re a user of Office 365, you’re probably aware that lots of new features have been popping up recently (People View, Clutter, and Groups). All great stuff, but these features aren’t in on-premises Exchange and might not ever feature in an on-premises release. Other UI tweaks have also popped up in Outlook Web App (OWA) that aren’t yet in on-premises Exchange, all of which underlines the growing functionality gap that has appeared between the cloud and on-premises versions.

Office 365 Groups problem exposes the seamy side of evergreen software (Nov 20). Another Office 365 story, this time one exploring how the rush to get new features to users and fulfill the promise of “evergreen” technology can have a downside. Office 365 Groups are nice (apart from the total lack of management features) but a security weakness was discovered after they were made available to users. The developers moved to fix the problem but didn’t communicate and the net result was that folks lost access to documents stored in the OneDrive for Business sites used by Groups. All a bit of a mess…

FAQ: Answers to common Office 365 Clutter questions (Nov 18). I really like the new Clutter feature now available in OWA for Office 365 but it’s clear that lots of people are struggling to understand how they might use it. So I put together this FAQ based on my own experience and a discussion with the developers. See what you think.

OWA updates in Office 365 help with Chrome showModalDialog issue but no joy for on-premises versions (Nov 13). Google shipped Chrome 37 and broke OWA and EAC, which use the showModalDialog method for common operations like attaching files to messages. Google hasn’t done much to help since September, but Microsoft has updated OWA (but only for Office 365) to remove the use of the method. It’s a help but overall still a mess.

Microsoft pulls Exchange updates to fix installer problem (Nov 11). We were all set to welcome Exchange 2013 CU7 when news emerged that Microsoft had pulled the update because a security fix might affect OWA. Given the criticism that has been leveled at Microsoft about product quality, the call was a good one. We now await CU7 sometime in December…

Clutter arrives to impose order on Office 365 mailboxes (Nov 11). This is the post to welcome the initial appearance of Clutter in Office 365 tenant domains that had signed up for “First Release.” Between this and the FAQ referred to above, you should get a pretty good view of how useful Clutter might be to you.

Forcing Exchange’s Admin Center to use a specific language (Nov 7). You know what it’s like. It’s a winter afternoon and you have nothing much to do, which leads you to think that you want to perform Exchange administration in Welsh or German or some other language. And then you find out that EAC will allow you to do just that. Which is nice…

Active Directory schema extensions now a feature of every Exchange update (Nov 5). There was a time when the thought of a schema update would reduce an Active Directory administrator to a gibbering wreck. But that was in 2000 or thereabouts. We’re more mature about these updates now. After all, just wait for a few months and a new schema update will come along…

iOS 8 ActiveSync problem causes out-of-date meetings (Nov 3). Ah yes, the sheer quality of iOS mail app upgrades is a source of wonder and bemusement. Wonder because Apple keeps on screwing up the ActiveSync connection to Exchange; bemusement because there’s no good reason why bugs keep on appearing in this part of iOS. There’s no conspiracy here at all. Apple and Microsoft are good friends. Move on please…

December brings us the holidays and all that good feeling towards friend and foe. I hope that the month brings a similar mix of events to report. Actually, I’m pretty sure that it will!


Posted in Cloud, Exchange, Office 365 | Tagged , , , , , , | 2 Comments

Brick backups and Exchange – not recommended


I was recently asked whether a company should invest in brick-level backup for Exchange 2010. This request came as quite a surprise because I hadn’t run into anyone who was interested in this kind of technology for a number of years. Curiosity got the better of me so I agreed to have a look into the issue.

The first order of business is to understand why the requirement for brick-level backup might exist. Generally it's because a company wants to be able to take selective backups of one or more mailboxes rather than a complete database. I could understand why this would be the case in the days when we used the NT Backup utility or third-party products to back up to slow tapes. Even though the databases were much smaller than those often encountered with Exchange 2010 or Exchange 2013, backups could be excruciatingly slow. In addition, tapes failed, usually right at the end of the backup or, even worse, in such a way that an administrator might not notice the problem.

Neither Exchange 2010 nor Exchange 2013 supports tape backups, at least not using Windows Server Backup. Databases are now backed up to disk using Volume Shadow Copy Service (VSS)-based utilities and operations proceed much faster than before. Indeed, many commentators have made the case that databases protected by sufficient copies (at least three, preferably four) to maintain adequate redundancy in Database Availability Groups (DAGs) do not require a traditional backup regime. It's undeniable that company policy might require backups to be taken for purposes such as long-term offsite storage, but the case can be made that the advent of the DAG has created a whole new environment within which to consider how and when to take backups.
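By way of illustration, building out that kind of database redundancy is a matter of a few Exchange Management Shell commands. This is only a sketch: the server names (EXCH02, EXCH03) and database name (DB01) are invented for the example.

```powershell
# Add a second and third copy of database DB01 on other DAG members,
# with activation preferences to control which copy is favored on failover
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EXCH02 -ActivationPreference 2
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EXCH03 -ActivationPreference 3

# Check that all copies are healthy and that copy/replay queues are low
Get-MailboxDatabaseCopyStatus -Identity "DB01\*" |
    Format-Table Name, Status, CopyQueueLength, ReplayQueueLength
```

Keeping an eye on the copy and replay queue lengths is the quickest way to confirm that replication is keeping up and that the copies really do provide the redundancy being relied upon in place of traditional backups.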

Does the ability to take faster and less frequent backups eliminate the need for brick-level backups? Well, another thing to factor into the equation is the small matter of support. Microsoft’s policy is pretty straightforward.

There are several third-party backup programs that can back up and restore individual mailboxes, rather than whole databases. Because those backup solutions do not follow Microsoft backup guidelines and technology, they are not directly supported.

Microsoft is obviously not in favor of brick-level backup solutions. The reason why this stance exists might well be that Microsoft doesn’t provide any supported method of performing a brick-level backup or, possibly even more critical, a restore operation that extracts information from a brick-level backup and merges the data seamlessly into a user’s mailbox. It’s easy enough to restore a complete mailbox using information contained in a complete database backup but much more complex to consider all of the issues that might crop up when faced with the problem of restoring data back into a mailbox containing some information. For example, how do you fix up a calendar meeting request so that it accurately contains all of the attendee responses? How do you handle conflicts when data in the mailbox differs from that in the backup? And so on.

Because Microsoft doesn’t provide software vendors with supported methods to perform brick-level backups and restores, they have to come up with all sorts of innovative methods to access Exchange databases. For example, although I have no hands-on experience of the product, the online description of iBackup for Exchange indicates that it uses the Export-Mailbox and Import-Mailbox cmdlets (replaced by New-MailboxExportRequest and New-MailboxImportRequest from Exchange 2010 SP1 onwards) to write to and read from intermediate PSTs. iBackup goes on to say that a brick-level backup “is not a method for the complete backup or recovery of the Exchange database.” Quite so.

I emailed the PR contact for iBackup to ask them what methods they endorse and received no response. In any case, although using cmdlets is certainly a supported method to access database content, I suspect that it would be horribly inefficient and slow if you had to process more than a handful of mailboxes. Indeed, as noted by Acronis, brick-level backups can be 20-30 times slower than a full database backup.

It’s also worth asking whether the desired functionality can be achieved using standard product features. For example, if all you need is a backup of a single mailbox, why not use the New-MailboxExportRequest cmdlet to export everything to one or more PSTs? And if you need to restore content from a database, perhaps you can use Exchange’s ability to mount a recovery database using a backup copy and then recover the data from it with the New-MailboxRestoreRequest cmdlet. It seems that this is what many vendors do, albeit with a nice wrapper around the cmdlets.
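As a sketch of that built-in route, the whole cycle looks something like the following. The mailbox name, file paths, and database names here are invented for illustration, and these cmdlets only run in the Exchange Management Shell against a live Exchange organization.

```powershell
# "Backup" of a single mailbox: export it to a PST on a network share
# (the FilePath must be a UNC path the Exchange Trusted Subsystem can write to)
New-MailboxExportRequest -Mailbox "Jane Doe" -FilePath "\\FILESRV\PSTs\JaneDoe.pst"

# Later, to recover data: create and mount a recovery database
# pointing at the EDB file and logs restored from a backup copy
New-MailboxDatabase -Recovery -Name RDB01 -Server EXCH01 `
    -EdbFilePath "D:\Recovery\DB01.edb" -LogFolderPath "D:\Recovery\Logs"
Mount-Database RDB01

# Merge the recovered content back into the user's live mailbox
New-MailboxRestoreRequest -SourceDatabase RDB01 `
    -SourceStoreMailbox "Jane Doe" -TargetMailbox "Jane Doe"
```

The point is that this flow uses only supported, in-box cmdlets, which is precisely the property the brick-level products lack.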

Don’t get me wrong – I think that there are good solutions to examine in this space and innovation does exist. For example, Veeam Software offers an interesting utility called Veeam Explorer for Exchange, part of their Backup and Replication suite. You can use this tool to open Exchange databases and recover information from individual messages to complete mailboxes. A 30-day free license is available from Veeam to allow you to test the software in your own environment to make sure that it meshes with your operational processes and whatever regulatory regime your company might operate under. As with any utility that has the ability to open and extract information from user mailboxes, you also need to pay attention to privacy and ensure that access is controlled at all times. Veeam is run by Ratmir Timashev, who previously ran Aelita Software before selling it to Quest (subsequently sold to Dell), so he knows the Exchange market. I am more positive about buying software from companies led by people who have a strong track record in the industry.

The company that asked the original question eventually decided to stick with a standard backup regime. I don’t pretend that brick-level backup is a bad concept. In fact, I’m sure that it has its place and can add value in the right circumstances. It’s just that it didn’t satisfy requirements in this case as Microsoft’s lack of support was the deciding factor. But I think the salient fact that most of what they wanted to do could be accomplished using standard product features had an influence too. This just goes to show that a solution designed to solve a problem in a certain environment isn’t necessarily as valid or as useful given current technology. Hasn’t this always been the case?

Follow Tony’s ramblings on Twitter.

Posted in Exchange, Exchange 2010, Exchange 2013 | 7 Comments

ePublishing for Technology: a new book on Exchange 2013 High Availability


Time is both the greatest enemy and greatest friend of technical books. I know that seems like a statement which makes little sense, but truth lurks in these words.

We all know that technology now evolves at an ever-increasing cadence. The upshot is that the traditional publishing cycle struggles to keep up. In the past, an author would have time to consider several betas of a new product and then the final version before settling down to write text that (after technical and copy editing) would be accurate and valid for a couple of years. The publishers were happy because the investment they made in bringing a book to market could be recouped over that period; authors were happy because the hundreds of hours of work required to create the text would be compensated for through royalty payments.

The cloud has had a terrific effect on all of us, mostly positive as new features and functionality are revealed every week. But this makes it really difficult for authors who write about technology because their text ages dreadfully quickly, even as the first printed copies of books appear.

Take Exchange 2013 for example. Paul Robichaux and I declined to write our “Exchange 2013 Inside Out” books based on the first (RTM) version because history had taught us the wisdom of waiting at least six months to see how a new server functioned when revealed to the harsh judgment of customer deployments. Even though some kudos can be gained through first-to-market status, books rushed out to coincide with the first availability of a new product are invariably flawed, and in the case of Exchange, they can be horribly flawed.

So we worked away in the background to create and hone content, going through the exacting editorial process managed by Microsoft Press to ensure that the books were as good as a team of technical reviewers, copy editors, indexers, design artists, and series editors can deliver. We eventually ended up with material that is up to date with Exchange 2013 CU2, but that’s five cumulative updates ago!

A lot has happened since CU2 appeared. I would argue that the content of Exchange 2013 Inside Out: Mailbox and High Availability and Exchange 2013 Inside Out: Connectivity, Clients, and UM are still valuable resources because although some details have changed since Paul and I stopped writing in September 2013, the concepts and general descriptions of technology have not. Some of the content could be rewritten now because we have more knowledge about a topic or Microsoft has made decisions that affect how we might describe things. Modern public folders are an example as the scalability issues that have forced Microsoft to focus on some reimplementation and tuning in this area were not known when I wrote that chapter and I would definitely have some different advice to offer today.

Still, the books are valuable resources and have largely stood the test of passing cumulative updates as long as you treat them as a starting point for understanding Exchange and supplement what you find in the Inside Out series with information published since Microsoft released Exchange 2013 CU2.

Which brings me to “Deploying and Managing High Availability for Exchange 2013”, a new eBook authored by a high-powered trio of very experienced Exchange MVPs: Paul Cunningham (“Exchange Server Pro”), Michael Van Horenbeeck (“Van Hybrid”), and Steve Goodman (all-round nice guy and co-host of the regular UC Architects podcast). That’s a pretty good line-up of talent to focus on a topic like High Availability.

Spread over 210 pages of content plus a useful 43-page lab guide, the book addresses the following areas:

  • Client Access server High Availability
  • Mailbox Server High Availability
  • Transport High Availability
  • High Availability for Unified Messaging
  • Managing and Monitoring High Availability
  • High Availability for Hybrid Deployments

The best thing about the book is its practical nature. The content is approached from the perspective of an administrator who needs to get things done and there are lots of examples included to show you what commands need to be executed to perform different tasks.

The interests of the authors shine through too. Paul has long been a dedicated fan of Database Availability Groups (DAGs), so the coverage of how to put a DAG into operation is detailed and exact. Michael’s interests cover hybrid connectivity (obviously), but also the murky world of Managed Availability, so there’s plenty on that topic. And I suspect that Steve had something to say about certificates and their proper use within an Exchange deployment.

You can buy an electronic (PDF or EPUB format) copy of the book here. The cost is a very reasonable $34.99 (check the site for a discount). That might seem high for an eBook, but consider how much you have to pay for an hour of a consultant’s time and it makes perfect sense to acquire some knowledge by buying a book.

No book is perfect and I am sure that people will find points on which they disagree with the authors in this book. But that’s missing the point. A book about technology should never be deemed to be the last word on a subject, especially when dealing with servers that are deployed into a huge variety of different on-premises environments where one implementation differs from the next. It is the role and responsibility of an administrator to accumulate knowledge from books like this and then put that knowledge to work by placing it in context with the operational environment and business needs of their company. This book provides a lot of useful information that will help people immediately but it is important that readers surround the knowledge contained in the book with their own experience, background, and opinions.

And because no book is perfect, it’s good to know that this eBook can be updated pretty quickly if new information comes to hand. For example, the thinking around DAGs evolved significantly with the introduction of the simplified DAG in Exchange 2013 SP1. It will evolve again when Microsoft allows witness servers for multi-site deployments to be located in Azure early next year. And so on.

I believe that the future for technology books is not in the printed form. Sure, we will continue to have some books that are suitable for printing, but I think that the vast bulk of the market for books covering commercial application servers like Exchange will soon be in electronic format. Given the release cadence, it just makes sense.

Follow Tony @12Knocksinna

Posted in Exchange, Exchange 2013 | 2 Comments