Controlling problematic iOS6.1 devices


Reflecting the large number of iPhones and iPads that are connected to Exchange mailboxes via ActiveSync, I received a lot of email after I posted a note on February 7 reporting the problems with excessive transaction log generation that Exchange servers experience after iOS devices are upgraded to iOS 6.1. Microsoft’s formal knowledge base article (KB2814847) on the topic is also available for your reading pleasure. Microsoft offers three ways to resolve the problem, including the creation of a custom throttling policy, which I do not cover here.

Ever since the problem appeared, people have been asking what to do to control these pesky iOS 6.1 devices. It can be a delicate matter because senior management are often fans of Apple devices and it wouldn't go down well if IT suddenly imposed a blanket ban on iOS 6.1. The stress on Exchange would certainly ease if the rate of transaction log generation were reduced, but the stress level of iPhone users might increase if they received a message from Exchange telling them that their device is blocked.

Apparently Apple is responding by rushing a new build (6.1.1) out to address the issues noted with Exchange as well as some other problems with 3G performance and degraded battery life.  The new version is being directed to iPhone 4S devices first. If you don’t want to wait for the new build to get to your iOS devices and are interested in the details of how to use ActiveSync device access rules to control access to iOS 6.1 (and other devices), fellow MVP Paul Cunningham has written up the necessary steps to put the blocks in place. A big health warning applies before any block is imposed: test, test, and test again.

Some correspondents have found that they can handle the problem by deleting the device partnership and forcing a complete resynchronization and indeed, this is one of the suggestions listed in KB2814847. It is reasonable to assume that this step causes the iOS mail app to use different code to synchronize the entire mailbox rather than just the meeting requests that seem to lie at the root of the problem.

A device partnership essentially links a mailbox and a mobile device and allows Exchange to track what happens with the device. You can see details of a partnership using the Get-ActiveSyncDevice command (works with Exchange 2010 and 2013) and you can remove a partnership with the Remove-ActiveSyncDevice command. Two small problems get in the way. First, you have to pass the identity of the partnership that you want to remove. Second, many users are associated with multiple partnerships because they use (or have used) multiple devices to synchronize with Exchange via ActiveSync.
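For example, to see the raw partnership objects that exist for a single mailbox, a command along these lines should do the job (the mailbox name is purely illustrative):

Get-ActiveSyncDevice -Mailbox TRedmond | Format-Table Identity, DeviceId, DeviceType

Each entry returned represents one partnership, so a user who has synchronized several phones and tablets over the years will show several rows.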

Of course, forcing every single iOS device to resynchronize is neither good for user blood pressure nor your server, so it’s good to know what mailbox is linked to the problem. One good way of doing this is to run the Exchange User Monitor (ExMon) utility to monitor the demand that mailboxes are making on a server. The problem mailboxes will be easily seen as they are the ones that consume far more resources than any other mailbox. Another good suggestion is to use Microsoft’s Log Parser studio (this post provides good guidelines for what you need to do) to extract information from the IIS logs to focus on the misbehaving clients. Others have found that the ActiveSync Report script posted on EHLO is useful.

Whatever method you use to identify the mailboxes, take a note of their names and then run the Get-ActiveSyncDeviceStatistics command to confirm that they are using iOS 6.1 devices. If they are not, the excessive mailbox activity can only be accounted for by the recent insertion of a new set of batteries. For example, this command returns a list of the ActiveSync partnerships that are known for my mailbox:

Get-ActiveSyncDeviceStatistics -Mailbox TRedmond | Format-Table DeviceID, DeviceType, DeviceOS

DeviceOS will start with “IOS 6.1” for any device running iOS 6.1. DeviceID contains the identifier for the partnership that we need to remove, so an amended command will remove the partnership. Get-ActiveSyncDeviceStatistics returns all known partnerships for the mailbox, so the filter is absolutely needed to remove just the partnerships associated with iOS 6.1:

Get-ActiveSyncDeviceStatistics -Mailbox TRedmond | ? {$_.DeviceOS -match "iOS 6.1"} | Remove-ActiveSyncDevice

A polite phone call to the user might be in order to avoid the frantic help desk call that will ensue when they realize that their device needs to resynchronize.
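If the IIS logs or ExMon turn up more than one offender, the same check-and-remove step can be wrapped in a simple loop. This is only a sketch: the mailbox names are hypothetical, and it's worth adding -WhatIf on a first pass so that you can see exactly which partnerships would be removed:

$ProblemMailboxes = "TRedmond", "JSmith"
foreach ($Mbx in $ProblemMailboxes) {
    # Remove only the iOS 6.1 partnerships for this mailbox
    Get-ActiveSyncDeviceStatistics -Mailbox $Mbx | ? {$_.DeviceOS -match "iOS 6.1"} | Remove-ActiveSyncDevice -Confirm:$false
}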

Microsoft is also struggling with the same operational concerns over iOS 6.1 devices in its Exchange Online service and has implemented throttles to ensure that problematic devices are blocked. It seems like the right approach to take: never let badly functioning devices (for hardware or software reasons) compromise the overall stability of a service.

Follow Tony @12Knocksinna


Google flogging a dead horse with Motorola pager patent


For those who follow patent matters, the spat between Google and Microsoft revolving around whether Exchange ActiveSync infringes a Motorola pager patent dating from August 1995 has moved on to Germany, where an associated case is being heard in Mannheim. Google acquired the patent along with many others when it acquired Motorola Mobility last year.

The opinion of most observers is that Google is throwing good money after bad in pursuing this case. I’m not an impartial bystander as I was an expert witness for Microsoft in the High Court hearing in London last December, a case that Google lost very badly. Latest reports indicate that things do not seem to be going any better for Google in Germany (this post includes the full text of the U.K. judgment). Google lost another case against Microsoft involving 13 different patents in a U.S. court yesterday, so it seems like their lawyers are not having a good run.

Going back to the U.K./German case, my feeling has always been that you could not reasonably expect a patent describing how to synchronize settings and messages across two or more pagers to be applied to the kind of synchronization that happens in email, specifically that which occurs when Windows Phone 7.5 devices synchronize mailbox contents with Exchange 2010 (for that was the scenario that the trial covered). Google said that their patent covered the method used for email synchronization as well as how Lync and Messenger synchronize user statuses between devices.

Other issues were covered in the trial, such as whether another Motorola invention and patent covering an “electronic wallet” constituted prior art and what could be done with the protocols used by other pager manufacturers to understand whether Motorola’s claims were obvious. I spent about four hours giving evidence with the most challenging (or amusing) question being that posed by Daniel Alexander QC when he probed my relationship with Microsoft in an attempt to undermine my credibility as a witness (perfectly understandable). This extract from the court transcript describes the interchange as Mr Alexander quoted from the foreword to my Exchange 2003 book:

Q: “I want to read a little bit from the foreword to it by Dave Thompson, who is the Vice-President of the Exchange business unit. He says of you “Tony has more history with Exchange than anyone else I know outside Microsoft.  He did not let me down.  His ten-year love affair with Exchange and his straightforward style meant that I got the complete picture.”  Would it be fair to say that you had a ten-year love affair with Exchange?

A.  Well, he is factually wrong on that because I could not have had a ten-year love affair at that point.  It was probably an eight-year love affair given the dates of the product.  Love affair?  It is a silly phrase that they used.  I would not say…

Q.  Do not worry, I am not holding you to it, but it would not be unfair to describe you as a Microsoft Exchange enthusiast?

A.  I think if you read my recent articles on the topic, specifically those published in November 2012, there are some people in the Exchange group in Redmond who probably have pictures of me on the wall and are throwing darts at it.

Q.  We all fall out with our lovers!  (Laughter)

Quite. I’m not so sure about always falling out with our lovers but what’s certain is that Microsoft and Google are well and truly fallen out at this point when it comes to many topics, including email. Recent developments such as Google’s “Winter Cleaning” that dropped ActiveSync support for Gmail (something that only really only affects Windows Phone users) and Microsoft’s new privacy-centric offensive that focuses on how Gmail uses email content to decide what ads to display are only two skirmishes in the campaign.

Personally, I think that if you use a free email service, you have to expect to pay for the facility in some way. The ads that I see in Gmail seem relatively harmless and I don’t keep anything sensitive in Gmail or use it for business purposes. If you don’t want the email system to “use” the deep and wonderful thoughts that you’ve committed in messages for advertising, use a paid-for system. Microsoft must agree because they’d prefer you to use Office 365 rather than Outlook.com. Such is life.

Follow Tony @12Knocksinna


Thoughts on Exchange’s new cumulative update model


Today’s announcement by Microsoft of a new approach to updates for Exchange is welcome in many respects. A new system of cumulative updates issued on a quarterly basis and applied to servers via Exchange’s B2B (build to build) installation program will be used to keep on-premises deployments updated at the same rate as the cloud-based Exchange Online. Some questions are inevitable about why Microsoft would move to this model. Here’s my take on what’s happening.

Cumulative updates are like the older roll-up update packages for Exchange 2007 and Exchange 2010 in that they deliver a cumulative set of bug fixes and enhancements that, when installed, bring a server up to date with the latest software. In addition, cumulative updates will be increasingly used to deliver new functionality in line with the way that features "light up" inside Exchange Online. The precedent of using cumulative updates to deliver new functionality and fixes has been set by Lync, and as Microsoft's blog says, both Lync and SharePoint will use the new update model.

Over the last eighteen months, Exchange has had an unfortunate record with the quality of roll-up updates. Some of the problems were introduced by third-party software components, as in the case of the WebReady document viewing modules that introduced a security vulnerability; others were due to Microsoft's own deficiencies, as in the failure to communicate between engineering groups over the MMC/IE problem or the rush to issue a fix for the WebReady problem with unsigned code.

Microsoft is acutely aware of the issue caused by poor quality and the impact on Exchange’s reputation, which isn’t helped by some of the shortcomings that are painfully obvious in the Exchange 2013 RTM release.

Microsoft’s response breaks down into three actions. First, they have reintegrated the code libraries used to generate Exchange Online as used in Office 365 and the on-premises version of Exchange. Second, for Exchange 2013 onwards, they will issue regular cumulative updates for the on-premises version that match the slipstreaming of new features into Office 365. Third, the previously separate sustaining engineering team that was responsible for the production of roll-up updates and service packs has been merged with the mainline development group to form a single team that is now responsible for the development and maintenance of Exchange. All seem reasonable steps to take, although some will ask why they haven’t happened before.

The code base used to create Exchange Online and on-premises Exchange began to diverge from Exchange 2010 SP1 onward, when a separate code branch was created for the software that runs in the Office 365 datacenters. Since then a gradual gap in functionality has grown between the two branches, where new features appear in one platform and are exposed to customers before being available in the other. For example, the cmdlets to control where delegate-sent messages are stored are available in Exchange 2010 SP2 RU4 but not yet in Exchange Online. On the other hand, features exist in the datacenter that aren't seen on-premises. If you doubt this, just look through the TechNet documentation for Exchange and observe the number of occasions where cmdlet parameters are "for Microsoft use only".

Unifying the two branches makes eminent sense. In fact, it’s strange that such a divergence was allowed to happen. Obviously, it is easier to fix problems in one place rather than two; new features should appear consistently across the cloud and on-premises platforms (important in hybrid environments), and engineering effort can focus on driving quality rather than worrying about development for multiple delivery vehicles. This work has already started, so when Exchange Online is upgraded to use Exchange 2013, feature parity should exist from then on across the two platforms. It also means that it’s easier to make sure that the finer points of new features are fully covered, such as making sure that everything is localized.

Having to deliver updates that support hundreds of thousands of Exchange Online mailboxes and on-premises customers alike should increase product quality too. I cannot see how Microsoft would allow Exchange 2013 updates to be released to customers if they aren’t of sufficient quality to run without problems inside Office 365. Hopefully, Microsoft will seize the opportunity to test updates for a couple of weeks inside Office 365 before they are released to on-premises customers so that they can then say that the code now released has run without problems in the datacenters.

Of course, Office 365 does not resemble the operational environment of any on-premises customer, not least because of some of the integration that’s done on-premises to run Exchange alongside third-party or home-grown code. It’s also fair to say that some unique hybrid, co-existence, or migration scenarios will not be tested – that work still has to be done after an update is available and before any new code is introduced into a production environment. However, even with these caveats, the fact that code is tested inside Office 365 should give on-premises customers a higher degree of confidence that an update will in fact work and work well.

The cadence of new feature release is becoming faster because that’s the way that the cloud works and Microsoft has to respond for Office 365 to remain competitive. New features have appeared in Exchange 2010 roll-up updates in the past so this is hardly news (think of the change to the way the Managed Folder Assistant processes calendar and task items in Exchange 2010 SP2 RU4). The big difference is that new features will appear in all updates rather than intermittently.

Which raises the question: what happens with service packs? The word is that Microsoft will continue to ship service packs for Exchange, if only to provide milestones for customers to incorporate into their planning for software refreshes. Service packs are likely to be collections of cumulative updates bundled with perhaps a documentation refresh and possibly some big new functionality or a refresh of an existing feature. For example, Outlook Web App as delivered in Exchange 2013 isn't the most functional client that Microsoft has ever built and it is in line for a major overhaul to add missing features like a movable reading pane and support for public folders and site mailboxes. An Active Directory schema update to support a new feature is another element that might warrant a service pack. Microsoft's announcement says that an update might include an Active Directory schema change, but I can't imagine that enterprise customers will welcome the prospect of frequent schema updates. Then again, perhaps those who run Active Directory have gotten over their fear of schema updates.

As to the third change, it makes a whole heap of sense to create a single integrated development group that owns the responsibility for development, maintenance, and support of Exchange for both on-premises and cloud platforms. I’m sure that the two previous teams worked closely together, but once you have multiple teams involved in any effort, there’s always a lingering suspicion that problems can arise at the join between the teams.

Will the changes drive additional quality and restore Exchange back to the point where it once was? Creating a common code base and driving a single release across cloud and on-premises is sensible and should help to eliminate issues that should never surface outside Microsoft. Making new features available to customers faster is good if you want the features. Others who are quite happy with the functionality of Exchange will worry about the impact of new features on administrators, help desk, and end users, so I guess your view of this change is driven by how you think new functionality should be introduced. I doubt that this change will improve quality, but I do think that the new integrated structure for Exchange will help.

The new strategy does raise some worrisome issues. Support is an obvious concern. Microsoft says:

“A CU will be supported for a period of three (3) months after the release date of the next CU. For example, if CU1 is released on 3/1 and CU2 is released on 6/1, CU1 support will end on 9/1.”

But few large on-premises deployments of Exchange have the capacity or capability to handle the testing and installation of new updates to Exchange four times a year. The fact that the support window closes just three months after the release of a new cumulative update seems to be too quick. It certainly encourages customers to keep Exchange updated, but perhaps it’s at the expense of introducing too heavy a load for some. A cynic or conspiracy theorist might speculate that this is simply a way for Microsoft to convince on-premises customers that now is the time to move to the cloud when these concerns simply go away. Sure, other concerns come into play when using a cloud platform, but that’s not important right now.

Another problem is that a cumulative update might overwrite customized elements within your deployment. It’s nice to have the opportunity to customize that OWA log-in screen again, but not so if you have to do it four times annually.

Finally, I’m not 100% clear on what happens with security updates. Microsoft says:

“The security updates released will be CU specific and will contain all of the fixes available at the time of release for a given CU in a single cumulative package. In the event that multiple releases have been made available for a given CU, only the most recent version of the security update package will be required to be installed and it will contain all previous fixes as well.”

Reassuring enough… but what happens when a security hole is found in Windows and a patch is rushed out? Will it be tested with the latest CU or the just-about-to-be-released CU? And what happens when a security hot fix interferes with a component in a CU in such a way that it stops some feature of Exchange from working? I'm sure that Microsoft has an answer, but it's not yet clear to me what that answer might be.

Of course, nothing is certain when you deal with software and Microsoft will be judged on results, not grand plans. Exchange is suffering from a quality problem today. We shall soon have the chance to judge whether the new cumulative update model works when Microsoft issues the first update for Exchange 2013. It can’t come fast enough.

Follow Tony @12Knocksinna


Selecting a companion device for my PC


It’s time to find a more portable device than my current laptop, an HP Envy 17 that I like very much for its design, appearance, large screen and expandability. I replaced the two 500 GB 5,400 rpm drives that came with the Envy and replaced them with 256 GB SSDs and the PC just screams along no matter what’s thrown at it, including several virtual Windows 2012 servers running Exchange 2013.

But the Envy is heavy. In fact, it's a brute that won't fit into normal laptop carrying cases. I had to find a pull-along Tumi laptop case to transport the Envy with some reasonable degree of protection (and protect my aching back). And when the case is stuffed with the AC power adapter and various bits and pieces like a mouse, the result is a real heavyweight.

I need a much lighter and more portable device to bring on the road. Because of what I do, the device has to run Windows and Office, so that’s the starting point.

I bought a Windows RT Surface for my son last November. He likes the lightness and 10-hour battery life and finds that the Touch cover is acceptable for informal typing. Documents and presentations need a better input device, and a USB keyboard provides a good solution. It would be nice if the Surface had two USB ports, as that would allow the use of both an external keyboard and a mouse. Fortunately, one of the USB hubs that I've accumulated over the years has solved that problem.

Windows RT has many good features, but the big issue that has to be overcome before I could use it permanently is the lack of Outlook. Yes, I could run Outlook Web App to connect to Office 365, but I'm just too used to Outlook with all its limitations and shortfalls. The devil you know… An excellent post by Hal Berenson speculates that Microsoft has Outlook for RT but doesn't want to ship it because Outlook's real value is its extensibility. Probably none of the add-ons for Outlook would run on RT, so you'd be left with bog-standard Outlook; I'd be happy with that, but many would probably not be so content.

I am interested in the Windows Surface Pro. It seems to tick many of the boxes for features that I need and I would not be upset with the 5-hour battery life claimed for the device. I like the RT’s build quality and assume that the Pro will deliver much of the same. Some reviews have remarked that the i5 CPUs run hot and there could be a lot of fan noise. I suspect that some of this FUD originates from PC vendors who aren’t thrilled about Microsoft’s foray into PCs. In any case, we shall soon know after production Surface Pros go into general use. The reviews published after the embargo period expired have not been great for the Surface Pro, but it seems to me that a lot of people have treated the Surface Pro as a replacement PC or as an ultrabook rather than a companion device, which is what I have in mind.

When it comes to PC vendors, I have a natural affinity for HP. This doesn’t mean that HP wins every deal in our house as we have a MacBook Air and Acer and Toshiba laptops in use plus an array of iPads and iPhones. But I always look at HP to see what the PC business group is doing. Last week I spoke at a conference in Dublin and had the chance to handle the new HP ElitePad 900 and also look at the HP Envy X2. The X2 is aimed at consumers while the ElitePad is very much a business device – the differences being cost, the materials used, and the fit and finish, all of which are better on the ElitePad. In addition, consumers aren’t usually happy to buy expensive but probably essential accessories such as “productivity jackets” (sounds like the jackets that iPAQs used to have). On the other hand, they’ll be charmed with the detachable keyboard and much lower price of the Envy X2. But I do wonder about the long-term robustness of that keyboard.

Even so, I liked the Envy X2. It seems like a nice device and detaching that keyboard is party-trick neat. But when I consider why I am even looking at these devices, the ElitePad seems more like a device that I'd use. The X2 is just too much like a laptop. In fact, it is a laptop. I don't want another laptop – I want a companion device, one that is very different in some respects from my Envy 17. I like the idea of figuring out whether touch is an interface that I really like and of seeing how Windows 8 behaves when its interface is touch-driven rather than through the classic keyboard-mouse combination that I've used to date.

The challenge over the next few weeks is to figure out whether a Surface Pro or ElitePad is best for me. The fact that the Surface Pro doesn’t come with a 4G option (the ElitePad does) is not a deal breaker for me. The higher resolution screen (1920 x 1080) of the Surface Pro is probably not a deal breaker either. In fact, the decision might come down to cost. The Surface Pro seems like it might be a good deal more expensive. Using U.S. prices, a 64GB Pro costs $899, plus the $129.99 Type cover that I’d probably buy. The ElitePad seems to be around $735, depending on where you buy, but the cost of the accessories such as the expansion jackets is not clear to me yet. The cost difference between the two base units is probably down to the much faster i5 CPU in the Surface Pro.

Even though there’s been some fuss about the amount of disk eaten by Windows and recovery utilities on the Surface Pro (leaving 23 GB of available space), in reality I don’t think this is an issue because you can expand storage easily with cheap Micro SD cards (another interesting post by Hal Berenson discusses this and the use of compression for the Surface). In any case, 23 GB is more than enough for what I plan to do on a companion device. I don’t need vast storage to write blogs, articles, and the like and PhotoShop can be safely left to the Envy 17.

There’s no Microsoft Store in Ireland (however, Microsoft has announced that it will sell the Surface online in Ireland from later on this month), so it’s difficult to get hands-on exposure to a Surface Pro. I’ll be in Bellevue in mid-February for the annual MVP Summit and plan to pay a visit to the Microsoft Store there. Some other contenders will probably come to light before then to confuse matters. Decisions, decisions…

Follow Tony @12Knocksinna


Exchange Unwashed Digest for January 2013


January seemed to go on forever, or maybe this was just because it was mostly cold, wet, and grey in Ireland (a distressingly usual state of affairs at this time of year). Nevertheless, “Exchange Unwashed” needed to be written, so here’s the digest of the January 2013 posts.

Office 365 for Everyone – how nice! (January 31): Microsoft’s launch of Office 365 Home Premium for $99/year to cover five PCs/Macs brought joy to many (or so the press releases implied) but didn’t move me all that much. I’m much more interested in when Microsoft will upgrade Office 365 to use the Office 15 wave of products. Hopefully the signs from a Steve Ballmer blog point to this happening sometime in late February 2013. We shall see…

What happened to Iammec? (January 29): The www.iammec.com site was launched with great fanfare at MEC last September. Supposedly to become the place to go for Exchange content, it has failed. The question is “why?”  Who knows…

Mark Crispin, father of IMAP, RIP (January 24): I spent quite a long time during 2012 arguing that IMAP4 had established the basics of synchronization for multiple devices in a patent action between Microsoft and Google that was eventually decided (for Microsoft) by the High Court in London in December. The untimely death of IMAP4's creator and prime mover therefore came as a shock. It still is.

Looking for a book on Active Directory? (January 21): It’s good to see new editions of well-respected technical books being issued, which is the case for the fifth edition of Brian Desmond’s Active Directory tome. Absolutely something to have on your bookshelf, just in case you have to stare a domain controller in the face.

Microsoft Dodges a Support Bullet with Exchange 2013 (January 17): In by far the most popular post of January, I wondered whether the delay in shipping Exchange 2010 SP3, and the resulting knock-on lack of Exchange 2013 deployments because co-existence isn't yet possible, has helped Microsoft avoid a lot of support calls due to some of the obvious deficiencies that exist in Exchange 2013. Of course, Microsoft is busily fixing problems and will release an update for Exchange 2013 soon. The question is whether it will fix enough to make the product supportable. I do hope so.

The meaning of FYDIBOHF23SPDLT (January 15): The meaning has been known for a long time, but it's surprising just how many people don't know about it. So in a spirit of knowledge sharing, I tell the story so that Google, Bing, and other search engines will be able to answer the question in future.

Exchange, EAS, and Outlook 2013 (January 13): Early versions of Outlook 2013 allowed you to configure connections to Exchange via ActiveSync (EAS). The final version did not. This post explains why the decision to prevent such connections is perfectly logical and technically correct.

Exchange 2010 SP1 reaches end of support (January 8): I liked Exchange 2010 SP1 a lot, mostly because it fixed so many of the problems that afflicted the RTM version of Exchange 2010, added the best Outlook Web App client shipped so far and introduced some really nice new features such as block mode replication within a DAG. Alas, all software degrades with age and Exchange 2010 SP1 is now technically out of support. Which is why you install Exchange 2010 SP2 – or even SP3 when it is finally shipped.

Downgrading an Exchange 2010 Server (January 3): Trivial fact of the month was that you can't downgrade an Exchange 2010 server by installing a different license. You can upgrade a server, but a downward motion is out of the question. The lawyers must have insisted, or there's another logical answer… maybe.

Exchange searches limited to specific item types (January 1): To flex my writing muscles as we entered the New Year, the fact that you can’t search Exchange for every possible type of item was revealed. It came as news to me, but maybe not to you. In any case, it’s documented now.

I have great hopes for February. Then again, I have great hopes for every new month. The only way to be sure that you don’t miss anything is to subscribe to my Twitter feed so that I can broadcast news your way.

Follow Tony @12Knocksinna


Exchange 2013 simplifies DAG management


As I work through the process of understanding Exchange 2013 so that I can write about it for “Microsoft Exchange 2013 Inside Out”, various odd thoughts come into my mind. One of those that recently arrived was that Microsoft has dumbed down the new Exchange Administration Center (EAC) when it comes to Database Availability Group (DAG) management. On the surface, it seemed like the Exchange Management Console (EMC) in Exchange 2010 gives administrators more control over the DAG, member servers, and databases, but when you work things through the situation is not quite as clear-cut.

How EAC displays DAG properties

The DAG was brand-new in Exchange 2010. Accordingly, although the developers did their very best to make the DAG easy to work with, some flaws exist. For example, it must have seemed like a very good idea to display the copy queue length and replay queue length for a database copy to flag potential replication problems to administrators. It's absolutely true that logs accumulating on these queues are an indication that all might not be right in the DAG, but the problem is that EMC only ever shows a snapshot of replication activity that is accurate only at the moment when EMC checks the queue lengths. To be totally accurate, you'd need to have EMC refresh its data at frequent intervals, something that would impose a load on Exchange.

The processing overhead required to query servers about replication activity might be acceptable for a small DAG where Exchange only needs to check ten or so database copies spread over two or three servers. I can imagine big problems if you’d ask EMC to check the status for a hundred databases spread over ten servers – apart from the processing load, it would probably take EMC a few minutes to collect all the data from the servers and display the information and by that point the data is stale and needs to be refreshed again, so we get into a continuous loop of fetch and display. Not good…

Speaking of stale data, you might even get into a situation where EMC displays the famous copy queue length of 9,223,372,036,854,775,766 (see below), which seems like quite a lot of replication to get through! The reason, as explained in Tim McMichael's excellent blog, is that despite the database copy in question being reported as "Healthy", for some reason (potentially because the Replication Service on the server hosting the copy is stopped) a divergence has opened up between the timestamp for the last available log generated by the active copy (made available to DAG members through the cluster registry) and the system time on the server hosting the problematic copy. If the divergence is more than 12 minutes, a problem could arise if Active Manager attempted to activate this database copy, because the potential exists that some logs available for the previously active copy will be ignored if this copy is brought online. Cue hole-in-database syndrome…

That's a large copy queue length!

Exchange detects these conditions and considers that replication is “stale”. To stop automatic activation, Exchange sets the copy queue length to 9223372036854775766 on the very sensible basis that such a number is going to exceed the AutoDatabaseMountDial setting for the server and so prevent Active Manager activating the copy automatically.
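If you want to check what the dial is set to on a particular server, the value is exposed as a property of the mailbox server object. A quick check might look like this (the server name is illustrative):

Get-MailboxServer -Identity ExServer1 | Format-List Name, AutoDatabaseMountDial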

Getting back to EAC, the only way that you now see details of the copy queue length and replay queue length for a database copy is to select the relevant copy and then click the View Details link. This exposes all the relevant information, meaning that this isn’t another case where EAC is less functional than EMC – it’s just different and arguably a better implementation. If you prefer not to go through the somewhat tiresome select and click routine to check multiple database copies, you can simply run the Get-MailboxDatabaseCopyStatus command to review the replication status for all databases, or those belonging to a specific server or DAG.
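For instance, a command along these lines shows the queue lengths for every database copy hosted on a particular server (the server name is illustrative):

Get-MailboxDatabaseCopyStatus -Server ExServer1 | Format-Table Name, Status, CopyQueueLength, ReplayQueueLength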

I don’t mind that Microsoft has simplified matters by not displaying replication queue information for the DAG. It is in line with other efforts to simplify DAG management, such as removing the need to collapse DAG networks when DAGs extend across multiple subnets. In fact, Exchange 2013 prefers that you leave DAG network management to it.

Simplification and automation are good so I approve of what’s been done to make DAG management easier in Exchange 2013. Once they fix the fit-and-finish problems exhibited by the current version of EAC, it seems like some real progress will have been made over EMC.

Follow Tony @12Knocksinna


Bricked Nokia forces reconsideration of available options


Infuriatingly, my Nokia Lumia 800 decided to turn itself into an expensive brick this afternoon whilst playing a podcast. No number of curses, sacred chants, or combination of button pressing could persuade the Lumia to come back to life, so it’s gone off to be fixed by Nokia, thanks to the two-year warranty that they provide for purchases in Europe.

Before taking the step to bring the Lumia to a Nokia Care Centre (luckily one is about 3 km from my house), I did what any Internet-literate person would do and consulted the web to see what advice others might offer about a bricked Lumia. The first impression that I formed is that Lumia 800s like to turn themselves into cold electronics on a pretty frequent basis, with the most common potential cause being a problem that’s seemingly associated with battery exhaustion. The second is that once the Lumia decides that it wants to check out, there is very little a normal human being can do to make it come back to life. If you’re very lucky, plugging the device in to charge it might work. Or maybe placing it in such a way that the sun might warm its bones. Or perhaps by holding down the volume down, camera, and power on buttons at the same time, or maybe another combination. All lead to a point where you want to throw the blessed device at the wall.

While the Lumia 800 is away in Nokia's tender care, O2 and Vodafone will introduce the Lumia 920 to Ireland on February 1. I've been looking at the 920 with an eye to moving over to Windows Phone 8 and have also considered the HTC 8X. However, I like the Nokia Drive software and their cameras, and that's probably enough to keep me in the Nokia camp. The sticking point is the sheer size of the 920. It has never seemed appealing to hold a large slab to your ear to make a call, and that's the strong impression many current smartphones give (look at anyone attempting to make a call on a Samsung Galaxy Note). The Lumia 800 is an example of elegant, restrained design compared to some of the oversized slabs, as long as it stays running…

Even after some hardware problems, I’m still happy that I made the move from iPhone to Windows Phone about a year ago. The hardware is appealing, the software seems to be more modern than iOS, and email just works – and doesn’t hijack or interfere with calendar meetings either.

I guess I’ll wait to see what Nokia do with the 800 before making any decision to replace the phone. Maybe I can find some oversized pockets to help ease the decision.

Follow Tony @12Knocksinna


Exchange, Voicemail, and retention


Have you ever wondered why Exchange 2010 and Exchange 2013 support the creation of an explicit default retention tag just for voicemail? Perhaps not, and certainly not if you have no interest in Unified Messaging and voicemail will never darken the mailboxes of your Exchange server, or if you believe that retention policies are the work of a particularly horrible devil who wants to wreak havoc on user mailboxes.

But some, like Paul Robichaux, like Unified Messaging (Paul tells me that he has just finished the chapter on Unified Messaging for Volume 2 of Microsoft Exchange 2013 Inside Out, but I haven't seen his opus yet). A smaller subset likes both Unified Messaging and retention policies, or manages organizations where voicemail is stored in Exchange mailboxes that come under the control of retention policies. If you're in these categories, the special default retention tag for voicemail is of great interest.

Exchange 2010 began by setting a rule that retention policies could include a default tag that controls how items are removed from mailboxes and a second default tag that controls how items are archived, if archive mailboxes are in use. Of course, the trick is to move items into the archive before deleting them as it doesn’t make much sense to configure the tags to operate the other way around. A typical setup might then be to have tags to archive items after two years and remove them after five.
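Translated into cmdlets, that typical setup might look something like the two default tags below (the tag names are just examples; the retention periods are expressed in days):

New-RetentionPolicyTag -Name "Default move to archive after 2 years" -Type All -AgeLimitForRetention 730 -RetentionAction MoveToArchive
New-RetentionPolicyTag -Name "Default delete after 5 years" -Type All -AgeLimitForRetention 1825 -RetentionAction DeleteAndAllowRecovery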

But then you come to voicemail. The problem here is that people tend to be a little less careful with voicemail than they do when they compose email. In many corporations, it is common practice for executives to use voicemail to communicate with each other when they don't want the discussions recorded in such a way that the content is exposed to legal discovery. I think this is an understandable stance to take because conversations are less structured and formal than email and the risk exists that something said might be taken out of context if it came to light during a discovery process.

Most of the older voicemail systems on the market had pretty rudimentary storage capabilities and voicemail was cleared out after 14 days or so to allow newer messages to be stored. Users got to know and rely on the fact that their conversations would not linger on for extended periods and everyone was happy.

And then Exchange 2010 came along. There had been many voicemail integrations with previous versions of Exchange. Companies like Cisco and Nortel had created and sold applications to connect their PBXs to Exchange to enable the capture and replay of voice messages. Exchange 2010 changed the game profoundly by driving the cost and complexity of Unified Messaging down to a point where the decision to implement became much easier. Until, of course, the topic of voicemail retention surfaced.

One CEO of my acquaintance learned about plans to roll out Exchange 2010 Unified Messaging and responded with a four-letter expression of disgust. Even recognizing the value of features such as having voicemail delivered to his mailbox and the content preview, he didn’t want any possibility that executive conversations could be recorded for extended periods. Grudgingly, he allowed that it would be acceptable to implement Exchange 2010 if some mechanism could be put in place to eradicate voicemail as quickly as the current system erased messages.

Microsoft must have received similar feedback from other executives because Exchange supports voicemail retention as a special case. No other message class can be specified for a default retention tag; in every other case "All" is used so that Exchange applies the default tag to any item that is not controlled by a specific folder or personal tag.

The solution for the CEO was to create a new retention tag by running the New-RetentionPolicyTag command to permanently remove voicemail messages after 7 days as shown below. This has to be done through EMS as neither the Exchange 2013 EAC nor the Exchange 2010 EMC support the creation of anything other than an all-purpose default retention tag:

New-RetentionPolicyTag -Name "Delete Voicemail after 7 days" -Type All -MessageClass Voicemail -AgeLimitForRetention 7 -RetentionAction PermanentlyDelete

Of course, to complete the task the new tag has to be included in a suitable retention policy that is then assigned to mailboxes. The CEO was happy and Exchange 2010 was deployed. I never heard whether any voicemail has ever been involved in a legal discovery since, but I bet that the CEO and the other executives still continue to discuss the deep dark secrets of the corporation as they leave voice messages for each other, if only because they’ve forgotten that their voicemail is now stored by Exchange.
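For completeness, the linking and assignment steps might look something like this sketch (the policy name and mailbox are purely illustrative):

New-RetentionPolicy -Name "Executive Retention Policy" -RetentionPolicyTagLinks "Delete Voicemail after 7 days"
Set-Mailbox -Identity CEO -RetentionPolicy "Executive Retention Policy"

The Managed Folder Assistant then takes care of stamping and removing the voicemail items on its normal schedule.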

Follow Tony @12Knocksinna


Symantec ships Enterprise Vault 10.0.3


When I wrote about some of the problems surrounding Exchange 2013 in “Exchange Unwashed” on January 17, I noted that “The signs are that the Exchange ecosystem is incomplete or not prepared for prime time.”  Well, one of the important pieces of the Exchange ecosystem for many companies is now available as Symantec has released Enterprise Vault 10.0.3. According to Symantec’s blog, the new release supports:

  • Microsoft Exchange Server 2013
  • Microsoft SharePoint Server 2013
  • Microsoft Outlook 2013 (on the desktop)
  • Windows 8 (desktop)
  • Windows 2012 as a File System Archiving target
  • 64-bit Domino as a gateway
  • Mac client OS 10.8 (Mountain Lion)

All of this is good stuff and I like that Symantec is prepared to support its customers who want to deploy Exchange 2013 along with Windows 2012 servers (the natural platform for Exchange 2013) and Outlook 2013 (the best client for Exchange 2013). Of course, quite how many customers will be ready to go ahead and deploy is the big question, but it’s good to be ready.

When Microsoft shipped a mass of compliance features in Exchange 2010, including archive mailboxes, some questioned how long products like Enterprise Vault could compete against the combination of well-integrated features and low cost offered by Microsoft. Compliance has received further development in Exchange 2013 with new features such as in-place hold (replacing retention and litigation holds and better integrated with search) and data loss prevention (basically, a sophisticated form of transport rule that checks for known data patterns in email). And Microsoft has not been slow to impress customers with the perceived advantages of its approach to keeping everything in mailbox databases and why stubbing is so bad, understanding of course that Enterprise Vault and other third-party archiving and compliance products extract data from mailboxes to populate their own indexes.

Enterprise Vault has been around a long time. If you’re interested in its history from an add-on for Exchange created by Digital through a sell-off by Compaq and development by KVS, you can read my post covering its general history and one from Nigel Dutt, ex CTO of KVS, who led its development before Symantec acquired the product some years ago.

With such a long history in the areas of compliance and archiving, it’s fair to say that Symantec know what they are doing. Based on the feedback from many companies that use Enterprise Vault, it seems like a solid product that fits well into compliance processes. Some customers have been lost to Microsoft, which then creates a whole new set of challenges to migrate data out of the Enterprise Vault repository back into Exchange databases while continuing to comply with regulatory and legal requirements for data retention, privacy, and security. In short, an expensive and time-consuming operation.

Apart from anything else, the presence of such a longstanding competitor (and partner) in the form of Enterprise Vault keeps Microsoft honest and provokes innovation through competitive pressure. Long may this continue!

Follow Tony @12Knocksinna


New Store, new memory tuning techniques required


One of the more interesting observations that we will soon be able to make is how well the thoroughly overhauled and rewritten Exchange 2013 Information Store functions in production environments. Of course, there’s the small matter of the rest of the deployment prerequisites to be met, such as co-existence with legacy versions, what to do about clients and third party products, and so on. But all of that should not take away from contemplating the Store.

A very good reason has to exist before any software development group will take on the rewrite of a core component. From an Exchange perspective, the good health and reliable functioning of the Store is of primary concern. After all, you don’t want to introduce problems by rewriting code just for fun, to use the latest and greatest programming language or application framework, or for any other reason dreamed up by management (cue thoughts of Dilbert’s curly-haired boss…). This is especially important when considerable effort has been dedicated to upgrading the Store, which Microsoft did pretty successfully in Exchange 2010 through a schema refresh, the introduction of the Database Availability Group, and so on.

The rewrite that the Exchange 2013 developers embarked upon converted the Store into managed code, using C# for the purpose. Despite an ongoing clamour to move Exchange to SQL, the underlying ESE database engine remains in place, simply because it's the best tool for the job. Above ESE, a set of Store processes runs instead of a single monolithic process. Each database is served by a worker process, and all of the worker processes are coordinated by Store.exe. According to Microsoft, the new Store is more tightly integrated with the Exchange Replication service (MSExchangeRepl) to ensure that replication to database copies progresses more smoothly and that database activations are faster.

One of the side-effects of the rewrite is that the Store contains fewer lines of code. When you rewrite code that has been around for a long time, you invariably find code that is no longer required, code that was inserted to handle a long-forgotten condition, or code that might have been required to work around a problem that can no longer occur. And so it was with the Store rewrite. Many lines of unwanted and unused code have been consigned to the wastebasket and Exchange now has a Store that’s been through the equivalent of a programming colonic irrigation.

In practice, you’re probably not going to be aware that a rewrite has happened, unless of course you poke around under the hood and look to see what processes are active on an Exchange server. And then you might see something like what’s shown below, which comes from a multi-role Exchange 2013 server that supports four databases. You can see the central Store process and four worker processes, one for each database. The SharePoint Search component processes shown at the bottom of the screen shot are those that index content in the databases. They’re shown as SharePoint because the Search Foundation (aka FAST) code “belongs” to SharePoint and is shared with Exchange to create a common indexing and search capability across both repositories. The search processes are memory and CPU hungry and can occupy 10% of system resources quite easily.

Store processes running on an Exchange 2013 multi-role server
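If you prefer the shell to Task Manager, a quick sketch of the same information can be had by filtering the process list. The process names here reflect my understanding of the Exchange 2013 layout (the worker processes appear as Microsoft.Exchange.Store.Worker and the Search Foundation components as noderunner), so adjust the filters to suit what you actually see on your server:

Get-Process | Where-Object {$_.ProcessName -like "Microsoft.Exchange.Store*" -or $_.ProcessName -like "noderunner*"} | Format-Table ProcessName, Id, WorkingSet64 -AutoSize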

Which brings us to system tuning. The "old" Store process had an endearing capacity to seize as much memory as it could on a server using a mechanism called dynamic buffer allocation (DBA). The idea was that caching as much data as possible in memory was a good thing because it avoided expensive I/O to disk. Exchange's DBA scheme worked well, but it could cause problems for smaller servers where Exchange was just one of the applications in the mix and it wasn't good for Exchange to be quite so demanding in terms of memory.

A long time ago, back in the mists of time when Exchange 2000 was the most modern email server you could imagine, Microsoft published KB266768  “How to modify the Store database maximum cache size”. Essentially, you practise brain surgery on Active Directory with ADSIEdit to modify the properties of the Information Store object to set the msExchESEParamCacheSizeMax value for a server, which controls the maximum size of the Store cache. This technique works for Exchange 2000, Exchange 2003, Exchange 2007, and Exchange 2010 (as explained here, it’s slightly different for Exchange 2010 SP1 onwards) and is often applied to servers running Windows Small Business Server and similar configurations.
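For anyone who prefers the shell to ADSIEdit surgery, a rough equivalent is to update the attribute with Set-ADObject. This is only a sketch under some loud assumptions: the distinguished name below is illustrative (locate the real Information Store object for your server and organization first) and the value is expressed in database pages, so consult KB266768 for the right number for your version of Exchange:

Import-Module ActiveDirectory
# DN is illustrative; find the Information Store object under your own server and organization
# 262144 is a placeholder value in database pages - see KB266768 for guidance
Set-ADObject -Identity "CN=InformationStore,CN=ExServer1,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=Contoso,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=contoso,DC=com" -Replace @{msExchESEParamCacheSizeMax=262144}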

Unsurprisingly (given the Store rewrite), the ESE cache maximum and minimum size properties don’t exist by default on Exchange 2013 servers. It might be that the new Store is much better at controlling memory demands than the old. Certainly, the Store worker processes appear to demand less memory in total than the older monolithic process. But we’re only at the start of understanding how to manage Exchange 2013 and I’m sure that new ideas and suggestions will appear as time goes by.

Change always brings disruption. It seems like this old technique from Exchange 2000 is no longer viable – unless of course someone knows different?

Follow Tony @12Knocksinna
