Zombie health mailboxes and EAS probes

When I discussed how some corrupt health mailboxes came about following my flattening of an Exchange 2013 server, I called them “zombie mailboxes”, which might have seemed unkind to some. In fact, it’s an accurate description, because the Exchange Health Manager service will faithfully recreate two health mailboxes per mailbox database if it finds them missing. A Managed Availability probe cannot be deprived of its mailbox, after all.

You could have hours of harmless fun deleting health mailboxes and waiting for the Health Manager service to notice and recreate them, but this is probably not a very good use of your time and is certainly not a productive activity. Unless, of course, you need to be busy when a manager looks for you to do something which you’d really prefer to avoid, in which case declining on the basis that you have to “recreate some health mailboxes to keep Managed Availability working” is just the sort of excuse that comes in handy.

By now you might be convinced that I am super-sharp to notice that health mailboxes were corrupt. Alas, this couldn’t be further from the truth as I was blissfully unaware that anything was amiss until I noticed some device quarantine notifications show up in my mailbox. These notifications are created by Exchange ActiveSync (EAS) after you establish a mobile device access rule (otherwise known as an ABQ rule for “allow, block, or quarantine”) that quarantines unknown devices when an attempt is made to connect them to a mailbox. Exchange 2010 and Exchange 2013 support ABQ rules.

Managed Availability uses the health mailboxes to create synthetic transactions. One category of probe verifies that EAS is working properly by connecting to the health mailboxes to emulate an EAS transaction from a mobile device type called “EASProbeDeviceType”. The quarantine message (shown below) also tells us which health mailbox was used.


EAS quarantine notification for a health mailbox

The health mailboxes were recreated a short time after I removed the corrupt mailboxes. Managed Availability then attempted to use the new mailboxes for the EAS synthetic transactions. The new mailboxes were unknown to EAS and were not covered by any existing ABQ rule, so their attempts to connect were intercepted and the devices quarantined. Of course, these aren’t real EAS clients and are not as important to deal with as would be messages about quarantined devices that belong to humans. On the other hand, it is important to allow Managed Availability to work as designed, so some action is necessary.

The easiest solution is to create an ABQ rule that permits the probes to connect to the health mailboxes with EAS. This takes less than a minute of administrator time to fire up EAC, go to the mobile tab, and create the necessary rule (below). Afterwards EAS will recognize “EASProbeDeviceType” as a valid device type, much like it might recognize a Windows Phone 8 device or an iPhone.
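If you prefer the shell to EAC, the same ABQ rule can be created with a one-liner. This is a sketch run from the Exchange Management Shell; the verification step is just one way to confirm that the rule exists:

```powershell
# Allow the Managed Availability EAS probes to connect, keyed on device type.
New-ActiveSyncDeviceAccessRule -QueryString "EASProbeDeviceType" `
    -Characteristic DeviceType -AccessLevel Allow

# Confirm the rule is in place.
Get-ActiveSyncDeviceAccessRule |
    Where-Object {$_.QueryString -eq "EASProbeDeviceType"}
```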


An EAS access policy for EASProbeDeviceType

The good news is that you might not have to create a rule at all. The default is to let any device connect to EAS and unless you made a decision to block selected devices you will probably discover that no ABQ rules are in place, which then means that the health mailboxes can connect as they wish. On the other hand, it’s entirely possible that you put some ABQ rules in place some time ago and have missed the quarantine messages about the EAS probes. But that would never happen, would it?

Follow Tony @12Knocksinna

Posted in Exchange 2013 | 4 Comments

Announcing “Office 365 for Exchange Professionals”


The world of technical book publishing is going through a transformation. More information is available than ever before online; software and hardware products evolve faster; people demand up to date knowledge that is also insightful and in-depth. These factors create enormous difficulties for the way that technical books were written, edited, and published in the past. Simply put, it is no longer acceptable for an outdated technical book to appear.

I’ve been immersed in the traditional approach since 1991 and have written fifteen books in that time. The usual approach is:

  • Identify a need: “Let’s write a book about Product A”.
  • Create book proposal: Outline the structure of the book – how many chapters, what each chapter will cover, the overall length – and perhaps write the first chapter to demonstrate the kind of coverage you want to give to a subject.
  • Sign with a publisher: Agree a contract outlining when the book will be delivered and the commercial terms that apply, including the royalty rate paid for book sales, translations, and electronic copies.
  • Write the book: Depending on the length, this can take months. Modern software is complex and has many twists and turns.
  • Technical edit: Have the book reviewed by an acknowledged expert in the space who will helpfully point out all the places where mistakes are made, omissions occur, and where an author’s opinion on a matter is just plain wrong.
  • Copy edit: Correct the content by applying a “house style” to the text. Some authors need a lot more work than others (even up to and including a complete rewrite of text) and some of the changes that are made can seem bizarre. Publishers might not like particular phrases, for instance, because they make the book harder to translate for different markets, and some look for additional input by the author to make the book more accessible. And then you have the “commercial” side where a publisher might not like the author to use particular examples. Microsoft Press, for instance, doesn’t like screen captures of the Chrome browser.
  • Editing: Constant effort is required from an editor to keep the workflow going from initial creation to publication. Details like creating an index, book layout and styles, and so on are discussed and implemented.
  • Preparation: The final text is given to a desktop publishing expert who takes Word documents, bitmap graphics, and anything else that’s needed and composes a good-looking book that meets the layout and graphic requirements of the publisher.
  • Printing: The desktop publishing output (usually high-quality PDF or PostScript files) is generated in the format required by the printer. The printer then prints, binds, and ships the books to wherever they are to be sold. If required, electronic copies are produced in the formats required by the various readers.

Phew! That’s a lot of work by a lot of people and it takes place over a long time. A book about a new version of Exchange would usually be agreed some months before the software is available, but writing can’t start until the software is in reasonable shape, normally well into the beta cycle. I’ve found that it is pointless to write about the RTM (Release to Manufacturing) version shipped to customers because very few companies ever install it. It is better to base a book on software that has matured a little, after the obvious bugs have been fixed.

Writing the text of a book might not finish until six months after RTM. During this time it is relatively easy for the author to keep up with new developments as people discuss the software, encounter bugs and workarounds, or find new ways to use it. That knowledge can be incorporated into the text as it arises.

The technical edit phase probably lasts two months, depending on the length of the book, and will overlap the writing phase somewhat. The author has to provide finished chapters to the editor, who checks them and then releases the chapters to the technical editor. Reviewed chapters flow back to the editor after a couple of weeks and are sent on to the author. The comments usually result in a set of updates to the chapters. Any new information that has come to hand about the software can be incorporated into the text at this point.

The copy editing phase kicks in as fully edited (through technical review and author update) chapters become available and the interaction between copy editor, editor, and author takes another four to six weeks.

Chapters are also being worked on by the desktop publishing expert to create the final form of the book. From this point on it becomes more difficult to accommodate new information because the page count is being finalized and insertion (or deletion) of material will affect page flow, indexing, and so on. It’s still possible to make changes, but the changes tend to be relatively minor. There’s no way that a section of a chapter will be rewritten at this point unless it is badly flawed and absolutely needs the work to be done.

A final round of reviews occurs after all the chapters are in their almost-final form to identify any issues – code extracts are often problematic because pouring text from Word documents into desktop publishing packages can affect their formatting and meaning. The last few patches are made to the text, but now only on a strictly as-needed basis.

The book now goes to the publisher and appears four to six weeks later. Noticing a mistake at this point produces real heartburn for all concerned, but it happens. That frustration continues as time goes by and new updates appear but the printed copies stay the same. I would very much like to have an update for my Exchange 2013 Inside Out: Mailbox and High Availability book, but that’s not going to happen anytime soon, despite the multitudinous updates that have appeared since the book was published in September 2013.

Everyone involved in moving a book from text to print attempts to work as efficiently as possible, but even so the entire process can take between nine and fifteen months. And it’s terribly difficult to accommodate changes due to software updates during the last three months.

Extended as it was, the process has worked for years. But that’s because products like Exchange, SharePoint, Outlook, or Windows have been engineered in three- or four-year cycles, leading to releases like Exchange 2000, Exchange 2003, Exchange 2007, Exchange 2010, and Exchange 2013. It worked, but it’s been getting harder and harder as Microsoft has changed its engineering cadence in response to the faster release cycles of competitors and customer demands for new functionality sooner.

It’s crazy to think about using traditional publishing methods to produce a book about Office 365. Too many changes happen too quickly for the old approach to work. Anyone who stays abreast of the constant flow of announcements on the Office Blogs knows this. Any administrator responsible for an Office 365 tenant realizes that things are different in the cloud and that software can change daily. At the time of writing, Microsoft’s Office 365 Roadmap lists 46 engineering developments under way. That list is incomplete because it only includes the major efforts; bug fixes and adjustments to features happen all the time.

The work necessary to keep text up to date about Office 365 is enough to make an experienced author cry. It’s time for a new approach. That’s why Paul Cunningham (ExchangeServerPro.com), Michael Van Horenbeeck (nicknamed Van Hybrid for good reason), and I will publish “Office 365 for Exchange Professionals” on May 3. The book is designed to explain Office 365 to experienced Exchange on-premises administrators, but I think it will be valuable to anyone who wants to learn more about Office 365.

We’ve been writing since last December and underestimated the effort necessary to stay abreast of new developments inside Office 365. But the work has justified our belief that this book would be impossible to do using traditional methods.

“Office 365 for Exchange Professionals” will be available for purchase as an eBook from ExchangeServerPro.com. We plan to update the book regularly, so the version you download from the site will be the latest text. We’ll probably take a bit of a rest after the first version appears, but you can expect regular updates from September onwards.

We’re still working on the text and won’t finalize it until April 15. I can tell you that the book currently spans some 600 pages divided into 18 chapters.

“Office 365 for Exchange Professionals” has been a fantastic project. Apart from learning a ton of stuff, we have received terrific support from the Exchange product development group and the MVP community. Jeff Guillet is the overall technical editor and he’s being helped by other MVPs who are reviewing individual chapters. And we have the pleasure of a foreword written by Microsoft Vice President Perry Clarke, who has led the development effort to take Exchange from being software designed for deployment inside corporate datacenters to cloud software that supports tens of millions of mailboxes with a very substantial record of meeting SLAs.

Hopefully you’ll like the book. And hopefully we will receive lots of ideas and suggestions that we can incorporate into the second edition, and the third edition, and so on. I suspect that this project might turn into an ongoing effort, but we’ll see how the first edition turns out and decide what happens then.

Now we had better get on and finish the book or it won’t be ready for Ignite.

– Tony (with a lot of help from his friends)


Posted in Exchange, Office 365 | 6 Comments

Exchange Unwashed Digest – February 2015

February might be the shortest month but that’s no reason not to have lots to discuss on my “Exchange Unwashed” blog on WindowsITPro.com. Here’s what happened during the month – a lot about the new Outlook apps and Delve, but some other stuff too, including a new estimate for the total number of mailboxes running inside Exchange Online.

Why PowerShell is often not the best tool for reporting Exchange data (Feb 26): Of course, PowerShell is wonderful. It’s a great tool to interrogate Exchange and it has enabled a tremendous amount of automation, including lots inside Office 365, since its introduction with Exchange 2007 nearly nine years ago. But PowerShell isn’t great at generating reports so that all the data you uncover is formatted nicely for the powers-that-be. A reporting product might help, or you can continue to roll your own and be happy…

How Office 365 Groups could be so much better – and probably will be in the future (Feb 24): Even if you use Office 365, you might not even be aware of the new groups that can be created using Exchange and SharePoint (and a bit of OneNote and OneDrive). And if you’re an on-premises customer, you definitely don’t know about them. But Office 365 groups are in use and they are getting better – and I have some ideas about how they could be even better.

Compliance and hybrid problems loom as Microsoft plans to keep every deleted item in Exchange Online (Feb 19): On February 20, Microsoft announced that Exchange Online would no longer remove items from the Deleted Items folder in user mailboxes. As they do with most changes in Office 365, Microsoft gave some up-front warning by including the change in the Office 365 Roadmap, but I guess most people didn’t notice it – but I did. Perhaps it’s because I have an interest in compliance or that my Deleted Items folder is a real mess (most of the time), but I’m not a great fan of the change. Some will love it, others won’t, but at least it’s an easy change to reverse.

First update for Outlook apps improves security but lots remains to be done (Feb 18): After releasing the corporate-branded version of the acquired Acompli apps as the new and improved Outlook for iOS and Android, Microsoft received a few critical comments (to put it mildly). The first response came soon afterwards with some changes to PIN protection and other updates. It’s a start, but there’s more to do.

Delve ups and downs illustrate complexity of Office 365 engineering (Feb 17): Office Delve has been in “first release” since last September and many Office 365 tenants have still not seen the new application that makes it easier to find work of interest to you from the silos of information that exist in many large companies. Delve is still a work in progress and this article notes some of the issues that I’ve seen in the last five months.

What Microsoft needs to do to upgrade the Outlook apps (Feb 12): Those ex-Acompli apps again, but this time a list of places where I think the apps need improvement before they can really be a player in the big-time enterprise market. That doesn’t mean that you can’t use Outlook for iOS and Outlook for Android today. These are very nice email clients and are extremely usable, but maybe should be treated as betas for what you might see in the future.

Delve exposes Exchange email attachments but some fine-tuning is needed (Feb 10): As part of the work to develop Delve so that it can escape from first release status, the engineers added the ability to find attachments circulated with Exchange email (but only for Exchange Online). I like this because an awful lot of valuable information is sent around organizations as email attachments, but I had some comments about the implementation that I hope will be fixed before Delve hits prime time.

80 million Exchange Online users as Office 365 progress continues (Feb 5): Microsoft is extremely cagey during any discussion of how many cloud mailboxes run inside Office 365. It’s confidential competitive data after all and they don’t want to get into a “mine is bigger than yours” competition with Google. All we have to go on is the financial data that Microsoft has to report to the market, and all we can say is that things are on the up. Some noble guesswork indicates that at least 80 million cloud mailboxes exist inside Office 365. Maybe it’s more.

Shock! Outlook 2010 users on Windows XP experience problems with Exchange 2013 CU7 (Feb 3): Some of the fine readers of this blog complained that they had run into problems after upgrading their servers to Exchange 2013 CU7. It looks very much like some changes made to how public folders are accessed don’t play nice with Windows XP clients running Outlook 2010. Any sympathy for these users is negated by the creaking age and unsupported status of their chosen platform. Which is sad.

Worried about security and privacy in Outlook for iOS and Android? Here’s your chance to debate the issues (Feb 2): After receiving so much negative feedback about the ex-Acompli apps, Microsoft held a public “Yamjam” (think online forum) for people to ask questions and voice concerns. That’s long in the past now. But what’s interesting is the communication from the developers about how they came to be in the current situation and how they plan to move on from here. Worth a read.

Now we’re back into a longer month and I can already see a batch of new topics to be discussed. Stay tuned and stay connected with Exchange Unwashed – and a new podcast episode of Exchange Exposed that will be coming soon. The podcasts are now available on iTunes for your listening pleasure.


Posted in Cloud, Email, Exchange, Exchange 2013, Office 365, Outlook | 1 Comment

Corrupt health mailboxes from a flattened Exchange server

I flattened an Exchange 2013 server the other day. I don’t mean that I took the physical computer out into the parking lot and drove a large vehicle over it to reduce the server to so many random bits of metal. Instead, I did what Exchange administrators have done ever since Exchange 2000 came along when a server is proving truculent and Exchange (the product) won’t uninstall cleanly. I ran ADSIEdit and blew away the server object. End of story.

But it wasn’t really. In the old days, removing the server object with ADSIEdit was clean and efficient. You could then reinstall Exchange on the server and all would be well. Now, Exchange leaves traces of itself in many different places in Active Directory and the system registry, and it’s generally a real pain to find, remove, and validate that all vestiges of a server have truly been removed. In short, flattening with ADSIEdit is only the start of the process.

The support engineers in the Exchange product group are appalled by such behavior. It’s neither polite nor the least bit subtle to remove a server so brutally. They prefer that you run the Setup program and take the uninstall option. This would be nice if it worked all the time but sometimes it just doesn’t. In my case, I had committed a major faux pas that prevented the uninstall process from completing.

When I installed the server, I told Setup to use C:\Exchange as the base directory. That installation failed (it was a beta version), so I restarted. The Exchange setup program is pretty intelligent and uses watermarks to record how far it has progressed so that it can restart after a problem and not redo work. Unfortunately, I failed to input C:\Exchange when prompted by Setup and the program therefore used its default location, C:\Program Files\Microsoft\Exchange Server\V15. Setup ran through to completion but left a confused and bewildered server whose files were merrily scattered across the two directories. Hence the need for uninstall, frustration when uninstall didn’t work, and using ADSIEdit to remove the server from the organization.

A better approach might have been to rebuild Windows on the server and then use Setup’s /RecoverServer option, which takes the information held about a server in Active Directory and uses it to reinstall Exchange. Such an approach might have worked, but I concluded that the reinstalled server would probably have been as confused as the original.

It would be nice if Exchange offered a /DeleteAndRemoveNow switch for Setup that would blow away a server and remove every possible trace through brute force if necessary. Unhappily that request has fallen on deaf ears as the product group doesn’t believe that it’s necessary. But it is, especially in test labs or when administrators (like me) do stupid things to servers.

In any case, I did learn something from the experience. After reinstalling Windows from scratch and then Exchange 2013, I found that some weird results were reported when I ran the command Get-Mailbox -Monitoring to view the set of health mailboxes. You might think that this is an odd command to run, and certainly not one used regularly. This is true, but I was investigating the depths of Managed Availability, and this command reports the set of health mailboxes created in every database in an organization so that probes can create synthetic messages to test that mail flow and other components are working correctly.
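To spot the damaged entries quickly, you can list the health mailboxes and check that each one reports a database. A sketch, assuming the Database property comes back blank for the corrupt entries (which matches what EMS reported in my case):

```powershell
# List all health mailboxes with their databases; corrupt entries show a blank Database.
Get-Mailbox -Monitoring | Format-Table Name, Database -AutoSize

# Filter down to just the suspect entries.
Get-Mailbox -Monitoring | Where-Object {$_.Database -eq $null}
```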

As you can see from the screen shot, EMS reported a corrupted mailbox. In effect, some of the properties required for the mailbox were missing (database being one). Although the screen shot shows just one corrupted mailbox, I started off with many such mailboxes.

Inconsistent health mailboxes


I noticed that the corrupt mailboxes are reported as being associated with an object stored in the Monitoring Mailboxes organizational unit, a child of the well-known Microsoft Exchange System Objects (MESO) organizational unit. This was a surprise because Exchange 2013 used to create the disabled user objects associated with health mailboxes in the Users organizational unit. Apparently the change to MESO was made in Exchange 2013 CU1, something that passed me right by. The change makes perfect sense because most installations don’t like random objects showing up in Users; it’s much better when applications have their own location for data that they use.

But why did I have some corrupt health mailboxes? The answer is simple: these were lingering traces of the databases that used to exist on the server I flattened with ADSIEdit. Because the server had gone away, Exchange was not able to associate the user objects with the mailboxes in the now-departed databases. Hence the corruption.

The solution was simple – remove the corrupted objects (using ADSIEdit, of course). In this case, I knew that I had a backstop: if I made a mistake and deleted the wrong objects, the Exchange Health Manager service would recreate the mailboxes the next time they were needed. Think of them as zombie mailboxes – they always come back from the dead.


Posted in Email, Exchange 2013 | 4 Comments

Synchronizing the Exchange 2013 public folder hierarchy across public folder mailboxes

For space reasons, this text is another bit that was cut out of my Exchange 2013 Inside Out: Mailbox and High Availability book. FWIW, here it is…

As you’re probably well aware, public folders are scrubbed and polished to become bright, new, and modern in Exchange 2013. Instead of the legacy public folder database, the folders are stored in public folder mailboxes alongside other mailboxes (both user and system) in regular mailbox databases so that they can benefit from the investment Microsoft has made over the last decade to give Exchange many high availability features.

The good news is that the new scheme works for both on-premises and cloud deployments. As usual, migration is a bit of a pain, but once you have moved the older public folders across to mailboxes, everything works quite nicely as long as you have clients that understand how to access the new content. For now that means Outlook 2013 or Outlook Web App.

The first public folder mailbox created in an organization holds the primary, or writeable, copy of the folder hierarchy. All other public folder mailboxes contain a non-writeable, or secondary, copy of the hierarchy that is updated by an Exchange mailbox assistant at least every 24 hours, or every 15 minutes if clients are connected to a public folder mailbox that contains a secondary copy of the hierarchy.
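To see which public folder mailbox holds the primary hierarchy, you can interrogate the mailboxes with Get-Mailbox. A sketch, assuming the IsRootPublicFolderMailbox property is the marker Exchange 2013 uses for the primary:

```powershell
# The mailbox holding the primary (writeable) hierarchy reports
# IsRootPublicFolderMailbox as True; all others hold secondary copies.
Get-Mailbox -PublicFolder | Format-Table Name, IsRootPublicFolderMailbox -AutoSize
```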

Synchronization ensures that all of the public folder mailboxes present the same hierarchy to clients. By reference to the writeable copy, public folder mailboxes that hold secondary copies learn about additions and removals of public folders and the mailboxes that hold public folder content. The older form of public folders also synchronizes information between databases but uses replication messages for this purpose. By contrast, Exchange 2013 synchronizes public folder mailboxes by connecting to the mailbox that contains the primary hierarchy and adjusting a secondary copy based on the primary.

Synchronization is obviously tremendously important. Normally everything happens automatically and you should never have to interfere. But if such a situation arose, as in the case when a user reports that no trace of a newly created public folder is visible to them, you can force synchronization for a mailbox that holds a secondary copy of the public folder hierarchy by running the Update-PublicFolderMailbox cmdlet. For instance, this example instructs Exchange to synchronize the hierarchy in the “PFMBX2” mailbox with the primary copy:

Update-PublicFolderMailbox -Identity 'PFMBX2' -InvokeSynchronizer -FullSync

[Note: this is also the solution to the bug that Microsoft discovered soon after Exchange 2013 RTM CU2 was released when permissions on secondary public folder mailboxes disappeared following a move]

If everything goes to plan, you’ll see a message saying that “Sync with mailbox that contains primary hierarchy is complete” and you can go on with your life. The more curious will want more detail and fortunately this can be gained by running the Get-PublicFolderMailboxDiagnostics cmdlet. As its name indicates, this cmdlet is designed to help debugging problems with public folder mailboxes, but it does reveal some interesting data. In this example we run the cmdlet for the same public folder mailbox and direct the output to a variable. We then look at the synchronization data that is revealed by the cmdlet.

$Info = Get-PublicFolderMailboxDiagnostics -Identity "PFMBX2"


NumberOfBatchesExecuted          : 1
NumberOfFoldersToBeSynced        : 0
BatchSize                        : 500
NumberOfFoldersSynced            : 0
DisplayName                      : Public Folder Diagnostics Information
LastSyncFailure                  :
LastAttemptedSyncTime            : 28/08/2013 12:17:13
LastSuccessfulSyncTime           : 28/08/2013 12:17:15
LastFailedSyncTime               :
NumberofAttemptsAfterLastSuccess : 0
LastSyncCycleLog                 :
2013-05-28T11:17:13.925Z,,Entry,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,Sync started,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
2013-05-28T11:17:15.064Z,,Verbose,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,Delay:0;Iteration:1,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
2013-05-28T11:17:15.064Z,,Verbose,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,BeginReconcilation,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
2013-05-28T11:17:15.204Z,,Verbose,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,EndReconcilation,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
2013-05-28T11:17:15.204Z,,Statistics,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,0 folders have been added,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
2013-05-28T11:17:15.204Z,,Statistics,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,0 folders have been updated,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
2013-05-28T11:17:15.204Z,,Statistics,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,0 folders have been deleted,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
2013-05-28T11:17:15.204Z,,Success,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,Diagnostics for monitoring is successfully completed,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36

Everything seems good in this instance, which is what we’d expect.

Remember that these hints only work when the commands are run against a public folder mailbox that holds a secondary copy of the hierarchy. The mailbox that holds the primary copy doesn’t have the same need to synchronize as it already knows everything about the hierarchy. And of course, if you use public folders with Office 365, you won’t see any trace of these cmdlets because Microsoft takes care to maintain public folders in good health for its cloud tenants.


Posted in Email, Exchange 2013 | 3 Comments

Why Clutter generates so many FAIs in user Inboxes

Playing around with the Get-MailboxFolderStatistics cmdlet the other day (as you do), I noticed that the number of items reported for the Inbox folder (8,443) didn’t match the number shown by Outlook (8,009). Of course, Outlook Web App has the good sense not to display the total number of items in a folder so as to avoid these kinds of debates, but once I had noticed the discrepancy, it was time to check it out.

Checking items in an Inbox

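For the record, the check is a one-liner. A sketch; “TRedmond” is a placeholder for the mailbox identity:

```powershell
# Report item counts for the Inbox. The count reported here includes the hidden
# FAIs held in the associated contents table, which is why it can exceed the
# number that Outlook displays for the folder.
Get-MailboxFolderStatistics -Identity "TRedmond" -FolderScope Inbox |
    Format-Table Name, ItemsInFolder, FolderSize -AutoSize
```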

Nothing quite reveals the secrets that lurk inside Exchange mailboxes like MFCMAPI does. It’s a utility that should really be close at hand for any Exchange administrator because of its usefulness in many situations. In this case, all I wanted to do was to poke around the Inbox, but that’s only the start of what you can do with the program.

MFCMAPI works for both on-premises and cloud Exchange mailboxes. To be sure that you see everything, configure your Outlook profile to connect directly to the server rather than running in cached Exchange mode, which is the most common method used to run Outlook. A quick change to Outlook’s settings and MFCMAPI was ready to roll.

The answer is actually pretty simple. The Inbox folder is used as a convenient storage location for folder associated items (FAIs): hidden items created by Exchange and clients to store settings and all manner of configuration details. The Inbox is used because you can always be sure that it exists in a mailbox. The FAIs are stored in the folder’s associated contents table rather than the normal table used for regular mailbox items.

For example, Exchange’s Messaging Records Management (MRM) features store details about the retention policy that is assigned to a mailbox, including the retention tags in the policy, in an FAI, which is created the first time a mailbox is processed by the Managed Folder Assistant following the assignment of the policy. Another FAI is used to hold details of RSS feeds.
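If you don’t want to wait for the Managed Folder Assistant’s normal workcycle to create the MRM FAI, you can prod it manually. A sketch; “TRedmond” is again a placeholder identity:

```powershell
# Force the Managed Folder Assistant to process a mailbox immediately; the MRM
# FAI is created on the first pass after a retention policy is assigned.
Start-ManagedFolderAssistant -Identity "TRedmond"
```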

A client-specific example is the weather settings FAI, which is created by Outlook 2013 to store details of the location selected for weather information displayed in the Calendar.

But the biggest set of FAIs accumulated in the Inbox were those created to help the Clutter feature in Exchange Online figure out which messages are important to a user and which are not. In my case, hundreds of FAIs hold training information gathered through observation of how I deal with messages – the ones I delete unread, the ones I move to the Clutter folder, the ones that I answer immediately, and so on. These items represent some of the “signals” gathered by Clutter to help it sort the messages arriving in an Inbox into those that should remain there and those that should be redirected into the Clutter folder. See my FAQ for more information on how Clutter works.

MFCMAPI exposes Clutter FAIs

The fact that Clutter creates so many FAIs isn’t really a problem because the items themselves are pretty small and anyway, given that mailboxes are so massive now, the couple of hundred kilobytes consumed to train my mailbox to behave properly seems like a good investment.

It would be nice if Clutter appeared for on-premises mailboxes too, but all the signs are that this is one complex feature that needs the kind of tender loving care that only a dedicated engineering team can provide. That doesn’t happen so often in on-premises deployments…

Follow Tony @12Knocksinna


Exchange Online and Native Data Protection – No Backups Baby!

Cloud-based email services such as Exchange Online bring a great deal of value to the table. A service delivered for a fixed, known cost is constantly refreshed with new features (“evergreen software”) and relieves administrators of the need to perform all the mundane tasks required for day-to-day server management. In addition, the provider takes responsibility for knitting together software to deliver new services, such as Clutter, Office Delve, and People View – all of which were previewed at the MEC 2014 conference and have recently appeared in “the service”.

And when all of this is backed up by an impressive workflow to automate operations (as explained in this “Behind the Scenes” session from MEC 2014), it seems like cloud-based email is the answer for almost everyone.

By their very nature, cloud operations have to proceed within strict limits because this is the only way that automation can be applied to make everything happen in a predictable and robust manner. If automation can’t be applied to a task, you need human intervention, and that’s bad because it’s a) expensive and b) prone to error. Neither of these characteristics contributes to the successful economic operation of massive services.

The Office 365 design and support team do their level best to ensure that very few situations occur that cannot be handled by their procedures. And in the general run of things, they do very well, which is why Microsoft can report better Office 365 results every quarter. More customers are in the cloud, more companies are using Microsoft cloud services, and more servers, datacenters, and network capacity have been deployed to satisfy demand. It’s all a happy picture.

That is, of course, until you enter the realm of problems that are not catered for by cloud operations, which is when you realize just how much control you cede when your mailbox is absorbed by the cloud.

Take backups for example. Now, I know that the mere mention of backups is sufficient to make some readers curl up in a ball because they’ve been scarred when bad things happened with backups in the past. Actually, the backups probably worked or seemed to work; problems usually happen when you attempt to restore data. But in most cases, companies have their backup and restore regimes worked out and they’re happy.

But backup happens – or rather doesn’t happen – in a different way when you deal with the scale found in Office 365. Just think about how you would do backups for the 100,000 servers that run Exchange Online. It’s a nightmare just to plan backups, let alone figure out how to handle the resulting backup media.

So Microsoft doesn’t do backups for Exchange Online. And why would they? After all, there’s plenty of technology built into Exchange to allow backups to be eliminated, or so the story of Exchange native data protection goes. As explained by TechNet, Exchange Online uses multiple database copies in DAGs arranged across multiple datacenters to ensure that nothing goes wrong and data is protected. We also learned at MEC that Exchange Online uses lagged database copies to ensure that creeping corruption can’t cause total meltdown. It all sounds wonderful. And automation implemented in features such as the Replay Lag Manager makes this technology work better for on-premises customers too.
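On-premises, the lagged copy part of the story is configured per database copy. A sketch, assuming a DAG member named EXSRV2 that hosts a passive copy of a database called DB1:

```powershell
# Configure a 7-day replay lag (format is days.hours:minutes:seconds) so that
# log replay into this copy trails the active copy, giving a window in which
# logical corruption can be caught before it reaches every copy
Set-MailboxDatabaseCopy -Identity "DB1\EXSRV2" -ReplayLagTime 7.00:00:00
```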

It is, as long as the data you want to recover is still around. That means that the required items are still in a database, because once items are permanently removed from a database, they are gone forever. Remember – there are no backups.

By default, single item recovery is enabled for mailboxes to ensure that the Managed Folder Assistant won’t delete items until the deleted items retention period lapses. The default period is 14 days, which isn’t a lot. You can increase the deleted items retention period to a maximum of 30 days, whereas on-premises Exchange allows a maximum of 24,855 days. The on-premises maximum seems a little ridiculous as it allows for more than 68 years of deleted rubbish to accumulate in a mailbox. On the other hand, 30 days seems pretty restrictive.
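Both settings are controlled through Set-Mailbox. For example, to push a mailbox out to the Exchange Online maximum (the identity is a placeholder):

```powershell
# Extend the deleted item retention period to the Exchange Online maximum
# of 30 days and make sure single item recovery is switched on
Set-Mailbox -Identity "TRedmond" -RetainDeletedItemsFor 30 -SingleItemRecoveryEnabled $true
```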

With the maximum set, a user has 30 days to recover a deleted message. If that period passes and a user then “remembers” that some important item has been deleted, they are plumb out of luck and no amount of pleading will cause the Office 365 support desk to budge. In any case, what could support do? It’s inconceivable that they would take the lagged database copy and use it to recover the item even if it were possible to do so (i.e. the lagged period was sufficient so that the item had not yet been deleted in the lagged copy). Such an operation would be intensely manual, expensive, and could compromise the smooth operation of the service.

All you can do to protect the mailboxes of important people who might delete important mail in a forgetful moment is to put their mailboxes on litigation hold. This is not a good answer because it’s not what litigation hold is designed to do, but it works.
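Placing a mailbox on litigation hold is a one-liner (again, the identity is a placeholder):

```powershell
# Put a mailbox on litigation hold so that deleted and modified items
# are preserved in the Recoverable Items folder indefinitely
Set-Mailbox -Identity "CEO" -LitigationHoldEnabled $true
```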

The alternative is to educate users not to do stupid things. After all, they have a massive mailbox quota so why would they want to delete anything… or so the story goes. But users are humans and humans make mistakes and education might not work.

Or, you can do what Microsoft is proposing to do in the feature described in the Office 365 Roadmap which says:

The default 30-day retention period of deleted items folder on an Exchange Online mailbox will now be removed.  This means the user no longer has to worry about their deleted items folder automatically deleting emails every 30 days, but instead they can choose to empty the folder at their convenience. The admin can set a limit through Exchange Admin Console and PowerShell if they want to set a default limit on the folder.

In fact, I think the text is incorrect. What seems to be happening is that Microsoft is removing the retention tag for the Deleted Items folder from the Default MRM Policy that is automatically applied to every Exchange Online mailbox, or they are disabling the tag for the Deleted Items folder (by setting its RetentionEnabled property to $False). This would be the smarter course of action as the tag can then stay in place on items but will be ignored by the Managed Folder Assistant.
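If you wanted to take that smarter course yourself today, disabling the tag is straightforward. A sketch, assuming the tag still carries its default name:

```powershell
# Disable the Deleted Items retention tag so the Managed Folder Assistant
# ignores it; the tag stays stamped on items but is no longer processed
Set-RetentionPolicyTag -Identity "Deleted Items" -RetentionEnabled $false
```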

In either case, the net effect will be that items accumulate in the Deleted Items folder for up to two years, after which the default move-to-archive tag kicks in and the items are moved to an archive mailbox, if one is enabled. If not, the items simply remain in the Deleted Items folder until the user decides to empty it. The items never get into the Recoverable Items folder, so the deleted items retention period described above won’t affect their recoverability. The extra storage really doesn’t matter because most users will accumulate no more than a gigabyte or two of deleted items annually.

An item on the Office 365 roadmap means that Microsoft is working on its delivery for some time in the future. So far the exact timing for the implementation of this feature is uncertain, but it should come in the relatively near future.

If you consider that it is better for the Deleted Items folder to be cleared out on a regular basis, you can re-enable the Deleted Items retention tag. Alternatively, you can adopt a middle course and update the Deleted Items retention tag to lengthen its retention period. Something like a 120-day retention period seems reasonable. If someone has not worked out that they have lost something important after four months, that information might not be so important after all.
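The middle course is another single cmdlet call, again assuming the tag’s default name:

```powershell
# Lengthen the Deleted Items retention period to 120 days
# instead of disabling the tag outright
Set-RetentionPolicyTag -Identity "Deleted Items" -AgeLimitForRetention 120
```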

Conceptually, in some respects, the on-premises scenario for item recovery is much simpler (for the user). Administrators locate backup media for a time when the item was known to be in the database, the backup is restored to a recovery database, and the item is then transferred to a PST. The PST is then cheerfully handed over to the user with the best wishes of the administrators. All is well. Apart, that is, from the number of person-hours consumed in this activity. And that’s why Exchange Online uses Native Data Protection and trades large amounts of disk space to ensure that the need to recover items for users simply doesn’t exist.

Native Data Protection is part of what you sign up to when you move mailboxes to the cloud. Like other elements of the cloud experience, you have to trust the operators to keep your data safe. Outside the unique requirements of forgetful users, the cloud works. Maybe not the way that you’d like it to work, but it does work.
