Corrupt health mailboxes from a flattened Exchange server


I flattened an Exchange 2013 server the other day. I don’t mean that I took the physical computer out into the parking lot and drove a large vehicle over it to reduce the server to so many random bits of metal. Instead, I did what Exchange administrators have done ever since Exchange 2000 came along when a server is proving truculent and Exchange (the product) won’t uninstall cleanly. I ran ADSIEdit and blew away the server object. End of story.

But it wasn’t really the end. In the old days, removing the server object with ADSIEdit was clean and efficient. You could then reinstall Exchange on the server and all would be well. Now, Exchange leaves traces of itself in many different places in Active Directory and the system registry, and it’s a real pain to find and remove every vestige of a server and then to validate that nothing remains. In short, flattening with ADSIEdit is only the start of the process.

The support engineers in the Exchange product group are appalled by such behavior. It’s neither polite nor the least bit subtle to remove a server so brutally. They prefer that you run the Setup program and take the uninstall option. This would be nice if it worked all the time but sometimes it just doesn’t. In my case, I had committed a major faux pas that prevented the uninstall process from completing.

When I installed the server, I told Setup to use C:\Exchange as the base directory. That installation failed (it was a beta version), so I restarted. The Exchange setup program is pretty intelligent and uses watermarks to track how far it has progressed before a problem occurs so that it can restart without redoing completed work. Unfortunately, I failed to input C:\Exchange when prompted by Setup and the program therefore used its default location, C:\Program Files\Microsoft\Exchange Server\V15. Setup ran through to completion but left a confused and bewildered server whose files were merrily scattered across the two directories. Hence the need for uninstall, frustration when uninstall didn’t work, and using ADSIEdit to remove the server from the organization.

A better approach might have been to rebuild Windows on the server and then use Setup’s /RecoverServer option, which takes the information held about a server in Active Directory and uses it to reinstall Exchange. Such an approach might have worked, but I concluded that the reinstalled server would probably have been as confused as the original.

It would be nice if Exchange offered a /DeleteAndRemoveNow switch for Setup that would blow away a server and remove every possible trace through brute force if necessary. Unhappily that request has fallen on deaf ears as the product group doesn’t believe that it’s necessary. But it is, especially in test labs or when administrators (like me) do stupid things to servers.

In any case, I did learn something from the experience. After reinstalling Windows from scratch and then Exchange 2013, I found that some weird results were reported when I ran the command Get-Mailbox -Monitoring to view the set of health mailboxes. You might think that this is an odd command to run and certainly not one used regularly. This is true, but I was investigating the depths of Managed Availability and this command reports the set of health mailboxes created in every database in an organization so that probes can create synthetic messages to test that mail flow and other components are working correctly.

As you can see from the screen shot, EMS reported a corrupted mailbox. In effect, some of the properties required for the mailbox were missing (database being one). Although the screen shot shows just one corrupted mailbox, I started off with many such mailboxes.

Inconsistent health mailboxes

I noticed that the corrupt mailboxes are reported as being associated with an object stored in the Monitoring Mailboxes organizational unit, a child of the well-known Microsoft Exchange System Objects (MESO) organizational unit. This was a surprise because Exchange 2013 used to create the disabled user objects associated with health mailboxes in the Users organizational unit. Apparently the change to MESO was made in Exchange 2013 CU1, something that passed me right by. The change makes perfect sense because most installations don’t like random objects showing up in Users; it’s much better when applications have their own location for data that they use.

But why did I have some corrupt health mailboxes? The answer is simple: these are lingering traces of the databases that used to exist on the server that I flattened with ADSIEdit. Because the server had gone away, Exchange was not able to associate the user objects with the mailboxes in the now-departed databases. Hence the corruption.
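
If you want to see how many health mailboxes are affected, EMS can do the work for you. Here’s a quick sketch that assumes a corrupt entry shows up with a blank Database property, which is how the problem appeared in my organization:

Get-Mailbox -Monitoring | Where-Object { -not $_.Database } | Format-Table Name, WhenCreated, DistinguishedName -AutoSize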

The solution was simple – remove the corrupted objects (using ADSIEdit of course). In this case, I knew that I had a backstop because if I made a mistake and deleted the wrong objects, the Microsoft Exchange Health Manager service would recreate the mailboxes the next time they were needed. Think of them as zombie mailboxes – they always come back from the dead.
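
If you’d rather script the cleanup than click around in ADSIEdit, the Active Directory module can remove the orphaned user objects directly. This is an alternative sketch rather than the route I took, and the filter again assumes that the corrupt entries are the ones without a database:

Import-Module ActiveDirectory
# Remove the orphaned user objects behind the corrupt health mailboxes
Get-Mailbox -Monitoring | Where-Object { -not $_.Database } |
    ForEach-Object { Remove-ADObject -Identity $_.DistinguishedName -Confirm:$true }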

Follow Tony @12Knocksinna


Synchronizing the Exchange 2013 public folder hierarchy across public folder mailboxes


For space reasons, this text is another bit that was cut out of my Exchange 2013 Inside Out: Mailbox and High Availability book. FWIW, here it is…

As you’re probably well aware, public folders are scrubbed and polished to become bright, new, and modern in Exchange 2013. Instead of the legacy public folder database, the folders are stored in public folder mailboxes alongside other mailboxes (both user and system) in regular mailbox databases so that they can benefit from the investment Microsoft has made over the last decade to give Exchange many high availability features.

The good news is that the new scheme works for both on-premises and cloud deployments. As usual, migration is a bit of a pain, but once you have moved the older public folders across to mailboxes, everything works quite nicely as long as you have clients that understand how to access the new content. For now that means Outlook 2013 or Outlook Web App.

The first public folder mailbox created in an organization holds the primary or writeable copy of the folder hierarchy. All other public folder mailboxes contain a read-only, secondary copy of the hierarchy that is updated by an Exchange mailbox assistant at least every 24 hours, or every 15 minutes if clients are connected to a public folder mailbox that contains a secondary copy of the hierarchy.

Synchronization ensures that all of the public folder mailboxes present the same hierarchy to clients. By reference to the writeable copy, public folder mailboxes that hold secondary copies learn about additions and removals of public folders and the mailboxes that hold public folder content. The older form of public folders also synchronizes information between databases, but uses replication messages for this purpose. By contrast, Exchange 2013 synchronizes public folder mailboxes by connecting to the mailbox that contains the primary hierarchy and adjusting a secondary copy based on the primary.
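
If you’re not sure which mailbox holds the writeable hierarchy, EMS will tell you. A quick sketch (the property name is the one exposed on public folder mailboxes):

# The mailbox that reports True holds the primary (writeable) hierarchy
Get-Mailbox -PublicFolder | Format-Table Name, IsRootPublicFolderMailbox -AutoSize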

Synchronization is obviously tremendously important. Normally everything happens automatically and you should never have to intervene. But if the need arises, as when a user reports that no trace of a newly created public folder is visible to them, you can force synchronization for a mailbox that holds a secondary copy of the public folder hierarchy by running the Update-PublicFolderMailbox cmdlet. For instance, this example instructs Exchange to synchronize the hierarchy in the “PFMBX2” mailbox with the primary copy:

Update-PublicFolderMailbox -Identity 'PFMBX2' -InvokeSynchronizer -FullSync

[Note: this is also the solution to the bug that Microsoft discovered soon after Exchange 2013 RTM CU2 was released when permissions on secondary public folder mailboxes disappeared following a move]

If everything goes to plan, you’ll see a message saying that “Sync with mailbox that contains primary hierarchy is complete” and you can go on with your life. The more curious will want more detail and fortunately this can be gained by running the Get-PublicFolderMailboxDiagnostics cmdlet. As its name indicates, this cmdlet is designed to help debugging problems with public folder mailboxes, but it does reveal some interesting data. In this example we run the cmdlet for the same public folder mailbox and direct the output to a variable. We then look at the synchronization data that is revealed by the cmdlet.

$Info = Get-PublicFolderMailboxDiagnostics -Identity "PFMBX2"

$Info.SyncInfo

NumberOfBatchesExecuted          : 1
NumberOfFoldersToBeSynced        : 0
BatchSize                        : 500
NumberOfFoldersSynced            : 0
DisplayName                      : Public Folder Diagnostics Information
LastSyncFailure                  :
LastAttemptedSyncTime            : 28/08/2013 12:17:13
LastSuccessfulSyncTime           : 28/08/2013 12:17:15
LastFailedSyncTime               :
NumberofAttemptsAfterLastSuccess : 0
LastSyncCycleLog                 : 2013-05-28T11:17:13.925Z,,Entry,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,Sync started,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
                                   2013-05-28T11:17:15.064Z,,Verbose,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,Delay:0;Iteration:1,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
                                   2013-05-28T11:17:15.064Z,,Verbose,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,BeginReconcilation,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
                                   2013-05-28T11:17:15.204Z,,Verbose,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,EndReconcilation,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
                                   2013-05-28T11:17:15.204Z,,Statistics,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,0 folders have been added,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
                                   2013-05-28T11:17:15.204Z,,Statistics,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,0 folders have been updated,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
                                   2013-05-28T11:17:15.204Z,,Statistics,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,0 folders have been deleted,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36
                                   2013-05-28T11:17:15.204Z,,Success,,f58fcb52-7d9e-44c1-9647-0e6b12a19823,Diagnostics for monitoring is successfully completed,,2e42c4a7-5ed9-4b42-b08c-ebd4e79eee36

Everything seems good in this instance, which is what we’d expect.

Remember that these hints only work when the commands are run against a public folder mailbox that holds a secondary copy of the hierarchy. The mailbox that holds the primary copy doesn’t have the same need to synchronize as it already knows everything about the hierarchy. And of course, if you use public folders with Office 365, you won’t see any trace of these cmdlets because Microsoft takes care to maintain public folders in good health for its cloud tenants.

Follow Tony @12Knocksinna


Why Clutter generates so many FAIs in user Inboxes


Playing around with the Get-MailboxFolderStatistics cmdlet the other day (as you do), I noticed that the number of items reported for the Inbox folder (8,443) didn’t match the number shown by Outlook (8,009). Of course, Outlook Web App has the good sense not to display the total number of items in a folder so as to avoid these kinds of debates, but once I had noticed the discrepancy, it was time to check it out.
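
For anyone who wants to run the same check, the command is straightforward. A sketch, with TRedmond standing in for the mailbox alias:

Get-MailboxFolderStatistics -Identity 'TRedmond' -FolderScope Inbox | Format-Table Name, ItemsInFolder, FolderSize -AutoSize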

Checking items in an Inbox

Nothing quite reveals the secrets that lurk inside Exchange mailboxes like MFCMAPI does. It’s a utility that should really be close at hand for any Exchange administrator because of its usefulness in many situations. In this case, all I wanted to do was to poke around the Inbox, but that’s only the start of what you can do with the program.

MFCMAPI works for both on-premises and cloud Exchange mailboxes. To be sure that you see everything, configure your Outlook profile to connect directly to the server rather than running in cached Exchange mode, which is the most common method used to run Outlook. A quick change to Outlook’s settings and MFCMAPI was ready to roll.

The answer is actually pretty simple. The Inbox folder is used as a convenient storage location for all manner of folder associated items (FAIs), hidden items created by Exchange and clients to store settings and configuration details. The Inbox is used because you can always be sure that it exists in a mailbox. The FAIs are stored in the folder’s associated contents table rather than the normal table used for regular mailbox items.

For example, Exchange’s Messaging Records Management (MRM) features store details about the retention policy that is assigned to a mailbox, including the retention tags in the policy, in an FAI, which is created the first time a mailbox is processed by the Managed Folder Assistant following the assignment of the policy. Another FAI is used to hold details of RSS feeds.

A client-specific example is the weather settings FAI, which is created by Outlook 2013 to store details of the location selected for weather information displayed in the Calendar.

But the biggest set of FAIs accumulated in the Inbox was the set created to help the Clutter feature in Exchange Online figure out which messages are important to a user and which are not. In my case, hundreds of FAIs hold training information gathered through observation of how I deal with messages – the ones I delete unread, the ones I move to the Clutter folder, the ones that I answer immediately, and so on. These items represent some of the “signals” gathered by Clutter to help it sort the messages that arrive in an Inbox into those that should remain there and those that should be redirected into the Clutter folder. See my FAQ for more information on how Clutter works.
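
If you’d rather not fire up MFCMAPI, the EWS Managed API can reveal the same hidden items. This is only a sketch built on a few assumptions - the DLL path reflects a default install of the EWS Managed API 2.2, the URL is the standard Office 365 EWS endpoint, and the credentials and SMTP address are placeholders:

Add-Type -Path 'C:\Program Files\Microsoft\Exchange\Web Services\2.2\Microsoft.Exchange.WebServices.dll'
$service = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService
$service.Credentials = New-Object Microsoft.Exchange.WebServices.Data.WebCredentials('tredmond@contoso.com', 'password')
$service.Url = New-Object System.Uri('https://outlook.office365.com/EWS/Exchange.asmx')
# Ask for the associated (hidden) items rather than the normal contents table
$view = New-Object Microsoft.Exchange.WebServices.Data.ItemView(1000)
$view.Traversal = [Microsoft.Exchange.WebServices.Data.ItemTraversal]::Associated
$results = $service.FindItems([Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Inbox, $view)
# Group the FAIs by message class to see which feature created them
$results.Items | Group-Object ItemClass | Sort-Object Count -Descending | Format-Table Count, Name -AutoSize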

MFCMAPI exposes Clutter FAIs

The fact that Clutter creates so many FAIs isn’t really a problem because the items themselves are pretty small and anyway, given that mailboxes are so massive now, the couple of hundred kilobytes consumed to train my mailbox to behave properly seems like a good investment.

It would be nice if Clutter appeared for on-premises mailboxes too, but all the signs are that this is one complex feature that needs the kind of tender loving care that only a dedicated engineering team can provide. That doesn’t happen so often in on-premises deployments…

Follow Tony @12Knocksinna


Exchange Online and Native Data Protection – No Backups Baby!


Cloud-based email services such as Exchange Online bring a great deal of value to the table. A service delivered for a fixed, known cost is constantly refreshed with new features (“evergreen software”) and relieves administrators of the need to perform the mundane tasks required for day-to-day server management. In addition, the provider takes responsibility for knitting together software to deliver new services, such as Clutter, Office Delve, and People View – all of which were previewed at the MEC 2014 conference and have recently appeared in “the service”.

And when all of this is backed up by an impressive workflow to automate operations (as explained in this “Behind the Scenes” session from MEC 2014), it seems like cloud-based email is the answer for almost everyone.

By their very nature, cloud operations have to proceed within strict limits as this is the only way that automation can be applied to make everything happen in a predictable and robust manner. If automation can’t be applied to a task, you need human intervention and that’s bad because it’s a) expensive and b) prone to error. Neither of these characteristics contributes to the successful economic operation of massive services.

The Office 365 design and support team do their level best to ensure that very few situations occur that cannot be handled by their procedures. And in the general run of things, they do very well, which is why Microsoft can report better Office 365 results every quarter. More customers are in the cloud, more companies are using Microsoft cloud services, and more servers, datacenters, and network capacity have been deployed to satisfy demand. It’s all a happy picture.

That is, of course, until you enter the realm of problems that are not catered for by cloud operations, which is when you realize just how much control you cede when your mailbox is absorbed by the cloud.

Take backups for example. Now, I know that the mere mention of backups is sufficient to make some readers curl up in a ball because they’ve been scarred when bad things happened with backups in the past. Actually, the backups probably worked or seemed to work; problems usually happen when you attempt to restore data. But in most cases, companies have their backup and restore regimes worked out and they’re happy.

But backup happens – or rather doesn’t happen – in a different way when you deal with the scale found in Office 365. Just think about how you would do backups for the 100,000 servers that run Exchange Online. It’s a nightmare just to plan backups, let alone figure out how to handle the resulting backup media.

So Microsoft doesn’t do backups for Exchange Online. And why would they? After all, there’s plenty of technology built into Exchange to allow backups to be eliminated, or so the story of Exchange native data protection goes. As explained by TechNet, Exchange Online uses multiple database copies in DAGs arranged across multiple datacenters to ensure that nothing goes wrong and data is protected. We also learned at MEC that Exchange Online uses lagged database copies to ensure that creeping corruption can’t cause total meltdown. It all sounds wonderful. And automation implemented in features such as the Replay Lag Manager makes this technology work better for on-premises customers too.

And it is, as long as the data you want to recover is still around. That means that the required items are still in a database, because once items are permanently removed from a database, they are gone forever. Remember – there are no backups.

By default, single item recovery is enabled for mailboxes to ensure that the Managed Folder Assistant won’t delete items until the deleted items retention period lapses. The default period is 14 days, which isn’t a lot. You can increase the deleted items retention period to a maximum of 30 days whereas on-premises Exchange allows a maximum of 24,855 days. The on-premises maximum seems a little ridiculous as it allows for more than 68 years of deleted rubbish to accumulate in a mailbox. On the other hand, 30 days seems pretty restrictive.
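
Changing the period is a one-line job per mailbox (or a loop over Get-Mailbox for everyone). A sketch with a placeholder mailbox name:

# Extend deleted item retention to the Exchange Online maximum of 30 days
Set-Mailbox -Identity 'TRedmond' -RetainDeletedItemsFor 30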

With the maximum set, a user has 30 days to recover a deleted message. If that period passes and a user then “remembers” that some important item has been deleted, then they are plumb out of luck and no manner of pleading to the Office 365 support desk will cause them to budge. In any case, what could support do? It’s inconceivable that they would take the lagged database copy and use it to recover the item even if it was possible to do so (i.e. the lagged period was sufficient so that the item was not yet deleted in the lagged copy). Such an operation would be intensely manual, expensive, and potentially compromise the smooth operation of the service.

All you can do to protect the mailboxes of important people who might delete important mail in a forgetful moment is to put their mailboxes on litigation hold. This is not a good answer because it’s not what litigation hold is designed to do, but it works.
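
Putting a mailbox on hold is a one-line change too. A sketch with a placeholder identity; the duration parameter is optional and the hold is indefinite without it:

Set-Mailbox -Identity 'CEO' -LitigationHoldEnabled $true -LitigationHoldDuration 2555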

The alternative is to educate users not to do stupid things. After all, they have a massive mailbox quota so why would they want to delete anything… or so the story goes. But users are humans and humans make mistakes and education might not work.

Or, you can do what Microsoft is proposing to do in the feature described in the Office 365 Roadmap which says:

The default 30-day retention period of deleted items folder on an Exchange Online mailbox will now be removed.  This means the user no longer has to worry about their deleted items folder automatically deleting emails every 30 days, but instead they can choose to empty the folder at their convenience. The admin can set a limit through Exchange Admin Console and PowerShell if they want to set a default limit on the folder.

In fact, I think the text is incorrect. What seems to be happening is that Microsoft is removing the retention tag for the Deleted Items folder from the Default MRM Policy that is automatically applied to every Exchange Online mailbox, or they are disabling the tag for the Deleted Items folder (by setting its RetentionEnabled property to $False). This would be the smarter course of action as the tag can then stay in place on items but will be ignored by the Managed Folder Assistant.
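
If you wanted to make the same change yourself in a tenant (or on-premises), disabling the tag is a single command. A sketch that assumes the tag carries its default name:

# Leave the tag in place but have the Managed Folder Assistant ignore it
Set-RetentionPolicyTag -Identity 'Deleted Items' -RetentionEnabled $false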

In either case, the net effect will be that items will accumulate in the Deleted Items folder for up to two years, after which the default move-to-archive tag will kick in and the items will be moved to an archive mailbox, if one is enabled. If not, the items will simply remain in the Deleted Items folder until the user decides to empty that folder. The items will never get into the Recoverable Items folder, so the deleted items retention period described above won’t affect their recoverability. The extra storage really doesn’t matter because most users will accumulate no more than a gigabyte or two of deleted items annually.

An item on the Office 365 roadmap means that Microsoft is working to deliver it at some point in the future. The exact timing for this feature is still uncertain, but it should arrive in the relatively near future.

If you consider that it is better for the Deleted Items folder to be cleared out on a regular basis, you can re-enable the Deleted Items retention tag. Alternatively, you can adopt a middle course and update the Deleted Items retention tag to lengthen its retention period. Something like a 120-day retention period seems reasonable. If someone has not worked out that they have lost something important after four months, that information might not be so important after all.
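
Either adjustment is another quick EMS change, again assuming the default tag name:

# Re-enable the tag and stretch its retention period to 120 days
Set-RetentionPolicyTag -Identity 'Deleted Items' -RetentionEnabled $true -AgeLimitForRetention 120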

In some respects, the on-premises scenario for item recovery is much simpler (for the user). Administrators locate backup media for a time when the item was known to be in the database, the backup is restored to a recovery database, and the item is then transferred to a PST. The PST is then cheerfully handed over to the user with the best wishes of the administrators. All is well. Apart, that is, from the number of person-hours consumed in this activity. And that’s why Exchange Online uses Native Data Protection and trades large amounts of disk space to ensure that the need to recover items for users simply doesn’t exist.

Native Data Protection is part of what you sign up to when you move mailboxes to the cloud. Like other elements of the cloud experience, you have to trust the operators to keep your data safe. Outside the unique requirements of forgetful users, the cloud works. Maybe not the way that you’d like it to work, but it does work.

Follow Tony @12Knocksinna


Exchange Unwashed Digest – January 2015


January 2015 proved to be quite a varied month in my Exchange Unwashed blog on WindowsITPro.com.  Everything from technology transfer from the cloud, new mobile clients, some issues I had with Delve, the new Office for Windows, and Azure witness servers, all washed down with a good helping of opinion and insight. Here’s what happened during January:

Do the ex-Acompli now Outlook clients really compromise security or is everyone overreacting? (Jan. 30): Lots of fun and games resulted after Microsoft released a rebranded version of the Acompli mail apps that they bought in late November as the new Outlook for iOS and Outlook for Android clients. It’s true that not much had been done to make the clients any different, apart of course from the corporate makeover, so they still store data on Amazon Web Services servers and they still don’t have as much functionality as Outlook Web App for iOS delivers, but that’s not the point. These are just beta versions that Microsoft has released to show their new mobile email client strategy. But that didn’t stop people worrying… a lot.  All overreaction in my book, but you’re entitled to your opinion.

The underappreciated Exchange Replay Lag Manager (Jan. 29): Many complain that Microsoft is totally fixated on the cloud these days and with good reason, if you take their marketing and public statements as the truth. In fact, a lot of good technology is being transferred back to on-premises customers after it is developed and debugged to work at incredible scale within Office 365. The Replay Lag Manager is just one of those features. It’s been available since Exchange 2013 SP1 but hardly anyone knew…

What wasn’t revealed in the next chapter of Office on Windows (Jan. 27): Microsoft did a nice job telling everyone about what was happening with Windows 10 and then followed up by revealing what was going to happen with Office for Windows 10. But what they didn’t cover was Outlook 2016, which I think will be terrifically important because it is still the major client when it comes to enterprise deployments. Maybe Outlook doesn’t quite fit when it comes to major PR bashes. Or something like that.

Compliance iceberg awaits Facebook @Work (Jan. 22): Facebook came out and told the press that they are working on a version that is suitable for use by employees who wish to network (socially) with others in the same company. I’m sure that this will be a real treat for some, but it is also likely to be a compliance nightmare for others. After all, if you’ve invested heavily in software that is capable of protecting and preserving valuable corporate data, you’re absolutely going to rush out to embrace Facebook. Aren’t you?

Reporting Office 365 (Jan. 20): Office 365 is obviously a great success, but it would be nice to know what happens under the covers from time to time, especially in terms of what your users are doing. As it turns out, Microsoft has an Office 365 reporting web service and a data mart where it gathers information that might be of interest to tenants, but the reports provided through the Office 365 portal aren’t great. Which is why you might turn to a third party company that specializes in reporting…

Control needed as Office Delve introduces even more potential for confusion (Jan. 15): I like Office Delve a lot, but the way that Microsoft introduces new features is sometimes pretty scary for administrators. Take “boards”, which seem pretty good from a user perspective. But where’s the control over the terms used? Or the boards themselves?

Good week for Exchange on-premises customers as Microsoft makes some important updates (Jan. 13): A number of interesting changes arrived at much the same time. On-premises customers might have thought they were waiting for a bus. Not much happened for a long time and then… The biggest news was support for a witness server on Azure. Just a witness server. Not a complete production deployment. That would be pretty expensive, don’t you think?

Behind the scenes with the Managed Folder Assistant (Jan. 8): The Managed Folder Assistant (MFA), which runs on Exchange 2010, Exchange 2013, and Exchange Online mailbox servers, is a black box. It runs, it processes mailboxes, items disappear from mailboxes, everyone is happy. But you can find out what MFA is doing if you go looking in the right place.

Exchange administrative tools and Active Directory: Not as close as they once were (Jan. 6): I must be getting old when I start writing about the way things were. At one time Exchange and Active Directory were in each other’s pockets. Not any more…

The World of Exchange in 2014 and What’s Likely in 2015 (Jan. 1): Proving that I was busy on New Year’s Day, I published some reflections on what happened in 2014 and what we might see in 2015. One of these days I shall check the accuracy of the predictions.

That’s all until next month. Stay in touch at the Exchange Unwashed blog or at Twitter (below). All I can guarantee is that February will bring some interesting nuggets to discuss.

Follow Tony @12Knocksinna


Why Exchange 2013 doesn’t need the Microsoft Office Filter Pack


For space reasons, this text is another bit that was cut out of my Exchange 2013 Inside Out: Mailbox and High Availability book. FWIW, here it is…

If you’ve ever installed Exchange 2010, you’re probably aware of the need to install the Microsoft Office Filter Pack 2.0 (the latest version is SP2). The iFilters in the filter pack are important to Exchange as they are used by the MSSearch component to create the content indexes for mailbox databases. The indexes are used for searching by online clients (Outlook configured in cached Exchange mode uses Windows Desktop Search) and eDiscovery. They’re also used by the transport system if access is necessary to message content, such as when a transport rule needs to identify content as messages pass through the pipeline.

Exchange 2013 swaps MSSearch for the Search Foundation, a component shared with SharePoint 2013. The Search Foundation has no need of the Office Filter Pack because it includes its own filters. Unfortunately, until SP1 came along, the Exchange 2013 Setup program overlooked this fact and stated that the Office Filter Pack was a prerequisite. Setup would still install Exchange if the Office Filter Pack was missing, but who’s going to ignore a warning issued by Setup? In any case, that warning seems to have finally been suppressed.

Search Foundation includes an impressive array of filters. As you’d expect, all of the common Microsoft Office formats are supported, and then you find pretty well all of the other formats that are usually encountered as attachments to messages, including Adobe PDF and ZIP files.

Even graphic formats such as JPEG, GIF, and TIF are included. However, Search Foundation doesn’t index the graphic contents (which would be a good trick) and opts to settle for the metadata instead.

Exchange marks items that cannot be processed by Search Foundation as “unsearchable items.” You can view the items in a mailbox, database, or server that are deemed unsearchable by running the Get-FailedContentIndexDocuments cmdlet. The metadata for unsearchable items is stored in content indexes, which allows those items to be found during eDiscovery searches.
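
For example, you could check a single mailbox like this (a sketch with a placeholder mailbox name):

Get-FailedContentIndexDocuments -Identity 'TRedmond' | Format-List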

A set of cmdlets is provided to allow administrators to exert some control over the formats supported by Search Foundation. The Get-SearchDocumentFormat cmdlet lists all of the formats known to the Search Foundation. You’ll see output like that shown below if you run the cmdlet.

[PS] C:> Get-SearchDocumentFormat

Identity     Name                                Enabled
--------     ----                                -------
zip          ZIP Archive                         True
gif          Graphics Interchange Format         True
jpeg         JPEG                                True
html         Web Page                            True
mhtml        Web Archive                         True
eml          Email Message                       True
msg          Outlook Item                        True
obd          Microsoft Office Binder             True

...

The Set-SearchDocumentFormat cmdlet is used to enable or disable a format. By default, all known formats are enabled when a mailbox server is installed. For example, to disable any attempt to index information for JPEG images, you’d run the command:

[PS] C:> Set-SearchDocumentFormat -Identity JPEG -Enabled $False

The other cmdlets, New-SearchDocumentFormat and Remove-SearchDocumentFormat, were added in Exchange 2013 SP1. I doubt that many administrators will add new formats or remove existing ones; this is more likely to be done by a third-party software vendor who provides a filter to support a specific format used by their product.
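
For completeness, registering a new format looks something like this - a sketch based on my reading of the documentation, with every value invented for illustration:

New-SearchDocumentFormat -Identity 'ContosoFormat' -Name 'Contoso Proprietary Format' -Extension '.cpf' -MimeType 'application/cpf'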

The bottom line is that the day of the Office Filter Pack is past. Search Foundation is now king of the hill and you can drop this particular task from Exchange deployments.

Follow Tony @12Knocksinna


A year with a Fitbit Flex


I have been using a Fitbit Flex for a year. Well, I should say that I have used two Fitbit Flex devices in the last year, mostly because I lost one while collecting a hire car soon after starting to use this handy little device to track my activity levels.

I lost my Flex because the strap fastener is truly horrible. It is all too easy for it to come undone and allow the Flex to fall off, especially when you take a coat or sweater off. I’m sure that it’s not beyond the wit of men (or women) to come up with a better fastener. I don’t think much of the way that the Flex measures sleep (time awake, time asleep, and so on) as the need to tap to tell the device that you plan to go to sleep and tap again to tell it that you’ve woken up seems superfluous. I am not at my best just after waking up and often forget to tell the Flex that I am in fact awake. It happily then registers blissful sleep when I am working at my PC. Which might just be true.

But that ends the complaints, which is not normal when I discuss an electronic gadget (apart from the Bose QC20 in-ear noise cancelling earphones, which I think are just great, especially when travelling). Generally speaking I like the Flex very much and it has proven successful in getting me to exercise more in the past year.

IT people have a horrible habit of being sedentary. It goes with the territory of sitting down to work with computers and when you’re sitting down, it’s all too easy to stay nice and comfortable, sipping your favorite beverage, and the hours trickle by. The best thing about devices like the Flex is the prompts they provide to get up and do something.

And the data too. Data is very important to IT people, and even though the Flex is very much an entry-level activity monitor, it collects enough data to set activity targets, measure progress against them, and accumulate statistics over time. Seeing the miles and kilometers mount up over the weeks provides the necessary motivation to get up and do something energetic daily.

Or maybe it’s the five lights that the Flex lights up as progress is recorded towards the daily target. Doing enough to make the five lights flash as the target is attained becomes a goal and the encouragement to take some more exercise.

All of the data gathered by the Flex is synchronized up to your Fitbit.com account and can be reviewed through a web dashboard. Here I find that I have walked a total of 2,805.63 km in the last year, or some 3,705,520 steps, which seems a lot. And that proves the benefit of being motivated by a little flashing wristband. I reckon that I probably walk 2.5 km/day just doing bits and pieces, so the additional distance walked because I have been motivated to meet the daily target is some 1,892 km.

When I walk, I typically use a pedometer running on my Nokia Lumia 1020. Because it’s GPS-based, its data is more accurate in terms of distance walked. The Flex operates on the basis of steps, which is an imprecise science when it comes to measuring distance. You can input data from external sources into the Fitbit.com portal to update your statistics.

Overall, I can’t complain. I’ve enjoyed the walks and dropped 4kg in weight. Now I am ready to use a more sophisticated activity monitor. Perhaps the Microsoft Band or even the new Fitbit Surge? Time to consider my options.

Follow Tony @12Knocksinna
