Exchange 2013 modern public folder limitations – what next?


I’m sure that you all enjoyed the breaking news about the restrictions that exist for public folder mailboxes and the public folder hierarchy in the new, improved, and much-hyped modern public folders implemented in Exchange 2013 (and Exchange Online in Office 365). That is, if you like having the wind taken out of your sails, which happened to me because the news completely undermined advice that I had given to others for the best possible reasons (i.e. I knew nothing about the restrictions until I saw the TechNet article).

Since the news broke last Tuesday, I have received many messages from companies that are worried about their migration to Exchange 2013 or Office 365 (both on-premises and cloud platforms share exactly the same limitations) and the prospect that they now face of having to conduct a major “find and prune” exercise in their public folder hierarchy before they can move off Exchange 2010. It’s pretty common to find companies that have hierarchies spanning tens of thousands of folders and some that have hundreds of thousands of public folders. Clean-up could take some time.

You might wonder why any company would have so many public folders. I think there are three main reasons.

First, when they were introduced, public folders were the cornerstone of Microsoft’s collaborative answer to Lotus Notes. Notes had terrible email but great application development capabilities. Exchange was exactly the reverse. But we only discovered this after the weakness of the public folder application development environment (a 16-bit electronic forms development tool) and the problems that existed in public folder replication became apparent. Remember, early Exchange functioned in a world where pervasive networking was not yet in place. But many companies that deployed Exchange early bought into the collaboration story (Microsoft has never been incompetent at selling a great story) and applications such as travel requests, time recording, requests for equipment, and so on were churned out. The applications might have gone away but the public folders that held their data might still be present.

Second, the advice provided to people who deployed public folders was non-existent in the early days. The result was that public folders became a Wild West kind of place with folders being created at the drop of a hat with no logic driving their creation, position in the hierarchy, access control, use, or lifecycle. Because mailbox quotas were so small at the time, some saw public folders as a great place to dump material that they couldn’t keep in their mailbox. Folders proliferated as repositories for email discussion groups, teams, collaborative document creation (a very bad use due to the total inability to lock documents against changes), and other reasons. The average lifespan of a public folder resembled that of the mayfly in some cases as folders were created, used once, and then left to fester forgotten in the hierarchy.

Third, Microsoft never provided good management facilities for public folders. Some tools came out of the Microsoft support groups, like PFDAVAdmin and ExFolders, but in the early days of public folders there was nada. Once public folders started to expand, they did so like rats procreating, and no one was aware of the true nature of the problem until performance started to suffer or replication refused to work (all too often). It would have been nice to have had a tool capable of generating reports on the public folder hierarchy, listing all the folders, their owners, some view of their content, last accessed date, and so on, but this was never part of Exchange. You could browse the hierarchy using the management console (later a separate console), but this was only satisfactory for demo situations when the hierarchy spanned less than a few hundred folders.

Given these factors it is truly surprising that public folders have retained so much support within the Exchange community. I can only conclude that the reason is that public folders have been the only supported collaboration platform within Exchange since 1996. It’s sad but true.

In any case, there is no doubt that a great deal of rubbish exists in most public folder hierarchies. By this, I mean:

  • Folders that no one owns (they have all left the company)
  • Folders that no one wants to use (the information is long since obsolete)
  • Folders used by applications that no longer work
  • Folders that might contain useful information but no one has accessed it in over a year (or more)

The good news about the restrictions for modern public folders is that it will force companies who want to migrate off Exchange 2010 or Exchange 2007 to Exchange 2013 to go through a clean-up process. The only reason why this is good news is that the outcome will be a slimmed down hierarchy that will then be easier to move and should, if under the limits dictated by Microsoft, be supportable.

The bad news is that the process to identify and prune the unwanted folders is intensely manual and excruciatingly boring. Apart from PowerShell scripts, no tools exist to parse the public folder hierarchy and generate reports. And the PowerShell scripts that are in the public domain are simple, so the reports that they generate are unfiltered and do not focus on the clean-up task.

You can write your own scripts to look for the kind of folders described above. However, in many cases some investigative work will be necessary before a decision can be reached to prune a folder. Why was a folder created? Is the data contained in the folder useful in any way? Who can tell IT whether the information is valuable or not? These are all questions whose answers are highly specific to the company who owns the folders and it is hard for a tool to offer any more than an indication that a folder is a candidate for pruning. For instance, a folder that has not been accessed for three years might contain information required by a department in case it is audited by a regulatory authority. The IT Department might not know this and a script would certainly only identify the folder as a candidate for clean up because of its lack of activity, but there would be hell to pay if you deleted it and the relevant department had an audit.
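To illustrate, here’s a rough sketch of the kind of script I have in mind, using the standard Get-PublicFolder and Get-PublicFolderStatistics cmdlets available in the Exchange 2010 Management Shell. The 365-day threshold and output path are arbitrary choices for the example.

```powershell
# Rough sketch: list public folders that look like pruning candidates
# because no one has accessed them in over a year. Run in the Exchange
# Management Shell against Exchange 2010; adjust the threshold to taste.
$threshold = (Get-Date).AddDays(-365)

Get-PublicFolder "\" -Recurse -ResultSize Unlimited |
  Get-PublicFolderStatistics |
  Where-Object { $_.LastUserAccessTime -lt $threshold } |
  Select-Object Name, FolderPath, ItemCount, LastUserAccessTime |
  Sort-Object LastUserAccessTime |
  Export-Csv -Path "C:\Temp\StalePublicFolders.csv" -NoTypeInformation
```

Of course, the output is only a starting point: as noted above, a folder untouched for three years might still hold data needed for an audit, so every candidate on the report needs a human decision before anything is deleted.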

Of course, you can decide to wait on the basis that Microsoft will fix the performance problems that obviously impose the restrictions in a future cumulative update for on-premises Exchange 2013 or for Exchange Online in Office 365. This is certainly a reasonable tactic to adopt for now as I’m sure that Microsoft will respond to some of the pain now emerging from customers who realize the impact of the restrictions.

And if you attend the Microsoft Exchange Conference (MEC) in Austin (March 31-April 2), you might find that these sessions are interesting places to ask questions about the limitations and how to move forward with migrations.

Modern Public Folders migration and Office 365 (Monday, 4:30pm)

Experts Unplugged: Public folders and site mailboxes (Tuesday, 9am)

Unfortunately I won’t be able to get to the first session as I’ll be chairing “Exchange Unplugged: Top Support Issues” at the same time. Given its title, it would be no surprise if the topic of public folders came up there too. At MEC, if not before, Microsoft needs to answer questions like:

  • Why do these limitations exist? Is it a fundamental weakness of the new architecture or some performance bottleneck that can be addressed in the code?
  • Why has the problem only emerged now and why didn’t Microsoft communicate this knowledge to customers before the TechNet page appeared in February 2014 (the limits were documented for Exchange Online but not for on-premises)?
  • What steps are now being taken to address the issues and raise the limitations to more acceptable values? When will customers be able to take advantage of this work?
  • How will Microsoft validate that any changes that are made to modern public folders will be able to successfully deal with the largest anticipated public folder migration by customers to either on-premises Exchange 2013 or Exchange Online?

I’m sure that you can think of a few more salient questions that could be addressed to the Exchange product group. Thinking caps on!

Follow Tony @12Knocksinna

Posted in Cloud, Exchange, Exchange 2010, Exchange 2013 | 9 Comments

Exchange 2013 public folders crippled by newly revealed limits


A page appeared on TechNet on February 24, 2014 titled “Limits for Public Folders.” Not much news there, I thought, we all know that public folders have limitations, even the modern variety as used in both Exchange 2013 and Exchange Online. For instance, their lack of any ability to manage conflicts makes public folders a pretty horrible repository for multiple collaborators to work on a document.

But the information in the TechNet article carries some really bad news for enterprise Exchange customers because it includes the news that modern public folders are limited to 100 public folder mailboxes in an organization and (gasp!) 10,000 public folders in the public folder hierarchy.

Update (July 15, 2014): Microsoft announced today that they have increased the limit to 100,000 public folders for Exchange Online. The change is effective immediately for all tenants. Exchange 2013 on-premises customers will have to wait until Exchange 2013 CU6 is delivered.

I had never heard of any such limitation before and this came as an unpleasant shock because there are many deployments of old-style public folders that comfortably pass the 10,000 threshold. Apparently the page was available beforehand but only referenced Exchange Online, so those who read the information might conclude that the limits only referred to that platform. All cloud services impose restrictions to avoid tenants using too many resources to the potential detriment of others and although problems caused by the limitations have surfaced in Office 365 support forums (like this example from August 2013), these were overlooked by the on-premises community. After all, the situation is different when you run an on-premises deployment and control how resources are used. At least, that was the theory.

Some comfort might be gained by the note that “Although you can create more than 10,000 public folders, it isn’t supported”, which I assume to mean that customers can feel free to go ahead and create tons of public folders to allow Microsoft to observe what happens. Microsoft can then decline responsibility if all hell breaks loose. One company that contacted me after this post appeared told me that if you have more than 20,000 folders in a hierarchy, modern public folders break comprehensively and totally. This is unverified information, so treat it as such.

The ramifications of this information are that companies making plans for the migration of their public folders to Exchange 2013 will now have to prune their existing public folder hierarchy to bring it under the 10,000 limit before they can begin the move. The process of figuring out what’s good and what’s not so good in a public folder hierarchy is difficult enough as it stands, mostly because of the lack of comprehensive tools in the space, but now it’s impossible to take the attitude of “it doesn’t matter – migrate everything.” The work to gather information about public folders, figure out what can be removed, delete unwanted folders (perhaps after saving the data in the folders to a PST first), and then validate that everything is OK before starting the migration simply has to be done.
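At least finding out how far over the limit a hierarchy is takes only a one-liner. A hedged sketch, again assuming the Exchange 2010 Management Shell:

```powershell
# Count every folder in the hierarchy and compare the total against the
# 10,000-folder supported limit documented for modern public folders.
$limit = 10000
$count = (Get-PublicFolder "\" -Recurse -ResultSize Unlimited |
          Measure-Object).Count

Write-Host ("Hierarchy contains {0} public folders" -f $count)
if ($count -gt $limit) {
    Write-Warning ("{0} folders must go before a supported migration is possible" -f ($count - $limit))
}
```

Running the count is the easy part; deciding which folders make up the excess is where the real work lies.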

It’s surprising that Microsoft has taken so long to bring the limitations to the attention of enterprise customers. It’s true that public folder migrations are the very last step in moving older Exchange versions to Exchange 2013, so many companies might not yet have hit the limit. However, you would imagine that performance and scalability testing would have been done and the results documented and shared as part of the development process for the new-style public folders.

Modern public folders use a completely different architecture from their ancient cousins, one that is firmly based on the mailbox database. Lots is known about the performance characteristics of mailbox databases and how well they scale in different circumstances, so I can’t quite work out where the problem might lie. Pure speculation on my part is that the primary public folder mailbox, in its role as the host of the only writable copy of the hierarchy, might be the bottleneck, but we shall just have to wait until more is known as to where problems lurk.

According to Microsoft sources, the development team is well aware that these limitations will pose problems for enterprise customers who want to move to either Exchange 2013 on-premises or Exchange Online. They’re working on the problem and hope to have some better news “soon.” At least, that’s what the rumor mill says. We live in hope.

The good news (if there is any) is that the same page tells us that modern public folders can store up to 1 million items each. The thought of being able to store quite so many items in a single public folder certainly eases the pain of the other limits.

In the light of this news, I might just have to change my “Ten predictions for the world of Exchange in 2014” because no one might be able to complete a (supported) migration of more than 10,000 public folders. I received some comments after I published “The dirty little secret about migration to modern public folders” to say that companies had completed some migrations, including one spanning 17,000 folders. Is that company now supported?

Follow Tony @12Knocksinna

Posted in Cloud, Exchange 2013 | 13 Comments

The tyranny of Fitbit’s five little lights


Like most people, I probably could do a better job of maintaining my personal level of fitness. It’s not so long ago that I refereed rugby professionally (for instance, check out this highly pixelated video from a 1999 European Challenge Cup match between RC Toulon (France) and Ebbw Vale (Wales) – the clue is that the referee is the guy in the green shirt) but the slope downwards since I retired from active refereeing has been steep. Some action was therefore required.

Having done my fair share of treadmill miles over the years, I was in no hurry to join a gym and acquaint myself with any machines of torture again. In fact, I wasn’t in a hurry to do very much at all, which is the reason why I needed a certain impetus to get going again.

That impetus came from a device called the Fitbit Flex, which I’ve used for a couple of months. Unlike treadmills, which apply their own form of physical torture to the body, the Flex is all about mental torture – the need to satisfy its demand that five lights show when you tap the device to report on your daily progress towards your personal goal. This could be 10,000 steps (the default) or 10km or a certain number of calories burned. You see five lights when your daily goal is achieved. Until that point arrives, a certain pressure exists to do more to move toward the goal, which I guess is the reason why the Flex exists.

Happiness! Five lights are lit - I can go and have a pizza...


As a version 1.0 example of wearable physical measurement technology, the Fitbit Flex is not perfect and the online dashboard that is available to review the data that the device collects is spartan, unless you pay for the premium version (which I do not). However, it’s enough to provide a constant reminder of activity and progress and that’s enough for me. The Flex reports its progress by synchronizing over Bluetooth with an application that’s loaded on my PC. A small USB gadget is plugged in to facilitate the synchronization, which works well.

The Flex comes with two sizes of rubberized wristband. The device itself is waterproof, so I wear it all the time, including when sleeping. The device does its best to track and report on how well you sleep, as long as you remember to tap it (correctly) to put the Flex into sleep mode.  I haven’t tried it when swimming but it certainly doesn’t mind going into the shower.

Usefully, the Flex does a reasonable job of measuring distances that you walk or run. I’ve checked it several times against other pedometers, including the Pedometer Master app that I have on my Nokia Lumia 1020 (a good app of its kind if you’re looking for one on Windows Phone). The Flex is certainly better than other similar devices, including the Misfit Shine that my wife wears. Despite many attempts to calibrate the Shine, it seems to under-record distance by between 10 and 15%. On the other hand, the Shine is a very attractive device and includes a watch function (which can be impossible to view in bright light).

Devices like the Flex and the Shine exist to help give people like me a reason to exercise. For me, it’s been helpful – so much so that when I lost my first Flex, I immediately bought a replacement. I think the Flex fell off my wrist when I took off my coat and dislodged the catch, which taught me a lesson to pay attention to its fastening in future.

You’ll probably never need something like the Flex if you’re a disciplined type of individual who finds it easy to organize exercise into your life. I find it helpful, even if doing enough on a daily basis to satisfy those five blessed lights is sometimes a real pain.

Follow Tony @12Knocksinna

Update 12 March: Fitbit has announced a recall of the Force model (not the Flex described here) due to an ongoing problem that causes skin irritation for users. You cannot buy a Force from the Fitbit web site but they are still available elsewhere. Don’t buy one until Fitbit fixes the problem!

Posted in Technology | 1 Comment

Twenty-five years chasing the dream of enterprise social networking


The fuss around Microsoft’s announcement of “the enterprise network and the future of work” at the SharePoint Conference (SPC14) in Las Vegas this week reminded me that we have been seeking to extract better information about the complex interconnections that exist between humans working in large corporations for many years. In many respects, it’s an attempt to gather the tacit knowledge that you gain through experience and exploit that knowledge for commercial purposes.

The Office Graph (source: Microsoft)


A long time ago (in computing terms), a brilliant guy named John Churin (creator of the Digital Equipment Corporation (DEC) ALL-IN-1 Office system in the early 1980s) made an attempt to build a knowledge navigator in 1988. This was an artificial intelligence program that ran on a VAXstation to allow people to input details of who knew what and whom within the company and why they were connected. Email was used heavily within DEC to connect its 120,000+ employees, albeit not at the volume that we have today. The Internet was still in its infancy but DEC had other ways to communicate internally, such as VAX VTX (videotext) and VAX Notes (online discussion). With so much information (or so we thought) flowing through the company, an obvious need emerged to help people understand who did what across all the different business units.

John’s program forced you to enter details of people you knew, their expertise, and position. Gradually a map of connected people emerged, built up through contributions from multiple people, including connections made by inference (Tony knows that John is a great COBOL programmer; John considers Don a fine programmer, so Tony can infer that Don is too). It was an interesting exercise that was too manual in terms of data capture to be of much real value, but it showed what could be done.

The next attempt to capture tacit knowledge within a large organization appeared in HP Labs, which worked on a program called “Shock” (social harvesting of community knowledge) in the 2001-2003 period. Shock was implemented as an add-in for Outlook because the knowledge that it used was largely acquired by examining the content of email to determine the expertise of people and the connections that they had within the organization. Using Outlook was a good idea because it was the predominant email client used within HP at the time. It also solved the problem of automating data collection because the data was harvested on an ongoing basis as people worked. The results were promising until we ran into the two issues of privacy and commercial viability.

The Shock project led to some discussions between HP and Microsoft to see whether it would be possible to transfer the technology and embed it into a future version of Outlook. As it happened, an advanced technology team within the SharePoint development group at Microsoft (headed by Bobby Kishore) was working on a program called the “Knowledge Interchange” in the 2003-2004 timeframe.  I recall that this program was going to collect information from email, SharePoint sites, SQL databases, and web sites to create the kind of Office Graph or Enterprise Graph that appears to be envisaged in this week’s announcements. Craig Samuel, Chief Knowledge Officer for HP Services at the time and Jeff Teper, the “father of SharePoint,” were involved in the discussions between HP and Microsoft too.

Nothing much came of the talks between HP and Microsoft. The HP Labs project remained of purely academic interest and Microsoft’s knowledge interchange never appeared. I recall that privacy concerns around how people would feel about having their personal email and other contributions harvested for analysis were an issue that was never resolved, including all of the implications that exist when information travels around multi-national companies.

Ten years on, we live in a different world. We have more ways to exchange information and knowledge than ever before and the sheer volume of data that is available for consultation is staggering. People are freer about how they share information, especially those who have entered the workforce in the last ten years. And the impact of Twitter and Facebook makes some of the features brought to Microsoft by Yammer more compelling in the eyes of those who want to make a breakthrough in enterprise social networking. Perhaps privacy will be less of a concern, but I can’t help thinking about what will happen when a hidden fact that’s important to the business or a person “leaks out” or is inadvertently disclosed by social networking. There’s good and bad in everything.

I guess that we shall just have to wait and see whether the Enterprise Graph and the “Oslo” application make the anticipated impact. After 25 years of trying, it will be nice to see some success in this area.

Follow Tony @12Knocksinna

Posted in Email, Technology | 1 Comment

Exchange Unwashed Digest – February 2014


The big news in February 2014 was the release of Microsoft Exchange Server 2013 Service Pack 1 (build 847.32), the much-awaited version long deemed to be “the” release worthy of deployment in many corporate environments. SP1 includes many new features and enhancements that are worthy of debate and I’ll be covering them over the next few weeks in the lead up to the Microsoft Exchange Conference (MEC) in Austin at the end of March. I will be speaking at MEC and chairing a panel session too. Hopefully I’ll meet up with many of this blog’s readers there!

In the interim, here’s what appeared on my “Exchange Unwashed” blog on WindowsITPro.com in February 2014.

Exchange 2013 SP1 introduces simplified DAGs (Feb 27): I really like the way that the Exchange developers attempt to simplify core parts of the product. Taking advantage of the capability of Windows 2012 R2 Failover Clustering to remove some of the components previously required to form a Database Availability Group is sensible and practical. It’s also the second-best new feature in Exchange 2013 SP1 (MAPI over HTTP is my candidate for best new feature). You’ll be using these DAGs in the future, so it’s good to get to know them soon.

Exchange Server 2013 SP1: A Mixture of New and Completed Features (Feb 25): It might have surprised some that I was ready to deliver a 1,400 word assessment of Exchange 2013 SP1 soon after Microsoft made the software available for download, but that’s because Microsoft runs a program to allow MVPs access to preliminary code builds so that we can try out new features and find bugs. The good news is that I think Exchange 2013 SP1 is pretty solid, even if there are some acknowledged problems with third-party products (like anti-spam solutions) that depend on transport agents (you can read about the issues in the comments to the EHLO post announcing SP1). The well-tried advice to carefully test any new software version against a realistic representation of your operational environment before deployment, including any third-party products that you use, holds for any update of Exchange.

Four synchronization issues between Outlook and site mailboxes (Feb 20): You might not use site mailboxes, the new integration point between Exchange 2013, SharePoint 2013, and Outlook 2013, but I do. And they have value in their own way, which might or might not work in your environment. I’ve found four places where synchronization is “interesting” between Exchange and SharePoint. It’s good to know these things before you deploy, unless you like surprises.

Exchange’s most annoying and confusing error message (Feb 18): Have you tried to remove a mailbox database from Exchange 2010 or Exchange 2013? If so, you might have had the intense pleasure of meeting the most annoying and confusing error message that I can find in the product. The challenge exists to find a better candidate. Please let me know if you do.

The raging debate around the lack of NFS support in Exchange (Feb 13): I wrote an article in October 2013 outlining why Exchange does not support NFS-based storage. Not much was said then but January produced a fair amount of heat in my Twitter feed as NFS advocates reacted strongly to what they perceive to be Microsoft’s unfair and technically unjustified stance on the topic. So I assembled the arguments from both sides and attempted to summarize them in this piece. And afterwards everything went quiet… too quiet for my liking. Perhaps things will hot up again at MEC.

The Outlook 2013 slider and its potential effect on archive mailboxes (Feb 11): So here’s the thing – Microsoft introduced archive mailboxes with a great deal of hype in Exchange 2010 and pronounced them to be the place where you put items that don’t need to be available all the time. Long-term storage in other words that reduces the size of primary mailboxes. But people are not organized and having archive mailboxes complicates their environment a tad, including not being able to access information from mobile devices. And then Outlook 2013 arrives with some extra smarts that allows control over what information is synchronized to the OST, which addresses the major problem with the OST (performance when it gets too large). So we can now have everything in the primary mailbox again and ignore archive mailboxes. Or can we?

Viewing administrator audit entries – a start made, more to do (Feb 6): I know many of you do not concern yourselves with the details of audit entries. They are, after all, only there to provide insight into matters when something goes wrong. And nothing ever goes wrong on your watch… right? But things sometimes go wrong for me and when they do, I like to know what happened. Up to now it’s meant a somewhat torturous interaction with PowerShell. But from Exchange 2013 CU3 on, you can use EAC. The implementation isn’t too bad, but more work is required to make it really useful.

The unhappy mixture of Office 365, Outlook Web App, and Windows XP (Feb 4): A public service announcement to warn Office 365 users who still have Windows XP around that Outlook Web App is going to turn its nose up in disgust at their habits on April 8. Well, something like that.

March takes us to MEC and there’s lots of work to do to prepare for that event. Stay connected with Twitter or your favourite social media (like the Exchange 2013 Facebook group) to keep up to date with what’s going on.

– Tony

Follow Tony @12Knocksinna

Posted in Email, Exchange, Exchange 2010, Exchange 2013, Office 365, Outlook, Outlook 2013 | 2 Comments

On-premises Exchange and OWA for Devices


Everyone got all excited yesterday with the announcement that Exchange 2013 Service Pack 1 (SP1) was available for download together with updates for Exchange 2010 SP3 and Exchange 2007 SP3 to allow those versions to play nicely with (but not be installed on) Windows 2012 R2 DCs and GCs. Microsoft also announced Office 2013 Service Pack 1, including the update necessary for Outlook 2013 to make use of the new MAPI over HTTP protocol. Over the long term, MAPI over HTTP will become the de facto connectivity standard for MAPI clients to Exchange servers, but that’s another day’s work…

Although Outlook Web App for Devices (or OWA for iOS as it’s sometimes called) received a minor update in terms of its new-found ability to display Data Loss Prevention (DLP) policy prompts, I doubt that this will excite the on-premises Exchange community, who would have much rather seen an announcement of formal support for connection of the app in on-premises deployments. That didn’t happen, so we remain in a situation where formal support is only available when OWA for Devices is used with Office 365. You can make the app work with on-premises servers, but it’s unsupported.

Some reasons why this situation exists might be gleaned from the TechNet article “Configuring Push Notifications Proxying for OWA for Devices” last updated in December 2013. This article explains:

“Enabling push notifications for OWA for Devices (OWA for iPhone and OWA for iPad) for an on-premises deployment of Microsoft Exchange 2013 lets a user receive updates on the Outlook Web App icon on his or her OWA for iPhone and OWA for iPad indicating the number of unseen messages in the user’s inbox. If push notifications aren’t configured and enabled, a user with OWA for Devices has no way of knowing that unseen messages are in the inbox without launching the app. When a new message is available, the OWA for Devices badge is updated on the user’s device and looks like the following badge.”

Push notifications are what make the little number light up on the OWA for Devices icon to indicate the presence of new messages. The article explains that push notifications are achieved by subscribing to the Office 365 notification service, something that is probably automatic for Office 365 tenants. Apparently, if you run Exchange 2013 CU3 or later and have a hybrid connection to Office 365, you also subscribe to the notification service, which was news to me.

The article uses the following diagram to illustrate the flow of notifications. The mention of a third-party notification service is interesting – I assume that this is Apple’s Push Notification Service, the use of which by Office 365 to signal the arrival of new messages to iOS devices would make sense. You’d assume that OWA for Devices on Android (should the much-rumored and natural evolution of OWA for Devices appear) would then use Google’s equivalent Cloud Messaging service for the same purpose.


Office 365 push notifications and Exchange 2013

Perhaps the use of the third-party notification service is the component that makes it difficult to support OWA for Devices in pure on-premises deployments. It would certainly seem easier for a major player like Office 365 to make the necessary arrangements to push notifications to services run by Apple and/or Google than it would be for individual companies. If this is true, then Microsoft might have to come up with another way of pushing notifications to devices before everything would work nicely in an on-premises deployment.

All of this is pure speculation on my part. Kind of par for the course…

Follow Tony @12Knocksinna

Posted in Cloud, Email, Exchange, Exchange 2013 | 4 Comments

Office 365 message encryption and HABs


Some interesting things happened in the Office 365 world this week. First, Microsoft announced that message encryption is now being rolled out to customers to fulfil the goal of providing a way to encrypt messages sent to any recipient rather than just within a bounded organization. Second, the much-awaited hierarchical address book (HAB) is now available in Exchange Online. As always, it’s interesting to poke a little behind the formal announcements and speculate on what value these advances hold for customers.

Office 365 message encryption was originally announced in November 2013. The interval between then and now has been filled with work to prepare the Office 365 infrastructure for the introduction of the new feature, brief the support personnel who will have to deal with customer queries, update online documentation, and so on.

I see three critical points that should be understood about Office 365 message encryption. First, the ability to encrypt messages is provided by Windows Azure Rights Management (WARM). This means that you need to subscribe to a plan that can use WARM before you can use Office 365 message encryption. Customers who previously used Exchange Hosted Encryption (EHE) will move automatically to the new service; if you haven’t used encryption services before you’ll need to add WARM to a plan ($2/month/mailbox in the U.S.). The enterprise-focused E3 and E4 plans already include WARM. Subscribers, like me, to the small business plans cannot use WARM and so are out of luck when it comes to encryption [but you can sign up for a free trial of an enterprise plan to test out the new functionality, which is what I did].

The second point is that the content of an encrypted message is sent as a protected HTML attachment. Naturally, Microsoft clients do a fine job of opening and processing the protected attachment as planned: they open a browser to invoke an Outlook Web App (OWA)-like interface that exposes the content, which is only revealed after the user authenticates by signing in with an Office 365 identity (tenant log-on) or a Microsoft account (LiveID). The identifier must correspond to the email address to which the message was sent.

Not everyone has one of these identifiers and I assume the intention is that someone who doesn’t will proceed to get a Microsoft account as soon as they start to receive encrypted messages. You can certainly criticize Microsoft for taking this approach to recipient authentication but it’s hard to see what other scheme they could have come up with to allow messages to go to anyone using any other email system.

Some ability to customize the instructions (“branding”) shown to recipients is available through the Set-OMEConfiguration cmdlet, including the text of the instructions themselves.
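A minimal sketch of what such branding might look like – the text values here are invented examples, and you should check the cmdlet’s documentation for the exact set of parameters available to your tenant:

```powershell
# Customize the branding shown to recipients of encrypted messages.
# The identity "OME Configuration" is the default configuration object;
# the text values are invented examples for illustration.
Set-OMEConfiguration -Identity "OME Configuration" `
    -EmailText "You have received an encrypted message from Contoso." `
    -PortalText "Contoso Secure Message Portal" `
    -DisclaimerText "Please do not forward this message."
```

Run Get-OMEConfiguration afterwards to confirm the settings took effect.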

The requirement to read encrypted messages through an OWA-like interface means that mobile support is only possible for clients that are capable of invoking a browser to run OWA. As explained here, the exact requirement is “Office 365 Message Encryption works on mobile devices as long as the attachment can be opened in a viewer that supports form posts.” Again, everything works perfectly on Windows Phone but I hazard a guess that some iOS and Android clients might run into choppy waters when faced with the need to process that protected HTML attachment.

The third point is that encryption is applied by policy. Users don’t control which messages are encrypted because no control over the feature is exposed through the client UI. Transport rules establish the policy for message encryption across a complete organization so that, for example, all messages sent to a particular set of users (or by a set of users) are encrypted while all other traffic is not. To include message encryption in a transport rule, you select the “Modify the message security” action and then select “Apply Office 365 message encryption.”

The use of transport rules lends some flexibility to the situation because it is possible to conceive of an exception being included in a rule whereby a user could override the organizational policy and send an unencrypted message. However, I can’t imagine many companies allowing this kind of exception. It’s more likely that a rule would be created to look for a keyword in a message subject (like “Encrypt”) and, if the keyword is detected, the message would be encrypted.
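A sketch of such a keyword-triggered rule created through EMS rather than EAC – the rule name and keyword are my own choices, and I believe ApplyOME is the parameter that corresponds to the “Apply Office 365 message encryption” action:

```powershell
# Encrypt any message leaving the organization whose subject
# contains the keyword "Encrypt".
New-TransportRule -Name "Encrypt on demand" `
    -SubjectContainsWords "Encrypt" `
    -SentToScope NotInOrganization `
    -ApplyOME $true
```

The SentToScope restriction keeps internal mail flowing normally; drop it if you want the keyword to trigger encryption for internal recipients too.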

Protecting messages against those who might pry into their contents is a good thing, especially at a time when we might not trust government agencies to always do the right thing where snooping is concerned. It will be interesting to see how well Office 365 message encryption is received by customers and how widespread its use becomes over the next year or so. If you need more information about setting up the feature, have a look at this post by Jesper Stahle.

Viewing the Hierarchical Address Book


Getting back to the HAB, I sincerely doubt that many Office 365 tenants will use this feature. It’s been available to on-premises Exchange customers since Exchange 2010 SP1 and I don’t know many companies that have been even remotely interested in implementing an address book that requires a lot of background work to construct the views of the different layers within the company from the CEO downward. I have done it to test the feature for both my Exchange 2010 Inside Out and Exchange 2013 Inside Out books and it works… but…

A feature like this is probably only valuable in large enterprises. Providing a view of company structure is a good thing when there are many operating units with different responsibilities and ever-changing personnel, which is the reason why many enterprises have their own home-brewed version of a tool to navigate the business. However, everyone knows each other in smaller companies and understands where they stand in terms of status and level, so the value of these kinds of utilities is lessened. And in newer companies, where less importance is given to someone’s position in the hierarchy and their job title and more focus is placed on what they actually do and the success they have had, the availability of the HAB is so-so news.

My assumption is that Microsoft has made the HAB possible for Exchange Online customers because some of the larger and more traditional companies want it, perhaps those based in regions where great import is placed on status and rank. If so, I wish those who implement a HAB well. Just make sure not to attempt to use the HAB with any client other than Outlook 2010 or Outlook 2013. Nothing else works.

Follow Tony @12Knocksinna

Posted in Cloud, Exchange, Exchange 2010, Exchange 2013 | 11 Comments

Outlook Web App 2013 and mail public folders


Following up on the post about migrating public folders to the modern version introduced in Exchange 2013, an entry in a TechNet forum asked whether Outlook Web App (OWA) supported shared calendars held in public folders. This came as a kind of throwback to me as it’s a long time since I put anything other than a message in a public folder. However, public folders have long supported the ability to store calendars, contacts, and tasks so that these data can be accessed by groups of users. Outlook detects the type of information stored in the folder and displays the appropriate user interface, such as a grid to show a calendar.

Specifying what a new public folder will contain


You define the kind of information held in a public folder when it is created with Outlook. I can’t remember if this was ever possible with any version of OWA; it certainly is not with OWA 2013 running against either the on-premises or cloud version of Exchange 2013. Once set, there doesn’t seem to be a good way to change the type of information held in a public folder. Certainly there is no parameter in the Set-PublicFolder cmdlet to do the job. Equally, there is no way to set the type of information for a new folder when the New-PublicFolder cmdlet is run, as the cmdlet always creates a public folder capable of storing messages.

Microsoft upgraded OWA in Exchange 2013 CU1 to allow it to display public folders. However, two major restrictions exist. First, OWA can only display modern public folders. In other words, you have to complete your migration to Exchange 2013 and then execute a public folder migration before OWA can be used. No access is possible to legacy public folders. Second, OWA can only display “mail public folders”, or public folders that contain “mail and post items” (as they are referred to by Outlook). Other folders (calendars, contacts, tasks, notes, InfoPath, and journal items) are unsupported.

Note that “mail public folders” are different to “mail-enabled public folders.” The former are public folders that hold mail and post items while the latter are public folders whose properties have been updated (for instance, by adding an SMTP address), so that they can receive email (you can also send messages on behalf of a mail-enabled public folder). You can mail-enable a public folder using the Enable-MailPublicFolder cmdlet or through the Exchange Administration Center (EAC).
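A minimal sketch of mail-enabling a folder through EMS – the folder path and SMTP address are invented for illustration:

```powershell
# Mail-enable an existing public folder and assign it a friendly
# SMTP address so that it can receive email.
Enable-MailPublicFolder -Identity "\Sales\Customer Queries"
Set-MailPublicFolder -Identity "\Sales\Customer Queries" `
    -EmailAddresses "customer.queries@contoso.com"
```

Mail-enabling changes nothing about the type of items the folder stores; it simply adds email properties to the folder object.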

The problem that we face is therefore to know whether any non-mail public folders exist in a hierarchy so that we know if users need to be told that they have to use Outlook to access these folders. It’s better to know this kind of stuff up front rather than have users swamp the help desk demanding to know how to get to their favourite shared calendar or contact list with OWA.

This command will return any public folder that stores anything other than plain old mail and post items. You can run it against legacy public folders on an Exchange 2007 or Exchange 2010 server or against a migrated set of folders on Exchange 2013. It even runs against public folders created on Exchange Online (Office 365).

Get-PublicFolder -Identity "\" -Recurse -ResultSize Unlimited | Where-Object {$_.FolderClass -ne "IPF.Note"} | Format-Table Name, FolderClass

Scanning for public folders


I give no guarantees as to how quickly this command will run against a large public folder hierarchy. All I can say is that it will be quicker than you can find them manually.
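If the hierarchy is very large, a summary might be more digestible than a folder-by-folder listing. A variation (a sketch I have not timed against a big hierarchy; the CSV file name is my own choice) that counts the non-mail folders by class and saves the details for later reference:

```powershell
# Collect every public folder that holds something other than
# mail and post items (IPF.Note).
$folders = Get-PublicFolder -Identity "\" -Recurse -ResultSize Unlimited |
    Where-Object {$_.FolderClass -ne "IPF.Note"}

# Summarize by folder class (IPF.Appointment, IPF.Contact, etc.).
$folders | Group-Object FolderClass | Sort-Object Count -Descending |
    Format-Table Name, Count

# Keep the full list for the help desk.
$folders | Select-Object Name, Identity, FolderClass |
    Export-Csv -Path "NonMailFolders.csv" -NoTypeInformation
```

The summary tells you at a glance whether you have a handful of shared calendars or a sprawling set of folder types to worry about.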

Microsoft might update OWA in a future release of Exchange 2013 to support the display of public folders containing calendars, tasks, and so on. We live in hope!

Follow Tony @12Knocksinna

Posted in Email, Exchange, Exchange 2010, Exchange 2013, Office 365, Outlook | 5 Comments

Adapting Exchange on-premises scripts for Exchange Online


Since the introduction of PowerShell in the form of the Exchange Management Shell (EMS) in Exchange 2007, many scripts have been written to ease the burden of Exchange administration by automating common operations. Microsoft has broadened the scope and depth of EMS in every release by enabling management of more and more parts of the product through scripting and the current on-premises version (Exchange 2013 CU3 as I write this post) makes more than 950 cmdlets available.

Libraries of useful EMS scripts are available for free download from web sites (check out the TechNet gallery of Exchange scripts) and literally thousands of examples can be found to deal with almost every conceivable management challenge that you’ll encounter on an Exchange server.

Of course, EMS is very different when it comes to Exchange Online running in Office 365. One of the reasons why companies sign up for Office 365 is so that they never have to deal with the mundane details of day-to-day server administration. So when you connect to Office 365 with PowerShell and run the necessary commands to establish a remote Exchange administration session, far fewer cmdlets are made available. Depending on the Exchange plan you have signed up for (which dictates the functionality that you can use and therefore the underlying cmdlets that are exposed), you can run between 340 and 400 cmdlets. This shouldn’t come as a surprise as all the cmdlets that deal with servers (like Get-ExchangeServer), high availability (Get-MailboxDatabaseCopyStatus), or transport (Get-Queue) lose their meaning when you use a hosted service and have no control over these objects.

The question then arises whether you can usefully take a PowerShell script written for on-premises Exchange and use it with Exchange Online. The answer is “it depends” and the dependencies include:

    • The cmdlets used in the script. If they’re not supported by Exchange Online, you are out of luck. A variation on the theme is where a cmdlet has been replaced with a new cmdlet. In this case, you might decide to upgrade the script to use the new cmdlet so that it is not made obsolete by changes to the service.
    • The environment that the code was written to run within. When you run a script inside EMS, you take advantage of the environment created by the work done by the EMS initialization script. For instance, credentials are established and you are connected to an Exchange server, so you can do work immediately. Neither of these conditions is true if you simply run PowerShell. You will have to connect to Office 365, provide credentials, and then make sure that the credentials are available should the need arise to run cmdlets later on in the session.
    • The version of PowerShell. Exchange 2013 currently uses PowerShell 3.0 but Office 365 uses PowerShell 4.0. Does this matter? Well, it might, if the code is sensitive to version changes. The point is that you don’t have control over what version of anything Microsoft chooses to run within Office 365, so prepare to be surprised.

Office 365 uses PowerShell 4.0

With all that in mind, there’s still a huge amount of value to be extracted by adopting on-premises scripts to run against Exchange Online. To prove the point, let’s take a popular script called EASDeviceReport.ps1 published by MVP Paul Cunningham and examine what needs to be done to make it run against Exchange Online (you have to register with the ExchangeServerPro.com site before you can download the script; Paul assures me that he only uses the email addresses that he gathers for a particularly well-chosen form of spam).

You can download a script and run it against Exchange Online to see what happens. That’s possibly as good a way as any of finding out where problems lurk. The caveat is that you should always examine the code first to see if anything bad or potentially damaging is present, as you never know what weirdness might have been inserted into a script. In any case, after an examination I ran the code and PowerShell reported some problems.

Oops - some problems when running against Exchange Online


Nothing too bad was revealed and some data was output. The script has a simple purpose: to report on the Exchange ActiveSync (EAS) devices that are connected to an Exchange organization. To do this, it runs the Get-CASMailbox cmdlet to identify mailboxes that have EAS partnerships. As you might recall, every time you connect a mobile device to Exchange with EAS, a partnership is created to identify the connection between device and mailbox. Managing those partnerships can be a challenge, as explained in this post.

Running Get-CASMailbox against a large Exchange organization will take a long time to complete. If you have more than a couple of thousand mailboxes in the organization (tenant domain), you might want to break up the processing by using a filter to select particular groups of users.
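For instance, a server-side filter restricts the scan to mailboxes that actually have a partnership, which is far cheaper than fetching every mailbox and discarding most of them. A sketch, assuming HasActiveSyncDevicePartnership is filterable in your environment:

```powershell
# Only fetch mailboxes that have at least one ActiveSync partnership,
# filtering on the server side rather than the client side.
Get-CASMailbox -ResultSize Unlimited `
    -Filter {HasActiveSyncDevicePartnership -eq $true} |
    Select-Object Name, ActiveSyncEnabled
```

You could narrow things further by combining the filter with a department or custom attribute to process one group of users at a time.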

After Get-CASMailbox creates a collection of mailboxes that have EAS partnerships, the script then processes the set of mailboxes and uses the Get-ActiveSyncDeviceStatistics cmdlet to fetch information about each device partnered with each mailbox. In passing, let me note that this cmdlet is due to be deprecated in the future and has been replaced by Get-MobileDeviceStatistics in Exchange 2013 and later (including Exchange Online). For this reason it’s best to replace the cmdlet when running against Exchange Online.
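A sketch of what the updated inner loop might look like with Get-MobileDeviceStatistics swapped in for the deprecated cmdlet (the property names selected are the ones I believe the cmdlet returns; the CSV file name is my own choice):

```powershell
# Build the set of mailboxes with EAS partnerships, then fetch the
# per-device statistics with the newer cmdlet (Exchange 2013 and
# Exchange Online) instead of Get-ActiveSyncDeviceStatistics.
$easMailboxes = Get-CASMailbox -ResultSize Unlimited |
    Where-Object {$_.HasActiveSyncDevicePartnership}

$report = foreach ($mbx in $easMailboxes) {
    Get-MobileDeviceStatistics -Mailbox $mbx.Identity |
        Select-Object DeviceType, DeviceModel, LastSuccessSync
}

$report | Export-Csv -Path "EASDeviceReport.csv" -NoTypeInformation
```

The structure of the loop stays the same; only the cmdlet name changes, which is typical of the kind of light surgery these scripts need.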

Information is captured for each device and written into a CSV file. After all the devices are processed, the script either exits and leaves you the joy of reviewing data in a CSV file, or you can specify the SendEmail parameter and have the script dispatch a message containing the data to you. Helpfully, the data is first converted to HTML, which makes it much easier to read.

Sending email from a PowerShell session connected to Office 365 is not hard – it’s just different. Mail stubbornly refused to flow when I tried it, so some further investigation was required.

Paul’s on-premises script uses the Send-MailMessage cmdlet to send its message as follows:

Send-MailMessage @smtpsettings -Body $htmlreport -BodyAsHtml -Attachments $reportfile

Outlook Web App reveals Office 365 SMTP settings


The first error was the reported lack of an SMTP server. We have to know whether Office 365 makes an SMTP server available to clients and how to connect to it. Office 365 certainly supports both POP3 and IMAP4 clients, both of which need an SMTP server to send outbound messages, so the easiest way to find this information is to use Outlook Web App to connect to your mailbox, select Options, then the account section, and then click the link to reveal POP3 and IMAP4 settings. I saw that my tenant domain uses smtp.office365.com, so I tried to use that and found that the attempted connection was rejected due to an authentication failure.

The settings also show that an SSL connection to port 587 is required. Aha! The original code attempts to create a non-SSL connection. It also depends on the credentials of the user sending the message already being available within the PowerShell session. To make the code work with Office 365, I changed it to:

Send-MailMessage @smtpsettings -Body $htmlreport -BodyAsHtml -Attachments $reportfile -Credential $O365Cred -Port 587 -UseSsl

You’ll notice that the code doesn’t specify the SMTP server. That’s because the script allows this to be passed as a parameter. The mail server name is then stored in a variable in the @smtpsettings reference above. I could also have passed it as a value to the -SmtpServer parameter and added it to the call to Send-MailMessage.
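For anyone unfamiliar with splatting, @smtpsettings is simply a hashtable whose keys match the parameter names of Send-MailMessage. A minimal sketch of how it hangs together (the addresses, subject, and server name are placeholders of my own):

```powershell
# Build the common settings once, then "splat" the hashtable into
# Send-MailMessage with the @ prefix.
$smtpsettings = @{
    To         = "admin@contoso.com"
    From       = "reports@contoso.com"
    Subject    = "EAS Device Report"
    SmtpServer = "smtp.office365.com"
}

Send-MailMessage @smtpsettings -Body $htmlreport -BodyAsHtml `
    -Attachments $reportfile -Credential $O365Cred -Port 587 -UseSsl
```

Splatting keeps the call line readable and makes it easy to reuse the same settings for several messages.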

How were credentials passed to the script? You can see that they are specified to Send-MailMessage in the $O365Cred variable. This is declared as a global variable in the Connect-ExchangeOnline function in my PowerShell profile. Every time I run Connect-ExchangeOnline (the instructions to build the function are here), the Get-Credential cmdlet collects my credentials (Office 365 account name and password) and uses them to establish a PowerShell session with Exchange Online. The credentials are retained in the $O365Cred global variable and are therefore available to other scripts.
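The function in my profile looks roughly like this – a sketch rather than a definitive version, using the standard Exchange Online remoting endpoint:

```powershell
# Connect to Exchange Online and keep the credentials in a global
# variable so that later commands (like Send-MailMessage) can reuse them.
function Connect-ExchangeOnline {
    $Global:O365Cred = Get-Credential
    $session = New-PSSession -ConfigurationName Microsoft.Exchange `
        -ConnectionUri "https://outlook.office365.com/powershell-liveid/" `
        -Credential $Global:O365Cred -Authentication Basic -AllowRedirection
    Import-PSSession $session
}
```

Declaring $O365Cred with the Global scope is the important trick; without it, the credentials would vanish when the function returns.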

EAS Partnerships report delivered by email


After making the changes to Send-MailMessage the script ran perfectly and I received a nicely formatted email message containing details of all of the ActiveSync partnerships known within my tenant domain.

Downloading the script and debugging it to make it run successfully on Office 365 took me no more than 30 minutes, including time to check details of various cmdlets and play around a bit. I am not an accomplished scripter by any means and a more practiced individual would do the job much quicker. Then again, another script might pose a much tougher challenge.

The point is this. Exchange on-premises and Exchange Online share a common heritage and code base. Code that is written for on-premises deployments can be useful when run against its cloud cousin. Libraries of EMS scripts are available to download free and play with to your heart’s content. What’s not to like about this situation?

Follow Tony @12Knocksinna

Posted in Cloud, Email, Exchange, Exchange 2013 | 2 Comments

Do LinkedIn smartphone apps harvest contact data from Gmail?


Is anyone else irritated by the way LinkedIn appears to harvest email addresses in an attempt to persuade you to transform correspondents into LinkedIn contacts? I’ve often wondered where LinkedIn got its information about people that I might like to contact but things came to a head yesterday when it suggested that I should make my mother a contact. She’s certainly a great personal contact, but a 79-year-old woman hardly rates as a suitable professional contact.

So I began poking around to find out where some of these contact suggestions originate. I know that I have never allowed LinkedIn access to my Office 365 or Gmail accounts, so LinkedIn should not be rummaging through my email to uncover potential contacts. At least, I’m positive that I have never allowed the browser application to browse my personal data.

But the sheer number of suggestions generated by LinkedIn’s “People you may know” feature that are people with whom I have exchanged email, maybe only once or twice, makes me very suspicious that LinkedIn is getting at the data somehow. Right now I’m looking at five or six suggestions for people whom I last sent email to in 2009 when we shared a common interest (rugby) in the Bay Area. Those messages are in my Gmail account but I have not looked at them in the last five years, nor have I contacted those people by phone, Twitter, Facebook, or any of the other mechanisms that are now available.

In a certain bizarre way, it’s nice that LinkedIn cares enough about me to dig up elements of my past in an earnest attempt to build out my professional network. I imagine that a huge amount of programming effort has been expended to create efficient harvesting algorithms that are capable of making sense of the huge number of email addresses and other personal data that accumulate in email accounts. On the other hand, it’s kind of creepy that LinkedIn is examining old messages to extract its suggestions.

I don’t pay LinkedIn for its service as I have never seen the need to cough up for an enhanced subscription and I realize that we enter into a certain “compact with the devil” when signing up for cloud-based services where you provide data and the service basically makes whatever use of the data that they can. It is, after all, the way that Gmail operates – Google provides the email service, you generate messages, and Google uses the content to decide what ads are displayed.

Getting back to the original question, I do not know how LinkedIn determined that my mother and old correspondents are suitable contact candidates. Some web searches reveal that I’m not the only one concerned about this issue, which led me to this thread on the LinkedIn community support site. One of the contributions speculates that the LinkedIn phone app must be the culprit, and the theory makes sense when you think about it.

I use the LinkedIn phone app on my Nokia 1020 Windows phone (similar apps are available for the iPhone and Android). It’s not the greatest app in the world and it exhibits some annoying bugs at times so I don’t use it often. Nevertheless, I installed the app and clicked through the warnings that told me that the application required access to various data, including contacts. This seems to be the source of the suggested contacts – the LinkedIn phone app has access to the contact data on the phone and is able (because I said so) to use that data. I didn’t anticipate that the data would be used by the browser app too, but that’s probably down to my own ineptitude.

To be fair to LinkedIn, I removed the app from my phone and then reinstalled it from the Windows Store to see what warnings are displayed.

The data that the LinkedIn Windows Phone app can access


No attempt is made to conceal the fact that the LinkedIn application is allowed to access contact data (above). Once you install the application, it informs you that your contacts are indeed safe with LinkedIn and then proceeds to offer to import your address book to LinkedIn in order to suggest connections (below).

LinkedIn offers to import your address book


Notice the relative size of the “continue” (to import the address book) and “skip” (to decline the opportunity) buttons. Guess which one is more likely to be clicked by the unwary user.

I’m not saying that importing contacts to suggest connections is a useless or underhand feature because it is obviously not. I am sure that many people extract great value from this feature. For instance, someone who is new to LinkedIn can use their address book as the basis to build out their LinkedIn professional network.

Having some 1,350 LinkedIn contacts already (for many of which I can’t remember why we connected), I never felt that using my email contacts was a good way to suggest even more connections, so I didn’t use this feature. But LinkedIn still has access to my contacts and seems to use a pretty liberal interpretation of what a contact is.

Take the rugby contacts from 2009 mentioned above – these people sent me email and I replied using Gmail. However, I never made them email contacts in the way that I understand a contact to be (an email address that you want to remember and associate with an individual because you correspond with them on a regular basis). Nor do these email addresses show up in the Windows Phone “People” hub, so not even the phone that the app runs on considers them “contacts.”

It seems likely that the LinkedIn smartphone app harvests email addresses that it finds in Gmail accounts that are known to the devices on which it runs and feeds them back to LinkedIn where the addresses can be used as potential suggestions. It doesn’t seem to do the same with my Office 365 contacts but might access other email systems for the same purpose. The iPhone and Android variants of the LinkedIn app might behave differently too.

I’ve been trying to figure out how Gmail correspondents become LinkedIn suggested connections for some time and can’t come up with a better theory. Maybe you can…

Follow Tony @12Knocksinna

Posted in Email | 5 Comments