Visiting Omaha Beach (WN62 and the American Military Cemetery)

For anyone interested in military history, a visit to Normandy provides an excuse to see some of the World War II sites in the area. I have been visiting the area on and off since 1973 and think I have seen most of what there is to see (like Sainte-Marie-du-Mont), but there's always something new to be found.

Last year, we stayed at the Casino hotel at the western end of Omaha Beach (in Vierville-sur-Mer, the end closest to Pointe du Hoc) and enjoyed wandering the beach there. This is the part of Omaha captured in the film “Saving Private Ryan”, which forms the basis of many opinions about the battle. The film was actually made using Curracloe Strand in County Wexford, Ireland as a substitute for Omaha Beach.

This week I was passing and decided to explore the other end of Omaha Beach and so found my way to Colleville-sur-Mer, the location of the American Military Cemetery and the site of some of the fiercest fighting on D-Day.

In part, I was motivated by reading “Omaha Beach: D-Day, June 6, 1944” by Joseph Balkoski (a great overview from the American side) and “The Dead and Those About to Die: D-Day: The Big Red One at Omaha Beach” by John C. McManus, which provides detailed accounts of the fighting around the positions close to where the American cemetery is now.

Google Maps overview of WN62 position on Omaha Beach

The best German account I have read of these actions is "WN 62: A German Soldier's Memories of the Defence of Omaha Beach, Normandy, June 6, 1944" by Hein Severloh, who manned one of the MG42 machine guns in a foxhole in the Widerstandsnest (literally, "resistance nest") 62 (WN62) fortified position directly opposite the Easy Red and Fox Green sectors. Omaha was protected by a set of these nests from WN60 in the east to WN74 in the west. WN62 was perhaps the largest and most effective of the positions in terms of the damage inflicted on the invaders. Together with WN61, WN62 protected the "E-3" (Colleville) draw or gap, one of the few ways off the beach that could be navigated by wheeled vehicles (after the engineers had created the necessary roads).

WN62 observation post (rear entrance to the left) facing Omaha Beach

Severloh claimed to have fired over 13,500 MG42 rounds and 400 rifle rounds at the attacking forces to great effect. He was eventually forced from WN62 and was captured in Colleville-sur-Mer on June 7. No one can be certain as to exactly how many casualties were caused by his fire, but given the elevated position of WN62 and the command it had over the beach, it’s likely that he killed and wounded many of those who landed in the Easy Red and Fox Green sectors on D-Day.

American Military Cemetery, Colleville-sur-Mer

Lots of people go to visit the American Military Cemetery, which occupies a fine position overlooking Omaha Beach and has a nice visitor center. All of the areas can be crowded on sunny days, especially when a few tour buses arrive together, but that’s no reason to miss seeing the impressive layout and serenity found at the cemetery.

Memorial to the U.S. 1st Division at Omaha Beach

Following a tour of the cemetery, relatively few visitors seem to go on to visit the site of WN62, which is now dominated by a memorial to the U.S. 1st Division (the "Big Red One"). If you do visit, it is well worth your while to stroll down the hill towards the beach to view what remains of the German installations. Two H669-class casemates are still there. These were built to house 75mm guns, but only one gun was present on D-Day. Both casemates show evidence of being hit by many U.S. projectiles, most probably a combination of offshore shelling by destroyers, mortars, and the guns of the Sherman tanks operating in the sector (only two of the Duplex Drive tanks were able to swim ashore to support the first wave of the 1st Division, but several other Shermans were landed later).

View from one of the WN62 casemates towards the western part of Omaha Beach with Pointe du Hoc in the distance

Various other installations can be explored, including a bunker where the troops rested and some observation posts used to direct fire upon the attacking forces. A number of Tobruks (small fortified positions holding a machine gun or mortar) are present, as are the concrete platforms where guns were positioned before the casemates were completed. The lines of trenches that connected the various positions are also visible.

Front of lower casemate at WN62 showing evidence of shell damage

Although cramped at times, it’s relatively easy to get into the casemates and observation posts. The bottom casemate is flooded with a couple of inches of water, a fact that is all too easy to miss until you plonk your feet down into the pool. Apart from some swallows nesting in gaps in the corroding steel reinforcing girders, there’s not much to be seen inside the casemates, but the views that they have demonstrate just how dangerous these guns were to the D-Day invaders. Notice that the casemates do not face onto the beach. They are positioned to provide flanking fire along the beach and their openings are not exposed to direct fire from the sea.

Remains of a WN62 concrete gun platform. Note the magnificent view over the landing beach at Omaha

You can also walk down to Omaha Beach from WN62 (and walk back up again) to gain a view of the ground that the attacking forces had to cover to get to grips with the defenses. The weak spot was to the west of WN62 where the Americans found it possible to exploit some narrow trails through minefields to get around WN62 and reach the top of the bluff where the military cemetery is now located.  It is also possible to walk up to the cemetery from the beach and arrive at the platform viewing area. This path essentially follows the original track taken by the first American forces (under the command of Lt. John Spalding) to penetrate the German defenses and get behind WN62.

Omaha Beach cliffs cleared of vegetation on D-Day

Of course, the area around WN62 is quite different to the way it looked on D-Day, as the vegetation has been allowed to grow to cover the gullies and bluffs. Paths are cut through to allow people to walk, but the scene is nothing like the clearance made by the Germans to open up fields of fire, not to mention the effect of the bombardment before and during D-Day.

Apart from its historical resonance, Omaha Beach is a pleasant spot to spend some time. It is sandy and peaceful now and a good place for a picnic, meaning that those in the party who have no interest in military history can be left to enjoy other pursuits while you explore the surroundings. All in all, a good place to visit.

Follow Tony @12Knocksinna

Posted in Travel, Writing | 2 Comments

The scourge of autosignatures

Have you ever wondered just how much valuable storage is occupied in email databases by totally useless autosignature content? You know, logos and other tasteful adornments to the bottom of email, repeated ad nauseam on every message, internal and external, unregarded and unwanted by recipients.

Autosignatures serve a useful purpose when they are used correctly. I don’t have any real problem with simple text blocks containing the sender’s contact details. Things start to become a little hairier when people insist on including corporate logos or other graphic information to tell recipients just how wonderful the sender’s company really is. Or how much better their corporate logo is since the most recent (and expensive) redesign.

Things can be taken to the extreme, as in the case of the senior executive at Digital Equipment who insisted on including a digital snapshot of his most recently arrived child in his autosignature. Of course, senior executives tend to have larger brains than the norm and the thought of sharing his good fortune with all and sundry seemed a good one, until someone (bravely) pointed out that the 1 MB graphic was slowing email down.

That, of course, was in the world of the late 1990s when email flowed across less capable networks, but the point is that users can insert just about anything they care to in an autosignature and email will continue to work as long as the graphic isn’t extraordinarily large. Administrators have no idea of what users do in this respect unless they receive a graphically-intense missive from someone.

Looking through recent messages in my inbox I conclude that a large percentage of email is infected with graphic autosignatures. The latest fashion appears to include Twitter and Facebook links in an attempt to demonstrate that the company has mastered social media. In any case, it’s all too much and the average size of messages continues to grow.

The economic downside of this phenomenon is the cost of storing all the duplicated graphic rubbish cluttering up user mailboxes. How much does it cost to provide the extra 10-15% of storage necessary to hold literally millions of corporate logos in email autosignatures? And to back them up, if that's what you choose to do, or to have the additional database copies if you elect to invest in Exchange native data protection. Or even to move the blessed logos around from database to database in mailbox moves. Or, if you've decided to embrace the cloud, to migrate your logo collection from on-premises mailboxes to the cloud. Think of how much longer a migration takes to transfer all those graphics across the Internet. Not good.

But there is a better way. Exchange 2010, Exchange 2013, and Exchange Online support access to Active Directory information from transport rules. If you have a well-maintained Active Directory that holds information such as telephone numbers about users, you can build a transport rule to automatically apply a standardized, low-impact autosignature to outgoing messages. Even better, the same rule can check for the presence of an autosignature in a message thread and not add it again if the information is already present, thus avoiding the stupidity of multiple instances of “graphicitis” in a thread.

Here’s an example taken from my Exchange 2010 Inside Out book of a transport rule to apply a standard autosignature based on Active Directory data. (I didn’t cover this in my Exchange 2013 Inside Out book because that volume is focused on managing mailboxes and high availability; Paul Robichaux covers transport in Exchange 2013 Inside Out: Connectivity, Clients, and UM). However, the code works for Exchange 2010, Exchange 2013, and Exchange Online.

New-TransportRule -Name 'Company disclaimer' `
 -Comments 'This transport rule applies the approved company disclaimer to every outgoing message' `
 -Priority 0 -Enabled $true -SentToScope 'NotInOrganization' `
 -ApplyHtmlDisclaimerLocation 'Append' -ApplyHtmlDisclaimerText `
'<h4 style="font-family:verdana">Contoso Corporation</h4>
<p style="font-family:verdana; font-size:70%; color:green">
This message is the property of <b>Contoso Corporation.</b> If you receive this message in error, please delete it <u>immediately</u> and inform us at 827-1176 about its delivery.
<p style="font-family:Arial; font-size:80%; color:blue">
<i>%%FirstName%% %%LastName%%</i>
<p style="font-family:Arial; font-size:70%; color:red">
Phone: %%PhoneNumber%%
<p style="font-family:Arial; font-size:70%; color:red">
Email: %%Email%%' `
 -ApplyHtmlDisclaimerFallbackAction 'Wrap' `
 -ExceptIfSubjectOrBodyContainsWords 'This message is the property of Contoso Corporation'

The rule only fires for messages sent outside the organization (the scope is set to 'NotInOrganization'). It applies even if a user has their own autosignature, as it would be terribly difficult to detect the many varied types of autosignature that might be inserted by a human. Feel free to customize it as you like. There are no prizes for being inventive, just satisfaction. Reply to this post with whatever you come up with so that others can share your innovation.

Other options, such as incorporating a graphic file (if you must) or time-limiting a particular form of an autosignature, are also possible. In fact, I bet there are lots of possibilities available with transport rules that you might not have considered. And if you don't feel that you want to meddle with rule magic yourself, commercial products such as Exclaimer Signature Manager or CodeTwo Exchange Rules Pro are available.

Users like autosignatures because they can put what they want into their messages. It can be a struggle to move to an automated standardized version, but wouldn’t it be a good thing if doing so saved some disks as well as sparing our eyeballs from yet more corporate logos and other offending nonsense?

Follow Tony @12Knocksinna

Posted in Cloud, Email, Exchange, Exchange 2010, Exchange 2013, Office 365 | Leave a comment

Managing offline access for Outlook Web App

Offline access is one of the premier new features offered by Outlook Web App (OWA) in Exchange 2013 and Exchange Online. I have had the need to use OWA offline many times and think it is a very usable client, especially over low-speed or flaky Wi-Fi connections. Of course, Outlook’s adoption of MAPI over HTTP is an effort to improve that client’s ability to cope with the same kind of connections. It remains to be seen how this really works out in practice, but first signs are promising.

When I first wrote about OWA offline in December 2012, I described how different browsers implement the databases used to cache mailbox data and how this information needed to be protected because it could be exposed by an attacker who managed to gain access to a PC. BitLocker, which can be enabled on a PC even if the system is not equipped with a Trusted Platform Module (TPM) chip, provides a certain level of protection, but it’s still true that someone who gains access to a logged-in PC will be able to access the data. Then again, the same is true for Outlook.

User awareness is therefore an important part of deploying OWA offline. As is the case for all software, there’s no point in letting people use a new feature if it creates a security risk.

The warning that something will be stored on your computer

In any case, unless you disable the option to use OWA offline, users will be able to turn on the feature themselves by clicking "Offline options" in the drop-down menu to the right of the screen. The process of setting up offline access is very straightforward and the only thing that might cause a user any concern is the request to allow the browser to use some extra storage. I don't think the words used really explain the need. For example, IE11 asked if it could use additional storage. I understood the request, but would the average user? Chrome, on the other hand, saw no need to request any storage.

Once enabled, OWA will download data from mailbox folders. Up to the 150 most recent items are cached for each folder accessed in the last week (this EHLO post explains what data is downloaded), so the amount on disk differs according to user behavior. Each browser has its own implementation of how data is stored on disk and I was curious whether this made a difference, so I compared how much data was downloaded from my Office 365 mailbox by IE11 and Chrome (version 43). The results were interesting.

OWA offline databases

IE uses an ESE database, the same database engine used by Exchange, although configured very differently to support the HTML5 storage standard. The database (Internet.edb) occupied 22,592 KB. Chrome stores its data in a WebSQL database splendidly named "9", which took 36,696 KB. This information was extracted at the same time, when the mailbox was as static as I could make it (a Sunday afternoon), after enabling offline access for both browsers and leaving them to download the data.

Your mileage might vary and the storage requirements of Safari (for Mac) or Firefox (for Mac or Windows) might also differ as I did not test these platforms (this page describes the current OWA support status for different browsers). The point is that OWA allows each browser to use its own storage in its own way and hides the difference from users.

You can stop individuals or groups of users from accessing OWA offline mode. The easiest method is to create a new OWA mailbox policy (using EAC) that doesn't allow offline access and then apply the policy to whatever mailboxes you want to restrict. Alternatively, you can disable offline access for an existing OWA mailbox policy by running the Set-OWAMailboxPolicy cmdlet in EMS (the same settings work for both Exchange 2013 and Exchange Online in Office 365). For instance:

Set-OWAMailboxPolicy -Identity "Default OWA Mailbox Policy" -AllowOfflineOn NoComputers

Once an OWA mailbox policy is amended to prevent offline access, you can apply it through EAC or by using the Set-CASMailbox cmdlet. For example:

Set-CASMailbox -Identity TRedmond -OWAMailboxPolicy 'Restricted'
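If you need to restrict a whole set of users at once, the cmdlet can also be driven from a pipeline. Here's a minimal sketch (not from the book), assuming an Exchange or Exchange Online session, that a policy named 'Restricted' already exists, and that the target users belong to a hypothetical distribution group called 'Field Sales':

```powershell
# Apply the restrictive OWA mailbox policy to every mailbox in a group.
# 'Restricted' policy and 'Field Sales' group are example names.
Get-DistributionGroupMember -Identity 'Field Sales' |
    Where-Object {$_.RecipientType -eq 'UserMailbox'} |
    Set-CASMailbox -OWAMailboxPolicy 'Restricted'
```

The Where-Object filter keeps non-mailbox members (contacts and so on) out of the pipeline, as Set-CASMailbox only applies to mailboxes.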

Note that if someone logs on to a different account in a browser that has been configured for offline access, offline access is disabled to ensure that the newly connected person is unable to read the data in the offline cache.

OWA offline access is a useful feature. Use it in a safe manner and it is even better.

Follow Tony @12Knocksinna



Posted in Cloud, Email, Exchange, Exchange 2013, Office 365 | 1 Comment

Announcing a Kindle version of “Office 365 for Exchange Professionals”

After a certain amount of struggle, mostly associated with the need to provide book files formatted in a certain manner, the "Office 365 for Exchange Professionals" team is happy to announce that a Kindle version of the book is now available.

Our original intention was not to create a Kindle version. The work necessary to format a large and complex book (many tables, graphics, and footnotes over the 630-odd pages) didn’t seem worth the effort, especially when we had a perfectly good EPUB version already available. In particular, we weren’t happy with the way that code examples are treated. And the way that Amazon publishes Kindle books through its Kindle Direct Publishing (KDP) platform didn’t seem to match our desire to create frequent updates for the book.

However, we continued to receive a number of requests to support Kindle and so resolved to attack the problem again. After working through some “interesting” conversions, a Kindle edition is now available in Amazon stores worldwide.

We will continue to sell the PDF and EPUB versions of the book as before. Amazon is easier for those who only want to read the book on a Kindle and like the way that Amazon wirelessly delivers content to Kindle devices. We actually believe that PDF on a PC offers the best reading experience, but we want to support choice.

As mentioned above, we intend to issue frequent updates. The next edition should be available in September 2015 to coincide with the IT/DEV Connections conference in Las Vegas, where all of the author team will be speaking. When a new edition is ready, we will release the PDF and EPUB versions first and then work on the Kindle version. Once the new Kindle version is ready, we will publish it and withdraw the current edition from sale. The versions will be clearly marked as "May 2015 edition", "September 2015 edition", and so on, and we will include a description of the changes present in each version.

Right now, we are busy preparing the September 2015 edition. Many updates and much new material have been incorporated in a number of chapters (35 additional pages to date) based on recent developments inside Office 365. More information will come as we have the chance to use some of the new technology that Microsoft announced at the recent Ignite conference, assuming that technology is available to Office 365 tenants by the start of September.

Based on our experience to date, it seems like three-times-a-year might be a good cadence to attain for updates. Of course, that depends on having sufficient material to warrant an update, but signs are that Microsoft will continue to pump out changes into “the service” and those changes need to be examined, analyzed, and documented. That’s the task we have taken on and intend to see through. Hopefully you’ll join us on the journey.

Follow Tony @12Knocksinna

Posted in Cloud, Office 365 | 1 Comment

Exchange Unwashed Digest – May 2015

May was the month of the Microsoft Ignite conference in Chicago. I've already provided my impressions of that event in another post, so I won't rehash the topic here. There's no doubt that a lot of content was delivered at Ignite (I am still listening to recorded sessions), so it's natural that Ignite was a big influence on what appeared in my Exchange Unwashed blog during May 2015. Here's what happened.

ESEUTIL is now the evil utility (May 28): Once an essential part of every Exchange administrator's toolkit, ESEUTIL now appears to have its days numbered. Microsoft has changed its support policy to positively discriminate against the use of ESEUTIL (there's a lot of goodness in that decision), so why is ESEUTIL so bad all of a sudden? Well, there are good reasons…

Microsoft claims 35% of Exchange installed base is now on Office 365 (May 26): For the first time (that I know of) a Microsoft representative came out and said how much of the Exchange installed base they believed had gone over to the service. I don’t think it is 35%, but they do…

Updates make Office 365 Groups more useful (May 21): A number of long-awaited updates have appeared to make Office 365 Groups more useful. My particular favorite is the PowerShell support, which is now adequate. The updates to the document libraries are quite good too!

New engineering philosophies drive innovation within Office 365 (May 19): The old way of engineering products was to focus on just that product. Integration with other products happened almost as a result of blessed serendipity. Things are changing in the world of Office 365 as products become contributions to a software parts bin and new applications are built with the full spectrum of the service in mind. It’s a whole new way of doing business.

Eradicating EV stubs from Exchange mailboxes isn’t easy (May 14): An article published earlier in May discussed how Microsoft is aiming at Symantec Enterprise Vault with the new Office 365 import service. Well, there’s just one thing to spoil the party, and that’s how to get rid of all the stubs that are left behind in user mailboxes if you remove Enterprise Vault. Some third party products will do it for you. Others won’t. Or you can just annoy users with defunct stubs.

Why we shouldn’t care that Exchange 2016 really is Exchange 2013 SP2 (May 12): It’s hard for a 20-year-old product to keep on innovating as it did in the past. Exchange 2016 is on the way, but in effect it’s really a service pack (albeit a large one) for Exchange 2013. I really believe that there’s goodness in this approach, if only because of the continuing large-scale transfer of technology from the cloud to on-premises software.

Also published in May 2015 was an article describing the support for Office 365 Groups in Outlook 2016 (preview build 4027). Groups are an interesting and valuable new entity only found in Office 365 whose use has been constrained by the lack of Outlook support. That support has now arrived, but there is still work to be done.

Why the power of Office Graph and Delve frightened some Ignite attendees (May 8): There's no doubt that the Office Graph database is a huge unifying influence across Office 365. But the problem seen by some is the number of signals that the Graph gathers. There are just too many. And some of that data might be misused, at least in the eyes of the privacy advocates. Reasonable fear or nothing to worry about?

Microsoft declares war on Symantec Enterprise Vault and looks to bring back data into Exchange Online (May 7): The new Office 365 Import service allows tenants to gather up PSTs and send the data (on SATA drives or over the network) to Microsoft, whereupon the data is ingested into Azure and made available to be imported into Office 365 mailboxes. All sounds good, except for Symantec and other third-party archive vendors, whose market seems set to contract as Microsoft pursues a campaign to "bring back data into Exchange mailboxes"… Should be an interesting battle to observe.

News about Data Loss Prevention for SharePoint Online revealed at Microsoft Ignite (May 6): More coverage from the Ignite conference, this time describing how Microsoft is implementing the Data Loss Prevention feature into SharePoint Online and OneDrive for Business. All to stop users messing around with sensitive data.

News from Ignite: How Exchange 2016 benefits by technology transfer from the cloud (May 5): Microsoft took the opportunity at the Ignite conference to provide a lot more information about Exchange 2016. The biggest impression was created not by what seems to be a rather paltry list of new features but rather by quite how much technology is being transferred from Exchange Online to its on-premises counterpart.

Roadmap reveals potential for Office 365 Groups (May 5): As might be understood from the amount of coverage that I have afforded to this topic, Office 365 Groups are a big thing at the moment and they received a lot of air time at the Ignite conference. This piece covers the roadmap laid out by Microsoft at the event.

Satya Nadella launches Microsoft Ignite (May 4): In the longest blog post I have ever written for “Exchange Unwashed”, I covered the massive and everlasting (or so it seemed) keynote for the Ignite conference. I am still numb at the thought of quite how long it went on for, but at least some of the content and most of the announcements were pretty interesting and gave a solid pointer to the direction in which Microsoft is now heading.

Now on to the sultry month of June. No conferences to attend, but still lots of work to be done.

Follow Tony @12Knocksinna

Posted in Cloud, Email, Exchange, Exchange 2013, Office 365 | Leave a comment

Write some code and you can influence DAG failovers (for now anyway…)

A recent debate on the Exchange 2013 (unofficial) Facebook group started off with the question "can I build my own failover criteria in a DAG?" and pointed to the TechNet page on Active Manager.

The debate began with sheer denials, mostly on the basis that it didn’t seem to make sense for someone to attempt to second-guess the Exchange development engineers who have been working on this problem for many years. As the erudite Boris Lokhvitsky remarked: “In your car, do you have the desire to modify the combustion sequence or rearrange the valves in the engine so that it would run faster?”

In fact, Exchange 2013 evolved the failover criteria used by Exchange 2010 to take account of server health when Active Manager decides which target server should host a failing database, a process known as BCSS, or "best copy and server selection."

But after a while, the esteemed Scott Schnoll weighed in to say that there is a way because Exchange accommodates a method called an Active Manager Extension, part of the third-party replication (TPR) API that exists in both Exchange 2010 and Exchange 2013. The TPR allows storage vendors to write their own continuous replication code and then stitch it together with the rest of the DAG components so that everything works together seamlessly. At least, that’s the theory.

TechNet says: “Exchange 2013 also includes a third-party replication API that enables organizations to use third-party synchronous replication solutions instead of the built-in continuous replication feature. Microsoft supports third-party solutions that use this API, provided that the solution provides the necessary functionality to replace all native continuous replication functionality that’s disabled as a result of using the API. Solutions are supported only when the API is used within a DAG to manage and activate mailbox database copies.”

On the surface, TPR seems like a wonderful idea. But the sad fact is that only EMC has ever implemented TPR in a solution called “Zero-Loss Protection for Exchange”, where they distinguish between “Native Database Availability Groups” (the normal kind) and “Synchronous Database Availability Groups” (the kind you’d use with an EMC CLARiiON SAN). The EMC Replication Enabler for Exchange is the component that leverages TPR.

I’m sure that EMC was very excited when Microsoft told them about the TPR because it must have seemed like a great way for EMC to defend their SAN installed base at a time when Microsoft was telling customers that they were engineering Exchange to exploit low-cost storage solutions. Since then the evidence is that not many people have actually used EMC’s solution and no other storage company appears to have been too interested in taking on the cost to develop and maintain their own replication solution for a DAG.

Indeed, given the hype around JBOD-type storage for Exchange, especially in the two years since Microsoft shipped Exchange 2013, anyone who proposed building a third-party replication solution for expensive SANs might be regarded as a candidate for a lie-down in a cool dark room until the idea passed. Even EMC is quiet on the topic of using their code with Exchange 2013, and I imagine that the Replication Enabler is heading for the great byte wastebasket soon, if it hasn't already arrived there.

So Scott was right in his assertion that there is a way for someone to affect the way that Active Manager handles database failovers. You simply have to crack open your favorite IDE and write the code to leverage TPR. Simple. Just like that. Or maybe not. But the bad news is that your code will only work for Exchange 2010 and Exchange 2013 because Microsoft announced their intention to deprecate the API at the recent Ignite conference. It seems that Exchange 2016 will be the last version to support DIY DAG failovers.

As for me, I think I’ll let the Exchange developers take care of how replication happens inside DAGs. It just seems easier all round.

Follow Tony @12Knocksinna

Posted in Exchange, Exchange 2010, Exchange 2013 | Leave a comment

Using PowerShell to convert Exchange Distribution Groups to Office 365 Groups

At last week's Microsoft Ignite conference, I had the chance to attend a session called "Evolving distribution lists with Office 365 Groups" (a recording of the session is available online). The session described the integration with Outlook 2016 (no plans exist to back-port the technology to Outlook 2013) and then went on to investigate how Microsoft views Office 365 Groups as a better alternative to old-style distribution groups. There's no denying this fact. Distribution groups have been around since the dawn of email and haven't evolved much since. The last time Microsoft did anything to improve matters was the introduction of dynamic distribution lists in Exchange 2003.

Of course, the big issue with Office 365 Groups is that they will only ever live in the cloud. Microsoft is not going to incorporate them into the on-premises version of Exchange. You’ll be able to synchronize Office 365 Groups with on-premises Exchange via AADConnect if you operate a hybrid environment, but you won’t be able to create these groups on-premises.

One reason why this is so is the position that Office 365 Groups are moving to occupy within the Office 365 ecosystem as a single entity that permits access to many different forms of data available within the service. Today, membership of an Office 365 Group allows a user to access a shared mailbox, shared calendar, shared OneNote notebook, and a document library. The signs are that more resources will be accessible in the future, all granted through group membership.

Anyway, if you want more information about Office 365 Groups, you can read it in chapter 7 of “Office 365 for Exchange Professionals”.

Speaking of which, keeping the content of an eBook about Office 365 up to date requires you to pay a lot of attention to what Microsoft is saying to customers at conferences such as Ignite. You never know when a speaker provides some information that should be included in the book or requires a change to the book’s content. In this case, I was interested in how Alfons Staerk approached the topic of migration from old-style distribution groups to Office 365 Groups.

It looks very much as if customers will be left to their own devices to migrate distribution groups as they wish. Of course, distribution groups bring their own complexities to the table. How should you deal with nested groups, for instance? What do you do with groups that include mail-enabled public folders, mail contacts, and shared mailboxes in their membership, as none of these objects is supported in Office 365 Groups? And, because Office 365 Groups use their own mechanism to access resources across different parts of the service, what do you do with mail-enabled universal security groups?
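To see the scale of the problem for a particular group, you can filter its membership by recipient type before attempting a conversion. This is only a sketch, run in a session already connected to Exchange Online; the list of supported types reflects my reading of the restrictions above rather than any official statement, and “Sales-DL” is a placeholder alias.

```powershell
# Hypothetical filter: keep only the member types that an Office 365 Group
# can hold (assumed here to be user mailboxes only).
$SupportedTypes = @("UserMailbox")

$Members = Get-DistributionGroupMember -Identity "Sales-DL" -ResultSize Unlimited
$Good    = $Members | Where-Object { $SupportedTypes -contains $_.RecipientTypeDetails }
$Skipped = $Members | Where-Object { $SupportedTypes -notcontains $_.RecipientTypeDetails }

Write-Host "$($Good.Count) members can migrate; $($Skipped.Count) will be skipped:"
$Skipped | Format-Table Name, RecipientTypeDetails
```

A run like this quickly shows whether a group is a straightforward candidate or one of the messy cases (nested groups, contacts, public folders) that needs manual attention first.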

Microsoft demonstrated a program called “Hummingbird” that will soon be available (when the lawyers are happy) that can migrate a distribution group to an Office 365 group, subject to the caveats expressed above. Apparently the source code of the program will also be available to allow you to do your own thing.

But the approach taken to migrate a distribution group with a PowerShell script was more interesting. Until recently, the PowerShell support for Office 365 Groups was just plain bad. You couldn’t create a new group or update group membership, both of which seem like fundamental operations. Microsoft is currently rolling out a new set of cmdlets to Office 365 tenants that address the problem: the *-UnifiedGroup cmdlet set to maintain group objects and the *-UnifiedGroupLinks cmdlet set to maintain group membership.
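As a rough sketch of how the two cmdlet sets fit together (the group name and members are placeholders, and the session is assumed to be connected to Exchange Online):

```powershell
# Create a new Office 365 Group; New-UnifiedGroup returns the group object.
$Group = New-UnifiedGroup -DisplayName "Project Phoenix" -Alias "ProjectPhoenix"

# Membership is maintained with the *-UnifiedGroupLinks cmdlets.
# LinkType can be Members, Owners, or Subscribers.
Add-UnifiedGroupLinks -Identity $Group.Alias -LinkType Members -Links "Jane.Doe", "John.Smith"

# Check the result.
Get-UnifiedGroupLinks -Identity $Group.Alias -LinkType Members | Format-Table Name
```

The split between group objects and group links mirrors the familiar Set-DistributionGroup / Update-DistributionGroupMember division, which makes the new cmdlets easy to pick up.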

The script shown by Alfons was rudimentary but effective, though only for very simple distribution groups whose membership is composed solely of mailboxes. Everyone loves a challenge, and I decided that it would be a good thing to learn more about how to use the new cmdlets to work with Office 365 Groups, so I set about working on the ConvertDLtoO365Group.ps1 PowerShell script. After all, we need to bring out a second edition of “Office 365 for Exchange Professionals” (probably in September) that should cover this topic.

I’m no programmer. At least, I haven’t been for many years. My COBOL and VAX BASIC skills are rusty now but, as Jeffrey Snover keeps on reminding me, the joy of PowerShell is its ability to put things together bit by bit until something really good is done. PowerShell is like Lego bricks in that respect.

I hacked my way through several versions of the script. The current version is available in the TechNet gallery for anyone to download (and hopefully improve). The script runs in the context of a PowerShell session that is already connected to Exchange Online and does the following:

  • Takes the alias or name of a distribution group as the input parameter.
  • Performs some initial checks: that the distribution group exists, that no Office 365 Group with the same alias exists, and that the input is an object of type MailUniversalDistributionGroup, the only type the script can convert.
  • Checks the members of the input group to strip out those that can’t be added to the target Office 365 Group.
  • Checks whether the input group has member join restrictions. If it has (the group is “Closed” or “ApprovalRequired”), the admin is prompted to decide whether they want to create a private Office 365 Group. You can’t currently change the group type, so this is an important decision.
  • Tells the admin what’s happening and asks to proceed.
  • The new Office 365 Group is created with the New-UnifiedGroup cmdlet. It takes the same alias as the input distribution group, which is first given a new alias to free up the original.
  • As many of the properties of the input distribution group as possible are moved to the new Office 365 Group (not all can be because there isn’t a direct 1-to-1 mapping between the two object types). The Set-UnifiedGroup cmdlet is used for this purpose.
  • In particular, the Office 365 Group is set to auto-subscribe new members so that it mimics the distribution of new content via email as members expect from a distribution group.
  • The membership is added to the new Office 365 Group using the Add-UnifiedGroupLinks cmdlet.
  • Group members can be in three sets of links (owners, members, and subscribers). Because the new Office 365 Group is intended to behave like an email distribution group, the members are added to the members and subscribers sets.
  • The owners/managers of the input distribution group are added as owners of the Office 365 Group.
  • The email address of the input distribution group is switched to the new Office 365 Group so that new traffic goes there.
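The core of those steps can be sketched as follows. This is a simplified, hypothetical fragment (error handling, membership filtering, and the admin prompts described above are all omitted); $DL stands for the input distribution group object and $Members for the filtered list of mailbox members.

```powershell
# Assumes $DL holds the input distribution group (from Get-DistributionGroup)
# and $Members holds the filtered list of mailbox members.

# Free up the alias and SMTP address by renaming the old distribution group,
# then create the new Office 365 Group under the original alias.
$OldAlias   = $DL.Alias
$OldAddress = $DL.PrimarySmtpAddress
Set-DistributionGroup -Identity $DL.Identity -Alias ($OldAlias + "-Old") `
    -PrimarySmtpAddress ("old-" + $OldAddress)
$Group = New-UnifiedGroup -DisplayName $DL.DisplayName -Alias $OldAlias

# Copy across the properties that map between the two object types, and make
# the group mimic a DL by auto-subscribing new members to content via email.
Set-UnifiedGroup -Identity $Group.Alias -Notes $DL.Notes -AutoSubscribeNewMembers

# Add members to both the Members and Subscribers link sets, and carry
# over the old group's managers as owners.
Add-UnifiedGroupLinks -Identity $Group.Alias -LinkType Members -Links $Members
Add-UnifiedGroupLinks -Identity $Group.Alias -LinkType Subscribers -Links $Members
Add-UnifiedGroupLinks -Identity $Group.Alias -LinkType Owners -Links $DL.ManagedBy

# Finally, switch the original SMTP address to the new group so that new
# traffic sent to the old address lands in the Office 365 Group.
Set-UnifiedGroup -Identity $Group.Alias -PrimarySmtpAddress $OldAddress
```

Note the ordering: members must exist in the Members link set before they can be promoted to owners, and the old group has to release the SMTP address before the new group can claim it.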
Converting an Exchange Distribution Group with ConvertDLtoO365Group.ps1

Phew! That’s a lot of processing. There are some known issues. For example, the -AccessType parameter for the New-UnifiedGroup cmdlet doesn’t work at present, so only public groups can be created. This is a known bug that Microsoft is fixing. Another issue is that running Add-UnifiedGroupLinks to add mailboxes as subscribers doesn’t work; this bug is also known and a fix should be available shortly.

However, the point is that it’s a PowerShell script, and because it’s a script the code is there for all to see – and hopefully improve.

Thanks to Alfons Staerk and Sam Koppes of Microsoft for their encouragement. I think they quite liked seeing the demo script shown by Alfons take on a life of its own…


Follow Tony @12Knocksinna

Posted in Cloud, Exchange, Office 365