Facebook page for Exchange 2010 Maestro training


The redoubtable Mr. Robichaux, showing a mastery of technology that marks him as a very special person, has created a Facebook page for our Exchange 2010 Maestro training events that we are planning for San Diego (May), London (UK – June), and Greenwich, CT (October). Not only does this demonstrate the essential “coolness” of these events, it provides a useful conduit for communication about what we hope are interesting pieces of information relating to the training.

Communication was clearly a problem for us and our marketing/organization partner (Penton Media) last time out. For example, we tried to tell attendees about the kind of PC that they would need to run the virtualized Active Directory and Exchange 2010 servers that we use. We thought that we were specific (a laptop running a 64-bit version of Windows 7 equipped with 8GB of memory), but it turns out that we left out all manner of important detail, such as the importance of having a BIOS that supports virtualization (and turning it on – or knowing how to turn it on) and the goodness of downloading at least the evaluation edition of VMware Workstation 7.0 (or later) and installing the software well before the event to become accustomed to the process of dealing with virtualized machines.

The result of our ineptitude was seen in many ways. People turned up without a laptop (one guy brought a desktop PC, which was fine; another rented a PC from the hotel, which wasn’t, as the elderly ThinkPad he received was never going to run VMware); or they had only just bought a suitable laptop in preparation for the event and hadn’t quite found their way around it; or they had no idea how to get hold of VMware Workstation and install it. The upshot was that we spent far too much time passing USB sticks around to allow people to install VMware when we should have been diving into the labs.

I don’t think that we communicated the intent of the labs either. People came along expecting the kind of “click, click, click, and have a nice day” labs that you often experience at training events, where attendees follow step-by-step instructions in lab manuals to guide themselves through some aspect of a solution. Valuable as this kind of training is in its own context, it’s definitely not what we wanted to deliver. The idea behind providing VMware servers was that students could learn at their own pace. They could run the labs at the event if that pleased them and made sense, or they could take the servers away and run them elsewhere, including taking the entire environment home where it could be used as the basis for servers to test their deployment of Exchange 2010. Because we didn’t communicate how we thought about the labs, some students were sadly under-impressed at the lack of tender loving care that they received from us.

We also hit a rock in terms of the slowness of the USB hard drives that we used to distribute the VMware machines. These were 5,400 rpm units – not so slow as to stop you doing work (anyone who has run several VMware servers on a laptop knows that things seldom proceed at the speed of light), but frustrating if you expected better. We’re looking at how to improve this aspect by sourcing faster drives for the forthcoming events, but even so I think some people will still be disappointed at the speed. The real solution is to transfer everything to SSD, but I doubt that attendees would be willing to pay the several hundred extra dollars that this entails.

So it comes back to communication, communication, and more communication, allied to a refreshed lab guide and faster drives. The Facebook page is just the start of our communication efforts. Feel free to visit the page to share your views with us and provide feedback on all aspects of the events. And take the time to read about the hard work that Paul is doing to improve course content… Me, I’m just tweaking and refining PowerPoint decks to keep pace with all the changes and new knowledge that appears about Exchange 2010, to make sure that the material we present is as up-to-date as we can make it.

– Tony

Posted in Exchange 2010, Training

Seat 25G to Seattle


I’m just back from spending the week in Bellevue and Redmond, where I attended Microsoft’s annual MVP Summit. MVP stands for “Most Valuable Professional” and essentially it’s a group of advocates for Microsoft technology organized by interest. I’m an Exchange MVP and have been since 2003 or thereabouts, but this is the first time that I was able to attend a summit.

I can’t answer the inevitable question of “how do I become an MVP” because I don’t have a comprehensive answer and Microsoft’s public guidance on the topic isn’t very specific. The best advice is to make verifiable and valuable contributions to the technical community through avenues such as the TechNet forums, talks at conferences, or published articles about Microsoft technology. Don’t expect to be recognized overnight, as the process does seem to take time.

The summit is a mixture of in-depth product discussions with the engineering groups, rah-rah sessions (keynotes from Microsoft executives), and some hands-on opportunities with new technology. Hundreds of MVPs travel on their own dime from around the world to attend, and for most it’s one of the highlights of their year. It was good to meet many people whom I previously knew only as contributors to different email discussions. The Exchange MVPs are a good bunch of people!

Apart from the serious business of the technology briefings, a visit to the Microsoft company store is also included to allow attendees to buy hardware (keyboards, mice, Xbox stuff), software, or various items of Microsoft-branded clothing and other things. The company store stocks some technical books, but I was a tad dismayed to find the world’s worst Exchange 2010 book (you know who you are…) on the shelves and no trace of Microsoft Press titles such as Microsoft Exchange Server 2010 Inside Out. Of course, I made my feelings on the topic known to Microsoft Press, but there’s really nothing that can be done as the store is operated as an independent entity that makes its own decisions about what to stock.

All of the content of the sessions is under strict NDA so I can’t comment on the details of what was presented. I can say that two of the three keynotes were moderately interesting and that Microsoft introduced a bizarre half-time “entertainment” when someone dressed in a moose costume came out on stage and performed some weird dances before plunging into the audience to meet and greet some startled recipients. The locals knew that the moose was the mascot of the local Seattle Mariners baseball team but the international attendees were a tad taken aback by the moose’s gyrations.

The sessions with the Exchange development group were the real reason to come to Redmond. It was good to be able to discuss some of the finer points of Exchange 2010 and to hear initial thinking about how the product might evolve over the next few years, especially now that the cloud (Office 365) is front and center for many. I don’t think it breaks my NDA to observe that the best comment of the event came from a PM who provided a wonderful analogy comparing the rear orifice of an elephant to the entry point for RPC-over-HTTP communications. Of course, the important point is to always check that you have the right certificate before you enter the elephant, else it’s likely to be really messy. OK, it was funnier to hear in person and made a suitable impact on the MVPs who were there… I shall never again be able to consider the topic of Outlook Anywhere without grinning, and that can’t be a bad thing.

Microsoft does a nice job of taking care of MVPs when the summit is in town and organizes different evening get-togethers for people to attend. I only attended the Exchange/Lync party on Tuesday night as I had other commitments, including the chance to meet up with Penton Media (the publishers of Windows IT Pro magazine and partners with Paul Robichaux and myself in the Exchange 2010 Maestro events). Penton offered drinks and dinner in the Purple Cafe in Kirkland and quite a few authors turned up, including the redoubtable Mark Minasi (whose jokes haven’t improved over time) and Mark Russinovich of Sysinternals fame.

Travel out was by British Airways on the DUB-LHR-SEA route. Aer Lingus delivered its normal bus-like service between Dublin and Heathrow. BA uses relatively old B777-200s on this route and the airplane (G-VIIE both ways) showed every sign of having had a long and busy life. Both flights were completely full, which didn’t help, and the standard of BA catering and in-flight entertainment is now well behind the standards set elsewhere, so that was a further dampener.

The transit through Heathrow was better coming back than going out. For whatever reason, BAA forces people who come through security in the Flight Connections center in Terminal 1 to go through security again in Terminal 5, but doesn’t do this when you land at Terminal 5. I also have no idea why BAA needs to take pictures of every transit passenger – it seems to be a control that only BAA feels the need to apply. In any case, on the other end SEATAC airport was easy to get through and offered free wi-fi to pass some time, so that was appreciated.

I also took the chance to take the 560 bus from downtown Bellevue to SEATAC to see how effective it was in comparison to the taxi or shuttle transits I’ve used in the past. The bus took a little longer because it wended its way through places like Renton, but its price point ($2.50) can’t be beaten in comparison to the $19 that Super Shuttle charges or the normal $45 or so fare that will be clocked up in a taxi.

The weather in Redmond and Bellevue reminds me a lot of Dublin – low cloud, grey skies, and lots of rain at much the same temperature. Next week I’m in France to grapple with the intricacies of installing home Internet with a France Telecom “Livebox” before traveling to London for the England v Scotland Six Nations game. All in all, a reasonable change of pace from this week’s summit.

– Tony

Posted in Exchange, Technology

Whoops! Where’s my Inbox?


One of the weaknesses inherent in cloud-based services was displayed once again on February 28, when Gmail service was lost to some 500,000 users or, as Google expressed it, 0.29% of the total user base (I love the way that small percentages are used in an attempt to disguise the impact of an outage – or maybe it’s a way to make those affected believe that they are truly special). Later reports put the figure for affected users at closer to 0.2%, but that’s still a figure counted in the hundreds of thousands – if 0.29% equates to 500,000 accounts, then 0.2% still represents some 350,000 users who woke up to find that their email was unavailable.

Any service or application can suffer a problem that renders it inaccessible to users. Normally the problem is a temporary hiccup, service is restored quickly, and users may not even realize that an outage occurred. What was curious in this case was that users appeared to lose data: when they signed back into Gmail, all of their messages, contacts, and other information were unavailable. I assume that access to the data has since been restored for all users, as I’ve heard no further reports to indicate otherwise.

According to some reports, the problem appears to have been caused by the installation of some new code that had an unplanned and unforeseen effect on user mailboxes. This kind of thing can happen with any software but what’s interesting here is the feeling of helplessness that it generated for users.

When a service runs in the cloud, you really have no idea where your data is held or who is maintaining it for you. In addition, you have no control over changes that the provider who runs the service wishes to make. Most of the time, changes flow smoothly as new hardware is added to take the load of new users or software is upgraded or patched. As a user, I can’t say that I have ever been affected by losing access to Gmail, but maybe I have been lucky.

But when things go wrong with a major cloud service, they go wrong for lots of users. And unlike when you depend on an application that runs in-house on your company’s own servers, it’s hard to find someone to report the problem to, or to shout at to relieve some of your frustration. The cloud is an amorphous blob in many respects, and service providers occupy a place somewhere in the blob that’s often hard to reach, especially when things go wrong.

Gmail is a free service and you can argue that the value of something free is just that – zero. You can argue that a properly managed commercial hosted email service that runs in the cloud wouldn’t experience such an outage, and that even if problems occur, the framework of commercial contracts and Service Level Agreements that connect companies and service providers will ensure that everyone knows about the problem and the steps that are being taken to resolve it ASAP.

All of this is certainly true, but I wonder what will happen when an outage affects the mailboxes of a major company such as some of the marquee names that providers are trumpeting as they sign these companies up to move them from on-premise to cloud implementations. The relationship between a company and a cloud provider is emphatically not the same as that which exists in a traditional outsourcing arrangement, where the client has a direct connection to service and account managers whom they can contact if problems arise. Indeed, if problems start to escalate, clients are usually able to reach executives of the outsourcing provider to emphasize the effect of the outage on company operations and to encourage a faster resolution. Often these interactions do exactly nothing to resolve a problem, apart, that is, from allowing the customer to blow off some steam at the provider. I guess there’s some value in that. I can certainly attest to the unique experience of having a customer CIO tear into me as the company I worked for struggled to restore a satisfactory level of service for an outsourced Exchange 2007 deployment. I didn’t particularly enjoy the encounter, but it seemed to make the customer feel better.

But when you’re in the cloud, you’re just one voice within a very large group. Even large companies that might have contracted for 50,000 or more seats will be dwarfed by the sheer number of mailboxes that cloud services such as Google Apps or Microsoft BPOS (soon to be Office 365) support. And when you’re just one voice it’s hard to be heard amongst the cries of pain provoked by any service outage, even if one of the mailboxes that’s affected belongs to a company’s CEO.

I wonder if some of the companies who are so enthusiastic to embrace the potential of the cloud really realize the potential downside of the arrangement. There are benefits to be achieved such as faster access to newer technology, releasing IT staff from mundane activities like server maintenance to allow them to focus on higher-value activities, and a potential decrease in operational cost that is much beloved by CIOs. Nothing in life is ever all upside and much of the dark side of the cloud is still an unexplored area that we’ll really only discover as we work through future outages. Gmail has had its outages and Office 365 will have its outages – and the screams will be heard in Hades.

– Tony

Posted in Cloud, Technology

Multi-role or single-role servers?


Ever since Exchange 2007, Microsoft has emphasized the product’s capability to be installed in different roles on a server. The logic behind this direction is pretty simple: you should only ever install the code that you actually need on a server rather than installing a lot of stuff that you may not need and might, under the right conditions, create a security risk due to an increased “attack surface”. In other words, the more code that you lay down on a computer, the larger the chance that some of that code will contain a bug that can be exploited by a hacker.

It all sounds good and the technical community has largely bought into the premise. However, there’s a nagging doubt in my mind that in reality, the vast bulk of Exchange deployments don’t pay much attention to single server roles because they use multi-role MBX-CAS-HT servers. Indeed, the default option offered by the Exchange 2010 installation program is to create such a multi-role server! Another data point is provided by the recent launch of the HP E5000 messaging “appliance”, which provides different multi-role server packages designed to cater for 500, 1,000, and 3,000 mailboxes. Out of the box, the E5000 delivers two multi-role MBX-CAS-HT servers configured into a DAG. Sure, you can go and remove the CAS and HT roles afterwards, but I doubt that anyone will.

Another factor is the sheer speed and capability of current server hardware. In the old days, when spare CPU cycles were intensely valuable and storage came at a premium, it made sense to reduce the amount of code that executed on a server or had to be installed on a server. Today, server hardware is so powerful that CPU is usually not an issue and bountiful storage is available. No server will notice the extra code necessary to run CAS and HT functionality and the extra storage required is a rounding error.

Don’t get me wrong. I still think that the notion of separating functionality into server roles is a good thing. It’s just that in the harsh light of practical deployment, most companies elect to use multi-role servers because they don’t want to spend the extra cash to buy the hardware and software necessary to create purpose-built single-role servers. UM servers are the notable exception, but these are used in a small minority of the overall Exchange installed base.

It seems clear, therefore, that the separation of server roles is only really interesting to large enterprises that have the necessary financial and technical resources to design, deploy, and support an Exchange organization created from layers of mailbox, CAS, and HT servers. This is certainly a viable and worthwhile deployment model when you have the chance to build different server configurations for the different roles: memory- and disk-heavy for mailbox servers, while the CAS and HT servers can be stripped-down boxes because they are largely stateless in terms of the data that they hold.

The bottom line is that server roles are a good thing because they provide extra flexibility to Exchange. That flexibility is important to some very large and important customers and that’s probably why Microsoft has provided the capability. But I suspect that the vast bulk of Exchange 2007 or Exchange 2010 server deployments will continue to be happy with their multi-role all-in-one servers. Why complicate things when you don’t need to?

– Tony

Posted in Exchange, Exchange 2010

Personal experience: the perils and pain of losing Internet access


I thought that I was all set for the Exchange 2010 high availability webinar that I co-presented on behalf of Marathon Technologies on February 24. The slides had been done, we had run through how webinars work using the GoToMeeting tool (easy to use – assuming that you have a reliable Internet connection), and I connected 35 minutes in advance of the event to ensure that connectivity was smooth. All was prepared – until the gremlins struck.

I guess I should have expected the worst. Vodafone is my Internet provider and they have had an unhappy time (as have I) in terms of the service provided to my house. We moved out for a couple of months and had good service from Vodafone in the other property, but after we moved back we ran into a blizzard of connectivity problems that caused my already graying hair to advance rapidly towards total whiteness.

By mid December all seemed to have been resolved – apart, that is, from an intermittent drop in service that caused the router/modem to lose connectivity with the rest of the network. There’s been a lot of discussion in online forums such as boards.ie about why this might happen, with theories ranging from the need to upgrade the firmware of my Huawei HG556a router, to Vodafone incompetence, to a lack of IP addresses available to Vodafone customers. A call to Vodafone technical support on February 24 to express my concern about the unexplained drops extracted the good news that they had located a problem that had been the root cause of many customer complaints and that a fix was in place. No future outages were to be expected – or so I was reassured by the nice man from Vodafone support.

And so we moved to the time scheduled for the webinar. All proceeded smoothly until I started to speak and then the connection dropped. I didn’t notice for a moment or two so had the sweet experience of speaking to myself until the penny dropped. I powered the modem off and on to restore connectivity and rejoined the webinar a chastened and embarrassed speaker. Fortunately the Marathon Technology team had realized the problem and had swung into action to present another part of the webinar while I went through my connectivity crisis.

The webinar was on the topic of high availability for Exchange 2010 so I was able to make some comments about my personal lack of a highly available network connection and the need to incorporate resilience into planning for server deployment when I resumed my presentation. Without feedback from the two hundred or so listeners it’s hard to know what they thought about the outage but looking back I think that it was both funny and painful (more pain than joy however).

Losing my connection reminded me once again of our dependency on the Internet and on an infrastructure that we know little about and understand even less about how it is operated. This is something that I suspect will come into sharp focus for companies that contemplate a move to cloud-based services such as Office 365. After all, if an individual can lose connectivity to a service and spoil a webinar through a transient outage, what will be the effect of a similar outage on a company’s operations if it depends on cloud-based applications?

– Tony

Posted in Technology

Some odd Google search results


One of my ex-colleagues at HP was doing some research into how Exchange 2010 performs replication. He plugged a search request into Google and got some interesting results back, which he just had to share with me (below).

What relevance has this search?

I don’t know whether to be appalled or pleased with the result. It’s also difficult to understand how my photo popped up in search results about Exchange 2010 replication, albeit just one image amongst 100,000. In any case, it was worth a smile in the end, and it goes once again to prove that you never know what a search of the Internet will turn up…

– Tony

Posted in Technology

Exchange 2010 Public Folders: Part 3


This content comes from a chapter removed from my book Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk (including a Kindle edition). The first part of the chapter is available in this post and the second here. This post completes the chapter on public folders.

– Tony

Permissions

Public folder permissions are divided into client permissions, which allow clients to access the public folder and work with its contents, and administrative permissions, which control who has administrative rights over the folder. Public folder permissions can only be manipulated through EMS or Outlook.

Client permissions

When you create a new public folder, a default set of client permissions is applied. You can see these permissions with the Get-PublicFolderClientPermission cmdlet:

Get-PublicFolderClientPermission -Identity '\Departments\Finance\Annual Budgets'

Identity     : \Departments\finance\annual budgets
User         : Default
AccessRights : {Author}

Identity     : \Departments\finance\annual budgets
User         : contoso.com/Exchange Users/Exchange Administrator
AccessRights : {Owner}

Identity     : \Departments\finance\annual budgets
User         : Anonymous
AccessRights : {CreateItems}

The first entry means that Exchange has assigned all users the Author role. This means that all users can post new items to the public folder. Users have read access too, so they can copy items from the folder, including from the folder into a folder in their mailbox. They can also move items that they create from one public folder to another, or back into their mailbox. Authors are able to edit items that they create, but only with clients that support this capability – OWA doesn’t, but Outlook does.

The second permission tells us that the user called “Exchange Administrator” is the owner of the folder. The user that creates a folder automatically assumes this right and has total control over the folder, including the ability to delete it if they so desire.

The third permission allows anonymous users to create items in the folder. You have to set this permission on a public folder to allow non-authenticated users to email items to the folder. If you don’t expect this to happen (and haven’t mail-enabled the folder), you can remove the permission with the Remove-PublicFolderClientPermission cmdlet. For example:

Remove-PublicFolderClientPermission -Identity '\Departments\Finance\Annual Budgets' -User Anonymous -AccessRights 'CreateItems' -Server ExServer1

Client permissions are broken down into roles and individual client access rights. A role spans one or more client access rights and is therefore a convenient way to assign multiple client access rights in one operation. In the examples that we have discussed to date, CreateItems is a client access right that explicitly allows a client to create items in a public folder. On the other hand, Owner and Author are roles that both include the CreateItems client access right. Table 1 lists all of the available client access rights and the roles to which they are assigned.

Table 1 Public folder client access rights

Owner: CreateItems, ReadItems, CreateSubFolders, FolderOwner, FolderContact, FolderVisible, EditOwnItems, EditAllItems, DeleteOwnItems, DeleteAllItems
PublishingEditor: CreateItems, ReadItems, CreateSubFolders, FolderVisible, EditOwnItems, EditAllItems, DeleteOwnItems, DeleteAllItems
Editor: CreateItems, ReadItems, FolderVisible, EditOwnItems, EditAllItems, DeleteOwnItems, DeleteAllItems
PublishingAuthor: CreateItems, ReadItems, CreateSubFolders, FolderVisible, EditOwnItems, DeleteOwnItems
Author: CreateItems, ReadItems, FolderVisible, EditOwnItems, DeleteOwnItems
Non-EditingAuthor: CreateItems, ReadItems, FolderVisible
Reviewer: ReadItems, FolderVisible
Contributor: CreateItems, FolderVisible
None: FolderVisible

Public folder client permissions are updated with the Add-PublicFolderClientPermission cmdlet. For example:

Add-PublicFolderClientPermission -Identity '\Departments\Finance\Annual Budgets' -User Redmond -AccessRights 'Owner' -Server ExServer1

We reverse the process to remove client permissions on a public folder and use the Remove-PublicFolderClientPermission cmdlet.

Remove-PublicFolderClientPermission -Identity '\Departments\Finance\Annual Budgets' -User Redmond -AccessRights 'Owner' -Confirm:$False
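
Because the AccessRights parameter accepts a list of individual rights as well as a role, you can grant precisely the access that you need. A minimal sketch (the user name is hypothetical):

# Grant read-only access by listing individual rights; this combination equates to the Reviewer role
Add-PublicFolderClientPermission -Identity '\Departments\Finance\Annual Budgets' -User 'Jones' -AccessRights 'ReadItems','FolderVisible'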

While you can cheerfully amend client permissions on a public folder, matters often are not straightforward because public folders operate in a hierarchy where child folders inherit the permissions of their parent. You might amend the permission on one folder and omit to make the same change on another and expose information to unwanted access. The code to amend client permissions on a folder and be sure that all of the permissions are respected by child folders is complicated. For this reason, Microsoft provides two scripts with Exchange to allow you to replace a user with another user for a folder and to replace a user’s permission with another set of permissions on a folder. Both scripts perform recursive updates to ensure that the same changes are applied to child folders. The scripts are:

  • ReplaceUserWithUserOnPFRecursive.ps1 (replace complete user with another)
  • ReplaceUserPermissionOnPFRecursive.ps1 (replace permissions for a user)
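
As a sketch of how these scripts are invoked (the folder path and user names are hypothetical, and the parameter names are my recollection – check the script headers before running anything in production):

# Replace user Smith with user Jones on \Departments and recursively on every child folder
.\ReplaceUserWithUserOnPFRecursive.ps1 -TopPublicFolder '\Departments' -UserOld 'Smith' -UserNew 'Jones'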

The Public Folder Settings Wizard provided in Exchange 2010 SP1 (Figure 1) makes the process even easier. You can opt to set permissions on a single public folder or on a folder and all its children. In addition, you can assign permissions to multiple users at one time and select a different set of permissions to assign to each user.

Figure 1: Assigning permissions to a public folder with the Exchange 2010 SP1 wizard

Like mailboxes and other mail-enabled objects, you can assign permission for a user to send email on behalf of a mail-enabled public folder. This requires you to grant the Send As Active Directory right to the user. Once the right is granted, the user can select the public folder from the GAL and address a message from the folder, as shown in Figure 2.

Figure 2: Sending a message on behalf of a public folder

Replies sent in response to the message are delivered to the public folder. The Add-AdPermission cmdlet is used to grant the Send-As right, as shown in this example:

Add-AdPermission -ExtendedRights 'Send-As' -Identity 'CN=Annual Budgets,CN=Microsoft Exchange System Objects,dc=contoso,dc=com' -User 'Redmond, Tony'

Administrative permissions

Public folder administrative permissions are assigned through RBAC roles or specifically to the owner of a folder. We can use the Get-PublicFolderAdministrativePermission cmdlet to view the administrative permissions on a folder. If we do this for the folder whose client permissions we have been working with, we see:

Get-PublicFolderAdministrativePermission -Identity '\Departments\Finance\Annual Budgets' | Select User, AccessRights | Format-Table -AutoSize

User                               AccessRights
----                               ------------
CONTOSO\Organization Management    {ViewInformationStore}
CONTOSO\Public Folder Management   {ViewInformationStore}
CONTOSO\Organization Management    {AdministerInformationStore}
CONTOSO\Public Folder Management   {AdministerInformationStore}
CONTOSO\Organization Management    {ModifyPublicFolderACL}
CONTOSO\Public Folder Management   {ModifyPublicFolderACL}
CONTOSO\Organization Management    {ModifyPublicFolderQuotas}
CONTOSO\Public Folder Management   {ModifyPublicFolderQuotas}
CONTOSO\Organization Management    {ModifyPublicFolderAdminACL}
CONTOSO\Public Folder Management   {ModifyPublicFolderAdminACL}
CONTOSO\Organization Management    {ModifyPublicFolderExpiry}
CONTOSO\Public Folder Management   {ModifyPublicFolderExpiry}
CONTOSO\Organization Management    {ModifyPublicFolderReplicaList}
CONTOSO\Public Folder Management   {ModifyPublicFolderReplicaList}
CONTOSO\Organization Management    {ModifyPublicFolderDeletedItemRetention}
CONTOSO\Public Folder Management   {ModifyPublicFolderDeletedItemRetention}
CONTOSO\Exchange Servers           {AllExtendedRights}
CONTOSO\E14Admin                   {AllExtendedRights}
CONTOSO\Organization Management    {AllExtendedRights}
CONTOSO\Exchange Trusted Subsystem {AllExtendedRights}
CONTOSO\Enterprise Admins          {AllExtendedRights}
CONTOSO\Domain Admins              {AllExtendedRights}

This data means:

  • Members of the Organization Management and Public Folder Management RBAC role groups have the rights to manage the properties of the public folder and to perform tasks such as modifying the client permissions list, the quota for the folder, the administrative permissions, and the expiry settings for the folder and its content, adding or removing replicas from the folder replica list, and changing the deleted items retention setting for the folder. To discover the users who have these rights through their membership of a role group, type:

Get-RoleGroupMember -Identity 'Public Folder Management'

  • Specific permissions are assigned to individual accounts or groups. These rights allow the accounts or groups to work with the folder at the specified level. AllExtendedRights is specified here, meaning that all administrative permissions are available, but you can allow or deny individual permissions as required. The folder owner is automatically included (E14Admin in this example) as are the members of the Enterprise Admins and Domain Admins group. You can also see that the Exchange Trusted Subsystem is included to allow Exchange to perform background maintenance on public folders. To check the owners assigned to a public folder, type:

Get-PublicFolderAdministrativePermission -Identity '\Departments\Finance\Annual Budgets' -Owner

The easiest way to assign public folder administrative permissions is to add users to the Public Folder Management role group. For example, to add user Smith to the Public Folder Management role group:

Add-RoleGroupMember -Identity 'Public Folder Management' -Member Smith

Likewise, the easiest way to remove administrative permissions is to remove users from the Public Folder Management group.

Remove-RoleGroupMember -Identity 'Public Folder Management' -Member Smith

Membership of the Public Folder Management role group allows a user to manage the entire public folder hierarchy. You only need to resort to assigning specific administrative permissions if you want to allow someone to manage a sub-section of the hierarchy, anything from a single folder to a complete set of folders and their children. In this scenario, the Add-PublicFolderAdministrativePermission and Remove-PublicFolderAdministrativePermission cmdlets provide the way to control permissions. Note that you can only add or remove permissions; there is no way to update or change an existing permission in place.

For example, if we wanted to allow a user to have administrative permissions over the Finance folder, we’d use a command like this:

Add-PublicFolderAdministrativePermission -Identity '\Departments\Finance' -User 'Redmond' -AccessRights 'AllExtendedRights'

The permission is removed with:

Remove-PublicFolderAdministrativePermission -Identity '\Departments\Finance' -User 'Redmond' -AccessRights 'AllExtendedRights' -Confirm:$False

OWA and public folders

MAPI clients like Outlook have been able to access public folders since the earliest version of Exchange, and Outlook remains the most functional client in that you can set permissions for a folder, view the storage used by a folder, and so on. OWA has had a somewhat erratic history with public folders, with features missing or support completely non-existent, as happened in the RTM version of Exchange 2007. Public folders are supported by OWA 2010 and present much the same attractive interface as mailbox folders do (Figure 3). It’s important to understand that before OWA 2010 can connect to a public folder, a replica of that folder must exist on an Exchange 2010 mailbox server. OWA 2010 is not able to access public folders on Exchange 2003 or Exchange 2007 servers.

Figure 3: Accessing public folders with Outlook Web App
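
Because OWA 2010 needs that local replica, it’s worth checking the replica list for any folder that OWA users will access, and adding a replica in an Exchange 2010 public folder database if one is missing. A hedged sketch (the folder path and database name are hypothetical):

# Check where replicas of the folder currently exist
Get-PublicFolder -Identity '\Departments\Finance' | Select-Object Name, Replicas

# Add a replica in an Exchange 2010 public folder database so that OWA 2010 clients can connect
Set-PublicFolder -Identity '\Departments\Finance' -Replicas @{Add='PF Database 2010'}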

The biggest limitation is that public folders are presented in a separate window from the rest of OWA. This means that while you can post items directly into public folders, you cannot drag and drop items from mailbox folders into a public folder. The workaround is to go to the mailbox folder, select the item that you want to move, right-click, and then select the Move To Folder or Copy To Folder option followed by the target public folder (Figure 4). And of course you can always email items to public folders, provided that they are mail-enabled.

Figure 4: Moving an item into a public folder with Outlook Web App

Apart from not being able to drag and drop items or files into a public folder, the other minor irritant that users notice most often is the delay that can exist before new content is available across the different replicas. Remember, the default replication schedule updates replicas every 15 minutes, so it is possible that a user will have to wait up to 15 minutes before they see an item. The solution is to decrease the replication interval so that items are copied more quickly, if available bandwidth allows. I’ve also noticed that new folders don’t always show up quickly in the hierarchy despite immediate replication and that I have to refresh the folders to get them to show up. This isn’t a big problem and it’s certainly one that is very livable with, given that not many new public folders are created today.
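
If faster convergence matters, the interval can be reduced at the public folder database level. A minimal sketch (the database name is hypothetical, and ReplicationPeriod is expressed in minutes):

# Replicate new public folder content every 5 minutes instead of the default 15
Set-PublicFolderDatabase -Identity 'PFDB01' -ReplicationPeriod 5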

Public folder connections and the CAS

Incoming connections from Outlook clients to mailboxes are served by the RPC Client Access layer running on a CAS server and not by the server that hosts the mailbox. However, if an Outlook client needs to connect to a public folder, it continues to connect directly to the nearest mailbox server that hosts a replica of the public folder. The logic in excluding public folders from the new database replication mechanism is not that the data held in public folders is any less important than mailbox data. Instead, it’s that public folders have had their own multi-replica replication model since the first version of Exchange. This replication works well and there’s no real need to replace it at this point. However, if you want to provide high availability for public folders, you have to configure multiple replicas and ensure that more than one replica exists in each site. Microsoft’s view is probably fair: by leaving public folder replication alone, they avoided a great deal of engineering effort that might have introduced instability into Exchange 2010, but it does create an interesting split between two separate replication mechanisms in the product.

Fault-tolerant public folders

Public folders have their own form of replication that is item-based rather than using asynchronous log replication as in a DAG. Some commentators have asked why Microsoft did not change public folder replication to use log shipping. The answer varies depending on whether it is delivered by an engineering or marketing spokesperson or if you care to read between the lines, but basically you can interpret Microsoft’s position as follows:

  • Public folders have had multi-master replication since Exchange 4.0; replication today essentially works in the same way that it worked in 1996, so given that this mechanism has stood the test of time, why would you change it?
  • Changing the replication mechanism would introduce unnecessary complication, costs, and risks into the engineering schedule for an area that is non-core for Exchange.
  • The existing replication mechanism works smoothly between Exchange servers of different versions. Changing the mechanism would require some method of reconciling the differences between the “old” and the “new” versions, including the issue of dealing with multiple changes applied to a single item from different public folder servers.

The replication mechanism for public folders is unlikely to change anytime soon. Bearing this point in mind, you can update the advice about deploying public folders in a fault-tolerant manner that has been used since 1996 as follows:

  • Whenever possible, deploy separate servers for mailbox and public folders.
  • Deploy at least two public folder servers per site and replicate important folders to all servers to ensure that clients can access a local replica even when one server is unavailable (see the sketch after this list).
  • Analyze public folder usage on a regular basis and do not allow folders to proliferate without good reason; eliminate unused folders as soon as you can to restrict replication traffic and remove network load. Many other platforms (such as SharePoint) provide superior features for requirements such as shared document management, so always consider the alternatives before you create a new public folder.
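
To replicate a branch of the hierarchy to an additional server, the AddReplicaToPFRecursive.ps1 script described in Table 2 below does the recursive work. A sketch on the assumption that the parameter names match the shipped script (the folder path and server name are hypothetical):

# Add ExServer2 to the replica list of \Departments and all of its child folders
.\AddReplicaToPFRecursive.ps1 -TopPublicFolder '\Departments' -ServerToAdd 'ExServer2'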

The ExFolders utility

Over the years, Microsoft has released a number of tools to help administrators manage public folders. The tools were originally developed and used by Microsoft Support to help them debug problems with public folder permissions, replication, and so on. PFAdmin for Exchange 2003 was followed by PFDAVAdmin in Exchange 2007. However, the deprecation of WebDAV means that the PFDAVAdmin tool does not work with Exchange 2010 and so it has been replaced with the ExFolders tool for Exchange 2010. ExFolders is essentially a port of PFDAVAdmin, updated and improved to deal with the Exchange 2010 database structures.

ExFolders doesn’t come with a very sophisticated installation procedure: you download it from the Microsoft web site, place the executable in the Exchange binaries directory, and launch the program. You then connect to either a mailbox or a public folder database and a domain controller. The domain controller provides access to the Exchange configuration data. After that, it’s a matter of navigating to the point in the folder or mailbox hierarchy that you’re interested in. ExFolders does a good job of revealing information in a more user-friendly and graphical manner than EMS. Figure 5 shows how to list permissions on a public folder. It’s the same data as you see with the Get-PublicFolderClientPermission cmdlet, but it’s a lot easier to view and understand.

Figure 5: Viewing public folder permissions with ExFolders

Unlike Exchange’s Public Folder management console, ExFolders is able to list some information about the items that are in a public folder (Figure 6). Even better, ExFolders is able to view deleted folders and restore them. Sometimes the restore isn’t possible, but when it is, it’s a real lifeline that rescues you from the embarrassment caused by a mistaken folder deletion.

Figure 6: Listing items in a public folder with ExFolders

In addition to public folder databases, ExFolders can open and explore mailbox databases. Many interesting items can be discovered in mailboxes, such as the structure of the mailbox and the many hidden items that are stored in the mailbox root to be used for Exchange internal processing. Figure 7 is a good example. You can see the two move reports that Exchange holds for a mailbox. You can also see the dumpster folder structure (under Recoverable Items), the folders used for Reminders and Views, and the client-visible folder structure under Top Of Information Store.

Figure 7: Viewing mailbox data with ExFolders

Even if you’re not interested in public folders, ExFolders is a very valuable tool for an Exchange administrator. It doesn’t come with formal support and it doesn’t work so well from time to time, but these small deficiencies are absolutely unimportant in the context of the value that the program delivers.

Scripts provided for public folders

The Exchange development group includes a set of scripts in the installation kit that they have developed to help with public folder management. When you install a mailbox server, the scripts are placed into the \Scripts directory under the Exchange \V14 root. The scripts demonstrate that some public folder management tasks are complex, especially those that deal with deep folder hierarchies. All of the work done by the scripts is accomplished by the cmdlets that we have discussed in this section and, while they might not solve all the problems that you encounter, the scripts can be customized to extend their functionality or to address some important but esoteric detail of your deployment.

Table 2 offers a brief description of the scripts provided with Exchange and their use. (See the Exchange help file for further information on the input parameters, or simply browse through the script code to see how it works.) The Exchange scripts directory should be in the path that Windows PowerShell uses to locate scripts, so you should be able to type in the name of the script to execute it. If not, position yourself in the directory where the scripts are located and prefix the name of the script that you want to execute with ‘.\’, or provide the full path to the script. For example:

C:> .\AggregatePFData.ps1

Or

C:> & 'C:\Program Files\Microsoft\Exchange Server\V14\Scripts\AggregatePFData.ps1'

Table 2 Public folder administration scripts

Script Use
AddReplicaToPFRecursive.ps1 Adds a new server to the replica list for a folder and all its child folders. A public folder database must be present on the target server.
AggregatePFData.ps1 Aggregates public folder data for all replicas within an organization to provide a view of the amount of data held in the folders, the folder owners, and the last access time. The output for this script is best directed to a text or CSV file for later analysis. The version provided with Exchange 2010 RTM extracts data from just one public folder server while the version in Exchange 2010 SP1 extracts information from all folder replicas.
RemoveReplicaFromPFRecursive.ps1 Reverses the work of AddReplicaToPFRecursive.ps1 by removing the public folder database on a server from the replica list of a folder and all its child folders.
MoveAllReplicas.ps1 Replaces the public folder database hosted by a server with another in the replication list for all public folders. This script is typically used when you want to decommission a server that hosts a public folder database and you first need to move all the replicas from that server.
ReplaceReplicaOnPFRecursive.ps1 Replaces the public folder database on a server with another in the replica list for a folder and all its child folders.
AddUsersToPFRecursive.ps1 Add a user and specified permissions for that user to a folder and all its child folders.
ReplaceUserWithUserOnPFRecursive.ps1 Removes one user and replaces them with another in the permissions list for a folder and all its child folders.
ReplaceUserPermissionOnPFRecursive.ps1 Replaces the permissions for a specified user with new permissions for a folder and all its child folders.
RemoveUserFromPFRecursive.ps1 Removes a user from the permissions list for a folder and all its child folders.

Migrating public folder content

No one has yet developed a magical silver bullet to migrate the data and applications that companies have accumulated in public folders since 1996. Every deployment uses public folders differently and every company needs to develop their own plan.

The first challenge is to understand the inventory of public folders that exist and then identify the folders that are actually in use. You can use the AggregatePFData script, or indeed the Get-PublicFolderStatistics cmdlet, to create a report about the current set of public folders. Such a report is a good first step to understand the folders that exist and the ones that are actually in use. However, you’ll still be missing some data such as why the public folder was created, who created the folder and who manages it now, whether the folder is associated with an application and what functionality is represented by the application, or anything about the business value of the data held in the folder. These questions have to be answered through patient and detailed investigation.
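
A minimal sketch of such a report (the property names are my recollection of the Exchange 2010 output – verify with Get-Member – and the output path is hypothetical):

# Export a basic inventory of folders, item counts, sizes, and last access times for later analysis
Get-PublicFolderStatistics | Select-Object Name, FolderPath, ItemCount, TotalItemSize, LastUserAccessTime | Export-Csv -Path 'C:\Temp\PFInventory.csv' -NoTypeInformation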

As part of the inventory process, it is helpful to categorize public folders according to their use as this may determine whether it is worthwhile to move the data and what suitable platforms exist as a migration target. Common uses for public folders include:

  • Repositories for forms-based applications, sometimes workflow in nature. These applications are much more common in companies who deployed the first few versions of Exchange (pre-Exchange 2000) because of the heavy emphasis that Microsoft then placed on electronic forms linked to public folders.
  • Project document repositories. These folders store an eclectic set of items in varying formats from mail messages to Word documents to Excel worksheets. Users post items to the folders on an on-demand ad hoc basis by including the folder in project distribution groups, creating new post items in the folder, or dragging and dropping from Outlook or Windows Explorer.
  • Archives for discussion groups. Exchange has never offered list server functionality, but people created their own basic implementation by using distribution groups as the list and a public folder (automatically copied on every post by including the SMTP address of the folder in the list) as the list archive.

One option is to ignore the data in public folders on the basis that it is old and of no relevance to current company operations. This might be true, but it might also be true that some data is required to comply with regulatory or legal requirements, so you need to know if any data falls into this category. The only cost for this option is some storage to maintain the public folder databases.

Another option is “degrade in place.” This means that you leave the data and applications in place until users stop accessing them. Perhaps you run a report every month to identify folders that have not been accessed in the last 180 days and then export the contents of these folders to a PST and write the PST to DVD before you remove the folder and all its replicas.
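
A hedged sketch of that monthly check (the 180-day threshold comes from the example above; the property name is an assumption to verify against your server):

# List folders that no user has accessed in the last 180 days as candidates for export and removal
Get-PublicFolderStatistics | Where-Object { $_.LastUserAccessTime -lt (Get-Date).AddDays(-180) } | Select-Object FolderPath, LastUserAccessTime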

The third option is to seek another platform for the public folder content and applications and to move everything there, allowing you to decommission public folders and remove them completely from Exchange. While Microsoft doesn’t provide any public folder migration utilities, third-party tools exist, such as Quest Software’s Public Folder Migrator for SharePoint. As the name implies, this utility moves content from public folders to SharePoint team sites or portals. Over the last few years, SharePoint has become the most popular platform considered for public folder migration, partly because tools have slowly become available to enable the move and partly because the Microsoft Exchange and SharePoint engineering groups have given hints that they prefer this option. See this post for some evidence on this point. It’s also true that migrating to another Microsoft solution often aligns well with the IT strategy of companies.

If you have developed code for use with public folders, you are going to have to redevelop the application for use on a new platform. While it’s relatively easy to move data to a new platform, it’s much more difficult to handle the accompanying logic embedded in code that was probably written several years ago. No tools exist today to address this need.

The bottom line is that public folders are well past their sell-by date as an application platform. There is no good reason to continue to invest in public folders, and you need a plan to move off them before the curtain comes crashing down. Choose a new platform and invest your time in tailoring it for your purpose. It will be time that generates more rewards than trying to extract the last possible gasps out of public folders.

Posted in Exchange, Exchange 2010

Join me at the Marathon Technologies webinar on 24 February 2011


Many companies ask me to look at their technology, especially when it’s relevant to Exchange 2010. Typically there’s a request to get involved by assessing the technology and perhaps writing about it. Most of the time I politely decline because I don’t have the time or I don’t care to become involved for another reason. However, I do like to understand “interesting” technology and position it in a practical sense for others, which leads me to why I’ll be presenting at a webinar at 16:00 (Ireland), 11:00am (Eastern), and 8:00am (Pacific) on Thursday, 24 February.

The webinar is sponsored by Marathon Technologies, who have an interest in bringing their EverRun MX technology to the attention of the Exchange community, especially companies in the small-to-medium business sector who are planning to migrate to Exchange 2010 from a legacy version. I’ve spent some time understanding the value proposition that Marathon brings to the table and believe that it is sufficiently interesting for me to participate in the webinar. Basically, I’ll talk about the major developments I expect in the Exchange market in 2011 and why I think there’s value to be gained in assessing the deployment of high availability technology like EverRun to complement the application-specific HA features of Exchange 2010.

If you’d like to join me at the webinar, please go to this site and register. The webinar is free, so all you need to invest is some time. I don’t pretend that this technology is for everyone, but given the prediction from Ian Hameroff of Exchange product management fame that 60% of organizations currently running Exchange 2003 or Exchange 2007 plan to upgrade in the next year, it’s certainly a good time to assess such technology so that it can be incorporated, when appropriate, into the migration plan.

The text from the email invite sent by Marathon is given below as a public service!

– Tony

2011 is a year of change and choice for organizations that depend on Exchange Server. Exchange is mission-critical business infrastructure; as essential to the continuity of your business as electricity and telecommunications. Think back to the last time your organization lost email access.  Not a good scene, was it?

So what’s ahead for Exchange Server?

Join Tony Redmond, one of the world’s leading Microsoft Exchange Server experts, as he shares predictions and practical advice for selecting, optimizing and protecting Exchange Server.

Marathon Technologies Microsoft MVP Series Webinar:
“Microsoft Exchange Server: Predictions, Practicalities and Availability”

Date: Thursday, February 24th
Time: 11 a.m. ET

Register For Our Webinar

Tony will elaborate on four major Exchange Server developments in 2011:

  • Migration. Mainstream support for Exchange 2003, the majority of the Exchange installed base, ended in 2009. Do you move to Exchange 2010? And when? What are the trade-offs?
  • Refresh. Migrations always mean some refreshes, but Exchange 2010 will require more than previous versions, including potentially significant core infrastructure optimization. What else should you think about when upgrading front- and back-end servers, OS and storage?
  • Virtualization goes mainstream. Consolidation is only the beginning. Where do you start?
  • Availability is a key business requirement. What are the benefits of the Database Availability Group (DAG)? What other steps can you take to fully-protect Exchange Server?

Eliminate Unplanned Exchange Server Downtime
No measure of Exchange performance is more important than availability. Exchange 2010 provides better native availability options than earlier versions, but DAG addresses mailbox database availability only. There’s much more to consider. A majority of the Exchange installed base are on 2003 which has even more limited availability features. Version upgrade presents the chance to improve Exchange Server availability. Why settle for usually available when you can achieve continuous, uninterrupted availability?

Upgrading to Exchange 2010? Now is the time to upgrade availability.

Already running Exchange 2010? Complement DAG with broader system protection.

Learn How By Attending,
“Microsoft Exchange Server: Predictions, Practicalities and Availability” Webinar

Register Today!

Hosted by Marathon Technologies
Only everRun MX with ComputeThru™ technology can ensure continuous availability of Exchange Server. Marathon Technologies enables Exchange pros to provide uninterrupted Exchange Server uptime by preventing downtime, not just reacting to it.

Seeing is Believing
Watch a live, on-camera demo of everRun® MX from Marathon Technologies, the only fault-tolerant software that unconditionally ensures Exchange Server availability. Watch as an Exchange Server pair protected by everRun MX experiences failure of component architecture elements, culminating with total disruption of power, but computes through without data corruption or a single lost transaction.

Posted in Exchange, Exchange 2010

First look: HP E5000 Messaging System


On January 21, I blogged about the joint HP-Microsoft announcement of the HP E5000, the first messaging appliance-type system specifically designed to run Exchange 2010. This week I’ve had the chance to spend some time in Cupertino, CA working with HP and Microsoft representatives from the team that created the E5000, which will ship on March 1, 2011.

In fact, HP is already shipping production units to channel partners in preparation for the formal unveiling on March 1. Full details of the available configurations will be available on the HP web site then. However, you won’t be able to buy these systems online from hp.com as they are only available through resellers. This makes a lot of sense, as a reseller can help customers prepare for the deployment of Exchange 2010 if they don’t have sufficient knowledge to plan for and then support the software.

At this stage of its development, a “messaging appliance” is still a tightly-bundled package of software and hardware designed to support a specific version of Exchange rather than something like a Tivo or other consumer-grade appliance that typically just works (or not) when it’s turned on. An administrator still has to prepare Active Directory and an Exchange organization to allow the E5000 to be deployed. And of course, if you’re migrating from a legacy platform such as Exchange 2003, you’ll have to deal with all of the tasks necessary to connect Exchange 2003 to Exchange 2010 and move mailboxes to the newly-built databases in the Database Availability Group (DAG) that’s running on the E5000.

Paul Robichaux, Brian Desmond, and Tony Redmond under blinding studio lights during filming of the HP E5000 video

Fellow MVPs Paul Robichaux and Brian Desmond joined me in Cupertino to look over the E5000 and discuss the most likely deployment scenarios with HP and Microsoft. We also considered potential enhancements that might be incorporated into future versions of a messaging appliance. We had many good discussions, and HP took the opportunity to make some videos of us quizzing HP and Microsoft product management about the E5000 and testing its resilience against common failure conditions, such as the complete failure of a disk holding a mailbox database and the removal of one of the blades to mimic the failure of an entire server. I believe that you’ll be able to see the finished videos at trade shows and various places on the web. There may even be some “blooper” videos to view – believe me, quite a few mistakes were made during the taping.

Karl Robinson (HP) and Paul Robichaux being filmed while working with the E5000

Getting back to the subject in hand, the basic E5000 is built around a new 3U form factor chassis equipped with two ProLiant c-class blade servers. These are built on HP’s G6 blade platform rather than the latest G7 model, the reason being that the G7 only became available late in the E5000 development cycle. There’s plenty of power and flexibility in the G6 platform, so using it won’t make a practical difference in production.

Different CPU, disk, and memory configurations are available in the three models that make up the E5000 range. The E5300 is designed to support 500 mailboxes, the E5500 to support 1,000 mailboxes, and the E5700 to support 3,000 mailboxes. These numbers are based on a relatively heavy user profile (200 messages per day). Obviously, the servers will be able to handle the load generated by different user populations whose members send more or fewer messages daily.

The E5500 and E5700 both allow you to select variants with 1TB or 2TB drives depending on whether you want users to have 1GB or 2GB mailboxes. The E5300 uses 12 internal SFF drives while the E5500 uses 16. The big E5700 expands its storage with two additional building blocks (to form a 7U unit in the rack) to accommodate 40 disks, 36 of which are available for email. The drives are configured with RAID-1 and are hot-swappable. The storage is configured by a deployment assistant and laid out to support the necessary databases and database copies within the DAG shared by the two Exchange servers. Each model is configured with a different number of databases to support the desired number of mailboxes.

If you still need a public folder database to support Outlook 2003 clients, it can be placed on the system drive. This database will contain the system folders used by Outlook 2003, such as free/busy and the OAB. The E5000 isn’t intended to support a fully-populated public folder database that might contain thousands of replica folders, but there’s certainly nothing to stop you creating such a database and placing it on one of the disks intended to support a mailbox database.
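
Creating such a database is a quick job in the Exchange Management Shell. Here’s a minimal sketch – the server, database, and path names are invented for illustration:

    # Create a public folder database on the system drive (names/paths are examples)
    New-PublicFolderDatabase -Name "PF1" -Server "E5000-MBX1" `
        -EdbFilePath "C:\ExchangeDatabases\PF1\PF1.edb"

    # Make it the default public folder database for a mailbox database so that
    # Outlook 2003 clients can reach free/busy and the OAB
    Set-MailboxDatabase "DB01" -PublicFolderDatabase "PF1"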

You’re not limited to deploying the E5000 as a two-server DAG because it’s easy to join appliances with other appliances or standard servers to build out a larger DAG. In fact, one of the potential uses of the E5000 is to provide the hardware for a disaster recovery (DR) site that’s part of a large DAG. The appliance certainly has enough power to handle a pretty significant user load that might normally be spread across larger servers.

Out of the box, the E5000 runs Exchange 2010 SP1 with RU1 on Windows Server 2008 R2. HP has built in the capability to check for and apply firmware updates to keep the hardware current, but the administrator will have to keep the software updated with appropriate hot fixes, roll-up updates, and service packs using their chosen deployment mechanism. In particular, you’ll want to update Exchange with the latest roll-up update to make sure that you’re running the latest and greatest software.
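
If you want to verify what’s actually running after patching, a quick check from the Exchange Management Shell does the job. A small sketch – note that roll-up updates don’t bump the AdminDisplayVersion property, so the file version of ExSetup.exe is the more reliable indicator:

    # Show the service pack build on each server (roll-ups don't change this value)
    Get-ExchangeServer | Format-Table Name, AdminDisplayVersion -AutoSize

    # The file version of ExSetup.exe does reflect roll-up updates (run locally)
    Get-Command ExSetup.exe | ForEach-Object { $_.FileVersionInfo }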

Some might question this and ask why a device designed as an appliance can’t be more automated in its management of software updates. This is a fair question, but perhaps the framework of messaging appliances built to run Exchange is still not mature enough to deal with the many varied ways that customers keep software updated on servers. Some companies would welcome total automation of updates and would view this as a real positive for appliances like the E5000; other companies want to control the software versions that run within their IT infrastructure and would resist any attempt to automate updates. In any case, the bottom line is that you have to apply whatever updates you require to the E5000 as Microsoft releases them.

Microsoft and HP have worked together to optimize Exchange 2010 on the E5000. HP clearly did a lot of work on the hardware, including the development of a new storage controller to fit into the 3U form factor. Along with the chassis, the storage controller is the only new component in the E5000 – the vast majority of the other bits that make up the appliance come from the HP parts bin and are well known to anyone who has worked with c-class ProLiant blades. Microsoft helped HP to tune the controller to handle the I/O profile for Exchange 2010. Both companies say that there’s enough disk and I/O capacity to allow a single server to handle the number of mailboxes for which the various E5000 models are configured. In fact, given that it will take time for users to accumulate enough email to fill their 2GB+ mailboxes, the E5000 should be able to handle more than the rated number of mailboxes without breaking sweat.

HP also developed the deployment assistant that sets everything up on the E5000 and removes a lot of the mundane, boring, but essential-to-get-right tasks that make setting up an Exchange server a less than interesting activity after the first time you do it. Automation is almost always a good thing, and having a deployment assistant take care of all of the work to configure Exchange, set up the disks (LUNs, mount points, etc.), create the DAG including the configuration of both MAPI and replication networks on the two servers, name the mailbox databases, set up database copies, and so on is a very nice feature of the E5000.
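
To give a sense of what the assistant saves you from, here’s a rough sketch of the equivalent manual steps in the Exchange Management Shell. This is my illustration rather than anything HP actually runs, and all of the server, witness, and database names are invented:

    # Create the DAG with a file share witness
    New-DatabaseAvailabilityGroup -Name "E5000-DAG" `
        -WitnessServer "FS01" -WitnessDirectory "C:\DAGWitness\E5000-DAG"

    # Add both blade servers to the DAG
    Add-DatabaseAvailabilityGroupServer -Identity "E5000-DAG" -MailboxServer "E5000-MBX1"
    Add-DatabaseAvailabilityGroupServer -Identity "E5000-DAG" -MailboxServer "E5000-MBX2"

    # Create a mailbox database on the first server and seed a copy on the second
    New-MailboxDatabase -Name "DB01" -Server "E5000-MBX1" `
        -EdbFilePath "C:\ExchangeDatabases\DB01\DB01.edb"
    Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "E5000-MBX2"

Multiply the last two commands by the number of databases in the model’s layout, add the network configuration, and you can see why a wizard that gets it right every time is welcome.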

The E5000 is really designed to be a mailbox server, but both the CAS and HT roles are installed to provide a fully functional server out of the box. The CAS servers are also incorporated into a CAS array. If you want to run pure mailbox servers, it’s a simple matter of running the Exchange setup program to remove the CAS and HT roles. It’s possible to install and run the UM role on the E5000, but this hasn’t been tested by HP.
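
If memory serves, removing the extra roles is a single unattended setup run along these lines (check setup’s help for the exact switches on your build):

    # Remove the Client Access and Hub Transport roles, leaving a pure mailbox server
    # (run from the Exchange installation media or the server's Exchange bin directory)
    .\Setup.com /mode:Uninstall /roles:ClientAccess,HubTransport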

If you’re looking to create a truly highly available mail server, you need to put a hardware load balancer in front of the E5000, as a CAS array only provides a single point of contact for incoming clients; it doesn’t do anything like automatic load balancing or detection of a failed server. A low-end load balancer would probably be sufficient unless your company has access to more sophisticated devices such as an F5 BIG-IP, which is capable of handling tens of thousands of inbound client connections.
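
For completeness, a CAS array is created in the shell and the mailbox databases are then pointed at it; the load balancer answers on the array’s FQDN. Another sketch with made-up names:

    # Create the CAS array with the FQDN that clients will connect to
    New-ClientAccessArray -Name "E5000 CAS Array" -Fqdn "outlook.contoso.com" `
        -Site "Default-First-Site-Name"

    # Point each mailbox database at the array so Outlook profiles use it
    Set-MailboxDatabase "DB01" -RpcClientAccessServer "outlook.contoso.com"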

The E5000 includes three years’ worth of support from HP in its price. The idea is that you have a single point of contact for hardware and software rather than having to ring both Microsoft and HP if you have a problem.

As a first-version product, the E5000 isn’t perfect, and the MVPs certainly had a number of comments and suggestions for the HP and Microsoft teams to take into account as they plan future enhancements. However, it’s a really well put-together Exchange 2010 server based on solid and well-known hardware components that is easy to deploy and run. The E5000 will probably not feature in many large-scale Exchange 2010 deployments, but I bet that it will be a success in its target market of 500- to 10,000-seat implementations, especially for Exchange 2003 customers seeking to deploy Exchange 2010 and achieve high availability in an easy and cost-effective manner.

– Tony

Read Paul Robichaux’s report for another view on the E5000. It covers some points that differ from the ones described above.


TMO’ing Six Nations rugby in Twickers


It’s been a while since I reported on rugby activities, so here goes. Saturday last (February 12) saw me at Twickenham, London for the England vs. Italy match in the RBS Six Nations championship. Whilst I have been the TMO for other internationals, this was my first Six Nations game and my first visit to “HQ” or “Twickers”, so it was something to which I had been looking forward.

As things turned out, I didn’t have much to do as England ran out easy 59-13 winners. Despite nine tries being scored during the game, the referee (Craig Joubert from South Africa) had no need to refer any of them to me for review, so all I had to do was keep time.

Unlike Heineken Cup games, there is no separate timekeeper and the TMO is given a rather splendid Heath Robinson device to control the stadium clock. The device features a large green button to start the clock and a red button to stop the clock together with a PC running a Windows program that’s used to set the clock so that it either counts down a half or shows the elapsed time.

You might think that it’s an easy task to follow the referee and stop the clock when he says “Time Off” and restart it when he says “Time On”, but referees (and TMOs) are human and mistakes are easily made if you don’t concentrate. For example, the referee might forget to say “Time Off” when an obvious injury has happened and the game is stopped to allow a player to be treated. Equally, the timekeeper might forget to restart the stadium clock when play recommences, something that happened in a Heineken Cup game in France earlier this season. Players and coaches tend to use the stadium clock to know how much time remains to play, and while the referee keeps time himself and will always be the final arbiter of how much time remains, it’s definitely not appreciated when errors creep into the game time. Total concentration was the order of the day.

Rather remarkably, we also had another PC beside the stadium clock PC that ran yet another timing system to show the clock on TV. The two PCs were not connected; the times were synchronized only through the efforts of one of the BBC outside broadcast crew, who followed me as I stopped and started the stadium clock. It seems wasteful to have two separate clocks, and the manual nature of the synchronization makes it all too easy for a difference to appear between the stadium and TV clocks. Thankfully, we kept the difference to a couple of seconds and all was well.

Given the professional nature of top-class rugby and the excellent systems that operate in stadia such as Twickenham, it’s hard to understand why we can’t have a single timing system that all can use. I guess that it’s a small thing and that the large display clocks built into different stadia use varying control systems – but maybe there’s an opportunity for a PC (or even iPhone or iPad) application to create a unified time signal that all could use. I must add it to the list of things to do… one of these days.

– Tony
