Does a suspended mailbox move request expire?


A discussion on this point recently took place via the Exchange MVP mailing list. It doesn’t contain anything secret and I think it’s worth documenting for the benefit of others, so here goes…

As you might know from this post or other sources, you can ask the Mailbox Replication Service (MRS) on an Exchange 2010 server to auto-suspend a mailbox move after the bulk of its content has been copied over to the database that will host the new mailbox. This happens when you pass the SuspendWhenReadyToComplete parameter to the New-MoveRequest cmdlet. Administrators often use this feature to have MRS copy mailbox content overnight, allowing them to check the move report before they release the request with the Resume-MoveRequest cmdlet to complete the mailbox move.
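As a quick sketch of the sequence (the mailbox and database names here are placeholders rather than anything from a real deployment):

# Copy the mailbox content but stop just short of the final incremental synchronization
New-MoveRequest -Identity 'Jane Smith' -TargetDatabase 'DB02' -SuspendWhenReadyToComplete

# Check progress and review the move report before releasing the request
Get-MoveRequestStatistics -Identity 'Jane Smith' -IncludeReport | Format-List Status, PercentComplete, Report

# When you are happy with the report, release the suspended request to complete the move
Resume-MoveRequest -Identity 'Jane Smith'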

Essentially what happens is that MRS takes an initial enumeration of all the content in the source mailbox and copies it to the database that will hold the new mailbox. When the copy is complete, the new mailbox contains all of the data that existed in the old mailbox when the copy commenced. Of course, changes might have occurred in the source mailbox after MRS enumerated the content, and that’s why MRS performs an incremental synchronization to discover and copy the changed content just before it switches the Active Directory pointers for the mailbox to complete the move. The incremental synchronization uses Incremental Change Synchronization (ICS), the same technique that Outlook uses to populate a local OST with copies of the folders in the server-based mailbox.

Auto-suspension happens before the final synchronization. A mailbox move request that is auto-suspended can remain in this state as long as you desire. Exchange 2010 doesn’t include any functionality to “clean up” old suspended move requests after a particular period. It’s entirely possible that you might have some move requests suspended and awaiting release for weeks at a time (see this post for more information about cleaning up move requests). However, there are two points that need to be considered:

  1. The longer you leave a move request suspended, the more changes are likely to occur in the source mailbox. The final incremental synchronization will therefore take longer. In reality, this shouldn’t be a major problem for intra-organization moves as MRS should be able to quickly synchronize the source and new mailboxes to complete the move. Things might take a little longer with a cross-forest move.
  2. If a move request is suspended for more than the deleted mailbox retention period, the Store will remove the new mailbox in the target database. This is because the Store considers the mailbox to be “disconnected” as no user account points to it yet. Disconnected or deleted mailboxes are automatically removed by the Store after their retention period expires (usually 30 days). You can remove a mailbox from the Store when you delete it with the Remove-StoreMailbox cmdlet (in SP1) or by passing the Permanent switch to the Remove-Mailbox cmdlet (previous versions). Removing the new mailbox isn’t a huge disaster as MRS will recreate it if it finds that the new mailbox doesn’t exist when the move request is eventually resumed. If this happens, MRS will recreate the new mailbox from scratch and populate it as before, then proceed directly to the final synchronization. (A quick way to find lingering auto-suspended requests is sketched after this list.)
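To avoid bumping into the deleted mailbox retention limit, it’s worth checking occasionally for auto-suspended requests that have been waiting a while. A rough sketch (treat the filtering and formatting as illustrative rather than a prescribed procedure):

# List the move requests that MRS has auto-suspended
Get-MoveRequest -MoveStatus AutoSuspended

# Show when each suspended request was queued so that very old ones stand out
Get-MoveRequest -MoveStatus AutoSuspended | Get-MoveRequestStatistics | Sort-Object QueuedTimestamp | Format-Table DisplayName, Status, QueuedTimestamp, PercentComplete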

I doubt that many people will leave move requests in a suspended state for more than a few days, except by accident or after they have forgotten that a move request was created with the SuspendWhenReadyToComplete flag. But it’s nice to know how MRS deals with the issue should it arise.

– Tony

For more information about how MRS works in Exchange 2010, you might consider reading chapter 12 of Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk. The book is also available in a Kindle edition. Other e-book formats for the book are available from the O’Reilly web site.


Spring Connections 2011


I’ve been attending Connections events for years, mostly since the demise of the late lamented Microsoft Exchange Conference (MEC). While MEC was an official Microsoft conference and so benefited from Microsoft’s marketing muscle and access to engineers and program managers, it eventually foundered on the twin rocks of a competing TechEd event and a lack of funding, not to mention the lack of support given by engineering groups for non-U.S. events.

In any case, Connections has provided an interesting meeting place for the Exchange community since the demise of MEC. While still mostly a U.S.-centric event (I believe that conferences will be announced for Germany and the U.K. to run sometime in summer 2011), the number of people who attend Connections has gradually grown over the years. Many people regard Connections as a place to go to feel the pulse of the Exchange community, as well as a chance to meet folks with high reputations in the field such as Paul Robichaux, Jim McBee, Michael B. Smith, Mark Minasi, Kevin Laahs, and Kieran McCorry. Note that I don’t say how these reputations were earned; that topic is worth a separate session at Connections.

I am signed up to do a keynote at Spring Connections 2011 in Orlando, FL. I’ve done a reasonable number of keynote talks at conferences over the years so getting up and talking won’t be a problem. The real challenge is determining content that will be of interest to the thousand or so people who typically come to a keynote. Sometimes Microsoft makes the problem go away by releasing a new version of Exchange – everyone is interested in listening to an assessment of what’s good and what’s not so good in a new version.

If a new version isn’t on the horizon, keynote sessions tend more toward commentary. A review of best practice is usually well received, as are sessions that offer solid frameworks for deployment, implementation, and support. Because the session is a keynote, the content has to stay at a reasonably high level; the conference sessions do a great job of diving into the weeds and providing the detailed tips and techniques that attendees need to get their job done.

Seeing that we are in the fallow period between releases of Exchange (Exchange 2010 SP1 contains a lot of new stuff, but it’s not a major release), I think that it’s time for a session that focuses on the major roadblocks that companies face as they plan for Exchange 2010. I’ve done some thinking about this and have concluded that there are six major areas that almost every company encounters:

  1. What considerations need to be taken into account when a migration to Exchange 2010 is considered?
  2. What clients should I use?
  3. Does the DAG really deliver high availability for Exchange?
  4. What new hardware do I need?
  5. What’s the impact on third-party software? (a lot of people are concerned about their investment in third-party archiving or compliance software)
  6. Should I factor Office 365 into the equation?

Of course, every company will have one or more topics that are specific to their own circumstances and need to be dealt with before a migration to Exchange 2010 is possible. But you can’t cover every single question that might arise in sixty minutes or thereabouts, so the list presented above seems to address the concerns of a wide audience (based on what I am hearing).

I’m curious as to what people would include in the list or what they’d like to hear discussed. The floor is yours…

– Tony


Beware the effects of enabling an Exchange 2010 archive mailbox


The introduction of archive mailboxes is one of the major new features offered by Exchange 2010. Matters are substantially improved in Exchange 2010 SP1 as it supports the separation of a user’s primary mailbox and their archive mailbox across different databases, giving administrators the ability to consider schemes such as dedicated archive databases or even dedicated archive servers.

Office 365 then offers the prospect of allowing users to maintain their archive mailboxes “in the cloud”. In this scenario, the primary mailbox will be located in a database running on an “on-premise” server and the user will connect to a database running on a server in a Microsoft datacenter whenever they want to access the contents of their archive. All of this depends on the deployment of Exchange 2010 SP1 as the basic platform for Exchange Online within Office 365; this work is ongoing and Office 365 is currently in beta. See this EHLO post for some information on how this setup works.

Of course, you need a client that knows about archive mailboxes before you can use the feature. Outlook 2010/2013 and Outlook Web App (OWA) 2010 are clients that Microsoft has purpose-built to exploit archive mailboxes and the other compliance features supported by Exchange 2010, but it’s a pain (and expensive) to have to deploy new Outlook clients just to be able to use an archive mailbox.

The cumulative update for Outlook 2007 SP2 helps by providing the code that Outlook needs to recognize that an Exchange 2010 mailbox has been enabled with an archive and to then open the archive. Unfortunately, the code is incomplete insofar as it doesn’t round out the scenario by providing the ability for users to manipulate archive tags. I guess the impact on the Outlook user interface would have been too large to have the work done for an update. “Large” in this context means that Microsoft would have had to change too much code to add all the bits necessary to reveal and manipulate archive tags. Once you start to make extensive code changes, you increase the engineering and testing cost dramatically and create all sorts of other issues, such as the requirement for internationalization and translation into all of the languages supported by Outlook 2007. The upshot is that if you want to access archive (and retention) tags, you have to use Outlook 2010/2013 or OWA 2010.

Don’t even think about ever seeing archive support for Outlook 2003 as it’s a dead duck client as far as Exchange 2010 is concerned…

Assuming that you’ve got the right clients and you’re running Exchange 2010 SP1 (or later), all should be well and you can start to enable user mailboxes with archives as soon as you have moved them to an Exchange 2010 server (and remembered that you need enterprise client access licenses, or CALs, because archives are an extended feature not covered by the standard CAL).

But here’s the rub. Enabling a mailbox to have an archive takes a matter of seconds, or even less if you have an EMS session open and are fast to type and run the Enable-Mailbox -Archive command. It’s the consequences of enabling an archive of which so few administrators are aware. In a nutshell, by enabling a mailbox to have an archive, you stamp the mailbox with Exchange’s default archive and retention policy and you deliver the mailbox into the jaws of the Managed Folder Assistant (MFA).

MFA is an invaluable ally for administrators when it is well understood and used correctly. The problem with MFA is that it can have an unexpected (and unwelcome) effect on user mailboxes after a retention policy is applied. Sometimes administrators want this to happen, as in the case of a well-planned campaign to “help” users control the sprawl of their mailboxes through the deployment of retention policies and tags. But in this case, you’ve just told MFA that you want it to process the newly-enabled mailbox and apply the set of archive tags that Microsoft has thoughtfully included in the “Default Archive and Retention Policy” that the installation of Exchange 2010 SP1 adds to every organization. You can see the policy listed by EMC in the screen shot below. In fact, there are two policies used by archive mailboxes listed here. The other (Default Archive Policy) is an older version used by Exchange 2010 RTM. It is retained after the installation of SP1 but is no longer applied to mailboxes after they are enabled with an archive.

The default archive and retention policy listed in EMC

In the screen shot shown below, you can see that a mailbox that is not enabled with an archive does not have a retention policy. However, once we run Enable-Mailbox to assign an archive to the mailbox, Exchange fills in a number of relevant properties, including the GUID that links the primary mailbox to the archive. We can also see that the Default Archive and Retention Policy has been assigned to the mailbox.

Using EMS to enable a mailbox with an archive
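For reference, the equivalent EMS commands look roughly like this (the mailbox name is a placeholder):

# Enable an archive for an existing mailbox
Enable-Mailbox -Identity 'Jane Smith' -Archive

# Confirm the properties that Exchange stamps on the mailbox, including the retention policy
Get-Mailbox -Identity 'Jane Smith' | Format-List ArchiveGuid, ArchiveName, ArchiveDatabase, RetentionPolicy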

MFA will do its business very effectively the next time the newly-enabled mailbox appears in its list. In current versions of Exchange, MFA operates on a workcycle basis that aims to process every mailbox in a database at least once a day. This means that the newly-enabled mailbox will be discovered and processed by MFA within a day of the archive being enabled.
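If you want to see how often MFA comes calling, or force it to process a mailbox immediately while testing a policy, something along these lines works in Exchange 2010 SP1 and later (the server and mailbox names are placeholders):

# Check the workcycle that governs how often MFA processes the mailboxes on a server
Get-MailboxServer -Identity 'EXMBX01' | Format-List ManagedFolderWorkCycle, ManagedFolderWorkCycleCheckpoint

# Force MFA to process a single mailbox right away
Start-ManagedFolderAssistant -Identity 'Jane Smith'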

When MFA processes a mailbox, it checks items in the mailbox to see if they have been stamped with the appropriate tags. Retention and archive tags can be applied to individual items, folders, or even conversations. And then you have the magic of a default tag, which is applied to any item in a mailbox that is not under the control of a more specific tag. Of course, your newly-enabled mailbox has never been processed by MFA and the mailbox owner is highly unlikely to have ever applied specific tags to items, so the net effect is that the default archive tag becomes all-powerful and MFA will merrily stamp it on all and sundry within the mailbox.

Properties of the default tag in the Default Archive and Retention Policy

If we look at the tags included in the Default Archive and Retention Policy, we can see that the default tag (the one that “applies to all other folders”) is called “Default 2 year move to archive” (shown above). The action specified in this tag tells MFA to move items into the archive mailbox when they are two years old. The user won’t notice that this has happened if their mailbox is new and no item is more than two years old. They might not notice for other reasons – boredom, lack of attention, devotion to controlled substances, or whatever… but if a mailbox is more than two years old and the user keeps reasonable tabs on what it contains, they’ll probably notice that items have started to disappear as MFA processes them, discovers that they are more than two years old, and promptly moves them into the archive. Of course, “disappear” is a very emotional word and it’s somewhat inaccurate in this context. The items have disappeared from the primary mailbox but MFA has stored them quite safely in the user’s archive.
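To see exactly which tags the policy will apply before you let MFA loose on user mailboxes, you can inspect the policy and its default tag from EMS; a quick sketch:

# List the tags linked to the default policy that SP1 assigns to archive-enabled mailboxes
Get-RetentionPolicy -Identity 'Default Archive and Retention Policy' | Select-Object -ExpandProperty RetentionPolicyTagLinks

# Examine the default tag that moves items into the archive after two years
Get-RetentionPolicyTag -Identity 'Default 2 year move to archive' | Format-List Type, AgeLimitForRetention, RetentionAction, RetentionEnabled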

At the end of the day, the user gets a nicely cleaned mailbox and a populated archive, and all is well if that’s what the user wanted and expected. The situation is less happy if the user panics because they think they have lost data and the help desk is unaware of what happens when archives are enabled (maybe you forgot to tell them). Loud words and swearing ensue and no one is happy.

The moral of the story is that archives are good provided that they are deployed with care, attention to detail, and with due warning to all concerned.

– Tony

Follow Tony @12Knocksinna

Update: See my latest note on retention policies and some changes that Microsoft has made in their interpretation of what you can do with the default MRM policy. Also, it’s worth noting that Office 365 automatically applies the default MRM policy to new mailboxes when they are created. Because no archive is created at that point, the archive tags are ignored and only take effect if an archive is subsequently enabled.

Update: Exchange 2010 SP2 RU4 introduces the ability for the Managed Folder Assistant to process calendar and task items for the first time. So after you update to Exchange 2010 SP2 RU4 or later, you run the risk that users will see their calendar and tasks folders cleaned out, which isn’t nice. Pay attention to the advice in the article to disable this behavior until you’re ready to unleash the full power of the Managed Folder Assistant. Note that no client, not even Outlook 2013, allows users to manipulate retention settings for the calendar and tasks folders; the UI simply doesn’t exist.
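One approach that has been suggested for keeping MFA away from the calendar and tasks folders is to add tags for those folder types with retention disabled. Treat the following as an assumption-laden sketch (the tag names are invented and the technique should be verified against the referenced article before use):

# Hypothetical sketch: create disabled tags for the Calendar and Tasks folder types so that MFA leaves those folders alone
New-RetentionPolicyTag -Name 'Calendar - no processing' -Type Calendar -RetentionEnabled $false
New-RetentionPolicyTag -Name 'Tasks - no processing' -Type Tasks -RetentionEnabled $false

# Link the new tags to the policy assigned to the affected mailboxes
Set-RetentionPolicy -Identity 'Default Archive and Retention Policy' -RetentionPolicyTagLinks @{Add='Calendar - no processing','Tasks - no processing'}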

For those running Exchange 2013, exactly the same degree of care has to be taken when you enable an archive mailbox and so automatically apply the default retention and archive policy. There is a small difference in that Exchange 2013 calls its default retention policy the “Default MRM Policy” but it has the same effect on mailboxes when applied.

For more information about points that need to be considered for a successful deployment of Exchange 2010 archives plus the workings of the MFA, see chapter 15 in Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk. The book is also available in a Kindle edition. Other e-book formats for the book are available from the O’Reilly web site. The topic of retention policies and tags is also covered in chapter 11 of Exchange 2013 Inside Out: Mailbox and High Availability.


Exchange in the Cloud


I wrote this text as a “web-only chapter” for Microsoft Exchange Server 2010 Inside Out (also available at Amazon.co.uk and as a Kindle edition, with other e-book formats available from the O’Reilly web site). Unfortunately, the chapter doesn’t seem to have appeared on the O’Reilly web site and several people have asked me about the content, so I am publishing a modified and updated version of it here. All comments are gratefully received.

– Tony

Moving Exchange into the Cloud

Over the last few years, and gathering pace from the point where Microsoft announced Azure, Online Services, and new web-based versions of some of the Office applications at the Professional Developers Conference in October 2008, one of Microsoft’s strategic imperatives has been to become a major player in cloud-based computing. Chris Capossela, the Microsoft senior vice president with responsibility for Microsoft Office, has been widely quoted in interviews as predicting that 50 percent of the Exchange installed base will be online by 2013. Some of these mailboxes will come from the existing installed base and some will come through migrations from other email systems such as Lotus Notes. Other commentators such as the Radicati Group reckon that hosted email seats will grow by 40% by 2012, largely driven by deployments at small and medium companies, but that large enterprises will increasingly analyze the value that hosted email can deliver, especially for regional offices.

Microsoft’s challenge during a transition to cloud-based services is to build and deliver their online services (then known as Microsoft Business Productivity Online Suite or BPOS and subsequently renamed in October 2010 to be Office 365) in a cost-effective manner and attract customers to use the new service while not eroding the installed base and the rich income streams that flow from traditional on-premises deployments of Office, Exchange, and SharePoint. Microsoft isn’t the only company that’s engaged in a balancing act. IBM is going through much the same process as it brings the Lotus suite into the cloud with LotusLive, including the iNotes hosted email platform.

Microsoft certainly has very good knowledge of some parts of the financial equation, such as the cost of building and running the datacenters that they require to deliver online services, but the wild card is the cost of migrating customers and then maintaining service levels to meet tough service level agreements (SLAs). If they spend too much on migration, maintenance, and support, it will place enormous strain on Microsoft’s most profitable unit (the Business Division, responsible for $19.4 billion income in 2008). On the upside, if Microsoft gets online services right, they’ll generate ongoing and profitable success for the Office franchise that will help them fend off the competitive pressure from Google and other companies. Microsoft’s first progress report in November 2009 announced the acquisition of their first one million paying customers across a variety of market sectors. This success, or perhaps increasing competitive pressure, enabled Microsoft to decrease the monthly cost per user to $10 (in the U.S.) for the Office 365 suite and to eventually offer 25GB mailboxes running on Exchange 2010 servers at that price point. At the time of writing, Microsoft has not deployed Exchange 2010 as part of their online offering; this upgrade is expected sometime towards the middle of 2011 as part of the launch of Office 365. At that time, online users will be able to take advantage of the enhanced functionality delivered by Exchange 2010. An article by Mary Jo Foley on ZDNet.com includes an interesting graphic showing the timeline of the evolution of Microsoft’s online platform, which I reproduce below:

Timeline for the evolution of the Microsoft Online platform

However, while Microsoft was reporting some success in terms of customer take-up, the financial side didn’t seem to be so pretty with some commentators such as Scott Berkun questioning the vast deficits that Microsoft has accrued in its Online Division (see the chart below). Some investment to build out the datacenters required to support cloud services is inevitable; the problem seems to be that the income from customers isn’t yet getting near to matching the expense line. We shall see how this story develops over the coming years.

Microsoft Online Division operating income

On the customer side, the increasingly tough economic conditions since the economic meltdown of 2008 have created an environment where companies are eager to pursue potential cost savings, so the prospect of replacing an expensive in-house email system with an off-the-shelf online service that comes at a known monthly cost is attractive. As customers assess the impact of Exchange 2010 on their infrastructures, it’s likely that Microsoft will encourage customers to assess online services as an upgrade option for their existing email deployment, especially if the customer currently operates older software such as Exchange 2003, where the migration to Exchange 2010 will be difficult and the technical learning curve for support staff is long.

In reviewing the options that exist for customers in the 2010-2012 timeframe, four major possibilities present themselves for companies that currently run Exchange:

1.    Continue to operate an in-house deployment and upgrade to Exchange 2010. This includes variations such as traditional outsourcing to companies such as HP that operate Exchange in your datacenter or in theirs, or the approach taken by companies such as Azaleos, who place servers in client datacenters and manage them remotely.

2.    Embrace the cloud and move mailboxes to be hosted in Microsoft Office 365. You will go through a migration phase to move mailboxes to the cloud, but eventually all the in-house servers that run Exchange will be eliminated. In-house servers will still be required to host Active Directory to provide federated authentication and other features that enable secure connections and data exchange with the cloud services as well as to run other applications that do not function in the cloud.

3.    Take a hybrid approach and move users who only need the functionality delivered by a utility email service to Microsoft Office 365 while retaining part of the current infrastructure to continue to run Exchange for specific user populations who cannot move into the cloud, perhaps because of some of the reasons discussed here.

4.    Explore other paths such as using a different online service (Gmail) or moving from Exchange to a different email system such as the Linux-based PostPath, recently acquired by Cisco. While PostPath boasts seamless MAPI connectivity for clients such as Outlook, Microsoft still has an inbuilt advantage in that the costs involved in moving users from Exchange may outweigh any of the potential benefits that can be reasonably accrued in the short term. Existing Microsoft customers are more likely to explore the possibilities in Microsoft Online where they retain a familiar platform and save costs that way than to plunge into the uncertainty that any migration entails.

New companies that do not have an email system installed today should absolutely consider the cloud for email services because this approach allows them to start to use email immediately, grow capacity on an on-demand basis, and take advantage of the latest technology that’s maintained by the service provider. Other interesting deployment scenarios include:

  • Companies who want to migrate off their current email platform (for example, from Lotus Notes to Exchange 2010). The advantage here is that they can move to the latest version of the new platform technology very easily without incurring substantial capital costs. The announcement in March 2009 of the intention of Glaxo Smith Kline, a major pharmaceutical company, to move from Lotus Notes to Office 365 is a good example of the kind of platform migration that will occur.
  • Companies who run earlier versions of Exchange and want to avoid the complexity and cost of a complete migration to Exchange 2010. For example, if a company runs Exchange 2000 or 2003 today, they have a lot of work to do before they can consider a migration to Exchange 2010 including updates to hardware, operating system, associated applications, and clients. Some of this work (such as client updates) still has to be done before it is possible to move to the cloud, but a move to a cloud-based service presents an interesting opportunity to accelerate adoption of the latest technology at lower cost.

Any company that has a legacy IT infrastructure and applications will have their own unique circumstances that influence their decision about how they should use cloud-based services. Let’s discuss some of these issues in detail.

So what’s in the cloud?

One definition of cloud computing is “IT resources accessed through the Internet”, where consumers of the resources have no obligation to buy hardware, pay software licenses, perform administration, or do anything else except have the necessary connectivity to the Internet to be able to access the service. Think about how you use consumer email services such as Hotmail or Gmail, as both services fall into this category. The feat that Microsoft now wants to perform is to transform part of its revenue stream into funded subscription services for access to applications such as Exchange, SharePoint, and Office Communications Server without cutting its own throat by eliminating the rich stream of software licenses purchased for traditional in-house deployments of these applications. At the same time, Microsoft knows that it has a huge competitor in Google, which is driving the market to a price point that has attracted the attention of many CIOs, who now question how much they are paying for applications like Exchange. Google’s offering starts at $50 per mailbox per year, and while the cost of a Google mailbox is usually higher when other features such as anti-spam (from Postini) are stacked on top of the basic mailbox, the headline figure is what attracts executive attention and causes customers to look at the Google model to see if it will work for them.

Apart from its low price point, Google paints a vision of reduced complexity, ease of deployment, and rapid innovation that creates a compelling picture, especially for companies that have experienced difficulties with Microsoft software in the past. Perhaps they have had problems migrating from one version of Exchange to another; perhaps they don’t like the fact that they had to invest in new server hardware to move to Exchange 2007 and now may have to buy some more new servers to move to Exchange 2010; maybe the complexities of Software Assurance and other Microsoft licensing schemes have created an impression that they are paying too much for software. Or maybe it’s just because the CIO thinks that it’s time for a change and that a new approach is required that’s based on new technology and new implementation paradigms.

Whatever the reason that drives customers to look at competitive offerings, Microsoft has many strong points to call on when it responds. Its products are deeper and more functional than the Google equivalents and work well when deployed on-premises and in the cloud. Its client technology is more diverse and delivers a richer user experience through Outlook, Outlook Web App (OWA), and a range of mobile devices from Windows Mobile to the iPhone. By comparison, its inventor might love the Gmail interface, but it’s not quite OWA. Connecting Gmail to Outlook is possible but limited by its use of IMAP. Calendars and contacts don’t quite get synchronized, and the offline functionality available through Google Gears isn’t as powerful as Outlook with the OST, cached Exchange mode, and the OAB. Some rightly criticize the often overpowering nature of the feature set found in the Microsoft Office applications and point to the simplicity and ease of use of Google Docs, which is all very well until some of your users (like the finance department) want to use a pivot table or another advanced feature that’s ignored by 90 percent of the user community. Microsoft’s track record and investment in developing features for Office over the last 20 years creates enormous challenges for Google in the enterprise. Despite its undoubted strengths, Microsoft can’t afford to rest on its laurels and has to deliver applications that are price competitive and also feature rich. Cost is very much in the mind of those who will purchase applications in the future, whether the applications are deployed in-house or in the cloud.

To achieve the necessary economics in the delivery of feature-rich applications at a compelling price point, the infrastructure to deliver cloud computing services is designed to scale to hundreds of millions of users, remain flexible in terms of its ability to handle demand, and to be multi-tenant but private. In other words, the same infrastructure can support many different companies but an individual’s data must remain private and confidential. It is difficult for developers to retrofit applications that were originally designed to function in a purely private environment to be able to work well in a multi-tenant infrastructure. For example, SharePoint’s enterprise search feature is very effective across a range of data sources within a single company. Making the same function work for a single company within a multi-tenant infrastructure requires a different implementation. Google designed its applications to work on a multi-tenant basis from the start whereas Microsoft has had to transform its applications for this purpose.

The different approaches taken by Google and Microsoft represent two very different implementations of cloud services. Google’s platform uses programming techniques such as MapReduce that they designed to build massively scalable applications that run on bespoke hardware. You can purchase services from Google but you can’t buy their code and deploy it in your own datacenter. On the other hand, Microsoft has an installed base to consider and needs to engineer applications that run as well on a hosted platform as they do when deployed by companies in private datacenters. Essentially, with the exception of some utilities required for identity management and data synchronization, Microsoft deploys the same code base (with some additional tooling) for Office 365 that a customer can purchase for applications like Exchange, SharePoint, and Office Communicator. The difference between the private and hosted environments is the way that Microsoft uses automation and virtualization to drive down cost to a point where their applications can compete with Google. However, it’s fair to say that customers can adopt many of the same techniques in their own deployments of Microsoft technology to achieve some of the same savings, if not to the degree open to Microsoft given the massive scale at which it operates.

In addition to being able to keep their data private, companies can have their own identity within the shared infrastructure. In email terms, this means that a company can continue to have its own SMTP domain and email addresses and have messages routed to mailboxes that are hosted in the cloud or on in-house servers.

Cloud infrastructures are based on different operating systems (Linux is a popular choice), but their operators put considerable effort into simplifying and securing the software stack that they use to drive performance and reliability. These infrastructures focus on scaling out rather than scaling up, preferring to use thousands of low cost servers rather than fewer large servers. Applications are built using open and well-understood standards such as SMTP, IMAP, POP3, HTTPS, and TLS so that as many users as possible can connect and use them. Google uses its own version of Linux running on commodity “white box” hardware, its own file system, storage drivers, and its own applications to deliver a completely integrated and fit-for-purpose cloud computing platform. In many respects, you can compare the integrated nature of Google’s platform with that delivered by the mainframe or minicomputer in the 1980s and 1990s. As you’d expect, Microsoft’s cloud platform is based on Windows and .NET development technologies, albeit with a high degree of attention to standardization and virtualization to achieve the necessary efficiency within the immense datacenters used to host these services.

Why moving Exchange to the cloud is feasible

Over the last few years, it has become increasingly feasible for enterprises to consider including cloud platforms as part of their IT strategy. Greater and cheaper access to high quality Internet connections, the work done by companies such as Google to prove that high-quality applications function on a cloud platform, the growing share of consumer spend taken by web-based stores, and the comfort that people have in storing their personal data in sites such as Mint.com (financial data) or Snapfish.com (photographs) are all obvious examples of how cloud-based services have taken hold. In terms of Exchange, three developments are worth noting:

1.    Experience with consumer email applications such as Hotmail, Gmail, and Yahoo! Mail has created familiarity with the concept of accessing email in the cloud. Almost every person who works in a company that uses Exchange has their own personal email account that runs on a cloud platform, and while the majority of client access is through web browsers, some users connect with other clients including Outlook. While some service outages have been experienced, the overwhelming experience for an individual user is usually positive (especially because these services are free). This then leads to a feeling that if it’s possible to host personal email in the cloud, it should be possible to host email for enterprises. Of course, this assertion is technically true until you run into the extra complexities that large organizations must deal with and individuals do not.

2.    The advent of RPC over HTTP and the elimination of the requirement to connect to corporate email systems via dedicated VPNs demonstrates that it is possible to securely connect clients to email across the Internet. We’ve had this capability since Microsoft shipped Outlook 2003 and Exchange 2003 and while the early implementations took a lot of work to accomplish, Microsoft greatly improved the setup and administration of RPC over HTTP in the Outlook 2007 and Exchange 2007 combination. RPC over HTTP is widely used today to allow users to connect Outlook to Exchange across the Internet.

3.    Outlook (2003 to 2010) running in cached Exchange mode is now the de facto deployment standard in the enterprise. Cached mode creates a copy of a user’s mailbox on a local disk and insulates them from temporary network outages by allowing work to continue against the local copy that Outlook constantly refreshes through “drizzle mode” or background synchronization with the server. The nature of the Internet is such that you are more likely to experience a temporary outage than you are inside a corporate network. Cached mode is therefore important in terms of maintaining a high level of user confidence that they can continue to get their work done while connected to the cloud.

Some companies have already explored their own variation of cloud by using the Internet to replace expensive dedicated connections between their network and a hosting provider and have a good understanding of the challenges that need to be faced in moving to the cloud. Others will never move from in-house servers because of their conservative approach to IT, fears about data integrity, the regulatory or legal compliance required of certain industries (for example, the FDA requirement to validate systems used by pharmaceutical companies for anything to do with drug trials), the unavailability of high-quality or sufficient bandwidth to certain locations, or because they operate in countries that require data to stay within national boundaries. Other companies are champing at the bit and ready to move to Office 365 as soon as they can, perhaps because they are running an earlier version of Exchange and see this as a good way to upgrade their infrastructure. In all cases, it’s wise to ask some questions about the readiness of your company to operate email in the cloud so as to arrive at the best possible decision. Let’s talk about some of the questions that are worth debating.

Types of deployments

Before anyone can move from a traditional on-premises deployment to use a cloud-based service, they need to understand the way that they use the application that they are considering for transition to make sure that the cloud-based service can deliver the same degree of functionality, reliability, and compliance with regulatory requirements. The following figure illustrates a simple categorization of Exchange deployments into four basic types.

 

Types of Exchange deployments

Dedicated on-premises deployments are those that are sometimes found in very large conglomerates where different operating units have the ability to create and operate their own Exchange organization. The deployment model is very flexible because Exchange can be tailored to meet the needs of the operating unit. However, it is expensive in terms of software licenses, operations, and complexity (multiple deployments of Windows, Exchange, and third-party applications) and cannot usually be justified unless an operating unit has a real business requirement to run its own service.

Shared on-premises deployments are the more usual model within large companies. A single Active Directory forest supports a common Exchange organization. Some tailoring of Exchange may be done, such as the provision of different GALs for specific operating units, but generally all operating units share a common infrastructure that can be rationalized and tuned to deliver a reliable and robust service.

Dedicated cloud deployments mean that the service is delivered from an infrastructure that is not shared by any other customer. The infrastructure might share a common datacenter fabric, but the service provider ensures sufficient security to block access to the servers for anyone but the customer. While the service provider attempts to extract economies of scale from as many common components as possible, it is clearly more expensive to set up and operate dedicated servers for a customer, and this cost is passed on to the customer in the form of higher monthly charges. Because the infrastructure is dedicated, it can be customized to meet specific needs at an additional cost.

Shared cloud deployments mean that all customers access a service delivered from a multi-tenant shared infrastructure. Costs are lowest as the service provider delivers a well-defined set of functionality that cannot be customized and they can spread the cost to create and operate the infrastructure across many different customers.

Once we understand the nature of the current on-premises deployment and the kind of cloud service that may provide the target environment, we can move to a discussion of the services offered by Microsoft.

What Microsoft can deliver

The first set of hosted email services from Microsoft delivered filtering, encryption, and compliance services to supplement on-premises customer deployments using a mixture of Exchange 2003 and Forefront technology. These services are useful but not mainline. Full-scale hosted email from Microsoft began with Exchange 2007 and the pace accelerated enormously with the introduction of Exchange 2010. Microsoft gained much experience from its Live@EDU implementation, which provides Office services, including Outlook Live, to the education sector. Behind the scenes, Outlook Live connects to Exchange 2010 mailboxes. Live@EDU used beta versions of Exchange 2010 and has since upgraded to keep pace with software development. The use of Exchange 2010 in Live@EDU provided Microsoft with a terrific environment in which to gain knowledge of how to provision, operate, and manage large-scale hosting environments.

Hosted services focused on business rather than education usually have more stringent terms and conditions, including financial penalties for non-compliance with a committed level of service. Microsoft offers two flavors of hosted email within BPOS: dedicated and standard. Both versions ran Exchange 2007 initially and are scheduled to be upgraded to Exchange 2010 during the move to the Office 365 platform in early 2011 (many of the enhancements in SP1 such as the expanded range of features accessible through ECP are particularly valuable in a hosting environment). The dedicated offering is targeted at large enterprises and is only available to customers that have 5,000 mailboxes or more, probably because it is not worthwhile to create a dedicated environment including components such as network, hardware, help desk, security, and directory synchronization for less than this amount. The dedicated version is not multi-tenant because the hardware that supports a customer is literally dedicated and not shared with anyone else. The dedicated version also permits the customer to customize the service that they receive and is more expensive than the standard version for this reason.

Office 365 Architecture

The standard offering provides mailboxes supported on a multi-tenant infrastructure. In other words, all of the mailboxes are hosted on the same set of servers and use the same directory, with logical divisions in place so that the mailboxes and directory (GAL) appear to be quite separate. In 2008, Microsoft launched hosted email services with a promised 99.9 percent scheduled uptime, and the same guarantee exists today for Office 365. The standard service has a number of optional added-cost extras including BlackBerry support, archiving, and migration from another email system. Windows Mobile (6.0 and later) devices are supported along with Outlook (the minimum version required is Outlook 2003 SP2), OWA, and Entourage (the EWS version is required to connect to Exchange 2010). Because it is dedicated to just one customer, the dedicated offering is more customizable and flexible and includes aspects such as business continuity and disaster recovery.

Hosted Exchange is not something new; specialized hosting companies have offered email services based on Exchange going back as far as Exchange 5.5. These companies have worked around the various shortcomings that exist in previous versions of Exchange to provide a cost-effective service to customers. Microsoft’s entry into the hosting game creates a new competitor, which isn’t good news for hosting companies, even if it substantially validates hosted email in the eyes of many. The aggressive “all you can consume for one low price” approach being pursued by Microsoft and Google to attract customers to use their online services will also force third-party hosting companies to reduce their operating costs and prices and could impact the profitability as well as the viability of some hosting companies. On the positive side, the emergence of Microsoft’s own hosted offerings and the emphasis that Microsoft executives have given to these services in the market adds credibility to the notion of sourcing email and collaboration services from the cloud and creates a new impetus for IT managers to consider this approach as they review their long term plans for applications such as Exchange and SharePoint.

Microsoft engineering groups are busy making sure that the widest possible feature set works for large multi-tenant environments. A lot of this work involves adding new features to enable a better distributed management model so that companies can manage their data on remote servers as easily as they can manage their own servers today. Exchange 2010 SP1 is a very manageable multi-tenant platform that blurs the boundary between on-premises servers and cloud services. The introduction of remote PowerShell has created a management framework that works equally well for local and remote servers (a connection sketch appears after the list below). Other major advances to support hosted environments include:

  • Administration across on-premises and hosted objects through the Exchange Control Panel, a new SLA dashboard, EMC, and remote PowerShell.
  • A very functional web browser client (Outlook Web App – OWA) that supports many different browsers. The latest version of OWA as featured in Exchange 2010 SP1 is likely to be very similar to the browser used by Office 365 users.
  • Microsoft Services Connector to enable federated identity management required to support single sign-on for users across the on-premises and hosted environments.
  • Directory Synchronization (GALSYNC) to ensure that the GALs presented to on-premises and hosted mailboxes deliver a unified view of the complete company. This tool supports multiple on-premises forests. The out-of-the-box capabilities of GALSYNC are pretty simple and companies might have to purchase a more comprehensive directory synchronization product such as Microsoft Identity Lifecycle Manager (ILM) to handle complex environments such as the integration of addresses from multiple email systems.
  • Microsoft Federation Gateway to support calendar (free/busy) and contact sharing, federated RMS, and message tracking across on-premises and hosted environments.
  • Mailbox migration between on-premises and hosted environments, including the ability to avoid a complete OST resynchronization after moving a mailbox to the cloud platform. Mailboxes can be moved from Exchange 2003 SP2, Exchange 2007 SP2, and Exchange 2010 on-premises servers to Office 365 and back to an on-premises server again.
  • Federated message delivery to protect the transmission of messages between on-premises and hosted environments. You can think of this as a form of secure messaging channels for users in that messages are encrypted and routed automatically between the two environments. When messages arrive in the environment that hosts the recipient mailbox, they are validated, decrypted, and delivered to the final recipient. Outbound messages can be routed through an on-premises gateway.
  • Support for Outlook content protection rules (for Outlook 2010).
  • Hosted Unified Messaging enabled by connecting on-premises PBXs with Microsoft Office 365 via IP over the Internet.
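As a sketch of what that remote management framework looks like in practice (the endpoint URL and credentials are placeholders; check the current documentation for the correct values for your tenant):

# Connect a remote PowerShell session to a hosted Exchange organization
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri 'https://ps.outlook.com/powershell/' -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session

# The familiar cmdlets then run against the hosted organization
Get-Mailbox -ResultSize 25

# Tidy up when finished
Remove-PSSession $session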

Note that at least one on-premises Exchange 2010 SP1 server is required to support mailbox migration, federated message delivery, and federated free/busy. The Microsoft Services Connector is installed as an IIS website on an on-premises server. It’s also worth noting that Microsoft runs a slightly different version of Exchange 2010 SP1 in its datacenters to support multi-tenant environments. The changes between this version and the version used for classic on-premises deployments are minor and are there to allow Microsoft to manage the different Exchange organizations that function in the environment. A good example is that many of the Windows PowerShell cmdlets have an -Organization parameter to allow an administrator to select objects that belong to a specific organization or to apply a new setting to selected objects. For example, in the Microsoft datacenter, you could fetch details of the default ActiveSync policy for the ABC Corporation organization with:

Get-ActiveSyncMailboxPolicy -Identity "Default" -Organization "ABC Corporation"

There are also parameters on some PowerShell cmdlets that are only accessible to Office 365 datacenter administrators. For example, the Set-UMAutoAttendant cmdlet has parameters for default call routing that only work if you have the appropriate RBAC permissions in Microsoft’s datacenters.

With the release of Exchange 2010 SP1, the only major feature gaps that now exist are the lack of support for earlier Outlook clients and deprecated APIs such as WebDAV. Customers who depend on these features will have to upgrade or drop them before they can move to Office 365.

Federal support

In late February 2010, Microsoft announced delivery of BPOS Federal, a special version of the online suite tailored to meet the requirements of the U.S. government. BPOS Federal includes Exchange Online, SharePoint Online, Office Live Meeting, and Office Communications Online hosted in separate secured facilities where access is limited to a small number of U.S. citizens that have been cleared through background checks complying with the International Traffic in Arms Regulations (ITAR).

BPOS Federal is being upgraded to address other regulatory requirements such as compliance with the SAS-70 (type II) auditing requirement, Federal Information Processing Standard (FIPS) 140-2, and ISO 27001. Large international companies often have the same requirements, and the ability to comply with SAS-70 audits and deliver an ISO 27001 level of service will reassure many who are nervous about the movement of IT into the cloud. Microsoft will also support two-factor authentication and enhanced encryption to meet the requirements of the Federal Information Security Management Act (FISMA).

Users

You should review the needs of any special user groups that exist in your organization and determine whether their needs can be met in the cloud. For example, companies often take special measures to ensure the highest degree of confidentiality and service for their most important users. They place the mailboxes of their executives on specific servers that are managed differently from “regular” systems. Administrative access is confined to a smaller set of administrators, the servers might be placed in different computer rooms, and special operations procedures might apply for backups and high availability. While you can expect Microsoft to do a comprehensive job of securing data as it moves from clients in your network to servers in the cloud, data security will remain a concern as we go through the transition from an in-house world to the cloud and this is especially true when dealing with the kind of highly confidential data that circulates in executive email.

Another issue to consider is how users work together. For example, can users continue to enjoy delegate access to calendars and other mailbox folders in a mix of in-house and cloud deployment or do you have to keep users who need to share data on the same platform? Are there limits to the number of calendars that a user can open? Are there any problems with shared mailboxes or resource mailboxes or mailboxes that are used by applications? These are questions with no good answers today because we don’t have the final version of the cloud software to test against. However, they are good illustrations of the kind of thought process that you need to go through to understand the full spectrum of user requirements from basic mailbox access to sophisticated use of advanced features delivered by Outlook and Exchange.

Support

Part of the usual negotiation with a cloud services provider is the conclusion of a Service Level Agreement (SLA) that describes the level of availability of the service (for example, 99.99 percent), how outages are handled, and the financial compensation that might be due if a service does not meet the contracted uptime. Agreements like this are very common in outsourcing contracts, so it should not come as a surprise to apply them for cloud services. However, agreeing on an SLA is easy. Monitoring end-to-end performance for email and understanding the roles that local and cloud support play is more complicated. Because of this factor, many service providers will only guarantee an SLA within the confines of their own datacenter.
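To put those percentages in context: 99.9 percent availability allows roughly 8.8 hours of downtime per year (about 43 minutes per month), while 99.99 percent allows less than an hour per year (about 4.4 minutes per month), so the gap between the two figures is much larger than it looks on paper.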

Local support is provided by the company and usually covers issues such as the company’s own network, connectivity to the Internet, client software, and integration with other applications that may depend on email. Cloud support comes from the service provider. The vast majority of support activity is performed locally, and contact with the service provider should really only happen when the service is unavailable for a sustained period, such as the three significant outages that affected user access to Google’s Gmail service for several hours in August 2008. Other smaller problems followed to disrupt service, and then Gmail experienced another major outage in February 2009 after Google introduced some new code during routine maintenance in their datacenters. The code attempted to keep data close to users (in geographic terms) so that users would not have to retrieve mailbox data across extended links (http://gmailblog.blogspot.com/2009/02/update-on-todays-gmail-outage.html). The upside of the fix was the potential for better performance, but the downside of the failed update was an erosion of user confidence in the Gmail service. Further Gmail outages followed in April 2009 and September 2009, the second of which occurred after Google made some adjustments to their infrastructure that were intended to improve service!

Microsoft has not been immune from service outages and experienced some hiccups in January 2010, with some anecdotal evidence pointing to problems with the network infrastructure and Exchange 2007 as the cause (Microsoft hadn’t upgraded BPOS to use Exchange 2010 at this point). All of this proves that even the largest and best managed companies can sometimes take actions that disrupt service, and that those errors are magnified many times when the service is being consumed by millions of users.

Given how the Internet connects your network to the cloud, you can expect transient network hiccups that cause clients to lose connectivity from time to time. After all, no one is responsible for the Internet and no one guarantees perfect connectivity across the Internet all the time. That’s why cached Exchange mode is such a valuable feature of Outlook. Problems happen inside networks, even those that are under the sole and exclusive control of a single company. You have to be able to understand where local problems are likely to occur and how to address them fast before you escalate to the service provider to see whether the problem is at their end. For example, an outage may occur in the network provider that links you to the Internet, a firewall or router might become overloaded with outgoing or incoming connections and fail, or a mistake in systems administration might block traffic outside your network. The real point here is that if you move email to the cloud, you cede the ability to have a full end-to-end picture of connections from client to mail server and only have control over the parts that continue to reside inside your network. Users will hold the local help desk accountable when they can’t get to their mailboxes, and that creates a difficult situation when you cannot trace the path of a message as it flows from client to server, you cannot verify that connections are authenticated correctly everywhere, and so on. In fact, because there are so many moving parts that could go wrong, including the Internet link, it is very difficult to hold a service provider to an SLA unless there is unambiguous proof that the cloud service failed.

Experience will prove how easy it is to manage availability in the cloud. For now, there is a weakness in the management and monitoring tools that would allow administrators to verify that an SLA is being met whenever an application relies on connectivity outside the network boundary that the organization controls. Indeed, problems exist in simply getting data from the different entities that run the corporate network, intermediate network providers, and the hosting providers in a form that can be collated to provide an end-to-end view of how a service operates. It may be possible to get data from one entity or another, but the data is likely to be inconsistent with data from other entities, impossible to match up to provide the end-to-end view, or based on different measurements that make it difficult to reconcile.

Service desk integration is another related but different issue. You probably need some method to route help desk tickets from the system currently in use to the service provider and to maintain visibility through to the final resolution of the problem. Given the wide variety of service desk systems deployed from major vendors such as IBM, CA, and HP and the lack of standards in this area, this isn’t an easy problem to solve.

Software upgrades

One advantage of online services cited by their supporters is that software is automatically updated so that you always use the latest version. You don’t have to worry about testing and applying hot fixes, security updates, service packs, or even a brand new release of Exchange. Everything happens automatically as Microsoft rolls out new software releases on a regular basis across their multi-tenant datacenters.

The vision of “evergreen” software is certainly compelling. It can be advantageous to always be able to use the latest version of software, but only if it doesn’t increase costs by requiring client upgrades. For example, Exchange 2010 SP1 doesn’t support Outlook 2000 clients and sets Outlook 2003 SP2 as its minimum supported client. Outlook 2010 is required to access the full range of features offered by Exchange 2010 SP1. Therefore, to move to a hosted service based on Exchange 2010 SP1, you have to be sure that all of your users are happy to use OWA or that you have deployed a client that is fully compatible with Exchange 2010 SP1, even if some functionality is unavailable to supported clients such as Outlook 2007. Microsoft BPOS requires companies to deploy Outlook 2003 SP2 at a minimum and I expect that Office 365 will require Outlook 2007 as a minimum when it is deployed.

Looking into the future, you can predict that the same situation will exist. OWA clients will be automatically upgraded through server upgrades but administrators will have to ensure that “fat” clients like Outlook can connect. You can only expect Microsoft to provide backwards compatibility for recent clients, so you can expect to have to plan for client-side upgrades whether you decide to use in-house or hosted versions of Exchange, especially if you want to use some of the new features in Exchange 2010 SP1 such as MailTips that are not supported by older clients.

The problem here is that you have complete control over upgrades when you run in-house Exchange servers while you cede control to the hosting provider when you use an online service. A decision made by the hosting provider to apply an upgrade on their servers might have the knock-on effect of requiring customers to either accept lower functionality or, in the worst case scenario, accept the inability to connect with the client software running on some or all of their desktops. On the other hand, a software upgrade initiated by the service provider might introduce some new features that pop up on client desktops. This is great if the feature works flawlessly and the help desk knows about it in advance so that they can prepare for user questions, but it’s not quite so good if it causes panic for the help desk. Traditional outsourcing companies are usually slow to deploy new technology because of the cost and disruption to operational processes and users, so it will be interesting to see how quickly cloud hosting companies deploy new versions and how easily their users can cope with change.

Forced upgrades to new versions of desktop software might be an acceptable price to pay to be able to exploit online services. However, no administrator will be happy to be forced to upgrade clients (or even to apply a service pack or hot fix) without warning or consultation. This is a downside of using a utility service: you have to accept the need to upgrade systems to maintain connectivity. To use an analogy, after they purchase an older house, consumers often find that they are required by electric companies to upgrade the wiring to maintain compatibility with the electricity service. You don’t have a choice here unless you want to run the risk that the wiring in your house will cause safety problems, up to and including fire. You simply have to bring wiring and fittings up to code. Another example is the changeover to digital television in the United States, which required consumers to either upgrade their TV or buy a converter box to retain service.

The bigger the client population in an organization, the larger the costs to deploy or upgrade and the more difficult it will be to ensure that everyone runs the right desktop software. Enterprise administrators are aware of the need to synchronize client and server upgrades and plan upgrades to match the needs of their organization. For example, no one plans an upgrade to occur at the end of a fiscal year when users depend on absolute stability in their email system to send documents around, process orders, and finalize end of year results. A forced and unexpected upgrade at this time could have enormous consequences for a business. Enterprise administrators also know about the other hidden costs that can lie behind client software upgrades such as the need to refresh complete Office application suites (you can deploy Outlook 2007 without deploying Office 2007 if you don’t want to use Word 2007 as the editor), the need to prepare users for the upgrade, and a potential increase in support costs as help desks handle calls from users who find that they don’t know how to work with the new software.

In an online world where services are truly utilitarian in nature, you might not have the luxury of dictating when client upgrades occur unless you use browser-based clients like OWA. Consumer-oriented email services like Hotmail and Gmail have always focused on web clients and therefore haven’t had to synchronize client and server upgrades. In fact, both Hotmail and Gmail have upgraded their user interfaces significantly over the last few years to add features and change screen layouts. These changes are good in that they increase functionality, but they also create a challenge for corporate help desks that have to support users who don’t understand why an interface has changed. Perhaps we will all be able to use web clients in the future and eliminate fat clients like Outlook. Until this happens, online service providers will have to work out how to perform software upgrades on their servers without forcing client-side upgrades on their customers.

Applications

It’s common to find that email is integrated with other applications, including Microsoft products like SharePoint Online and Office Communicator Online that might be delivered from the cloud, but also applications that have been built in house and those that rely on commercial software such as SAP or Oracle. For example, you might find that your Human Resources (HR) department has integrated PeopleSoft with Active Directory and Exchange so that the process of creating a new employee record also includes the provisioning of their Windows account and an Exchange mailbox. Understanding the points of integration and how email has been “personalized” by the organization is an important part of determining whether a cloud approach is feasible for a company. To many companies, Exchange is much more than an email server and the more a company integrates email into business processes, the harder it becomes to make a fundamental change to the email platform, whether it is to move to a new email system, upgrade to a new release of a server, or to move to a new platform. Every application needs to be tested to ensure that it will continue to work during and after the migration.

Anyone who has been through a platform upgrade will tell you that the IT department usually underestimates the number of applications in use within a company. The IT department certainly knows all about the headline applications that they have responsibility for, but they probably don’t have much knowledge about the applications developed by individual users and workgroups that become part of business processes. IT probably knows even less about how applications are connected at the workgroup level.

These applications range from Excel worksheets to Access databases to web-based systems that are used for a myriad of purposes. They often only come to light when the IT department launches a project to upgrade the client or server platform to a new software release and a user finds that their favorite application no longer works. We saw this happen when companies upgraded their server platform from Windows NT to Windows 2000 and from Windows 2000 to Windows Server 2003. The same experience occurred on client platforms, including upgrades of the Microsoft Office suite where macros in Word and Excel didn’t work in the new release or Access databases needed some rework to function properly. Given this experience, part of any preparation to move Exchange into the cloud should be a careful consideration of all the ways that email is used across the entire company to ensure that everything will continue to work after the transition.

Unified communications poses real challenges for hosting providers. The delivery model for hosted services is based on standardization, yet unified communications can involve telecommunications components from many different providers, some of which are relatively new and support the latest protocols and some of which are approaching obsolescence and were never designed to operate in a TCP/IP-centric world. If you want to move to a hosted environment, you can either rip and replace your telecommunications infrastructure to get onto supported platforms or find a hosting partner that is willing to accommodate your existing platform.

No discussion about applications and Exchange can ignore public folders, the last remnant of Microsoft’s original vision for Exchange as a platform for email and collaboration. Microsoft has consistently signalled its intention to move away from public folders since 2003, but customer reaction has forced them to keep public folders in the product and to commit to supporting them for ten years after the release of the last product version to formally include public folders. Given that public folders are still firmly in place within Exchange 2010 SP1, you can assume support until at least 2020. Companies that have thousands of public folders holding gigabytes of data used for purposes ranging from archives of email discussion groups to the storage behind sophisticated workflow applications have some work to do to figure out how their specific implementation of public folders would function in a cloud environment. For example, can a mailbox in the cloud access the contents of a public folder that’s homed on an in-house server? If the public folder uses a forms-based application, can the same mailbox access the form and load it with the right data to allow the user to interact successfully with the application? The answers matter because Microsoft does not support public folders in Exchange Online. The notion of support for customizable applications based on public folders runs totally opposite to what you want to achieve from a service delivered through a utility-based multi-tenant infrastructure – a bounded set of functionality delivered at a low price point that is achieved through massive scale because everyone gets the same service. Companies that use public folders have three options:

1.    Continue with their current deployment.

2.    Keep servers in-house but adopt some aspects of the cloud, such as using the Internet instead of dedicated communications links.

3.    Drop public folders, migrate their contents to another platform, and then move to Microsoft Online.

Much the same problem occurs when you consider aspects of other Microsoft applications such as OWA customizations (for example, changing the log-on page to display your corporate logo) or SharePoint web parts that are needed for an application. Utility services are just that – a utility. You wouldn’t expect to be able to ask the water or electric company to deliver you a custom service, so you shouldn’t expect to be able to ask Microsoft to deliver your customized form of Exchange or SharePoint from their utility service. It’s entirely possible that Microsoft will figure out how to allow companies to customize different aspects of online services in the future but I don’t expect this to be part of the offering we see in the next few years.

Privacy and security

Companies that move part of their infrastructure to the cloud assume that the cloud providers will protect their confidential data to the same extent as is possible when that data resides inside the company’s firewall. Consumers have long been happy to upload even their most private information (email, documents, photos, PC backups, and financial data) into the cloud, perhaps because they do not quite understand how the data is secured and protected against inadvertent exposure. There have been instances when consumer data was compromised through service provider errors. For example, http://www.techcrunch.com/2009/03/07/huge-google-privacy-blunder-shares-your-docs-without-permission/ describes one instance when Google documents were shared with other users without their authors’ consent.

The underlying problem is that the effect of a vulnerability discovered behind a company’s own firewall is limited by the processes that the company has put in place to handle this type of security incident. When a breach occurs in the cloud, you have no control over how the service provider addresses the problem and can only deal with the effect of the breach. The net effect is that cloud platforms remove one layer from the defence-in-depth strategy that companies have deployed to protect their data, because a breach that they might have avoided through their own security processes could affect them along with everyone else who shares the same infrastructure. In general, consumers don’t worry too much about this problem because they have benefited greatly from the advent of cloud-based services. Consumers get a service that is generally free while the service providers build an audience for advertising and a platform that they can sell to other constituencies.

The security and privacy issues that companies have are enormously complex when compared to those of an individual consumer, who doesn’t have obligations to shareholders, the market, regulators and government, customers, employees, or retirees. Another barrier for corporations is their reluctance to allow proprietary information to be stored on computers over which they have no control. Email is used as a vehicle to circulate vast amounts of data between users, including attachments of all types. Budgets, new product plans, details of potential acquisitions, HR information, strategic plans, and designs are just some of the kinds of information that might be included in worksheets, presentations, and documents attached to email. Companies are quite happy when this information is under their control and go to enormous effort to ensure that users understand the consequences of releasing proprietary secrets externally. It takes a leap of faith to achieve the same degree of happiness when company secrets reside on servers in a datacenter that is not under the company’s direct control.

Outsourcing providers address security and privacy concerns by dedicating space to customers in datacenters that are owned and managed by the outsourcer or the customer. The dedicated space holds the computers and other infrastructure to host the applications and data. The operators that look after these systems have probably performed the same work for years and have great understanding of the importance of the data to the company. Experience accumulated over decades of outsourcing has created a security environment and methodology to protect even the most sensitive data that a company might possess.

Things are different in the cloud because dedicated infrastructure is a rarity rather than the norm. Instead, everyone’s applications and data run on multi-tenant infrastructures in huge datacenters. It’s certainly possible to isolate data that belongs to a specific company, but the economics of large-scale cloud hosting seek to eliminate cost wherever possible to deliver a service at the lowest possible price point, so operations staff will have little real knowledge of the companies that they serve or of the importance of their data and applications to those companies. An inadvertent slip by the operations staff could expose the data belonging to many companies and no one might know until it is too late. Data belonging to a company under U.S. Securities and Exchange Commission (SEC) supervision might be intermingled with data belonging to companies that have to comply with U.S. Food and Drug Administration (FDA) regulations. All can be affected by a single software bug or administrative foul-up. This risk is acceptable for private individuals who store their personal email on a service such as Gmail, but senior executives might take a different view if they are asked to store email that contains extremely confidential information on the same service. Think of the issue in this way: no great harm really accrues if Jane’s working document about family holiday plans is inadvertently exposed to others through a software or operational failure. Great harm to a company may accrue if a document describing the terms of a potential take-over is revealed through the same failure. Companies that want to move to the cloud have to take this risk into account when they make their decision and be absolutely sure that the service provider has taken all reasonable steps to ensure that no such problem will occur over the lifetime of the contract.

Keeping corporate secrets intact is clearly important but sometimes even corporate secrets have to be revealed as the result of legal action. Taking the steps to search a mail system for specific documents in response to a legal discovery request is reasonably straightforward for an on-premises deployment but might only be possible at additional cost or with an extended deadline on a cloud platform.

Another issue worth considering is the physical location of the data. For example, if the data resides in a datacenter in the United States, it comes under the purview of the USA PATRIOT Act, which permits U.S. law enforcement agencies to search electronic data. This fact probably doesn’t enter the mind of a CIO as they listen to a pitch on the wonders of the cloud, but it serves to illustrate that moving data around can expose companies to new supervision and regulatory oversight that they hadn’t considered. Companies that consider a move to a cloud platform therefore have to satisfy themselves as to the overall security regime of the service provider, how the service provider protects the privacy of the company’s data (including compliance with any country-specific requirements relating to employee and customer data), and the procedures and service level agreements that apply for common circumstances such as legal discovery, protection of data against virus infection, protection of users against spam and other irritants, and end-to-end security from client to server including transmission across the Internet.

Costs

A reduction in hardware costs is an anticipated advantage of moving work into the cloud. You won’t have to deploy servers and storage to host applications. You should also gain by paying less for software licenses for the servers. This isn’t just a matter of Exchange server licenses; it’s also the Windows server licenses and any associated software that you deploy to create an ecosystem to support Exchange. Common examples of third-party products that often run alongside Exchange include anti-virus, monitoring, and backup products, all of which have to be separately investigated, procured, deployed, and maintained. Your overall spend on software should be more efficient because you will pay for mailboxes that are actually used rather than for the maximum number of licenses that you think you need.

However, additional costs will arise in a number of areas. You’ll have to pay for the work to figure out how to synchronize your Active Directory with the hosting provider, how to transfer mailboxes most efficiently, and how to change your administrative model to work in the cloud. It’s also likely that the network connection that you currently use to the Internet is sized for a particular volume of traffic. That volume will grow as you transfer workload into the cloud. Traffic generated by applications that once stayed inside the organization will have to be transported to Microsoft and perhaps back to your network. Depending on the online applications that you subscribe to, this traffic includes sending email between users in your company, authentication requests, access to SharePoint sites, Office Communicator connections, and so on. Directory synchronization and other administrative traffic will make additional demands as will the need to move mailboxes during migration.

The point is that you cannot plunge into the cloud if you aren’t sure that your network has sufficient bandwidth to cope with the new traffic and the necessary infrastructure to handle the load of authenticated connections that will now flow into and out of your company. Firewalls, routers, and other network components might need to be upgraded and you might need to pay for additional software to handle network monitoring and security.

Datacenter and other capital expenditure is another aspect of cost to consider. Many companies make long-term capital investments in datacenters and IT equipment (servers, storage, network switches, and so on) and expect to leverage that investment over many years. This investment is already on the balance sheet and the cost will be incurred even if the capacity goes unused, unless it can be sold to a third party, which is difficult to do for datacenter space and used IT equipment. It might not be cost effective to move to a cloud-based solution until these capital costs are fully written off, or to accept the accelerated depreciation on any capital equipment that the move to a cloud-based service would make redundant. On the other hand, a company that is reviewing new capital investment in IT should consider how cloud services affect its plans so as to balance expenditure between the investments that have to be made to support IT infrastructure that cannot be moved to the cloud and the purchase of cloud-based services. The cost of on-boarding services must also be considered, as user mailboxes will not migrate themselves into the cloud automatically and some process and effort has to be put in place to coordinate and manage the transition.

The need for a back-out plan

Many things can happen that would force you to consider moving away from a cloud-based service. You might find that the service doesn’t deliver the kind of functionality that users expect. You might find that the costs of the service are higher than you expect. Your company might be taken over by another company that has a successful and cost-efficient in-house deployment of Exchange that they want to continue to use. Or you may decide that transferring email to Microsoft Online creates a lock-in situation that you are not happy with.

Whatever circumstances you face, it is simply good sense to figure out what your back-out plan will be before you need to use it. Think through the different scenarios that might occur over the next five years and chart out your response to each so that you have an answer. That answer might be flawed and incomplete, but at least it is an initial thought on how to approach the problem if it occurs. Apart from anything else, thinking about how to retreat from the cloud might inform your thinking about how to enter the cloud and reveal new issues that you need to include in your implementation plan. For example, ask yourself how quickly you could move mailboxes back to an in-house infrastructure. Your company could go through periods of merger and acquisition, so you need to understand how mailboxes can be moved out and transferred along with the assets of a company that is sold, and indeed how to merge in new mailboxes from acquired companies, including those that run non-Microsoft messaging systems. How can you transfer Lotus Notes mailboxes or Gmail mailboxes? How can you synchronize calendars with a newly acquired company without moving the infrastructure used by that company to Microsoft Online? While it might seem natural to move a newly acquired company to a joint platform as quickly as possible, laws in certain countries might slow the pace of a merger and force the maintenance of dual infrastructures for extended periods.

Another good debate to have concerns headcount. While you might need fewer messaging administrators to support a cloud solution after your migration is complete, maybe you should keep some in-house messaging expertise in place just in case you need to reverse course. Having that expertise available allows a company to continue to track the evolution of Microsoft Online and to solve the synchronization and other administrative issues that ongoing operations can be expected to throw up. Not having to bring in expensive consultants is also a good thing when dealing with common messaging scenarios such as mergers and acquisitions, the recovery of lost email, legal search and discovery, and so on. It is a good idea to review the out-of-norm scenarios that have occurred in the current messaging infrastructure over the last few years and build a list that can be checked against the online environment so that you understand what the hosting provider can do, what your company will have to do in each circumstance, and the SLA that the hosting provider will sign up to.

One small problem with the cloud analogy

The proponents of cloud computing eagerly seize on the notion of utility as the cornerstone for price reduction. Utility services such as electricity, gas, or water are pointed to as the metaphor for cloud computing. However, the question that needs to be asked is whether email or other collaborative applications process the same kind of dumb commodity that flows through electricity wires or water pipes. The answer is that they do not. Instead, the data that people share and work on with applications such as Exchange often represents some of a company’s prime intellectual property, and you cannot compare that data with a utility commodity that can leak or otherwise disperse en route between a generating station and its end consumer. In short, the data manipulated in collaborative applications is neither dumb nor disposable and cannot therefore be compared to the output of normal utilities.

Rushing to embrace cloud-based applications is dangerous unless you can be assured that the utility platforms deliver applications in such a way that your data is preserved, secured, available, and never compromised. By all means, exploit principles such as automation and standardization that underpin cloud computing to reduce cost and improve service, but always keep an eye on the balance between cost and value and don’t allow a race to the bottom to develop that impacts your ability to deliver world-class IT to users.

The cloud in the future

Cloud services are a major influence on the future delivery of IT to end users. The question is how quickly different organizations are able to embrace and use the cloud and what challenges exist along the road. The choice to use an online email service will be easy for some organizations, especially those that either don’t have an existing email service or don’t make any customized use of their current email service. Things get a lot more complicated the longer an organization has been using email, the more mailboxes it supports, and the more deeply email is integrated with other applications. In these situations, you may find that online services are just a little too utilitarian in nature and that a more traditional outsourced or in-house deployment (hopefully one that takes advantage of cloud delivery principles to reduce cost) continues to meet your needs better.

Posted in Exchange, Exchange 2010 | Tagged , , , , , | 4 Comments

Installing an Exchange 2010 Roll-up Update


Microsoft announced roll-up update 2 (RU2) for Exchange 2010 SP1 amongst the set of updates for different versions of Exchange that the CXP team released in December 2010. I believe that CXP means “customer experience”; in previous times this team might have been known as “ongoing support” or “maintenance engineering”. In any case, they’re the people who coordinate all of the support work done for released versions of Exchange and bring out the roll-up updates on a regular basis.

A roll-up update is not a service pack. Instead, it’s a consolidated set of known fixes that you can apply at one time to bring an Exchange server completely up-to-date. Each roll-up update includes all of the fixes from previous releases, which means that when you install RU2 for Exchange 2010 SP1 you gain the benefit of all of the fixes previously released in RU1.

Different roll-up updates are made available for all of the Exchange versions that are currently supported and the December set includes updates for Exchange 2010 SP1, Exchange 2010 RTM, Exchange 2007 SP3, and Exchange 2007 SP2. The next set of updates is due to be released sometime in February 2011 as Microsoft aims to provide ongoing updates to allow administrators to keep their servers in a clean state of health.

Welcome to the installation procedure - how nice!

After starting up, the update process checks whether the “Check for publisher’s certificate revocation” security setting is active. If your server is disconnected from the Internet or otherwise cannot contact Microsoft, the update process will be unable to consult the certificate revocation list (CRL) to verify the code signing certificate each time it attempts to compile a .NET assembly into managed code, and it will run into a problem every time it wants to check the CRL. The installation procedure checks for this condition and, if it is found, flags an error, as shown below.

Problem detected during the installation of RU2

Exchange 2007 and Exchange 2010 both make extensive use of .NET assemblies and you can be guaranteed that quite a number of these will be updated. To fix the problem, follow the steps outlined in this TechNet article to disable checking of the CRL and allow the update to proceed smoothly. The article refers to Exchange 2007 but the same issue and same solution is valid for Exchange 2010.
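
If you prefer to inspect or change that setting from the shell, a minimal hedged sketch follows. The registry key shown is the one that backs the per-user “Check for publisher’s certificate revocation” setting, and the value data of 146432 (0x23C00) is the figure commonly cited for disabling the check; treat both as assumptions to verify against the TechNet article before relying on them.

# Registry key behind the per-user "Check for publisher's certificate revocation" setting
$key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing'

# Show the current value
Get-ItemProperty -Path $key -Name State | Format-List State

# Assumed value: 146432 (0x23C00) is widely quoted as "revocation checking disabled".
# Re-enable the check after the update if your security policy requires it.
Set-ItemProperty -Path $key -Name State -Value 146432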

Apart from that, the installation of a roll-up update for Exchange 2010 goes smoothly. The process will:

  1. Do some preparation such as performing the CRL check explained above.
  2. Stop the set of Exchange services running on the computer.
  3. Validate that everything is ready to allow files to be copied (for example, ask you to exit EMC if it is still running).
  4. Create the necessary managed code images from the .NET assemblies.
  5. Copy the changed files.
  6. Restart the Exchange services.

The longest part of the process is when the .NET assemblies are compiled. However, even on a virtual machine running on a laptop, this part of the RU2 installation only took a few moments and it will be much faster on a production-quality server.

RU2 compiles some .NET assemblies

And once everything is done you’ll see a comforting message to tell you that it’s time for a coffee and a well-deserved rest:

All done - Exchange 2010 is updated with RU2
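
Before you walk away for that coffee, a quick check from the shell that all of the Exchange services came back is a reasonable habit. This is just a convenience sketch rather than part of the official procedure:

# List any Exchange services that are not running after the update completes
Get-Service MSExchange* | Where-Object {$_.Status -ne 'Running'} | Format-Table Name, Status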

Before you apply a roll-up update to a mailbox server operating within a Database Availability Group (DAG), you should transfer any active databases to other servers within the DAG. You can also block activation of the database copies on the server that you want to upgrade until everything is done by running the Set-MailboxServer cmdlet to set the DatabaseCopyAutoActivationPolicy property to “Blocked”. Afterwards, you can run Set-MailboxServer again to reset the property to “Unrestricted” and then transfer the databases back onto the upgraded server.
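
As a rough sketch of that sequence (EXSERVER1 is simply an example server name and the database name DB1 is hypothetical), the commands look something like this:

# Block automatic activation of database copies on the server to be updated
Set-MailboxServer -Identity EXSERVER1 -DatabaseCopyAutoActivationPolicy Blocked

# Move any active databases off the server before installing the roll-up update
Move-ActiveMailboxDatabase -Server EXSERVER1

# ...install the roll-up update, then allow activation again...
Set-MailboxServer -Identity EXSERVER1 -DatabaseCopyAutoActivationPolicy Unrestricted

# Transfer a database back onto the upgraded server
Move-ActiveMailboxDatabase -Identity DB1 -ActivateOnServer EXSERVER1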

In a previous post, I describe how to extract the version number for Exchange. The interesting thing about the installation of RU2 is that the version number as reported by anything other than EMC does not change. All of the methods described in the post continue to report 218.15 as the version number after RU2 is applied to a server. This is logical because the data comes from the Exchange configuration data held in Active Directory and this isn’t updated when you install a roll-up update. By contrast, if you check the Help About information in EMC, the MMC framework (which is what EMC runs in) reports the version information from the file Microsoft.Exchange.Management.NativeResource.dll in the \BIN directory. This is why you see EMC report a version of 14.01.0270.001 after RU2 is installed (shown below) whereas EMC on a plain vanilla SP1 server reports 14.01.0218.013.

For those interested in trivia, it’s also worth noting that files updated by the installation of RU2 will have a creation date of November 19, 2010 compared to the August 22, 2010 date for files distributed with Exchange 2010 SP1.

Version information reported by EMC after an installation of SP1 RU2
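
For reference, here is a hedged sketch of checking both numbers from EMS; the $exbin variable (pointing to the \BIN directory) is assumed to be defined in your shell session, so adjust the path if it isn’t.

# Version as recorded in the Exchange configuration data in Active Directory
# (this is why the number does not change after a roll-up update)
Get-ExchangeServer | Format-Table Name, AdminDisplayVersion

# File version of the DLL that EMC reports through the MMC framework
(Get-Item (Join-Path $exbin 'Microsoft.Exchange.Management.NativeResource.dll')).VersionInfo.FileVersion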

In summary, the installation of a roll-up update doesn’t cause many brain cells to be destroyed for an Exchange administrator and is definitely one of the regular maintenance activities that you should schedule for your servers.

– Tony

Posted in Exchange, Exchange 2010 | Tagged , , , , | 9 Comments

On the naming of DAGs


Most Exchange administrators, even those who don’t have much hands-on experience with Exchange 2010, are now aware that the Database Availability Group (DAG) feature is built on Windows 2008 failover clusters. Exchange 2010 does an excellent job of hiding the complexities of Windows clusters and in normal operation you aren’t aware that the DAG operates on top of a cluster. The clusters created by Exchange to be used for DAGs must be dedicated to the DAGs and not used for other purposes. This is logical because there’s a high potential that you might make a mistake that affects Exchange when you configure the cluster for another application. You don’t know what black magic is invoked by Exchange when it runs the New-DatabaseAvailabilityGroup cmdlet to create a cluster or the Set-DatabaseAvailabilityGroup cmdlet to amend the properties of a cluster.

Wrapping the DAG around Windows failover clusters makes a lot of sense. It avoids the need to create yet another mechanism to monitor and execute failovers when problems arise. It also allows Exchange to select the parts of failover clustering required to make the DAG effective while ignoring others, and to benefit from the ongoing investment made by the Windows development team to improve their cluster technology.

The only downside is that Exchange 2010 has to accept any limitations that originate in the use of failover clustering. The most obvious limitation is that you can only include a maximum of sixteen mailbox servers in a DAG (because that’s the limit set for failover clusters). Naming the Cluster Name Object (CNO) used for the cluster is probably the next most common limitation that administrators will encounter.

Before we discuss naming schemes for the CNO, it’s good to review the material contained in the article Microsoft clustering technology in Windows Server 2008 to understand the use of the CNO. Here is the important piece:

In Windows Server 2008 Failover Clusters, the cluster service no longer runs in the context of a domain user account. Instead, the cluster service runs in the context of a local system account that has restricted rights to the cluster node. By default, Kerberos authentication is used. If the application does not support Kerberos authentication, NTLM authentication is used.

During the installation of the Failover Cluster feature, a domain user account is only required for the following tasks:

  • When you run the Failover Cluster Validation process.
  • When you create the cluster.

To complete each of these tasks, you must have the following access and rights:

  • Local administrator access to each node in the cluster
  • Rights to create computer objects in the domain

After a cluster is created, the domain user account is no longer required for the cluster to function appropriately. However, the domain user account is required to administer the cluster. To administer the cluster, this domain user account must be a member of the Local Administrators group on each cluster node. During the Create Cluster process, a computer object is created in Active Directory Domain Services (AD DS). The default location of this computer object is the Computers container.

The computer object that represents the name of the cluster becomes the new security context for this cluster. This computer object is known as the Cluster Name Object (CNO). The CNO is used for all communications with the cluster. By default, all communications use Kerberos authentication. However, the communications can also use NTLM authentication if it is necessary.

Note Computer accounts that correspond to the cluster Network Name resources can be pre-staged in AD DS. The computer accounts can be pre-staged in containers other than the Computers container. If you want to set a pre-staged computer account to be the CNO, you must disable this computer account before you create the cluster. If you do not disable this computer account, the Create Cluster process will fail.

The CNO creates all other Network Name resources that are created in a Failover Cluster as part of a Client Access Point (CAP). These Network Name resources are known as Virtual Computer Objects (VCOs). The CNO Access Control List (ACL) information is added to each VCO that is created in AD DS. Additionally, the CNO is responsible for synchronizing the domain password for each VCO it created. The CNO synchronizes the domain password every 7 days.

Now that we know the function of the CNO for the cluster, we can consider its impact on the DAG. The first thing to realize is that a newly created DAG is just an empty container that remains inert and unused until you add some mailbox servers to the DAG and then (more importantly) create some mailbox database copies through the seeding process.
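
To make the “empty container” point concrete, here’s a minimal hedged sketch of populating a DAG after it has been created; the DAG (assumed to already exist), server, and database names are hypothetical.

# Add mailbox servers to an existing (empty) DAG
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX2

# Create and seed a copy of a database on the second server
Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer MBX2 -ActivationPreference 2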

There’s no practical limit on the number of DAGs that can exist inside an Exchange organization and you can create DAGs well in advance of their actual deployment. I wouldn’t go and create a set of DAGs without first sketching out on paper some salient facts about how the DAGs might be used in production. For example, you might consider the following questions to provide a guide to the DAGs that you want to create:

  • The purpose of each DAG
  • What mailbox servers will join each DAG and the hardware configuration that each server will use (CPU, memory, disks, NICs, etc.)
  • What mailbox databases will be in each DAG, how many copies of each database will exist, and what the optimum distribution of active and passive database copies within the DAG will be
  • Who will be responsible for the administration of the DAG
  • What kind of resilience against potential problems will be factored into each DAG design
  • The network configuration that will underpin each DAG
  • When each DAG will be placed into production

Capturing data like this on paper encourages the development of the first pass of a solid DAG design for the organization. It also provides you with the data that you’ll need to bring the design to the next level by using tools such as the Microsoft Exchange 2010 mailbox server role requirements calculator. Hardware vendors will likely have their own tools that you can use to fine-tune server and storage designs.

Each DAG has its own cluster and therefore its own CNO in Active Directory. The name of the DAG is also the name of the CNO in Active Directory. The question therefore is how to name the DAG/CNO. I must admit that I like simplicity and therefore use names such as “DAG1”, “DAG2”, and so on whenever possible. Users never see DAG names, so the real issue is what names make sense to administrators. In some circumstances it might make sense to include the name of the part of the company that uses the databases in the DAG, so you’d have names like “DAG-FINANCE” and “DAG-TRADING”. This is fine as long as you pick names that aren’t likely to change due to organizational disruption and upheaval (using more granular names such as “DAG-DEPARTMENT12” is not recommended). You could also use country or regional names to identify the DAG, an approach often taken by companies who organize IT management on a geographical basis. Names such as “DAG-DUBLIN” and “DAG-NYC” are just fine because Dublin and NYC are locations that are unlikely to vanish anytime soon. Including a more granular location name might be problematic if the company closes a location. In any case, you have up to 15 characters to create a name that must be unique within the Active Directory forest. Given that most Exchange organizations will have fewer than ten DAGs, it’s reasonable to assume that creating ten logically well-named DAGs shouldn’t be a huge burden!
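
To make this concrete, creating a DAG with a short, stable name is a single command; in this hedged sketch the witness server and directory are placeholder values.

# Create a DAG with a simple name that stays within the 15-character limit
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer HUB1 -WitnessDirectory 'C:\DAG1\Witness'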

Exchange 2010 will protect you against making mistakes with DAG naming. For example, if you use EMC and attempt to create a DAG with a name that’s too long, you won’t be able to type past 15 characters as EMC will stubbornly refuse to allow you to type in the 16th character:

EMC pauses at the 15th character of the DAG name

Much the same protection is offered if you attempt to input an invalid character. Remember, the DAG name is the same as the CNO name and Windows doesn’t allow you to put anything but “normal” characters in a computer name.

An invalid character is entered for a new DAG name

Of course, you could try and get around the limitations imposed by EMC and attempt to create the new DAG with the New-DatabaseAvailabilityGroup cmdlet. The screen below shows what happens if you attempt to use too long a name or one that contains an invalid character.

Can't create bad DAG names through the shell either
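
For reference, the failing attempt corresponds to a command along these lines; the over-long name is taken from the error reported in the log that follows.

# Fails name validation because the name exceeds the 15-character computer name limit
New-DatabaseAvailabilityGroup -Name 'DAG-NameWayTooLongForE14'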

The interesting thing here is that EMS also generates a log file. The reason is that EMS has actually attempted to run the cmdlet whereas EMC never got to that point as the safeguards built into the New Database Availability Group wizard UI stopped the creation attempt beforehand. If we investigate the log file in the DAG Tasks folder under ExchangeSetupLogs, we can see the following:

new-databaseavailabiltygroup started on machine EXSERVER1.
[2011-01-04T10:38:05] new-dag started
[2011-01-04T10:38:05] commandline: $scriptCmd = {& $wrappedCmd @PSBoundParameters }
[2011-01-04T10:38:05] Option 'Name' = ''.
[2011-01-04T10:38:05] Option 'WitnessServer' = ''.
[2011-01-04T10:38:05] Option 'WitnessDirectory' = ''.
[2011-01-04T10:38:05] Option 'WhatIf' = ''.
[2011-01-04T10:38:05] The operation wasn't successful because an error was encountered. You may find more details in log file "C:\ExchangeSetupLogs\DagTasks\dagtask_2011-01-04_10-38-05.338_new-databaseavailabiltygroup.log".
[2011-01-04T10:38:05] WriteError! Exception = Microsoft.Exchange.Management.Tasks.DagTaskDagNameMustBeComputerNameExceptionM1:
The name of a database availability group must be a valid, unique computer name. The specified name ('DAG-NameWayTooLongForE14') does not meet these requirements.

Hopefully this helps in your planning for the deployment of Exchange 2010… More details about the planning and operation of Database Availability Groups can be found in chapter 8 of Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk. The book is also available in a Kindle edition. Other e-book formats for the book are available from the O’Reilly web site.

Cheers,
Tony

Posted in Exchange, Exchange 2010 | Tagged , , , , , , | 9 Comments

Removing mailbox export and import requests


Exchange 2010 SP1 introduces a new cmdlet set to handle mailbox export requests to avoid the previous requirement to install Outlook on mailbox servers. With SP1 you can run the New-MailboxExportRequest cmdlet to create a request for the Mailbox Replication Service (MRS) running on a Client Access Server (CAS) to export data from a mailbox to a PST located in a file share that is protected from prying eyes by the Exchange Trusted Subsystem security group.

Much the same approach is taken for mailbox import requests. In this instance, you create an import request with the New-MailboxImportRequest cmdlet to instruct MRS to access data in a PST located in a file share and import the data into a target mailbox.
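
For illustration, here’s a hedged sketch of both request types; the mailbox name and share path are hypothetical, and the share needs to grant access to the Exchange Trusted Subsystem group as noted above.

# Ask MRS to export a mailbox to a PST on a network share
New-MailboxExportRequest -Mailbox 'Jane Smith' -FilePath '\\FileServer\PSTShare\JSmith.pst'

# Ask MRS to import the contents of a PST on the same share into a target mailbox
New-MailboxImportRequest -Mailbox 'Jane Smith' -FilePath '\\FileServer\PSTShare\JSmith.pst'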

It seems clear that Microsoft is steadily moving toward using MRS as the fulcrum for mailbox maintenance operations. It therefore comes as no surprise that the mechanism used for mailbox export and import requests shares some of the maintenance issues that afflict mailbox move requests, which were the first operation moved to MRS in Exchange 2010. Mailbox export and import requests accumulate and are not cleared out automatically, even when they are successful. There is no aging mechanism either, whereby Exchange might clean up old requests after they exceed a set limit such as 30 or 60 days. The logic might be that you may want to retain export or import requests as proof that a certain operation occurred.

The upshot is that you should remove old requests periodically. This is much the same basic housekeeping that you have to perform for mailbox move requests as described in this post.

The appropriate way to do this is to run the following commands to clean up old import and export requests that have completed successfully (be sure that you don’t want to keep any requests for whatever reason before you execute these commands).

Get-MailboxImportRequest -Status Completed | Remove-MailboxImportRequest

Get-MailboxExportRequest -Status Completed | Remove-MailboxExportRequest
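
If you want to preview what would be removed before committing, the standard -WhatIf switch gives a harmless dry run:

Get-MailboxExportRequest -Status Completed | Remove-MailboxExportRequest -WhatIf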

Some administrators perform mailbox exports before they delete mailboxes to be sure that all of the mailbox content is accessible should the need arise. Others opt to hide the mailbox and disable the Active Directory account and are happy for the mailbox content to stay in the database for a month or so before they delete the mailbox. Whatever approach you take, you won’t be able to remove old mailbox import or export requests if you delete the Active Directory account beforehand. This is because Exchange uses some Active Directory attributes to link the mailbox to the information that MRS maintains about requests, so you’ll end up with a “lingering” or “orphan” export or import request if you delete the Active Directory account before you clean them up.

Microsoft acknowledges that this is a bug and code will no doubt appear in the future to manage orphan import and export requests a tad better. In the meantime, the sequence is:

  1. Export the mailbox and check that the export request completed successfully (use the Get-MailboxExportRequestStatistics cmdlet with the -IncludeReport parameter).
  2. Check that all is well with the data exported to the PST (open the PST and validate the contents).
  3. Remove the export requests for the mailbox with the Remove-MailboxExportRequest cmdlet.
  4. Delete the Active Directory account and mailbox. This can be done with the Remove-Mailbox or Remove-StoreMailbox cmdlets; Remove-StoreMailbox is new in Exchange 2010 SP1 and removes a mailbox from its database immediately. (A sketch of the commands for steps 1, 3, and 4 follows this list.)
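
Here is a hedged sketch of steps 1, 3, and 4 for a single mailbox; the mailbox name is hypothetical and step 4 uses the Remove-Mailbox variant.

# Step 1: check that the export completed and review the report
Get-MailboxExportRequest -Mailbox 'Jane Smith' | Get-MailboxExportRequestStatistics -IncludeReport | Format-List Status, Report

# Step 3: remove the export request once the PST has been validated
Get-MailboxExportRequest -Mailbox 'Jane Smith' | Remove-MailboxExportRequest

# Step 4: delete the Active Directory account and mailbox
Remove-Mailbox -Identity 'Jane Smith'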

MRS is certainly a central player for Exchange management going forward. For now, it works very well in most circumstances and issues like this need only minor tweaking. For more information about how to export mailbox data with Exchange 2010 SP1, see pages 758 to 760 in chapter 12 of Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk. The book is also available in a Kindle edition. Other e-book formats for the book are available from the O’Reilly web site. General information about how MRS processes mailbox export and import requests can be found on pages 747-758.

– Tony

Follow Tony @12Knocksinna

Posted in Exchange, Exchange 2010 | Tagged , , , , , , , , | 14 Comments

Tales of the unproductive TMO


I fear that I am the most unproductive TMO in rugby. The last six matches that I have been involved in have seen just one potential try decision referred to me (480 minutes for one decision, very low productivity). I guess I have been fortunate to work with referees such as Alan Lewis, Alain Rolland, Steve Walsh, Stuart Dickinson, and Bryce Lawrence, all of whom get into great position to make their own decisions about tries. In any case, Saturday found us in Toulon for the Heineken Cup game against London Irish. Toulon had beaten London Irish in England last weekend so this was a “do or die” match for London Irish as a win was an absolute necessity if they had any chance of progressing in the competition.

The game was pretty uninspiring in the first half. Fourteen penalties and an interminable series of line-outs and unsatisfactory scrums in what was supposed to be a high-class game of professional rugby tells its own story. Eventually Alain Rolland (referee) told both captains that there was far too much interference with the ball on the ground and warned them of the likely consequences. The second half started with London Irish facing a 17-0 deficit and exploded into action after the London Irish 11 (Sailosi Tagicakibau) was dismissed after he wrapped a Toulon player in a tackle some 10m from the Irish line. He had been yellow-carded in the 21st minute for a deliberate knock-on, so, as night follows day, another yellow card emerged from Alain’s pocket and was transformed into a red card once Alain realized that the numberless jersey really was number 11.

The red card was a good example of a team sanction, but it was not intelligent for a professional player who had already received a yellow card to put himself, after a team warning, into a position where the referee had to deliver on that warning and card the next player who infringed, especially in the “red zone”. I’m sure that his team had plenty to say about the red card as it seemed to create an insurmountable challenge: Toulon is difficult enough to play at home with a full 15; giving them a one-man advantage seems unwise to say the least.

However, as it turned out, losing a player spurred on London Irish and they rapidly scored a penalty and two converted tries to regain parity at 17 all. Things were interesting in the broadcasting truck where I could hear the moans of disbelief from the French contingent as Toulon struggled to cope with 14 opponents. However, when a team is a man down they have to be very precise about how they play and the wheels came off London Irish’s comeback in the 67th minute when a turnover resulted in the third Toulon try. This was swiftly followed by two further tries to secure a 38-17 win and bonus point for Toulon and to leave London Irish bereft of anything to show for their effort.

Our travel this weekend was pretty smooth because we had the luxury of avoiding a transit and used the direct Dublin-Nice route with Aer Lingus. It’s a great luxury to be able to take a direct flight to a European match instead of being forced to go through the hell-holes of CDG or LHR. Two other Irish refereeing teams had significant problems with snow and ice elsewhere in France and Italy that resulted in match delays. When we reached the airport on Sunday, we thanked our good luck again as we found hundreds of people trying to arrange flights due to closures in Frankfurt and Heathrow.

I bumped into Bob Casey, the second row of London Irish, at the airport. I’ve known Bob since he played schools rugby for Blackrock College. He played against Toulon but had come off in the 77th minute after a clash of heads with a teammate. Fortunately, his sunny good humour was still evident, perhaps due to the prospect of returning to Ireland for some pre-Christmas cheer. Bob writes an interesting rugby column from the perspective of a professional player in the Irish Times every Monday. It was interesting to hear his view on the game and how things went.

Rugby travel is now over for 2010. It will resume in January when the last two pool rounds of the Heineken Cup are played followed soon afterwards by the Six Nations. In the meantime, it’s time to have a rest and enjoy being at home for a while.

– Tony

Posted in Rugby | Tagged , , , , | 1 Comment

Cameras and the TMO


Following on my post describing some of the challenges facing officials engaged in professional rugby, I received some notes asking me what kind of TV camera angles are available to a TMO when they are asked by a referee to give a decision about a try. I think the question is driven by the pictures that audiences see at home during the time between the referee stopping play and the decision being taken. There’s a feeling that the TMO has access to more cameras and more angles than the TV viewer so the question really is what those pictures might be.

The first thing to say is that many different TV broadcasters cover rugby and they all approach the job slightly differently. For example, in Ireland we have TG4 and RTE, in France there’s Canal Plus and FR2, and in the UK there’s the BBC and Sky Sports. Broadcasters decide whether to send their own crews to matches or to hire an independent company to do the job.

The next variable is the director. Each director will have their own way of covering matches and will ask their camera crews for different shots and angles. And then there are the camera crews – some are more accustomed to covering rugby and are better able to predict what might happen in a play and where the important action will take place. Others are more “artistic” and produce beautifully framed pictures that might not capture the essential detail that a TMO is interested in when the time comes to determine whether a try is scored. In order of priority, our focus is on the ball, the goal line, the ball carrier, and other bodies that may prevent grounding. We want to know where the ball ended up and how it got there and look to the broadcaster to provide pictures that let us determine this information.

Chart showing TV camera positions for Perpignan v. Leicester Heineken Cup match

When you turn up for a game, the first port of call for the TMO is to locate the director. Often they are to be found in an outside broadcast command center in a large truck parked in close proximity to the stadium. The order of business is to introduce yourself as the TMO and then discuss with the director how they will provide pictures at decision time. We also test the communications equipment to make sure that we have two-way contact with the referee.

The director will have a camera plan similar to the one shown above that describes the placement of the fixed cameras and any hand-held cameras that are available. Each camera is described in terms of its capabilities and the name of the operator and the director uses this information to call for different pictures as the game progresses. You can see the kind of detail that’s available in the chart that’s illustrated.

Fixed cameras are located in the stands or in special stands behind the goals or close to ground level around the half-way line. They are also found in cherry-pickers that are hired for the game (usually these are the high cameras that are positioned behind the goal posts at each end of the ground). Hand-held cameras roam the touch lines to capture close-up action and can be invaluable for touchdown calls, providing they are in good position to capture the play and are not blocked by an assistant referee or players.

The number of cameras varies from match to match depending on its importance and the willingness of the broadcaster to spend money on the coverage. Most high-quality (international and Heineken Cup) games will have two high cameras on one stand and another on the opposite stand, high cameras at each end of the ground, low-level cameras monitoring the goal line from the dead-ball area, and at least two hand-held cameras. Depending on the ground, extra fixed cameras might be installed at low-level on the sideline.

With such an array of cameras following every play we have an excellent chance of being able to see a try being scored, so why does it sometimes take so long for the TMO to make a decision? Well, we first have to listen to what the referee wants us to check. This can be a simple check to ensure that a player grounded the ball properly to score or it can be a more complex affair of checking that no foul play or other action occurred in the immediate vicinity of the try-scoring attempt. Next, we have to be sure that we know what happened and sometimes it just takes a little time to figure it all out. It would be nice if every decision was obvious but the nature of the human body means that trailing legs or leading arms can brush a touch line and then the question is whether the player had grounded the ball before going into touch. Or maybe an opponent slid some part of their body under the ball to prevent grounding. Or maybe the sheer dynamic and physically confrontational nature of rugby led to ten players colliding in a small part of the pitch and a try may or may not have been scored…

A good director who knows rugby will quickly figure out what the best picture and angle will be and will feed it to the TMO. After that it’s just a matter of working the angles and exploring the available pictures to determine what happened to the best of the TMO’s ability. We often see different pictures than those shown on the broadcast simply because we might want to examine things in rather more detail than the average viewer would be interested in – but it’s up to the director to decide which pictures are broadcast.

Having High-Definition (HD) pictures available makes things easier, but HD isn’t always provided. HD makes lines clearer and exposes detail that just isn’t visible on standard monitors, and it helps a TMO understand what has occurred faster and more accurately. It’s a silly position to be in when broadcasters pump HD pictures to viewers at home while the TMO has to make decisions from lesser-quality pictures. Hopefully HD will become the standard in outside broadcasting facilities soon.

So that’s what happens – no magic bullets to help make decisions and no smoke and mirrors anywhere. Just sheer hard work after the mild panic that occurs when the referee stops play and says “Hallo Tony – can you hear me? Can you tell me…”  Good fun all round.

– Tony


One or two NICs?


During the development of Exchange 2010, Microsoft originally required that any mailbox server participating in a Database Availability Group (DAG) had to be equipped with at least two NICs (Network Interface Controllers). One NIC handles “MAPI” or client traffic; the other handles the replication traffic generated by Exchange to keep database copies updated within the DAG.
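
To make the split concrete, here is a minimal Exchange Management Shell sketch of how a two-NIC design typically looks (the DAG name “DAG1” and the network names “MapiNetwork” and “ReplNetwork” are invented for illustration; check the names that your own DAG reports before changing anything):

    # List the networks that the DAG currently knows about
    Get-DatabaseAvailabilityGroupNetwork -Identity DAG1 |
        Format-List Name, Subnets, ReplicationEnabled

    # Keep log shipping and seeding off the client-facing (MAPI) network...
    Set-DatabaseAvailabilityGroupNetwork -Identity DAG1\MapiNetwork -ReplicationEnabled:$false

    # ...and leave replication enabled on the network served by the second NIC
    Set-DatabaseAvailabilityGroupNetwork -Identity DAG1\ReplNetwork -ReplicationEnabled:$true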

Not everyone appreciated Microsoft’s stance because it imposed additional cost on smaller installations that wanted to run highly available mailbox servers but didn’t want to buy new hardware or update servers with additional NICs. There might seem to be a disconnect here, because you might think that anyone interested in high availability would be willing to invest in the hardware required to do the job, but there are situations where companies that have run configurations such as Single Copy Clusters for Exchange 2003 or Exchange 2007 (the traditional “WolfPack” approach) in small offices now want to upgrade to Exchange 2010, deploy a DAG, and reuse their existing servers.

In any case, Microsoft retreated from their original position and you can now operate mailbox servers within a DAG with just one NIC. In this configuration, all client and replication traffic is handled by the one NIC and everything works very nicely.
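
With a single NIC there’s nothing special to configure: the DAG has a single network and replication simply shares it with client traffic. As a quick check (again, “DAG1” is a placeholder name), you can confirm that the lone network still has replication enabled:

    # With one NIC, MAPI and replication traffic share the same DAG network,
    # so ReplicationEnabled should remain $true on it
    Get-DatabaseAvailabilityGroupNetwork -Identity DAG1 |
        Format-Table Name, Subnets, ReplicationEnabled, MapiAccessEnabled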

The question that is commonly asked, therefore, is at what point you need to move to multiple NICs for an Exchange 2010 mailbox server. I think there are a number of factors that deserve consideration:

  • The number of users supported by the server: Normally, a company feels more of an impact if a server that supports thousands of users fails than one that supports just a few hundred. If you operate small sites (for example, distributed offices), you probably configure relatively small servers and may decide to standardize on a single mailbox server design that is used everywhere. A NIC failure is probably less common than a disk or other hardware failure, so you decide to run just one NIC. On the other hand, if you’re operating large servers to support thousands of users within a centralized datacenter, you’re probably going to take every step you can to prevent a hardware failure affecting service to users, so it makes sense to equip the mailbox servers with multiple NICs and accept the additional cost.
  • The true desire for high availability: People talk a lot about their desire to assure high availability and then throttle back as the cost and complexity necessary to protect the infrastructure climb. Mailbox servers within an Exchange 2010 DAG operate very well with just one NIC and you might decide that this server configuration delivers a satisfactory degree of high availability for your infrastructure. On the other hand, if email is deemed to be a critical application that is fundamental to the continuing success of the business, you’ll probably invest in the extra hardware.
  • The potential for outages: This factor is unique to each organization. You know how good your operations staff is, what kind of monitoring is in place to detect server and other outages, how old and well-maintained the equipment is, and how reliable environmental factors such as the network are. If you operate brand-new hardware that has been well bedded-in before deployment and your operations and management processes are capable of dealing with outages swiftly and effectively, you’ll probably be happy to run with a single NIC (of course, your brand-new hardware might well be delivered with multiple NICs, so the point then becomes moot). If not… well, the comfort factor of multiple NICs might be nice.

You’ll notice that the word “probably” is used a lot in this discussion. That’s because this issue is highly specific to each company. Companies have different cultures, and part of that is a varying attitude to risk and cost that leads them to make their own hardware design decisions. The only hard fact is that Exchange 2010 mailbox servers function well within a DAG whether they have a single NIC or multiple NICs. It’s only when problems occur and a server has a NIC failure that the other factors come into play and you may appreciate the additional flexibility afforded by multiple NICs.

I suspect that the server vendors are figuring out what kind of solutions they should sell to satisfy customer demand for highly available Exchange 2010 servers and that they will take the NIC issue into account as they determine suitable configurations. I think that we will see some out-of-the-box, highly available Exchange 2010 hardware solutions appear in 2011 and it will be interesting to see which configurations the server vendors choose. In the meantime, if you have the choice, it’s best to be flexible and opt for multiple NICs as it removes a potential area of concern for your mailbox servers.

– Tony
