A peek into the future from Exchange’s General Manager

One of the joys of attending a conference as a speaker is the chance to listen to others once your session is over. My keynote at TEC was at 8AM on Monday morning so I had most of Monday and Tuesday to pick other sessions to attend before I had to depart for my flight to London and then back to Dublin. An excellent variety of good speakers were on the agenda. Exchange sessions by Greg Taylor, Ross Smith, Paul Robichaux, Scott Schnoll, Lee Mackey and Paul Bowden competed in my mind with Active Directory sessions by Guido Grillenmeier and Brian Desmond. I stayed with the Exchange track most of the time and enjoyed the vast majority of the sessions that I attended.

Tuesday’s keynote for the Exchange track was given by Kevin Allison, the General Manager responsible for the development and support of Exchange for both the on-premises and cloud platforms.

Kevin started by reviewing the current adoption rate of Exchange 2010, which he estimated to be roughly a year ahead of Exchange 2007 in terms of customer deployments. This didn’t come as a surprise because Exchange 2007 was the first release in a new generation of the product and marked a significant change from the deployment and management techniques used for Exchange 2003. By comparison, Exchange 2010 builds on the architecture established by Exchange 2007 and is therefore a more familiar target for customers. In addition, Exchange 2010 offers a more compelling range of functionality to convince customers to upgrade, not least the Database Availability Group (DAG).

Kevin said that most Exchange 2003 customers are now engaged in planning activities to upgrade their infrastructure. Of course, they have a big choice to make: customers can now opt for a traditional on-premises deployment or choose to move into the cloud with Office 365. He remarked that he has seen a different attitude to cloud deployments, where customers expect the mechanics to move much faster than in traditional projects. Most large companies can take up to a year to plan, prepare, and deploy a new version of Exchange, whereas customers who sign up for Office 365 seem to expect that they can sign the contract on Monday and start to move mailboxes on Friday. It is true that new customers can go through a rapid onboarding process and be up and running on Office 365 in a matter of hours, but it is neither feasible nor practical to commence a cloud deployment so rapidly if you have to migrate thousands of mailboxes, even if the end target is to have all mailboxes eventually in Office 365 rather than using a hybrid approach where some mailboxes remain on-premises.

Factors that slow down cloud deployments include the need to plan for co-existence and to have high fidelity in data exchange between on-premises and cloud (for instance, you’d like users to be able to see free/busy information from both sides). Other major time soaks include preparing mailboxes to be moved (for example, making sure that your Active Directory is ready to synchronize with the cloud) and the small matter of the network-constrained task of moving user mailbox data from on-premises servers across the Internet to servers running in Microsoft’s datacenters. No one has yet invented a method to transfer gigabytes of mailbox data in seconds! Mailboxes typically move in batches of a few hundred and each batch has to be prepared, moved, and verified. And then there’s the small matter of preparing users for a mailbox move.

Interestingly, Kevin acknowledged that human resources are a very real limit. Microsoft simply doesn’t have all of the people that might be required to help customers move to Office 365 if a large part of the installed base decides to move to the cloud. Planning and executing moves are human-intensive processes, and some of the expectations about timelines that might be set in the sales cycle are unattainable. On the upside, the need for help to ensure successful Office 365 deployments is a huge opportunity for consultants and resellers who might otherwise feel that Microsoft is taking away the work that surrounds traditional deployments.

Kevin revealed that 65 million Exchange Online users are currently deployed across 8 datacenters using a single infrastructure, with over 6 million active users logging on daily. The figure for total users is made up from several sources, including Microsoft’s own internal users, Live@EDU, and customers.

Enterprise mail users put ten times the load on the cloud infrastructure that consumers do. This isn’t surprising because consumers might generate five or six messages daily and never use some of the extended features of Exchange, whereas enterprise users connect with a variety of devices, are perpetually communicating, and use all manner of features that are simply uninteresting to a consumer, such as mailbox delegates or calendar scheduling assistants. As Office 365 rolls out, Microsoft will have more enterprise users to deal with; the proportion of active users will grow, as will the demand that users exert on the infrastructure, which creates all manner of interesting challenges in maintaining Service Level Agreements (SLAs). Consumers don’t tend to care very much about SLAs (if they even realize that such a thing exists), but enterprise customers care deeply about the capability of the service provider to deliver a high quality of service according to the contract that was signed.

Kevin remarked that keeping such a massive infrastructure going requires a huge amount of ongoing work to identify and isolate faults more quickly through monitoring and event recording. The good news is that the lessons Microsoft is learning from running its cloud datacenters are incorporated into the code base used by on-premises servers. Kevin offered search as an example, as it wasn’t originally able to deal with the hundreds of thousands of items held in the kind of very large mailboxes that are common today. Indeed, the temptation offered by 25GB cloud mailboxes is that no one will ever delete or file any message, so you end up with massive inboxes that become a single large searchable repository.

Enterprise customers gain from the work done in the cloud to solve the architectural and scaling limits encountered in dealing with millions of mailboxes. Further gains come from understanding how to make DAGs work well and cope with disaster recovery (the current deployment uses “pods” of two paired datacenters where mailboxes are stored in databases with four copies, two held in each datacenter) and from the tuning of components such as the Mailbox Replication Service (MRS), which is used to move mailboxes both from on-premises to the cloud and internally as databases are rebalanced. Another interesting aspect is how Microsoft is amending its datacenter deployments so that they have truly resilient infrastructures where failures in other components, such as DNS, can’t have fundamental effects on Exchange Online.
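For readers who haven’t built a DAG, the four-copy, two-datacenter “pod” layout described above can be sketched with the standard Exchange 2010 management cmdlets. All server, database, and witness names here are hypothetical; this is a minimal illustration of the pattern, not Microsoft’s actual datacenter configuration:

```powershell
# Hypothetical names throughout: EXA1/EXA2 sit in datacenter A,
# EXB1/EXB2 in the paired datacenter B, FS1 is the witness server.
New-DatabaseAvailabilityGroup -Name DAG-Pod1 -WitnessServer FS1 `
    -WitnessDirectory C:\DAG-Pod1

# Add two mailbox servers from each of the paired datacenters
'EXA1','EXA2','EXB1','EXB2' | ForEach-Object {
    Add-DatabaseAvailabilityGroupServer -Identity DAG-Pod1 -MailboxServer $_
}

# DB01 is active on EXA1; add three passive copies so that each
# datacenter ends up holding two of the four copies
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EXA2 -ActivationPreference 2
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EXB1 -ActivationPreference 3
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EXB2 -ActivationPreference 4
```

The activation preference ordering keeps failovers within the local datacenter where possible, with the copies in the paired datacenter as the disaster recovery fallback.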

Provisioning hardware into datacenters is a difficult exercise in capacity planning. Microsoft has to predict hardware requirements five months out to make sure that sufficient capacity is available to handle customer onboarding. Indeed, if a large proportion of the current installed base decides overnight to move to the cloud, Microsoft might have to turn customers away because they can’t provision servers quickly enough. Success has many different aspects!

Kevin observed that the experience Microsoft has gained through operating Exchange Online has been invaluable in helping them realize where Exchange has become complex and needs to be improved to remove obstacles to deployment and operation. New wizards are being introduced to make tasks simpler for administrators and to hide complexity, and tools like the Exchange Remote Connectivity Analyzer help to debug and resolve issues. Tooling is critical going forward because the network exerts a huge influence over availability in the cloud, and administrators need tools from Microsoft to understand the effectiveness and quality of the service being delivered from the cloud.

In terms of what’s next for Exchange, Kevin briefly talked about features that will appear in Exchange 2010 SP2 later this year. Three major updates were discussed:

  • Address Book Policies (ABPs), also known as “GAL segmentation”. This feature is described for Exchange 2007 in a white paper but Microsoft knew that the approach taken (ACLs) would break in the Exchange 2010 architecture, which indeed happened in Exchange 2010 SP1. ABPs follow the same route as other policies (OWA, ActiveSync, etc.) applied to mailboxes in that an administrator can create policies that establish what objects in the GAL can be viewed by a user. The default policy is equivalent to today’s GAL – you can see everything. But administrators can narrow things down by establishing policies that might restrict a user to only being able to see GAL objects that belong to a specific department or country and then apply that policy to mailboxes using EMC or PowerShell. In effect, address book policies create virtual views into the GAL that administrators can amend to meet company requirements. See the Exchange team blog for some more information.
  • Device overload: Microsoft acknowledges that it’s difficult for administrators to know how well mobile devices work with Exchange, given the mass of clients that can connect to the server. Recent issues caused recurring meetings to be deleted by some clients because of a lack of interoperability testing that might have surfaced the problem. A new ActiveSync testing lab is being established to help improve interoperability between devices produced by different vendors, including RIM, Apple, Microsoft, and the Android handset makers.
  • Hybrid co-existence, aka rich co-existence. Kevin noted that “We do not expect large customers – over 2500 seats – to have everyone in the cloud all at once.” A hybrid deployment therefore requires the on-premises Exchange organization to be tailored so that it can share data effectively with Office 365. Today, some 46 individual settings have to be changed to make rich co-existence work well. Exchange 2010 SP2 includes a wizard that reduces the number of settings requiring administrator intervention to 6, so the process of establishing co-existence will be much simpler.
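To make the address book policy model above concrete, the mechanics might look something like this in the Exchange Management Shell. The policy, address list, and mailbox names are hypothetical, and the referenced address lists and offline address book are assumed to exist already; this is a sketch of the SP2 cmdlets rather than a complete walkthrough:

```powershell
# Hypothetical example: scope a user's view of the directory to the UK.
# Assumes \UK-GAL, \UK-OAB, \UK-Rooms, and \UK-Users have been created.
New-AddressBookPolicy -Name "UK-ABP" `
    -GlobalAddressList "\UK-GAL" `
    -OfflineAddressBook "\UK-OAB" `
    -RoomList "\UK-Rooms" `
    -AddressLists "\UK-Users"

# Apply the policy; this mailbox now sees only the UK slice of the GAL
Set-Mailbox -Identity "JSmith" -AddressBookPolicy "UK-ABP"
```

Mailboxes without an explicit policy keep the default behavior, which is the full GAL view described above.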

The session finished with Q&A. Kevin was asked about the future of email, which he acknowledged is hard to predict because of the changing face of communication. He noted that Facebook and SMS updates are used far more commonly than email by kids today and wondered how they will communicate in 10 years. Figuring out how to keep email relevant and useful is a real challenge that occupies the Exchange team as they plan future releases. Given the way that the product has evolved from the days when messages averaged 4KB in size and we thought that a 100-user server was large to the point where Exchange supports tens of millions of users in the cloud, I think that they have a fair chance of being successful.

– Tony


About Tony Redmond

Lead author for the Office 365 for IT Pros eBook and writer about all aspects of the Office 365 ecosystem.
This entry was posted in Cloud, Exchange 2010, Office 365. Bookmark the permalink.

3 Responses to A peek into the future from Exchange’s General Manager

  2. Roy Zhou says:

    For Office 365, how does it handle security concerns for a global enterprise? Do you recommend the in-house Exchange CS solution or the cloud solution?
