Microsoft reveals the truth about single-role servers


In its April 8 post on the Exchange team’s blog, Microsoft gives clear direction that single-role Exchange servers are not the preferred starting point for designs. In fact, the post makes a rather bold statement:

“… always start design discussions with multi-role, and that is the recommended solution from the Exchange team.”

There’s a lot of good information and recommendations in the post that I totally agree with, and some that I don’t. For example, I don’t agree with the notion that you should start with RAID-less JBOD direct-attached disk configurations, simply because most shops don’t have the time, energy, or operational efficiency to monitor JBOD disks and take action when failures occur – and they will. It seems to make a lot more sense to plan for a degree of robustness in the solution up-front and take advantage of all the smart technology that companies such as EMC, NetApp, and HP include in their disk controllers today. Sure, you can go cheap-and-cheerful with JBOD if you like and get the warm glow that results from a successful deployment, but be prepared for “interesting times” when disks fail over the course of a server’s lifetime. Of course, if you’re a consultant who parachutes in to do a design and departs immediately upon payment, you don’t need to worry about long-term operational robustness, but that’s getting away from the point I originally started to discuss.

When Microsoft introduced Exchange 2007 way back in 2006, they made a big fuss about the wonders of single-role servers and the splendid code isolation that they had achieved by giving administrators the chance to install just the code required to do the job – and no more – on their Exchange servers. Less code was exposed to hackers and speedier performance was assured because excessive instructions and data couldn’t get in the way. The horrible mess of Exchange 2000 and Exchange 2003, which of course are multi-role servers and come equipped with all the code necessary to do whatever task is demanded of them, was discarded in a wonderful embrace of the notion of “less is better”.

I just wonder what’s happened in the five years since to make Microsoft recant and realize that multi-role servers are actually very flexible and the right option to begin with for all deployments. Of course, the default installation mode for Exchange 2007 and Exchange 2010 has always been to offer to install a multi-role MBX/HT/CAS server, so maybe the fuss and bother about the joy of single-role has been so much smoke and mirrors? As outlined in my own post of March 1, I suspected as much…

No, you say. Microsoft would never do such a thing to their faithful community of Exchange administrators. So there’s got to be another reason. I suggest that the answer is encapsulated in another sentence in the blog post:

“In Exchange 2007, we did not support the Client Access or Hub Transport roles on clustered servers, so there was no way to have a high availability deployment…”

The penny drops! Single-role deployments are useful and valuable in some specific circumstances, usually encountered in very large deployments, but the notion that single-role is wonderful is so much marketing: powerfully pungent brown bovine emissions thrown out to disguise the fact that Exchange 2007 only ever aspired to be a partially highly-available application, because the HT and CAS roles can’t be installed on a server that operates in a Cluster Continuous Replication (CCR) or Standby Continuous Replication (SCR) configuration. The situation is very different with Exchange 2010 because the Database Availability Group (DAG) supports multi-role servers as well as dedicated mailbox servers, so you can include all of the necessary servers in a single highly-available entity (in Exchange terms anyway – there are other parts of the infrastructure that also need protection before you achieve true high availability).

Now that we’ve cleared up the confusion, we can consider what happens from this point. According to their blog (second only to TechNet in terms of accuracy, clarity, and insightfulness), Microsoft’s best practice is now firmly focused on multi-role servers. We can therefore anticipate that this trend will continue and that future engineering efforts will support this position.

Single-role servers are no longer necessary unless you have a specific purpose for them. For example, virtualizing HT and CAS servers has always seemed a pretty good idea to me because these server roles are essentially stateless, and it seems practical and logical to isolate these servers on virtual machines in large deployments. But for smaller deployments built around a few servers in a DAG, follow the excellent recommendations of the EHLO post: keep everything simple and go with multi-role. You know it makes sense.

– Tony

For more information about how to configure and deploy Exchange 2010 SP1 servers, including how to approach many tricky design problems, see Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk. The book is also available in a Kindle edition.


About Tony Redmond ("Thoughts of an Idle Mind")

Exchange MVP, author, and rugby referee
This entry was posted in Exchange 2010.

8 Responses to Microsoft reveals the truth about single-role servers

  1. Nick Martin says:

    Great “off topic” comments about the interesting times that local disks are going to bring. I remember working with Exchange before SANs were the default, and the idea of going back to managing what will now be terabytes of disk hidden in a collection of separate cages and arrays in multiple servers gives me nightmares.

  2. mdrooij says:

    “In Exchange 2007, we did not support the Client Access or Hub Transport roles on clustered servers, so there was no way to have a high availability deployment”

    So what are all these load balancers and the HT’s built-in round-robin mechanism for?

    I can only imagine such comments are also meant to softly “disqualify” Exchange 2007 and position Exchange 2010 for all those Exchange 2003 customers planning to upgrade. It’s a weird statement. Some years ago, in a large global Exchange 2003 deployment, I was actually doing what Exchange 2007 brought with its role model: isolating Exchange functionality into mailbox, transport, and client components. This also offered better predictability and a lower attack surface (I remember the thick hardening manuals from back then). In fact, back then (2004) I was already using mailbox, MTA, and FE building blocks.

    This monolithic mail server story sounds like going back to Exchange 2003 again.

    On the RAID/JBOD topic; I think the JBOD argument is purely a marketing one. Compare the hours of (manual) recovering your crashed mail server after a drive crash against replacing a faulty drive and (automatic) rebuilding the RAID set.

  3. Constantino Tobio, Jr says:

    Two points I’d like to make:

    1. For our deployment (approx. 5,500 mailboxes, Exchange 2010 in a greenfield), we ran the Exchange storage calculator both ways – JBOD and RAID10 – using 2TB 7.2k SAS disks. If we were going to do JBOD, we were going to deploy 4 database copies; with RAID, we were going to do 2. The total spindle count was identical, so essentially we were going to save $0 using JBOD, but we were going to expose ourselves to a world of hurt when (inevitably) one of these midline SAS drives died (we’ve replaced 2 already that were “prefailed”). JBOD for our particular environment was simply not acceptable because it provided no cost-saving benefits and introduced painful scenarios for when things (inevitably) went wrong.
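The spindle arithmetic in the comment above can be sketched in a few lines. This is a minimal illustration, not the Exchange storage calculator: the 10-disks-per-copy figure is invented for the example, and real sizing depends on IOPS, capacity, and growth assumptions.

```python
# Hypothetical spindle math behind the JBOD vs. RAID10 comparison above.
# The disks_per_copy figure is made up for illustration; real sizing
# comes from the Exchange storage calculator.

disks_per_copy = 10  # hypothetical spindles needed to hold one database copy

# JBOD: no RAID overhead, so resilience comes from extra database copies.
jbod_copies = 4
jbod_spindles = jbod_copies * disks_per_copy

# RAID10: fewer database copies, but mirroring doubles the disks per copy.
raid10_copies = 2
raid10_spindles = raid10_copies * disks_per_copy * 2  # mirror doubles disks

print(jbod_spindles, raid10_spindles)  # identical totals
```

With these assumptions the two designs land on the same spindle count (4 × 10 versus 2 × 10 × 2), which is why JBOD saved nothing in this environment.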

    2. Consider this scenario: if you set up 2 multirole DAG servers with the MBOX, CAS, and HT roles, obviously you want both to have the same roles installed. However, is it not the case that a DAG MBOX cannot talk to itself for HT, and that it must talk to another HT in the site? If so, wouldn’t that expose a loss of service when you lose one of the hosts, since it MUST talk to a different HT? You can mitigate that by having a third HT, I suppose, even if it only needs the HT role. Another issue is that you can’t use Microsoft’s built-in NLB when the CAS role is collocated on a DAG MBOX server, so you must use a hardware LB. There’s another cost right there. I just don’t like the idea of making DAG members into multirole servers – in the long run you limit what you can do, and there are some hidden costs. Personally, if you want to go the HA route but save money on hardware, I’d much rather virtualize as much CPU as possible. Your disk spend will be the same either way, but you can save on physical hosts. I think a minimum HA solution should entail 2 DAG member MBOX servers and 2 CAS/HT servers. You could virtualize these onto one or two physical hosts, depending on needs and exposure risks, and especially if you use two physical hosts, you have a very highly available setup.

    • You’re correct that a Mailbox server in a DAG will look to route messages via a different HT if one exists. This ensures that messages are captured by the transport dumpster and can be replayed in case things go wrong. However, if the only HT available to the mailbox server is the co-located one, it will still be able to use that HT. There isn’t the same kind of hard block as the one that prevents an Exchange 2007 server from routing via an Exchange 2010 HT (a version mismatch).

      As to a hardware LB, there are many cheap LBs available, including some that run on virtual machines. Kemp Loadmaster seems to be the best combination of price/performance for the low end at the moment.

      TR

      • Constantino Tobio, Jr says:

        Ah, thanks for the correction. So HA of all roles can be done with just two all-in-ones then. That still leaves the HLB cost: the cheapest Kemp Loadmaster lists for nearly $1600 (x2 if you want to make that item HA as well), so that’s a minimum cost to consider if one were going to go the all-in-one DAG route. I guess this could be an attractive option for the right environment – especially smaller environments where you want to provide HA features on a lower budget.

  4. Andrew Ehrensing says:

    Just to clarify – one of the reasons for breaking out the roles 5-8 years ago was that servers were 1-2 core boxes with a limited number of megacycles. Going forward, we are seeing 24-32 core servers shipping, and that number will keep climbing. We want Exchange to leverage everything the hardware makes possible, and merging that with multirole makes a lot of sense. Essentially you get a “brick” model deployment: all identical servers, with logic built in to take advantage of multirole (don’t use the hub on the same server in multirole; use HUBs on other multirole servers to take advantage of Shadow Redundancy).

    This provides a single standard server build for ALL Exchange servers, and an inexpensive (or expensive, your choice) hardware load balancer in front of it all. Adding capacity is simple, add another brick, and rebalance your active users. Since mailbox moves are online, you can do this during the day dynamically.

    Cheers!
