Ever since Exchange 2007, Microsoft has emphasized the product’s capability to be installed in different roles on a server. The logic behind this direction is pretty simple: you should only ever install the code that you actually need on a server rather than installing a lot of stuff that you may not need and might, under the right conditions, create a security risk due to an increased “attack surface”. In other words, the more code that you lay down on a computer, the larger the chance that some of that code will contain a bug that can be exploited by a hacker.
It all sounds good and the technical community has largely bought into the premise. However, there’s a nagging doubt in my mind that in reality, the vast bulk of Exchange deployments don’t pay much attention to single server roles because they use multi-role MBX-CAS-HT servers. Indeed, the default option offered by the Exchange 2010 installation program is to create such a multi-role server! Another data point is provided by the recent launch of the HP E5000 messaging “appliance”, which provides different multi-role server packages designed to cater for 500, 1,000, and 3,000 mailboxes. Out of the box, the E5000 delivers two multi-role MBX-CAS-HT servers configured into a DAG. Sure, you can go and remove the CAS and HT roles afterwards, but I doubt that anyone will.
Another factor is the sheer speed and capability of current server hardware. In the old days, when spare CPU cycles were intensely valuable and storage came at a premium, it made sense to reduce the amount of code that executed on a server or had to be installed on a server. Today, server hardware is so powerful that CPU is usually not an issue and bountiful storage is available. No server will notice the extra code necessary to run CAS and HT functionality and the extra storage required is a rounding error.
Don’t get me wrong. I still think that the notion of separating functionality into server roles is a good thing. It’s just that in the harsh light of practical deployment, most companies elect to use multi-role servers because they don’t want to spend the extra cash to buy the hardware and software necessary to create purpose-built single-role servers. UM servers are the notable exception, but these are used in a small minority of the overall Exchange installed base.
It seems clear therefore that the separation of server roles is something that is only interesting to large enterprises who have the necessary financial and technical resources to design, deploy, and support an Exchange organization created from layers of mailbox, CAS, and HT servers. This is certainly a very viable and worthwhile deployment model where you have the chance to build different server configurations for the different roles: memory and disk-heavy for mailbox servers while the CAS and HT servers can be stripped down boxes because they are largely stateless in terms of the data that they hold.
The bottom line is that server roles are a good thing because they provide extra flexibility to Exchange. That flexibility is important to some very large and important customers and that’s probably why Microsoft has provided the capability. But I suspect that the vast bulk of Exchange 2007 or Exchange 2010 server deployments will continue to be happy with their multi-role all-in-one servers. Why complicate things when you don’t need to?
Complication, of course, comes from the fact that you can’t use Windows NLB for a CAS array if the CAS role is installed on a DAG member.
So you’ll then have to fork out for hardware load balancers or get fancy with DNS if your primary CAS has a problem.
Yep. Very true and a point that some miss when they start to consider options for load balancing and high availability for DAGs. Windows Failover Clustering, the foundation for DAGs, does not support Windows Network Load Balancing, so you can’t have both on the same box. However, IMHO, if people are really interested in high availability, they will ignore NLB and all its flaws and weaknesses and deploy a real load balancer. For high-end deployments, these will be serious hardware-based appliances such as F5 BIG-IP; for the lower end they’ll be something like a Kemp LoadMaster running as a virtual machine or on hardware.
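To make the point concrete, here’s a minimal Exchange Management Shell sketch of the pattern being described: define a CAS array object for the site, point the mailbox database’s RPC endpoint at the array FQDN, and then let the DNS record for that FQDN resolve to the load balancer’s VIP rather than to any individual CAS. The names used here (outlook.contoso.com, the "Dublin" site, database "DB01") are placeholders for illustration, not values from a real deployment.

```powershell
# Create the Client Access array object for the AD site.
# (Illustrative names: "CASArray-Dublin", outlook.contoso.com, site "Dublin".)
New-ClientAccessArray -Name "CASArray-Dublin" -Fqdn "outlook.contoso.com" -Site "Dublin"

# Point the mailbox database's RPC endpoint at the array FQDN so that Outlook
# profiles reference the load-balanced name rather than a single CAS server.
Set-MailboxDatabase "DB01" -RpcClientAccessServer "outlook.contoso.com"
```

The key step happens outside Exchange: the DNS A record for outlook.contoso.com points at the load balancer’s virtual IP, so client connections are distributed by the appliance and no Windows NLB is needed on the DAG members.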
Virtualization is another deployment scenario where multi-role servers cannot always be used. The 4 vCPU limit is often the reason why customers decide to separate roles.
Nice article, Tony. What I’m missing as an argument for splitting roles is separating functionality, thereby lessening the (potential) attack surface, and being able to scale out without possibly needing to remove certain roles from the multi-role servers. Also, most organizations consider their mailbox servers sacred and treat the other Exchange roles as part of the infrastructure, like DNS and AD, which also translates into their backup strategy.