NFS and Exchange: Am I a sheep? Some at Nutanix think so…


A number of people working for Nutanix have been in the vanguard of those who would like to see NFS supported as a valid storage option for Exchange deployments, some of whom cheerfully recommend that “customers continue to run Exchange on Nutanix containers presented via NFS.” In my mind this is a bad idea because it puts customers into an unsupported configuration. Running what is often a mission-critical application in a deliberately unsupported manner has never really appealed to me.

As you might be aware, the current situation is that Exchange 2010 and 2013 support three storage architectures: Fibre Channel, iSCSI, and direct-attached storage (DAS). These architectures are supported because Microsoft knows that they work well with Exchange and are implemented by a wide range of storage vendors in a predictable and robust manner, all of which creates supportable solutions for customers.

As I have discussed before, Microsoft has a number of technical issues that have to be resolved before NFS can be added to the list of supported storage architectures. We can argue this point to the nth degree, but it’s not me who needs to be convinced that NFS is supportable for solutions created by multiple vendors. At the end of the day, the vendors who want to sell NFS-based solutions have to convince Microsoft. And Microsoft has to be convinced that NFS will be implemented to the same high quality by every vendor – and that those vendors can support their hardware in the field if things go wrong.

I think Microsoft has another concern too: there doesn’t appear to be much customer demand for Exchange to support NFS. At least, this was the strong impression formed when Jeff Mealiffe queried an audience of several hundred people during his “Virtualization Best Practices” session at MEC last April. Only three or so people put their hands up when asked. If the commercial demand existed, you can bet that Microsoft would be all over NFS like flies around cow dung.

Those who want to see NFS join the list of supported architectures are diligent about advancing their case across a range of social media. One example is a Reddit debate; Twitter is also a favourite medium, which we’ll get to shortly. Another is the write-up in a TechNet forum that sets out the case for Exchange to support VMDKs mounted on NFS datastores. This post makes the point that there is a “gap in supporting Exchange in VMware vSphere & KVM with datastores connect via NFS 3.0.” That gap exists, but the evidence from the audience poll at MEC is that it is a relatively small gap in the overall context of the Exchange market.

In saying this I accept that the gap (in support) might be terribly important to certain companies who wish to standardize their virtualization platform around NFS. But that’s not evidence that consistent and perceptible customer demand exists in sufficient quantity to justify Microsoft doing the work necessary to certify that NFS indeed works (in a supportable manner) with Exchange.

Remember that once a storage architecture is supported, that support has to stay in place for an extended period (probably ten years given Microsoft’s normal support policies) and must be available on a worldwide basis. Convincing management to sign up for the costs necessary to support another storage architecture without evidence that this will do anything to increase either Microsoft’s profitability or market share is a hard case to argue. And this proposal has to be viewed in the context of an on-premises market that will decline over time as a sizeable chunk of the installed base moves to Office 365 or other cloud providers. So the case being advanced is a good technical debate that runs into rough waters when money and market issues are factored into the equation.

It does not seem like a good economic decision for Microsoft to support NFS, but does the technology work? It certainly seems like quite a few companies have followed the Nutanix advice to run Exchange databases on NFS storage. To demonstrate that everything works just fine, Nutanix explained in the Reddit debate how they ran a Jetstress test. They reported that “the Exchange Solution Reviewed Program (ESRP) Jetstress test (ran) for 24 hours in a single Windows 2008 VM with 4 PVSCSI adaptors serving 8 x VMDKs hosted on a single NFS datastore and no surprise it PASSED with flying colours.” This is all very well, and I am sure that everyone associated with the test was tremendously happy.

However, running Jetstress is no way to prove supportability, which is the key issue debated here. Jetstress certainly allows you to stress a storage subsystem to prove its stability, but the test simply proves that (a specific version of) Nutanix’s implementation of NFS works when exercised by Jetstress. It does nothing to prove the case for supportability of NFS within Exchange deployments. More work would have to be done to ensure that a guaranteed standard of implementation and performance was achieved for NFS across the industry. A single test run by a single vendor against a single storage configuration proves nothing.

It’s also true that the configurations that are capable of passing the guidelines laid down in ESRP are often “odd” and impractical in terms of real-life deployments, so saying that a test was passed with flying anything really doesn’t help us understand how NFS would cope with Exchange over a sustained period across multiple software versions deployed by different people around the world.

Some comments from a Nutanix employee on my Twitter feed

Going back to Twitter, I received some advice from Nutanix employees that I should engage with their initiative to encourage Microsoft to support NFS. I should stop being a sheep and work to convince Microsoft to change their “outdated and embarrassing support policies.” Quite.

I found it odd to see the debate reigniting because Nutanix has a configuration that presents iSCSI storage to Exchange. Given that this configuration exists, I wonder why we have to debate NFS support at all. It would seem much more productive all round for Nutanix to tell customers to use the fully-supported iSCSI approach rather than expending all these cycles trying to convince Microsoft that NFS should be supported too.

In my past, I spent over a decade in Chief Technology Officer or Technical Director jobs at Digital, Compaq, and HP. As such, acknowledging that I have been known to be wrong (many times), I think I have a reasonable grasp of how to assess the import and worth of a technology. So I spent a little time looking at the information available on the Nutanix web site.

Up front I have to admit that I have no hands-on experience of Nutanix products. Their literature indicates that they have some interesting technology created by people with a strong track record in the storage industry. As such, their products are certainly worth checking out if you are interested in virtualization. However, I have some doubts about some of their statements concerning Exchange.

For example, browsing Nutanix’s “Microsoft Exchange Solution Brief,” I was interested to find the assertion “With more than 50% of Microsoft Exchange installations now virtualized, it is critical to select the right server and storage architecture.” Although no one could quibble with the advice, I think they are way off with the preamble. I have never seen any reliable data to indicate that the majority of Exchange deployments are virtualized and no evidence is offered by Nutanix to support this statement. And anyway, what do we mean by an installation being virtualized? Does it mean that a single CAS is virtualized? Or all CAS and no mailbox servers? Or all Exchange servers in an organization?

It’s absolutely certain that virtualization is an important technology for many who deploy Exchange but I don’t think it is so widely deployed as to now be in the majority. I asked Microsoft and they didn’t think so either but couldn’t comment publicly. I accept that the document is intended for marketing purposes but it is not good when the first statement in the text can be so easily queried.

Nutanix and Exchange High Availability

Another question is how Nutanix positions their technology alongside Exchange high availability. The picture above is taken from a presentation given by Nutanix representatives to customers, where the recommendation is to use VM replication to copy databases to a disaster recovery site. Another recommendation is to take a separate copy for litigation or in-place hold purposes.

This arrangement smacks of the old-fashioned arrangements needed before the advent of the Database Availability Group (DAG) in Exchange 2010, when storage subsystems copied Exchange databases to a separate disaster recovery site. It doesn’t reflect current best practice, which is to stretch the DAG to cover servers in the DR site and use normal Exchange replication to keep database copies in the DR site updated so that they can be quickly switched into action should a problem affect the primary site.
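To make the comparison concrete, stretching a DAG to a DR site comes down to a couple of Exchange Management Shell commands. This is only a minimal sketch: the DAG, server, and database names (DAG1, EXDR01, DB01) are invented, and a real design also has to weigh witness placement, network latency, and activation policies.

    # Add a mailbox server located in the DR site to the existing DAG (names are examples)
    Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EXDR01

    # Create a passive copy of a database on the DR server; Exchange log shipping keeps it up to date
    Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EXDR01 -ActivationPreference 2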

Exchange has transitioned from depending on separate hardware replication to being able to provide this level of resilience within the application itself. I would have had no problem with the approach in an Exchange 2007 deployment, but I think its value must be questioned in Exchange 2010 and 2013. It seems like a case of “the product can do something, so we’ll do it,” even though a better method is available. I can see how the approach would work, but it’s always best to seek simplicity in DR, and it seems better to allow the application to do what it is designed to do in these situations rather than to layer on an additional technology to serve the same purpose. The case can be argued either way.

I also don’t understand the need for a separate immutable copy for litigation or in-place hold. These features are designed into Exchange 2010 and Exchange 2013, and you don’t need any additional hardware or software to gain the functionality. To be fair, some circumstances might require a configuration like this to satisfy a particular regulatory demand, but I can’t think of one right now.
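For what it’s worth, both kinds of hold are enabled with standard Exchange cmdlets and nothing else. A minimal sketch (the mailbox and search names are invented examples):

    # Place a mailbox on litigation hold so that deleted and modified items are preserved
    Set-Mailbox -Identity "Jane Doe" -LitigationHoldEnabled $true

    # Exchange 2013: create an In-Place Hold scoped to one or more mailboxes
    New-MailboxSearch -Name "Contoso Litigation" -SourceMailboxes "Jane Doe" -InPlaceHoldEnabled $true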

I am not slamming Nutanix’s technology or the value proposition that they advance to potential customers. I rather like their can-do attitude and their willingness to take on the big players in the storage world (an example is in this blog about EMC) with new technology and approaches. However, I do think that some Nutanix employees are prone to hyperbole and that the advice extended to the Exchange community is sub-optimal and, in some cases (as in the solution brief), incorrect.

Perhaps Nutanix would do better to focus on building Exchange solutions based on their technology that embrace, enhance, and extend the functionality that’s already available in Exchange rather than starting crusades to support technology that does not seem to be wanted by many. At least, not by those who attended the recent Microsoft Exchange Conference, a group that I assume represents the core audience to which Nutanix wants to sell their products.

After all, if Nutanix can deliver iSCSI storage (a supported storage architecture for Exchange), why do they insist on campaigning for NFS? The “Nutanix Bible” and the blog referenced above indicate that this is possible, so I really don’t know why all the hot air exists around NFS. I must be missing something!

Follow Tony @12Knocksinna

About Tony Redmond

Lead author for the Office 365 for IT Pros eBook and writer about all aspects of the Office 365 ecosystem.

6 Responses to NFS and Exchange: Am I a sheep? Some at Nutanix think so…

  1. Several thoughts in reply, although in general I’m right with you — the obligation is not on Microsoft (and the Exchange group in particular) to support Exchange on NFS, it’s on the NFS community to demonstrate to Microsoft that their solutions meet Exchange’s requirements.

    1) The key to this debate, I think, centers around how virtualization is being positioned and sold to customers. Virtualization is being sold as a one-size fits all magic bullet — accept a little more complexity in the stack in order to get a unified story for HA, DR, backups, and management across *all* of your applications. Even today, many applications don’t attempt to provide the same level of functionality as Exchange, but instead rely on weak (at best) integration with third-party services to do so. As a result, virtualization DOES provide a huge advantage — consolidate hardware and storage, simplify provisioning, AND give yourself higher levels of availability (via the ability to move live VMs from host to host, automatic VM restart, etc.), DR (replicate your entire set of VMs to your secondary site, and if something happens, the software will get all of them up and running and provide the necessary transformations of VM IP addresses, etc. so that everything just works), and backups (snap the VMs on the back-end storage and then do interesting things from there instead of worrying about backups within each VM).

    From this point of view, Exchange is a huge sore point in this story — because everything else can be handled by a single storage configuration that is simple, expandable, and maintainable. Add disks on the storage solution as needed and add them into the filesystem for NFS, and then the hypervisor consumes that NFS and that’s where all of the virtual disks live. Instant VM portability across hosts and everything is simple and consistent. Then comes Exchange, which requires you to pull back some of that storage and carve off iSCSI volumes (one for each disk volume) and make sure iSCSI is configured and properly visible to all the hosts that might be sharing that VM. More networking, more configuration, complications for replication and backups, and all for one stupid application. Even if that application is Exchange.

    Tying Exchange into that sort of DR strategy is much more complicated, because you can’t just let the software replicate and update all of the VMs. Nope, you have to maintain separate (running) VMs in the DR site for AD and Exchange at a minimum. The rest of your infrastructure gets forklifted along, but even that can cause problems and complications. For small and mid-size companies, this kind of complication and expertise can drive up the costs for personnel skills required to deploy and maintain your systems. The virtualization VAR I used to work for made good money exploiting this gap in skills.

    That’s part of why, I think, you see so many virtualization and even storage vendors (Nutanix is not unique here) recommending processes for implementing a feature for Exchange that ignores Exchange’s built-in features. One, it’s a lot more version-agnostic — you don’t have to upgrade Exchange 2013 to get the latest and greatest immutable hold features. Two, you’re using the same techniques and software to protect Exchange as you are all of your other applications, so your operators are more likely to remember what to do when the crap hits the fan. I’ve made this point myself time and again as one of the potential points in favor of virtualizing Exchange — I just advocate that this decision is made with full knowledge of what trade-offs you’re making and what built-in functionality you’re giving up, not by a salesman conveniently telling you only the shiny part of the story. I often ask people if they really want a DAG when they’re virtualizing Exchange, and we work through the pros and cons of how everything interacts. And then I document it all, so they know what decisions were made AND WHY…so they can revisit those decisions and see if the operational history meets the expectations.

    2) As a former Unix admin, I feel safe in saying that the world of NFS is a hot mess. The NFS protocol, especially versions 3 and 4, has many wonderful features and qualities. Unfortunately, as you and others point out, not all implementations are created equal. Even today, many people implement NFS at the lowest common denominator, and miss many of the performance, security, and stability enhancements from NFSv4. I have seen customers blithely using SAN/NAS solutions that only implemented NFSv2 feeding into their VMware hypervisors and wondering why they had problems. At its core, NFS is still a file-level protocol, not a block-level protocol. The write and caching semantics are not the same, and there is no practical way to ensure that the NFS clients and servers are all properly configured. That could change in a future version of NFS — one that also implemented the necessary changes to address the remaining block vs. file issues that you mentioned in a previous blog post. But that work to fix NFS has to come from the companies that implement and use NFS as one of their primary protocols.

    3) Fortunately, Microsoft has already addressed the desire for file-level protocols used in hypervisors to get the flexibility and better space usage that file-level protocols and network storage can provide: SMB 3.0. But even here, this work was done NOT by the Exchange group, but by the group that owned SMB and the Hyper-V group. Did the Exchange team have input at some point? I would be surprised if they didn’t. But they weren’t the ones to fix it, the SMB group was, and Hyper-V had to support it. Last I knew, SMB 3.0 was available for licensing for any storage or hypervisor vendor to use. And if you do, you can put Exchange virtual drives on SMB 3.0 all day long and be fully supported. Frankly, this is the path vendors need to be going down — or pressuring the NFS working groups to fix their mess.
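    As a rough illustration of how little is involved (the server, share, and account names below are invented), presenting Hyper-V storage over SMB 3.0 comes down to a couple of PowerShell commands on a Windows Server 2012 file server and a Hyper-V host:

        # On the file server: create a continuously available SMB 3.0 share for virtual disks
        New-SmbShare -Name VMStore -Path D:\VMStore -ContinuouslyAvailable:$true -FullAccess "CONTOSO\HV01$", "CONTOSO\Hyper-V Admins"

        # On the Hyper-V host: place a virtual disk for an Exchange VM directly on the share
        New-VHD -Path \\FS01\VMStore\EX01-DB01.vhdx -SizeBytes 500GB -Dynamic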

    4) When I worked for a virtualization VAR, I had one Exchange deployment in 2.5 years on bare metal. Now working as an independent consultant, I still see about 60% of the small to midsize companies using virtualization for Exchange. The bigger they are, the easier it is to virtualize properly in line with the support statements. Smaller companies are more concerned with saving $$$ and simplifying their administrative processes. Granted, this is a single viewpoint, and the plural of anecdote is not data.

    • Great reply. I agree that forcing vendors to use iSCSI introduces some additional complexity into the virtualization mix. My point is that vendors like Nutanix could create some software within their solutions to mask the complexity and automate the setup so that it is done right in a supportable manner. It seems to me that extending and embracing the supported standards in this way is a much more productive (and in the long term, profitable) use of the talents within the company than an odyssey of social media lobbying in an attempt to force Microsoft to change a policy where they seem to have good reason to stay firm.

      TR

  2. Very well written article, Tony. One thing though: “It’s absolutely certain that virtualization is an important technology for many who deploy Exchange but I don’t think it is that widely deployed. I asked Microsoft and they didn’t think so either but couldn’t comment publicly.”

    I have asked about virtualization numerous times when speaking to an audience about Exchange. Consistently I have seen more than three-quarters of the audience say they virtualize Exchange, either for all roles or for all but the Mailbox role. I have observed an increasing trend consistent with the growing adoption of virtualization in the industry. Currently every single customer I work with has a preference for virtualizing as much of Exchange as possible.

    Now I can only speak for The Netherlands and Belgium, and I’m aware that the adoption of virtualization may be lower in other regions. Anyway, I think the statement that half of the Exchange deployments are virtualized is not far from the truth. Bear in mind, they’re not saying that half of the Exchange seats are running on virtualized Exchange servers.

    • I am not a fan of virtualizing the mailbox role. I have no issue with virtualizing the CAS or Edge transport role (or Exchange 2010 transport servers). I keep on repeating the mantra that virtualization is a good strategy if you gain a measurable business or technology advantage from its deployment. Virtualization just because it seems like a good idea is a bad tactic to adopt.

      Looking at the text that you cite again, I see how my intended meaning could be misinterpreted. I have clarified it with an edit.

      In any case, there is no available data that can prove the assertion one way or another. All we have is conjecture and personal experience, which will vary greatly across market segment, country, and industry. All I can say is that none of my contacts at Microsoft who might know what the true situation is thought that virtualization was used by the majority of customers. It’s like knowing the true number of installed Exchange mailboxes. Is it 270 million or 310 million or 350 million? Only Microsoft really has an idea and they don’t know exactly how many of the CALs sold for Exchange translate into mailboxes used on a daily basis. All we can say is that virtualization is a widely used technology and that there are hundreds of millions of Exchange mailboxes in use. Putting the two together results in a conclusion that a reasonable percentage of Exchange servers run on a hypervisor. But the majority? Hmmm…

  3. Pingback: Exchange and NFS – A Rollup | EighTwOne (821)

  4. Pingback: Intersting things I have seen on the internet, July 14th | 503 5.0.0 polite people say HELO
