A warmed-over MCSE program is no pinnacle


Apparently the head of certifications at Microsoft (Tim Sneath) has said that Microsoft is going to make the MCSE exams “harder for everyone” by introducing new types of questions for which answers are harder to memorize. In other words, they want to eliminate the “certification by rote learning”, brain dumps, and question sharing activities that have made MCSE certification far less valuable than it should be.

Of course, this aspiration comes from the same organization (Microsoft Learning – MSL) that eliminated the Microsoft Certified Master (MCM) and Microsoft Certified Architect (MCA) programs last year, much to the dismay of those who had invested large amounts of time, energy, and money into attaining those accreditations, both of which were firmly based in knowledge acquisition and the ability to put that knowledge into effective practice. I thought that the decision was a bad one then and nothing has happened since to make me change that view.

MSL promised that they would look at creating another “pinnacle” to replace MCA and MCM. In this case, the pinnacle would be the top of the Microsoft accreditation stack and would recognize the best of the best in the various technical disciplines. The good thing that could have happened here is an expansion of the numbers who achieved the peak, probably at the cost of some weakening of the high standards demanded by MCM and MCA. I would not have had a problem with this because not everyone can afford the time and cash commitment implicit in travelling to Redmond for MCM training or the time-intensive process of MCA board interviews. It would have been good had a solution been found to allow an MCM-lite accreditation to be rolled out on a worldwide basis at reduced cost. Of course, in order to be credible as a “pinnacle”, that accreditation would have had to be maintained at a much higher level than the average MCSE. An accreditation pitched at 90% of the MCM standard would have been a good goal.

However, the problem here is that even an MCM-lite program would have taken a lot of resources and brainpower to deliver. Even if Microsoft had found a good set of tutors available to deliver MCM training around the world, huge effort would have been required to develop the training – and to keep the knowledge refreshed in a world where change occurs on a quarterly cadence. I’m sure that budget was a huge obstacle to overcome.

It seems therefore that MSL has elected to attempt to tweak the MCSE program and force standards higher. Increasing the complexity of the questions asked is a more cost-efficient way of raising standards because the work can be done centrally and then deployed through existing testing mechanisms. MSL can say that they are responding to the needs of customers and IT professionals alike and all is well in the world.

But it really isn’t. Although some increase in the effort required to attain MCSE certification might happen initially, the ecosystem that surrounds Microsoft accreditation will respond. Organizations that deliver training focused on passing MCSE exams will flex and change to accommodate the new testing regime. Question dump sites will continue. People will continue to find ways to game the system. And even if MSL continue to tweak and improve the exams and testing methodologies, they will really only be staying one step ahead of others who make money today from MCSE training and want to continue to do so in the future. It’s a hard place to be.

At the end of the day, I don’t think this approach will improve the level of technical competence of certified individuals very much at all. It might move the needle a tad but it’s hardly going to represent a new pinnacle for the certification stack. On the upside, MSL look good because they are responding to concerns about the MCSE program and are doing so in a cost-effective manner, so the people who read program reviews and monitor budget spreadsheets will be happy.

It’s sad, but MSL seems set on a path that does not accommodate the 1% or so of the technical community who wish to extend themselves and become the best of the best. MCM and MCA were flawed programs, but they represented an obvious and well-earned pinnacle for Microsoft certification. A warmed-over MCSE (2014 model) will not.

A missed opportunity…

Follow Tony @12Knocksinna


And now for something completely different – Monty Python Live


I’m sure many of you don’t appreciate the humour of Monty Python because it is very much an acquired taste. But those of us who were brought up in the black-and-white TV era appreciate the wit and insightful comment that Monty Python brought to the screens at the end of the 1960s and early 1970s, not to mention their successful “Holy Grail” and “Life of Brian” films.

All of which meant that the announcement of some live concerts at the O2 in London created a unique opportunity to see five of the six Pythons in action… So I went with my two sons and a friend and am terribly happy that I did.

Four Yorkshiremen

This was the first time I had been to the O2 and it’s quite a location, especially if you arrive there via the Emirates SkyLine.

O2 London from the Emirates SkyLine

We attended the second Monty Python Live (Mostly) concert (July 2). I had read the reviews of the first night and discovered that most of the critics were unhappy because the show was basically a rerun of many popular sketches including “Four Yorkshiremen” (above). However, that’s exactly what I expected and my view appeared to be shared by the vast majority of the 16,000 crowd, not many of whom looked for new breakthrough comedy from the 70+ year old stars.

Lots of people arrived dressed up as their favourite characters. Many red-caped cardinals were to be seen along with a group of Gumbys complete with their most precious wellington boots. I also saw one nattily-dressed gentleman who had forgotten his trousers – or simply wanted to show off what nice underclothes he had.

Paying for an argument

The show developed very much as expected, with famous sketches interspersed with some energetic song and dance routines that gave the Pythons a rest.

Lumberjack song

I thought Michael Palin was terrific, with Eric Idle a close second. Palin’s delivery of the “Lumberjack song” was great, as was his turn in the Spanish Inquisition sketch. Idle’s singing was excellent too, and he boasted a tasteful number in black lingerie.

English judges

John Cleese was a larger, better-padded, and much rounder version of his younger self, and is no longer capable of performing silly walks. That didn’t stop his wit showing through, notably in the argument sketch.

Argument sketch

However, the best moment of the show came when Cleese and Palin reunited for the Pet Shop sketch and both managed to forget their lines, much to the delight of the audience and their mutual amusement. It was quite something to note how many of the audience were able to recite the lines about the famous parrot as the sketch developed. Truly, these were true believers.

Lovely Parrot that has ceased to be and is no more…

I thought it interesting that Terry Gilliam took a more up-front role in the sketches, probably because there’s a certain limit to the number of cartoons that can be deployed in a live show. He popped up as the redoubtable Gumby in the flower arrangement sketch and later on as Cardinal Fang of the Spanish Inquisition. All good stuff.

Flower arranging

The Spanish Inquisition

All of the sketches I anticipated showed up and were a delight. In fact, the whole show was a barrel of laughs from start to finish.

Australian philosophy

The live shows come to an end on July 20. However, this show will be broadcast to cinemas around the world on that date and is also due for rebroadcast on July 23 and 24. I might just attend it again.

For those who are interested, all of the photos shown, with the exception of the SkyLine shot, were taken from the general body of the audience using my son’s Canon S120 digital camera. As you can see, the low-light performance of this camera was pretty impressive for something that easily fits into a pocket! I used the camera on my Nokia Lumia 1020 to take the photo from the SkyLine.

Follow Tony @12Knocksinna


Automatic mailbox move requests in Exchange 2013


Soon after writing about the need to clean out lingering mailbox move requests for Exchange 2010, I asked the developers to add the ability to remove move requests automatically after a certain period. After all, it’s a royal pain to find that you can’t move a mailbox just because an ancient move request still exists.
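For anyone still running Exchange 2010, the cleanup has to be done by hand. A minimal sketch from the Exchange Management Shell might look like this (run it only after checking that the completed requests really are no longer needed):

# List the completed move requests that are still hanging around
Get-MoveRequest -MoveStatus Completed

# Remove them so that future moves of the same mailboxes are not blocked
Get-MoveRequest -MoveStatus Completed | Remove-MoveRequest -Confirm:$false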

As I was researching new features to write about in Microsoft Exchange Server 2013 Inside Out: Mailbox and High Availability, I was delighted to discover that a solution exists in Exchange 2013. It’s taken me a while to comment about the solution, but better late than never (I guess).

If you look at the parameters for the cmdlets that control how the Mailbox Replication Service (MRS) moves mailbox data, you’ll find that they all support the new CompletedRequestAgeLimit parameter. These cmdlets include New-MoveRequest, New-MailboxImportRequest, New-MailboxExportRequest, and New-MailboxRestoreRequest.

Note that the New-PublicFolderMigrationRequest cmdlet, which is used to migrate old-style public folders to their modern counterparts on either Exchange 2013 on-premises or Exchange Online, does not support an age limit parameter. This is very logical because public folder migrations can last for an extended period. Make sure that you read my notes about Exchange 2013 public folder migration if you haven’t started this process yet.
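Because no automatic expiry is available for public folder migration requests, any completed request has to be removed manually once the migration is finished and verified. A hedged sketch of that cleanup (assuming you want to clear every completed request) could be:

# Find public folder migration requests that have completed and remove them
Get-PublicFolderMigrationRequest | Where-Object {$_.Status -eq 'Completed'} | Remove-PublicFolderMigrationRequest -Confirm:$false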

If you don’t pass a value for the CompletedRequestAgeLimit parameter, the default of 30 days is used. And once this period expires, MRS cleans up by removing the request automatically. Of course, Exchange 2013 includes the migration service and mailbox moves are now processed in batches that are controlled by the migration service, but mailbox move requests live on under the covers and are the prompts for MRS to move mailboxes.
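To illustrate, here is a minimal sketch that sets a shorter retention period when a move is created. The mailbox and database names are hypothetical, so substitute your own:

# Move a mailbox and ask MRS to remove the completed request after 7 days
# rather than the default 30
New-MoveRequest -Identity 'Kim Akers' -TargetDatabase 'DB02' -CompletedRequestAgeLimit '7.00:00:00'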

Some might ask why it took Microsoft so long before they decided to auto-expire mailbox move requests. My theory is that it’s yet more evidence of the increasing attention paid to automation in Exchange 2013 that is brought about by the massive increase in scale seen in Office 365. Consider just how many mailbox moves occur between Exchange Online databases. Now consider just how much of a royal pain in the rear end it would be if all of the mailbox move requests had to be cleaned up manually. Automatic request expiration makes a huge amount of sense when you’re dealing with millions of mailboxes, just like it makes sense if you have just a few to look after.

Another interesting new parameter is Priority, which allows you to provide MRS with an indication of the importance of a job. MRS uses the priority along with other factors such as target server health (as measured by Managed Availability) to decide which job to process next. The default value is “Normal” and the scale extends from “Emergency” (highest) down to “Lowest”.
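Again as a hedged example (the mailbox and database names are made up), the priority is set when the request is created:

# Create a move request that MRS should pick up ahead of normal-priority work
New-MoveRequest -Identity 'VIP Mailbox' -TargetDatabase 'DB01' -Priority High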

Follow Tony @12Knocksinna


Why it is easier for Microsoft to innovate inside Office 365


My last post discussed how the increasing level of integration between the Office server products, used to create new features, poses some issues for hosting companies (other than Office 365) and for on-premises customers. At least in the case of Exchange, Microsoft uses the same code base for its cloud and on-premises products, so why does innovation appear in the cloud and not flow through to the on-premises versions?

I think there are three reasons. First, Microsoft obviously has the resources to design, develop, commission, and operate all of the components drawn from the different products that are needed to deliver more complex software than before. On-premises customers and other hosting companies might find it cost-prohibitive to embark on similar integration projects because they don’t see how the effort involved can be justified.

On the other hand, Microsoft is obviously in the software engineering business so complex projects are in their bloodstream and the benefits achieved from the work can be measured in different ways such as investment in future software directions, creating a competitive advantage over Google, something different to trumpet in the market, and so on.

Second, the Office 365 fabric that Microsoft has created allows them to test and refine new features with relatively small communities of users before gradually rolling out software upgrades en masse across the world. This is the approach taken, for instance, with the transition from RPC over HTTP to MAPI over HTTP currently under way within Office 365.

Microsoft has also indicated that they will need to make frequent and ongoing updates to the Office Graph after it is first made available to Office 365 customers to tune and refine the machine learning algorithms that discover the connections displayed by the “Delve” product based on graph data.

By comparison, when Microsoft delivers a new feature to on-premises customers or the hosting community, that feature had better be fully baked. If it isn’t and flaws appear, Microsoft will have a terrific support load to deal with. Getting updates out to the installed base is easier (in some ways) than before given the new servicing model for Exchange 2013 that delivers quarterly cumulative updates, but that’s nothing like the ability that Microsoft has to tweak and tweak again to refine new features running inside Office 365.

Finally, Microsoft does not have to deal with all of the complications that occur inside on-premises deployments when they design something for Office 365. Although the Office 365 infrastructure is massive and growing like a weed, it is relatively simple and extremely well-understood because every component is standardized. There is no notion of being able to install a third party application because it is requested by one or more Office 365 tenants. No one gets to vote about what Office 365 delivers. If something is not in the playbook, it is not available. In short, Office 365 is a highly defined infrastructure that delivers a highly consistent environment for software developers to design against.

These three factors make it possible for Microsoft to take on very complex engineering projects like Delve. The question is whether the same brains who come up with these features can turn their minds to how to package the resulting software in a way that it can be run outside Office 365.

Using the same code base does not automatically mean that the same features apply to every version that flows from the code.  Branching happens. We shall just have to see whether the gap that is now appearing between cloud and on-premises versions closes over time or if the cloud will always have the upper hand in terms of new features.

Follow Tony @12Knocksinna


Exchange Unwashed Digest – June 2014


June 2014 was an interesting month for my “Exchange Unwashed” blog on WindowsITPro.com as the material covered was pretty diverse as we went from platforms like Azure to storage firmware and all points in between. See what you think!

Why running Exchange on Azure is an unattractive proposition (June 26): Quite a few people are enamoured of the prospect of running Exchange on Azure – or Amazon Web Services for that matter – but I am not quite so sure. It is not a matter of technology but rather of economics. You can certainly make Exchange run on Azure or AWS but will it pay? And if you really want to run Exchange in the cloud, don’t better alternatives already exist?

Keeping up to date with what’s been happening with Set-MailboxDatabase (June 24): Examining the individual parameters of a commonly-used PowerShell cmdlet might seem like a silly thing to do, and so it is if you do it for sport. But you can find some interesting nuggets, which is what happened when I found that one of the cmdlet’s parameters has been deprecated in Exchange 2013 and a new one simply doesn’t work as it’s supposed to. But it will in Exchange 2013 CU6, or so I hear.

Is Microsoft really saying “don’t virtualize” Exchange? (June 19): Microsoft publishes a “preferred architecture” for Exchange 2013, which is nice, except that it doesn’t accommodate virtualization. “So what?”, you might say, if you prefer a nice physical server. But there are those who would virtualize everything, including their cat, and the preferred architecture came as a surprise.

Nothing to fear in MAPI over HTTP (June 17): I wrote a long feature article about the transition from RPC over HTTP to MAPI over HTTP when Microsoft first announced the technology. This post discusses some of the concerns that have surfaced in the meantime and why I am not too concerned about them.

OWA for Android debuts but leaves on-premises customers waiting (June 12): Microsoft announced the second leg of their OWA for Devices strategy at MEC last March but it then took them two months to provide working code for Android devices. And that code only works for some Android devices (small screen, no pads) and only if you have an Office 365 account. So the release was a bit of a damp squib. At least Microsoft provided a web page for people to use for complaints.

Encrypting email in transit makes a heap of sense (June 10): Google launched an initiative to embarrass email domains into encrypting messages in transit. This seems like an excellent idea. The good news is that Exchange has used opportunistic TLS for quite some time and that Office 365 encrypts mail in transit too. But you might not, so it’s a good topic to consider.

How flawed firmware can really give your DAG some replication headaches (June 5): The storage team at IBM did us all a favour by sending out the world’s most obscure and badly written support bulletin (well, a candidate for the prize anyway). The serious side of the bulletin told how a change to a storage controller configuration could have very bad side-effects for Database Availability Groups. Not what you want to read on a Monday morning…

The cyborgs are coming or how Microsoft “Clutter” will help you to do a better job of processing email (June 3): I’m a big fan of machine learning and the Clutter feature is based on that technology. It’s designed to help remove unimportant messages from your Inbox so that you can get right to processing the most important email. Unfortunately it looks like Clutter will only appear in Office 365 for now, but I still think that it’s pretty cool.

On to July. Vacation season might be in full swing but there are still posts and articles to write. Technology never stops evolving.

Follow Tony @12Knocksinna


Increasing Office 365 integration poses challenges for hosters and on-premises customers


It seems to me that a fundamental transformation is occurring within the Office 365 datacenters that has some consequences for those who offer alternate hosted services as well as on-premises customers. And it’s all to do with the level of integration that Microsoft is now building into their Office servers.

Looking back on the past two “waves” (or generations) of Office servers, we see a progression from almost no integration in the 2010 (wave 14) releases to a perceptible attempt to make integration more of a priority in the 2013 (wave 15) releases. Perhaps it’s following a “better together” theme, but more likely the simple realization that customers become more embedded into the Microsoft server infrastructure if they use more of the functionality incorporated into the software.

Thus, Exchange 2013 and SharePoint 2013 give us the wonders of site mailboxes and integrated eDiscovery across the repositories, with Data Loss Prevention (DLP) soon to be added to SharePoint Online. Exchange 2013 and SharePoint 2013 share the Search Foundation to make cross-platform searches feasible, a development that is a good long-term step even if it creates difficulties in that it is not possible to conduct a search across mailboxes that reside on servers running different versions of Exchange.

Which brings us to what’s happening in Wave 16, the next generation of Office servers that will gradually appear in breadcrumb format in the online services, dribbled out in a series of incremental updates over the next year before code is assembled in a form that can be delivered to on-premises customers sometime in late 2015 (my best guess).

“Oslo”, now named Delve, is one of the headline features due to appear in Office 365 by the end of 2014. Beta software has not yet been made available, so any assessment of what Delve is comes from descriptions provided online or in Microsoft presentations. From these, it seems that the technology called Office Graph, which provides the information accessible through Delve, depends on being able to retrieve information from repositories like Exchange and SharePoint and meld that data together so as to present the most important items to users, based on knowledge of end-user connections and activities refined by a healthy dose of machine learning.

What’s clear here is that the underpinnings of Delve depend on a lot of integration. And that the integration is possible in Office 365 because Microsoft owns the complete environment. Knowing that everything they depend on will be in place and connected together is a different prospect for software engineers when it comes to designing new features. It opens up a new vista of possibilities.

In the past, software engineers could not assume that any necessary component was in place, which meant that invariably complicated installation and configuration instructions had to be written to explain how to knit different products together to accomplish the intended goal. The process of configuring Exchange 2013 and SharePoint 2013 to make site mailboxes possible is an example.

Now, the Office 365 engineers know exactly what software is available down to a specific build number and can therefore construct their software with a freedom that was never previously available. It’s a world of difference that enables complexity like the Office Graph.

Being able to use features enabled through very complex software without fearing that some obscure configuration glitch will cause problems is also great for end users. It also binds people more tightly to the service and makes it harder for them to move elsewhere, which is precisely why Microsoft creates such features.

But features like Delve create all manner of questions for those who don’t use Office 365. For instance, will Microsoft make the necessary components available to third party hosting companies or will these features remain a competitive advantage for Office 365? It’s reasonable that Microsoft would have some period of exclusivity both to have an advantage and to fully sort the software, but I’m sure that the third party hosting companies will view a growing functionality gap with Office 365 with some dismay because the gap makes their offering less competitive and complete. It’s also something that the competition authorities in different jurisdictions might review.

The same functionality gap seems likely to occur for on-premises customers too. At least, Microsoft has revealed that Office Graph won’t be in the next major version of Exchange. Again, there’s a certain reasonability about the position because not every customer will want to invest in the resources necessary to deploy something like the Office Graph. It therefore follows that Microsoft would be better off dedicating engineering resources to more prosaic but widely-used features than assigning them to create the installation and configuration procedures necessary for an on-premises deployment.

We can treat Office Graph as a one-off exception and assure hosting providers and on-premises customers that they will not be left behind in terms of functionality. But I think the integration possibilities that now exist within the Office 365 servers will present Microsoft with the opportunity to deliver future features that will be unique to the Microsoft cloud. It just makes sense.

Follow Tony @12Knocksinna


NFS and Exchange: Am I a sheep? Some at Nutanix think so…


A number of people working for Nutanix have been in the vanguard of those who would like to see NFS supported as a valid storage option for Exchange deployments, some of whom cheerfully recommend that “customers continue to run Exchange on Nutanix containers presented via NFS.” In my mind this is a bad idea because it puts customers into an unsupported configuration. Running what is often a mission-critical application in a deliberately unsupported manner has never really appealed.

As you might be aware, the current situation is that Exchange 2010 and 2013 support three storage architectures: Fibre Channel, iSCSI, and DAS. These architectures are supported because Microsoft knows that they work well with Exchange and are implemented by a wide range of storage vendors in a predictable and robust manner, all of which creates supportable solutions for customers.

As I have discussed before, Microsoft has a number of technical issues that have to be resolved before NFS can be added to the list of supported storage architectures. We can argue this point to the nth degree, but it’s not me that needs to be convinced that NFS is supportable for solutions created by multiple vendors. At the end of the day, the vendors who want to sell NFS-based solutions have to convince Microsoft. And Microsoft have to be convinced that NFS will be implemented at the same high quality by every vendor – and that those vendors can support their hardware in the field if things go wrong.

I think Microsoft has another concern too: there doesn’t appear to be much customer demand for Exchange to support NFS. At least, this was the strong impression formed when Jeff Mealiffe queried an audience of several hundred people during his “Virtualization Best Practices” session at MEC last April. Only three or so people put their hands up when asked. If the commercial demand existed, you can bet that Microsoft would be all over NFS like flies around cow dung.

Those who want to see NFS join the list of supported architectures are diligent about advancing their case across a range of social media. An example is a Reddit debate. Twitter is also a favourite medium, which we’ll get to shortly. Another is the write-up in a TechNet forum that sets out the case for Exchange to support VMDKs mounted on NFS datastores. This post makes the point that there is a “gap in supporting Exchange in VMware vSphere & KVM with datastores connect via NFS 3.0.” That gap exists but the evidence from the audience pool at MEC is that it is a relatively small gap in the overall context of the Exchange market.

In saying this I accept that the gap (in support) might be terribly important to certain companies who wish to standardize their virtualization platform around NFS. But that’s not evidence that consistent and perceivable customer demand exists in sufficient quantity to warrant Microsoft to do the work necessary to certify that NFS indeed works (in a supportable manner) with Exchange.

Remember that once a storage architecture is supported, that support has to stay in place for an extended period (probably ten years given Microsoft’s normal support policies) and that support must be available on a worldwide basis. Convincing management to sign up for the costs necessary to support another storage architecture, without evidence that this will do anything to increase either Microsoft’s profitability or market share, is a hard case to argue. And this proposal has to be viewed in the context of an on-premises market that will decline over time as a sizeable chunk of the installed base moves to Office 365 or other cloud providers. So the case being advanced is a good technical debate that runs into rough waters when money and market issues are factored into the equation.

It does not seem like a good economic decision for Microsoft to support NFS, but does the technology work? It certainly seems like quite a few companies have followed the Nutanix advice to run Exchange databases on NFS storage. To demonstrate that everything works just fine, Nutanix explained in the Reddit debate how they ran a JetStress test. They reported that “the Exchange Solution Reviewed Program (ESRP) Jetstress test (ran) for 24 hours in a single Windows 2008 VM with 4 PVSCSI adaptors serving 8 x VMDKs hosted on a single NFS datastore and no surprise it PASSED with flying colours.” This is all very well, and I am sure that everyone associated with the test was tremendously happy.

However, running Jetstress is no way to prove supportability, which is the key issue debated here. Jetstress certainly allows you to stress storage subsystems to prove their stability, so the test simply proves that (a specific version of) Nutanix’s implementation of NFS works when exercised by JetStress. It does nothing to prove the case for supportability of NFS within Exchange deployments. More work would have to be done to ensure that a guaranteed standard of implementation and performance was achieved for NFS across the industry. A single test run by a single vendor against a single storage configuration proves nothing.

It’s also true that the configurations that are capable of passing the guidelines laid down in ESRP are often “odd” and impractical in terms of real-life deployments, so saying that a test was passed with flying anything really doesn’t help us understand how NFS would cope with Exchange over a sustained period across multiple software versions deployed by different people around the world.

Some comments from a Nutanix employee on my Twitter feed

Going back to Twitter, I received some advice from Nutanix employees that I should engage with their initiative to encourage Microsoft to support NFS. I should stop being a sheep and work to convince Microsoft to change their “outdated and embarrassing support policies.” Quite.

I found it odd to see the debate reigniting because Nutanix has a configuration that presents iSCSI storage to Exchange. Given that this configuration exists, I wonder why we have to debate NFS support at all. It would seem much more productive all round for Nutanix to tell customers to use the fully-supported iSCSI approach rather than expending all these cycles trying to convince Microsoft that NFS should be supported too.

In my past, I spent over a decade in Chief Technology Officer or Technical Director jobs at Digital, Compaq, and HP. As such, acknowledging that I have been known to be wrong (many times), I think I have a reasonable grasp of how to assess the import and worth of a technology. So I spent a little time looking at the information available on the Nutanix web site.

Up front I have to admit that I have no hands-on experience of Nutanix products. Their literature indicates that they have some interesting technology created by people with a strong track record in the storage industry. As such, its products are certainly worth checking out if you are interested in virtualization. However, I have some doubts about some of their statements concerning Exchange.

For example, browsing Nutanix’s “Microsoft Exchange Solution Brief,” I was interested to find the assertion “With more than 50% of Microsoft Exchange installations now virtualized, it is critical to select the right server and storage architecture.” Although no one could quibble with the advice, I think they are way off with the preamble. I have never seen any reliable data to indicate that the majority of Exchange deployments are virtualized and no evidence is offered by Nutanix to support this statement. And anyway, what do we mean by an installation being virtualized? Does it mean that a single CAS is virtualized? Or all CAS and no mailbox servers? Or all Exchange servers in an organization?

It’s absolutely certain that virtualization is an important technology for many who deploy Exchange but I don’t think it is so widely deployed as to now be in the majority. I asked Microsoft and they didn’t think so either but couldn’t comment publicly. I accept that the document is intended for marketing purposes but it is not good when the first statement in the text can be so easily queried.

Nutanix and Exchange High Availability

Another question is how Nutanix positions their technology alongside Exchange high availability. The picture above is taken from a presentation given by Nutanix representatives to customers where the recommendation is to use VM replication to copy databases to a disaster recovery site. Another recommendation is to take a separate copy to use for litigation or in-place hold purposes.

This arrangement smacks of the old-fashioned arrangements needed before the advent of the Database Availability Group (DAG) in Exchange 2010 where storage subsystems copy Exchange databases to a separate disaster recovery site. It doesn’t reflect current best practice, which is to stretch the DAG to cover servers in the DR site and use normal Exchange replication to keep database copies in the DR site updated so that they can be quickly switched into action should a problem affect the primary site.

Exchange has transitioned from depending on separate hardware replication to being able to provide this level of resilience within the application. I would have had no problem with the approach in an Exchange 2007 deployment but I think its value must be questioned in Exchange 2010 and 2013. It seems like a case where a product can do something so we’ll do it, even though a better method is available. I can see how the approach would work, but it’s always best to seek simplicity in DR and it seems better to allow the application to do what the application is designed to do in these situations rather than to layer on an additional technology to serve the same purpose. The case can be argued either way.

I also don’t understand the need for a separate immutable copy for litigation or in-place hold. These are features that are designed into Exchange 2010 and Exchange 2013 and you don’t need any additional hardware or software to gain this functionality. To be fair, some circumstances might require a configuration like this to satisfy a particular regulatory demand, but I can’t think of one right now.

I am not slamming Nutanix’s technology or the value proposition that they advance to potential customers. I rather like their can-do attitude and their willingness to take on the big players in the storage world (an example is in this blog about EMC) with new technology and approaches. However, I do think that some Nutanix employees are prone to hyperbole and that the advice extended to the Exchange community is sub-optimal and in some cases (as in the solution brief), incorrect.

Perhaps Nutanix would be better to focus on building Exchange solutions based on their technology that embrace, enhance, and extend the functionality that’s already available in Exchange rather than starting crusades to support technology that does not seem to be wanted by many. At least, not by those who attended the recent Microsoft Exchange Conference, a group that I assume represents the core audience to which Nutanix wants to sell their products.

After all, if Nutanix can deliver iSCSI storage (a supported storage architecture for Exchange), why do they want to insist on campaigning for NFS? The “Nutanix Bible” and the blog referenced above indicate that this is possible, so I really don’t know why all the hot air exists around NFS.  I must be missing something!

Follow Tony @12Knocksinna
