VAX Notes remembered


I was surprised and delighted to come across a white paper called “The Camelot of Collaboration” that documents the use of a now-forgotten technology called VAX Notes within the late-lamented Digital Equipment Corporation (DEC).

Collaboration comes naturally to many today, given the information presented on blogs, social networking sites such as Facebook, the stream of updates from Twitter, and the wide array of web sites maintained by corporations and individuals to publicize products and causes. In fact, we live in a world where information is available from a bewildering number of sources, should we feel the need to go looking. As always, the problem is to separate the good information from the misleading, a problem that has existed since man first started to publish.

But going back to the world of the 1980s, information was not so easy to obtain. Sure, we had books and magazines to read, but the thought of being able to get immediate, up-to-date information that could help to resolve a problem just did not exist. Except, of course, inside DEC, where VAX Notes, a collaboration technology, made information available to employees around the world as long as they could get to their faithful video terminal and connect to DEC’s internal network (DECnet).

As the white paper relates, in August 1989 some 10,355 VAX Notes conferences were active inside DEC, 390 of which were dedicated to employee interests such as “Good restaurants in the South of France”. But the majority of traffic went into the 9,965 conferences set up to discuss matters relating to DEC technology, such as the inner workings of VAX/VMS, how best to configure TCP/IP services for VAX/VMS, or even how to connect PCs and Macintoshes to VAX servers. The best thing about VAX Notes was the way that the people who engineered products would respond to questions that arose in the field, meaning that someone working in Hong Kong who encountered a problem with a product could ask a question in the appropriate conference at the end of their working day with a real expectation that it would be answered by engineering in the U.S. by the time they came into work the next day. And if engineering couldn’t answer, some other DEC employee probably would.

None of this seems earth-shattering now, but consider that it all happened when telephone calls were expensive, the Internet was a loose collection of academic systems connected by dial-up modems, browsers had not yet appeared, and email was restricted to people who required a business reason to be assigned a mailbox. It’s just awesome how far we have come since 1989 and how advanced VAX Notes was at the time.

I have a special reason for remembering VAX Notes in 1989 because I was the project leader for the work done to integrate VAX Notes with ALL-IN-1 Version 2.4 in that period (John Rhoton, who has since gone on to become a well-respected cloud computing strategist, wrote a lot of the code to integrate the two products). Most of the work was done in DEC’s European Technical Center in Sophia-Antipolis, France or the European Office Engineering Team in Turin, Italy and then integrated with the rest of ALL-IN-1 in DEC Park Reading, England. I enjoyed this project enormously because it brought the best internal collaboration tool that DEC could offer to a much wider audience outside the company. Regretfully, neither the native version of VAX Notes nor the integration with ALL-IN-1 really took off outside DEC. I don’t think this was the fault of the technology, but rather that a collaborative ethos did not exist inside other companies in the same way that it did inside DEC.

Another interesting comment from the white paper was that the effect of Windows and Microsoft technology and the growth of the web eventually “gutted the technology infrastructure for collaboration”. Looking back, I think this statement is too strong. A poor PC client for VAX Notes certainly contributed to its demise as more users moved to Windows, but it’s also true that the web offered other sources of information that became more important as customers moved from proprietary technology to more general standards (think of the transition from DECnet to TCP/IP). In addition, within DEC there was also wider use of other collaborative technologies such as email distribution lists and even Exchange public folders (DEC eventually had tens of thousands of public folders, many of which were used for short periods and then left to linger in the public folder hierarchy). Some VAX Notes conferences still persist, not least those offered with a web interface by the Encompass U.S. chapter, part of the HP users group, at http://encompasserve.org/anon/htnotes/.

Len Kawell wrote Notes-11 (his LinkedIn profile says that this work was done “in his spare time”) and later worked with Ray Ozzie on Lotus Notes. Notes-11 was then taken on by Benn Schreiber and Peter Gilbert as a “skunk works” project within DEC Central Engineering and resulted in VAX Notes. After the first version of VAX Notes was complete, Benn Schreiber moved to the DECwest group in Seattle to work on the famous PRISM project with Dave Cutler. DEC eventually canned PRISM, Cutler moved to Microsoft (where he is now a Senior Technical Fellow) to create Windows NT, and the rest is history.

The early versions of Lotus Notes traced a direct connection to VAX Notes in more than just the name. Think of databases as equivalent to conferences and you see what I mean. Of course, Lotus Domino is much different now than it was in the early days, but a link back to VAX Notes can still be traced in its technical ancestry.

Collaboration was just one of the aspects that made it great to work at DEC in its heyday. It’s just such a pity that DEC came to the end that it did, largely due to some poor strategic decisions by management as well as some “interesting” bets on technical direction. As always, hindsight is everything…

Follow Tony @12Knocksinna


Is Dell about to cut TEC?


Word is reaching me that Dell might have decided to cancel future TEC (The Experts Conference) events. If this is true, the last TEC was in Barcelona in October, and it marked the end of a series of events that was fun and full of good information, one that I only came to appreciate late in its run.

Associated with the rumor is the solid news that Gil Kirkpatrick, the father of TEC, has left Quest to become CTO at ViewDS. Gil was a major player in keeping TEC going ever since he started the event at NetPro (acquired by Quest in 2008). It’s not altogether surprising that he might decide to move on to other challenges as the prospect of working at a major corporation like Dell is not always appealing.

Of course, the rumors that have been shared with me might be just that, and Dell may be about to revamp TEC to make it brighter and better than ever before. I would very much welcome this if it were the case, but all my corporate antennae and experience of what happens after large acquisitions tell me that such a development is unlikely.

The problem is simple. Corporate America worships at the altar of financial data and the returns that companies extract from investments. Dell paid $2.4 billion to buy Quest last July and now it’s time for efficiencies to squeeze cost out of Quest so that Dell can say that their purchase was good for stockholders. The nicest way that this can be presented is to talk about “synergies” where Quest and the larger Dell work together to make the combination more valuable than had the two companies remained separate. Those who have been through the experience of the aftermath of corporate acquisitions might refer to synergies as “slash costs”, something that is always hard to do without impacting people.

The nice beancounters at Dell have probably never attended a TEC event and so have no idea of the value that TEC delivers to various groups. Yes, I know that Quest uses TEC as a sales vehicle because they bring lots of their customers to the events to receive briefings on new products, but it’s always seemed to me that Quest has done this without getting in the faces of the people who aren’t lined up to be victims of the sales force. The point is that the TEC sessions have usually been high quality and interesting to attend, even if you don’t quite understand the likes of Brett Shirley of Microsoft as he natters on about the wonders of ESE database internals.

From the perspective of the Exchange community, it would be sad if TEC went, because it would leave the conference schedule pretty bare, especially in Europe. TechEd isn’t much good because it attempts to cover too many technologies, MEC 2012 was excellent but we have to see how it goes in 2013, and Exchange Connections is a pale shadow of its former self. There are other, more local events, but not many.

Perhaps you can help by telling the folks at Quest and their Dell overlords that any decision to stop TEC would be as attractive as dirty canal water. Your opinion might get a hearing if you’re about to buy a heap of Dell hardware. On the other hand, the word of the beancounters carries an awful lot of weight and if that spreadsheet cell says “cut”, it might mark the end of TEC. I wish Gil Kirkpatrick good luck in his new role but the other news about TEC is sad.

Follow Tony @12Knocksinna


Exchange 2013 EAC showing some frayed edges


Last July I wrote about the Exchange Administration Center (EAC) and how it was being introduced in Exchange 2013 to replace the creaking and slow Exchange Management Console (EMC) as featured in Exchange 2010 and Exchange 2007. Now that we’ve had some time to assess the finished object, as shipped in the RTM version of Exchange 2013, some remarks on how EAC turned out seem appropriate.

To begin, some positive points. First, I very much appreciate the general speed of EAC when compared to EMC. Things just seem to happen more snappily.

Second, it’s great to have a console that runs on so many browsers. I move between Chrome and IE and EAC appears to run smoothly on both. I have no doubt that it also works well on Safari and Firefox. Of course, we’re not just discussing PCs or workstations any more either – EAC opens up the ability to run on Windows RT Surface, iPads, and other devices, which is also a very good thing.

Third, the introduction of EAC has made administration much simpler all round in Exchange 2013. It’s now a matter of EAC or EMS when you want to work with server objects, or OWA options (really some reused EAC code) when users want to change mailbox settings. The sometimes confusing split between EMC and the Exchange Control Panel (ECP), where some administration features were implemented in one but not the other, is no longer present. In addition, user options are much better integrated into OWA than is the case with Exchange 2010.

But inevitably as with all new software, some problems also exist. I’ve already publicly noted my concern that EAC has dropped the three PowerShell learning tools that are so useful to anyone who wants to understand the cmdlets and syntax of EMS, and that I didn’t like the fact that context-sensitive menus are no more in EAC but do exist in OWA. Aside from those issues, here’s my current hate list for EAC:

No preview capability is provided for dynamic distribution groups. On the surface, you might say “so what”, but this is a useful feature in Exchange 2007 and Exchange 2010 that allows you to determine whether the query you base a group upon will actually address anyone when someone uses the group to send mail. And what’s worse is that a preview feature that looks very much like the one you’d use for dynamic distribution groups is available for email address policies. If Microsoft can create and implement a preview function for email address policies, wouldn’t a little cut-and-paste magic work for dynamic groups? After all, both email address policies and dynamic distribution groups use OPATH queries to determine the set of objects that they operate against.

A preview option like this would be really nice for dynamic distribution groups
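Until EAC gains a preview option, EMS can fill the gap by replaying the group’s OPATH filter through Get-Recipient. A sketch, where the group name is purely illustrative:

```powershell
# Preview the membership of a dynamic distribution group in EMS.
# "Sales Team" is an illustrative group name.
$ddg = Get-DynamicDistributionGroup -Identity "Sales Team"

# Replay the group's OPATH filter, scoped to the same container the
# group uses - essentially what a preview button would do.
Get-Recipient -RecipientPreviewFilter $ddg.RecipientFilter `
    -OrganizationalUnit $ddg.RecipientContainer |
    Select-Object Name, RecipientType
```

If the list comes back empty, the query behind the group addresses no one and needs another look.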

The engineer who determined how group membership is shown is clearly under the impression that either groups are very small or administrators enjoy frantic scrolling through large data sets. There’s no way to resize the list control either so this qualifies in the category of brain-dead and dumb. Generally speaking, I dislike the way that space is used (or rather, abused) in Microsoft’s new graphic design guidelines, but who am I to comment?

Four items in a list of distribution group members? Simply not enough!

If you assign Full Access permission to a user’s mailbox, you’ll see that EAC prepopulates Exchange Trusted Subsystem and Exchange Servers in the set of objects that have full access. Unsurprisingly, this is because these groups enjoy full access to every mailbox, which they use to manipulate mailboxes as required by Exchange. The problem is that you’re allowed to mess with these entries and remove either or both from the list. Now, if you mess with Exchange permissions, my experience is that bad things happen. I’ve removed both groups just to see what happens and nothing dramatic has occurred so far, but I have a nagging suspicion that something will soon – and if it doesn’t, then why are these highly privileged groups shown in the list?

Exchange Trusted Subsystem and Exchange Servers need to access mailboxes, but does EAC need to show them?
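If someone does remove those default entries, EMS can show what is left and put an entry back. A sketch, with the mailbox name purely illustrative:

```powershell
# Inspect who holds Full Access on a mailbox.
# "Kim Akers" is an illustrative mailbox name.
Get-MailboxPermission -Identity "Kim Akers" |
    Where-Object { $_.AccessRights -like "*FullAccess*" } |
    Format-Table User, IsInherited, Deny -AutoSize

# Restore an entry for the Exchange Servers group if it was removed.
Add-MailboxPermission -Identity "Kim Akers" `
    -User "Exchange Servers" -AccessRights FullAccess
```

Note that the default entries are normally inherited; re-adding one this way creates an explicit entry, which is close enough for recovery but worth knowing about.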

It’s true that EMC shows you the same information (EMC actually shows you NT Authority\SELF too), but I complained about the problem for Exchange 2010 and I have complained about it for Exchange 2013 too (see the formal bug report on Microsoft Connect).

Change often brings both good and bad, with the ratio between the two dependent on the views and experience of the observer. Some people will absolutely love EAC and some will hate the new console. Others will be like me and grumble about the bits that seem broken or problematic while accepting that Microsoft has cast the die and won’t bring back EMC (thankfully). All we can do is note the problems and file bug reports to make Microsoft aware that more work is needed to make EAC a finished product. Make sure that you file a bug for anything you find that you think Microsoft should fix in an update for Exchange 2013. If you don’t, then you cannot complain if EAC remains in the same state for the next few years.

Follow Tony @12Knocksinna


“Exchange Unwashed” November 2012 digest


November was another busy month for the “Exchange Unwashed” blog. Here’s the digest of the articles that were published.

Visio stencils released but no news of Exchange 2010 SP2 RU5 or TechNet updates (November 29). Microsoft released a very useful update of the Visio stencils that many use to create Exchange-related documentation. The update incorporates new icons used with Exchange 2013, SharePoint 2013, and Lync 2013. On the downside, we’re still waiting to hear from Microsoft about how they have fixed the DAG bug that appeared in Exchange 2010 SP2 RU5 and whether they will change their current strategy of forcing Exchange 2013 content to be the default when you search TechNet for Exchange. As I write this update, there are 71 comments on the EHLO blog entry of 8 November that informed everyone about the change, pretty well all of which are negative. I can’t think of a similar change made by the Exchange team that has caused so many complaints recently, and while I have some sympathy for the folks at Microsoft who took the decision, because I’ve no doubt that it was made with good motives at heart, it simply hasn’t worked and should be reversed. If you care about this issue, why not let Microsoft know by adding your comment to EHLO?

Exchange 2013 DAGs, Windows 2012, and the CNO (November 27). Exchange does an extremely good job of hiding the complexity that underpins Database Availability Groups but sometimes a change made elsewhere means that a crack appears. And so it is with Windows 2012, which isn’t as accommodating to applications that want to create computer objects in Active Directory as Windows 2008 is. But there’s an easy workaround, as long as you know about it.
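The workaround referred to above is commonly to pre-stage the DAG’s cluster name object (CNO) so that Windows Server 2012 doesn’t have to create it. A sketch, where the DAG name, OU, and domain are all illustrative:

```powershell
# Pre-stage the cluster name object (CNO) for a DAG on Windows
# Server 2012. DAG1, the OU, and contoso.com are illustrative.
Import-Module ActiveDirectory

# Create the computer account in a disabled state; the cluster
# service enables it when the DAG is formed.
New-ADComputer -Name "DAG1" -SamAccountName "DAG1" -Enabled:$false `
    -Path "OU=Exchange Servers,DC=contoso,DC=com"

# Then grant the Exchange Trusted Subsystem group (or the first DAG
# member's computer account) Full Control over the new object, for
# example with dsacls:
# dsacls "CN=DAG1,OU=Exchange Servers,DC=contoso,DC=com" /G "CONTOSO\Exchange Trusted Subsystem:GA"
```

With the CNO in place and permissioned, creating the DAG proceeds as it did on Windows 2008.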

Exchange 2013 and TMG explained (November 22). Sometimes it seems like we live in a crazy world. Why else would Microsoft publish an interesting and worthwhile blog describing how to configure Threat Management Gateway (TMG) to support publication of Exchange 2013 services some nine days before TMG was removed from the Microsoft price list (it has happened now)? Of course, there’s a good reason. TMG is very popular in the Exchange world and it’s going to be supported until April 2015, so there’s lots of time to get good value from the excellent advice contained in the post.

Microsoft Office Filter Pack 2013 now available, but it’s not needed by Exchange 2013 (November 20). Strangely, very soon after Microsoft posted the Office 2013 filter pack (at this download point), the pack disappeared into Web limbo and hasn’t reappeared since. I assume that someone discovered something that wasn’t quite right that caused Microsoft to remove the download, but in any case it doesn’t matter very much because Exchange 2010 servers are quite happy to use the Office 2010 Filter Pack (SP1) and Exchange 2013 doesn’t need it at all. Why? Exchange 2013 uses the Search Foundation (FAST) instead of the older MSSearch component used in Exchange 2010, and Search Foundation has built-in indexing capability for all of the Office file formats plus PDF.

Two PR disasters for the Exchange team in a week. Not good (November 16). The PR disasters are the TechNet update and the DAG bug in Exchange 2010 SP2 RU5. What struck me is that these were easily avoided snafus and that they took the edge off the feeling of well-being generated by a highly successful Microsoft Exchange Conference (MEC) followed by a high-profile launch of Exchange 2013. It’s distressing that a quality problem was found so quickly in Exchange 2010 SP2 RU5 because there have been other examples of poor quality control in recent roll-up updates. I do hope that the Exchange team loses the habit of shooting themselves in the foot soon. Then again, these problems do give people like me something to write about.

Not a good week for Exchange Online (November 15). You might conclude from this digest that I spent November complaining about various events that happened in Redmond. It’s true that Microsoft provided lots of opportunity for complaint. In this case, some problems in Exchange Online (Office 365) caused poor service to users in the Americas. On the other hand, as an EMEA-based user of Office 365, I experienced perfect service during the month and have done so ever since I first signed up as a paying customer some 15 months ago. And as I point out, even with a couple of glitches in reliability, I have a very strong feeling that Office 365 delivers a far more robust and reliable service than many IT departments are capable of providing. I could be wrong, but I don’t think so.

Exchange TechNet update unwelcome and unwanted (November 13). My original commentary on Microsoft’s decision to force-feed Exchange 2013 content to all and sundry. I thought it was a brain-dead decision then and I still do. Read this article to discover why.

Migrating an Exchange 2010 DAG to Exchange 2013 (November 8). Lots of companies run an Exchange 2010 DAG and will be looking forward to the prospect of migrating their servers to Exchange 2013. The good news is that the process is straightforward. But you have to create a new DAG built from Exchange 2013 servers and move mailboxes across. Not difficult, it just needs careful planning and execution.
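The process described above boils down to a handful of EMS steps. A sketch, where every server, DAG, and database name is illustrative:

```powershell
# Sketch of moving from an Exchange 2010 DAG to Exchange 2013.
# DAG2013, FS1, EX2013-1/2, and the database names are illustrative.

# 1. Build a new DAG from Exchange 2013 servers.
New-DatabaseAvailabilityGroup -Name "DAG2013" -WitnessServer "FS1"
Add-DatabaseAvailabilityGroupServer -Identity "DAG2013" -MailboxServer "EX2013-1"
Add-DatabaseAvailabilityGroupServer -Identity "DAG2013" -MailboxServer "EX2013-2"

# 2. Move mailboxes from an Exchange 2010 database to a database
#    hosted in the new DAG.
Get-Mailbox -Database "DB2010-1" |
    New-MoveRequest -TargetDatabase "DB2013-1" -BatchName "DAG migration"

# 3. Track progress.
Get-MoveRequest -BatchName "DAG migration" | Get-MoveRequestStatistics
```

Nothing here is exotic; the planning effort goes into sequencing the moves and retiring the old DAG once its databases are empty.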

Outlook’s missing picture compression feature (November 6). It might be more accurate to call this “Outlook’s obscured and hidden picture compression feature”. The article discusses a feature that I used a lot with Outlook 2003 to send compressed versions of photos. Given that digital photos are becoming larger and larger, it seems that the feature would be even more useful now than it was ten years ago, but I couldn’t find any trace of it in Outlook 2010 or Outlook 2013, nor could any of the Exchange and Outlook engineers that I asked at MEC. As it turns out, a lingering trace of the feature (as it was) is still around, but its usefulness is much reduced because it’s so obscured.

Apple releases iOS 6.0.1 to fix Exchange meeting bug (November 2). ActiveSync is a very successful protocol because it is so easy for third party mobile device vendors to integrate ActiveSync into their own email clients and so be able to connect to Exchange. Unfortunately, the success of ActiveSync has been marred by some strange implementation details. Microsoft doesn’t oversee how the companies that license ActiveSync use it and the potential always exists that problems will be revealed when new versions of an ActiveSync client are released. And so it was with the Apple mail client in iOS 6 when the famous meeting hijack bug came to the fore. Things are apparently better in iOS 6.0.1. At least, that’s the word from Cupertino…

Hopefully there will be fewer PR disasters and similar issues to discuss in December. It would be nice to get through a month by being able to focus on technology for technology’s sake.

Follow Tony @12Knocksinna


Microsoft increases prices for Office products – time to chat with your sales rep


Microsoft customers probably didn’t enjoy reading about the new prices for Office 2013 products that are reportedly on the way from December 1. The date is important because that’s when the 2013 versions of server products like Exchange, SharePoint, and Lync are added to Microsoft’s price list and sales representatives have the chance to share the information with their customers.

Microsoft is also changing how client access licenses (CALs) are priced, which creates further potential for CIOs to receive higher bills from Microsoft. In this case, the change seems to make excellent business sense from the Microsoft perspective because it reflects the fact that people tend to use multiple devices to access servers today. Anyone looking at the proliferation of iOS, Android, and Windows Phone devices and now Windows RT Surfaces, will rapidly realize just how many devices can connect to servers like Exchange. If people use more than one device, then it makes sense to license on a per-user rather than device basis.

Interpreting the number of CALs that you need to buy can be a murky business. According to Microsoft’s web site (covering up to Exchange 2010, no details are yet available for Exchange 2013):

“Exchange requires a CAL for each user or device that accesses the server software.”

Exchange 2010 supports both device and user CALs, with a user CAL usually costing about a third more than a device CAL. I wonder how many organizations really understand the usage pattern that drives the correct combination of these CALs. For example, do they count up the number of ActiveSync partnerships that exist for each mailbox and use that as the basis for determining the number of CALs they should report to Microsoft? My guess is that few do, probably through a lack of guidance from Microsoft and of knowledge on the part of customers. Steve Goodman’s article on MsExchange.org is a good place to start if you really want to find out just how many ActiveSync partnerships are in use. Of course, if your company is large enough, you’ve probably negotiated a deal with Microsoft that allows just about any number of devices to connect to Exchange.
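Counting partnerships per mailbox is straightforward in EMS. A sketch (one approach among several; on a large organization this is slow, so treat it as a starting point rather than a licensing report):

```powershell
# Sketch: count ActiveSync partnerships per mailbox (Exchange 2010 SP1
# or later), as raw input to a device vs. user CAL calculation.
Get-CASMailbox -ResultSize Unlimited `
    -Filter { HasActiveSyncDevicePartnership -eq $true } |
    ForEach-Object {
        # One statistics record per partnered device.
        $devices = Get-ActiveSyncDeviceStatistics -Mailbox $_.Identity
        New-Object PSObject -Property @{
            Mailbox = $_.Identity
            Devices = @($devices).Count
        }
    } | Sort-Object Devices -Descending
```

Mailboxes with several devices each point toward user CALs; a fleet of shared, single-device kiosks points the other way.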

But then we come to the old standard CAL versus enterprise CAL debate and the confusion that exists around that calculation, which is based on the features that you actually use within the product. For example, if you use database-based journaling, you need standard CALs, but if you switch on per-user journaling, costs dramatically increase because enterprise CALs are now required. Exchange 2010 includes a script that’s supposed to give guidance on the topic, but the script has experienced logical and arithmetic difficulties in the past and its output should not be relied upon. You can download a better version of the script from Microsoft, but even so, its calculations should only be regarded as a starting point.

Of course, when you’re calculating CALs in an Exchange environment, you can’t forget to include the licenses needed for Windows and Outlook (or Office). Exchange 2003 used to include the Outlook CAL, but Microsoft removed this valuable right when they shipped Exchange 2007, much to the fury (at the time) of the Exchange community. We’ve now forgotten that Exchange used to always include client software with the server (remember the first MAPI Viewer client included with Exchange 4.0?). I guess removing Outlook from the mix simplified matters, or maybe it just increased profits.

Those addicted to conspiracy theories might conclude that increasing prices for on-premises products is simply a not-so-subtle hint from Microsoft that now’s a good time to consider moving all or part of your work to the cloud. Of course, Microsoft hopes that you’ll select Office 365 as the natural landing point for anyone who uses on-premises versions of the Office products today, but any price increase can force customers to review the other options that are available, such as Google Apps. After all, if you’re going into the cloud, you might as well check out the competitors.

Another possible downside of all the FUD around licensing is that customers will simply refuse to upgrade from their current versions to Office 2013 any time soon. This won’t come as good news to Microsoft because they like to create a flood of feel-good PR when new software is adopted quickly, which obviously can’t happen if customers dig in their heels and say “hell no, I’m not going” because of increased prices.

Like anything else to do with selling, negotiation is always possible, and I imagine that many customers will soon be talking to their Microsoft representative to understand exactly what the pricing changes mean to them and how they will influence deployment plans over the next few years. Software licensing is a complex area. It certainly pays to be prepared by understanding exactly what server software you need and how many CALs are necessary across all the various products in use. Equipped with that knowledge, at least you’ll have some data on which to base a discussion about list prices, discounts, and timing when the nice person from Microsoft turns up to spread the good news.

Follow Tony @12Knocksinna


Kemp reveals LoadMaster changes after Microsoft drops TMG


In my post that discusses the notion of creating protocol-specific namespaces for use with Exchange 2013, I wondered how the load balancing vendors would react to the changing landscape of the Exchange world. Of course, it’s not just the new slimmed-down and nearly stateless Client Access Server (CAS) that creates some upheaval, mostly in the change of focus from L7 to L4, but also Microsoft’s decision to effectively check out of the on-premises security product market. The biggest problem here is the demise of the Threat Management Gateway (TMG) because of its popularity when used as a reverse proxy alongside Exchange.

At recent conferences such as MEC and TEC, Microsoft has been careful to point out that mainstream support for TMG will still be available until 2015 and that they won’t stop selling licenses until December, which is just a few days away now. Perhaps we’ll see a booming second-hand market in TMG licenses once Microsoft closes up shop.

In any case, signs are appearing that the load balancing vendors are considering their options in the space that Microsoft has vacated and how best to cope with the bright new world of Exchange 2013. I am obliged to Bernd Kruczek, who forwarded me a memo from Kemp Technologies (you can download the full text here) containing details of the strategic path they intend to follow. Bernd assures me that this text is not company confidential, which is why I discuss it here.

The core of Kemp’s message is that their LoadMaster technology will soon be extended to support the following features, with a target availability of Q1 2013:

  • End Point Authentication for Pre-Auth
  • Persistent Logging and Reporting for a new User Log
  • LDAP and Kerberos communication from the LoadMaster to the Active Directory
  • NTLM and Basic authentication communication from a Client to the LoadMaster
  • RADIUS communication from LoadMaster to a RADIUS server
  • Single Sign On across Virtual Services

These features are being launched in response to the TMG situation. I imagine that Kemp will have some recommendation for deployment alongside Exchange 2013 in the same timeframe. Anything earlier might be too soon to be useful as they’ll have to test across the various mixes of Exchange 2007, Exchange 2010, and Exchange 2013 and likely involving different sets of clients too.

Please don’t read this post as a recommendation for Kemp LoadMaster (the technology is worth testing so that you can make your own mind up as to whether it’s a good fit for your needs). All we have is a marketing memo that promises new features and we’ll have to wait and see what their implementation looks like when it’s available. In the interim, Kemp’s memo provides an example of how the market is changing to respond to Microsoft’s shift in direction. More change will come as other vendors reveal their plans. It’s just a matter of time.

Follow Tony @12Knocksinna


Announcing Exchange 2013 Inside Out: Well, we’re starting anyway…


We are delighted to announce a new joint project: Exchange 2013 Inside Out, a two-volume set that we will write for Microsoft Press, with an anticipated publication date in Fall 2013. Tony is writing part 1, which covers the mailbox server role, the Store, DAGs, compliance, modern public folders, and site mailboxes. Paul is writing part 2, which covers client access, connectivity, transport, unified messaging, and Office 365 integration. This division makes it look as if Paul got more to do, but Tony assures everyone that he can easily fill a book on just one topic.

Why two books where Exchange 2010 Inside Out merited just one? Well, just look at that book and reflect that it contains some 400,000 words in a 2-pound tome. Apart from the weight, it takes a long time to write such a book, and there are tons of changes and new material in Exchange 2013 that we want to cover. The option of writing a single 500,000-word volume was just not attractive. Thankfully, Microsoft Press agreed with us.

We have deliberately decided to take our time writing. There’s no point in rushing out a book based on a product immediately after it is released because no real-world experience exists. Microsoft runs an excellent Technology Adoption Program (TAP) that helps the development group understand how new versions of Exchange behave in production environments through early deployments, but we prefer to see how the software evolves and behaves as it is deployed more widely. This cannot really happen until after Microsoft releases Exchange 2010 SP3 and whatever update is necessary for Exchange 2007 SP3 to allow coexistence with Exchange 2013. Writing based on a firm foundation of real-world deployment experience has always seemed to make a lot of sense to us and we see no reason to change now.

Although the two volumes of Exchange 2013 Inside Out will stand alone, we will absolutely make sure that each volume complements the other. We will be technical editors for each other’s volumes, giving us equal opportunity to insert bad jokes and Exchange war stories across the breadth of both volumes.

Mostly because we have no firm dates in mind, we’re not releasing any details of our schedule. However, we hope that we will be able to offer an early-access program to readers through the Microsoft Press prePress program, so stay tuned!

– Tony Redmond and Paul Robichaux

P.S. More information about the Microsoft Press prePress program is available on this blog.

Posted in Writing | 2 Comments

Analyzing Exchange 2010 SP2 RU5


Microsoft has re-released Exchange 2010 SP2 RU5 (v2). This update should fix the DAG problem that was reported soon after the original release. However, protect yourself by making sure to test RU5 thoroughly before it goes anywhere near a production server!

RU5 v2 does not include the fix for KB2748870, and it seems that the English-language version of that KB has been removed from Microsoft’s website. However, if you’re curious, you can read several other language versions, including http://support.microsoft.com/kb/2748870/tr

To everyone’s delight, Microsoft released Exchange 2010 SP2 RU5 on November 13. RU4 was released on August 14, 2012, so three months rather than the more usual six weeks has elapsed between updates. The additional delay is easily accounted for by the need for everyone to rush down to Florida to attend the Microsoft Exchange Conference in September, followed by the release of Exchange 2013 to manufacturing last month. It’s been a busy time for all concerned in the world of Exchange.

We also have new roll-up updates for Exchange 2007 SP3 (RU8-v3) and Exchange 2010 SP1 (RU7-v3). The updates for Exchange 2007 are important if you are considering deploying Exchange 2013 in the near future, as you will absolutely need to keep your Exchange 2007 servers running the very latest bits to be able to interoperate with Exchange 2013. It’s actually always been like this, but I thought that I’d emphasize the point. You’ll need Exchange 2010 SP3 to interoperate with Exchange 2013, and that’s not scheduled to appear until “early 2013”. No roll-up update is sufficient to apply enough lipstick to Exchange 2010 SP2 to make it attractive to Exchange 2013.

In any case, what does RU5 contain? The first thing to note is that only 20 separate fixes are included, a significant drop-off from the number of patches provided in previous updates. Does this mean that people are finding fewer bugs in Exchange? I rather think that the developers are fixing fewer because their attention is elsewhere: they’re busy fixing Exchange 2013 to prepare for its introduction into “the service” (aka Exchange Online), or finalizing Exchange 2010 SP3 for its scheduled release in the new year. And of course, there’s the inevitable service pack (or whatever they’ll call it) that will no doubt arrive for Exchange 2013, if only to calm the fevered brows of those who simply must wait for the first service pack before they deploy any Microsoft technology.

Below is my assessment of the 20 fixes. I’ve added a High/Medium/Low priority for each patch based on my reading of how many people are likely to run into the problem. You might be one of the unlucky ones whose most common problems feature on this list, in which case you’ll probably disagree with my assessment.

As always, remember to test RU5 thoroughly before you deploy. There’s no point in taking risks, even with mature technology such as Exchange 2010.

Follow Tony @12Knocksinna

Update: Make the fix count 21, as Scott Schnoll of Microsoft tells me that the DAG issue described in his blog is also fixed by Exchange 2010 SP2 RU5.

KB number – KB title – my notes and priority:
2707146 – “IRM-protected messages cannot be returned in search results if the messages are recorded and sent to an external contact in an Exchange Server 2010 environment”. Problems when you journal IRM-protected messages to an external SMTP address – the resulting messages don’t show up in searches. Won’t affect many, as AD RMS is not widely deployed. (Low)
2712595 – “Microsoft Exchange RPC Client Access service crashes when you run the New-MailboxExportRequest cmdlet in an Exchange Server 2010 environment”. Crash caused by an unhandled condition in the RPC Client Access service that might cause Outlook clients to temporarily go offline. (Medium)
2710975 – “Some MAPI property objects in an ANSI .pst file contain unreadable characters if you import the file by using the New-MailboxImportRequest cmdlet”. If you import an older-format (ANSI) PST that contains non-ASCII characters, you might get interesting results. ANSI PSTs have been around for a long time and are probably full of other problems. (Low)
2712001 – “ExTRA.exe does not collect data if you select a scheduled task for a data collection in an Exchange Server 2010 environment”. The Exchange Troubleshooting Assistant has issues capturing data for a scheduled task. (Low)
2716145 – “Store.exe crashes on an Exchange Server 2010 mailbox server if a VSAPI-based antivirus software is used”. The case of the spurious semicolon: an extra semicolon provided by Exchange to an antivirus product leads to confusion all round, and the Store crashes to show its disapproval. (Medium)
2717522 – “Microsoft Exchange System Attendant service crashes on an Exchange Server 2010 server when you update the OAB that contains a DBCS address list”. The System Attendant process can’t handle an address list that uses a double-byte character set (e.g. Japanese) when its name is more than 11 characters long. (Medium)
2720017 – “An RBAC role assignee can unexpectedly change a DAG that is outside the management role group scope in an Exchange Server 2010 environment”. The cmdlets that manipulate DAGs do not respect RBAC scoping, so even if you create a role that’s scoped to a specific DAG, the holder can manage any DAG. (High)
2727802 – “Microsoft Exchange Replication service crashes intermittently when you try to move mailboxes from an Exchange Server 2003 server to an Exchange Server 2010 server”. The replication service here is MRS, not the one involved in DAGs. It has problems cleaning up completed move requests that involve mailboxes moving from Exchange 2003. (Medium)
2733415 – “Event ID 1 is logged on the Exchange Server 2010 Client Access server in a mixed Exchange Server 2010 and Exchange Server 2003 environment”. Autodiscover doesn’t run as well as you’d expect in a mixed Exchange 2010 and Exchange 2003 organization. (Low – but only because the problem has taken a surprisingly long time to surface or be fixed)
2733609 – “Email message and NDR message are not delivered if an email message contains unsupported character sets in an Exchange Server 2010 environment”. Another problem with double-byte character sets. In this case, a message is sent to a user whose mailbox is being journaled, and the message then runs into problems when it is forwarded from the journal mailbox. (Medium)
2743761 – “DAG loses quorum if a router or switch issue occurs in an Exchange Server 2010 environment”. Failure in a router or switch causes DAG members to lose connectivity with each other. This confuses the DAG and quorum is lost, which halts the DAG. Users can’t connect to their mailboxes. All hell breaks loose. (High)
2748767 – “You receive an NDR message that incorrectly contains recipients of successful message delivery in an Exchange Server 2010 environment”. An NDR created when a message goes to many recipients comes back to report that all failed when only some of the intended recipients couldn’t be reached. (Low)
2748766 – “Retention policy information does not show ‘expiration suspended’ in Outlook Web App when the mailbox is set to retention hold in an Exchange Server 2010 environment”. OWA shows the wrong information about retention policies when a suspicious user (who is on retention hold) checks them. I don’t know many people who check retention policies, but I guess that you might if you were doing something that would have you placed on hold. (Low)
2748870 – “Declined meeting request is added back to your calendar after a delegate opens the request by using Outlook 2010”. The case of the reappearing meeting. Only happens when a delegate uses Outlook in online mode (most people use cached mode). (Medium)
2748879 – “You cannot access a mailbox by using an EWS application in an Exchange Server 2010 environment”. Not much real information is provided as to what the root cause might be. All we know is that EWS applications get 503 errors when they attempt to access mailboxes, which is nice. Impossible to say how important this issue is based on the available data. (Low)
2749075 – “A copy of an archived item remains in the Recoverable Items folder of a primary mailbox in an Exchange Server 2010 environment”. The MFA copies items into the Deletions sub-folder of Recoverable Items when it moves them to an archive based on a retention policy. The items are eventually removed at the end of the expiry period, but they shouldn’t be there in the first place. (Medium)
2750293 – “Items remain in the ‘Recoverable Items\Deletions’ folder after the retention age limit is reached in an Exchange Server 2010 environment”. Another issue for the MFA. This time the unhappy assistant fails to notice that items in the Deletions sub-folder need to be removed when they expire (and a user is on litigation hold), so the items remain visible when they should not. (High)
2749593 – “Outlook logging file lists all the accepted and internal relay domains in the Exchange Server 2010 organization when you enable troubleshooting logging”. Most people like logging to capture all available data, but in this case a complaint has been made that Outlook’s logging file gets too big when the EWS call used to retrieve MailTips has to traverse many internal relays. You won’t meet this often and it’s not a big problem. The fix is to capture only the primary SMTP domain of the user. (Low)
2750847 – “An Exchange Server 2010 user unexpectedly uses a public folder server that is located far away or on a slow network”. Exchange 2010 selects a public folder “far far away”. It’s happened before, you know, but now we have a fix, just in time for the appearance of modern public folders. (Medium)
2763886 – “‘The operation failed’ error in the Outlook client when you open a saved message from the Drafts folder and then try to send it in an Exchange Server 2010 environment”. What a lovely catch-all error message! In this case, we’re in Outlook online mode again and someone creates a draft message that contains inline images (the example given is 20 colour bullets). The message is saved as a draft and can’t be opened again, mostly because Outlook saves it as a read-only item. Microsoft doesn’t know why this happens, but they are investigating! RU5 offers only documentation and a workaround (don’t use HTML-format messages). (High)
Posted in Email, Exchange, Exchange 2010 | 18 Comments

Exchange 2013 and the case for protocol-specific namespaces


Perhaps the Exchange developers were unaware of the law of unintended consequences when they decided to change Exchange’s load balancing requirement from layer 7 to layer 4 of the network stack. For the most part, the change is wonderful. For some, especially those who have to manage systems that cater for large numbers of incoming connections, it creates an interesting question about protocol handling that deserves some attention in planning for the deployment of Exchange 2013. Such was the question addressed by Greg Taylor when he talked about load balancing options at The Experts Conference event in Barcelona last month. It’s taken me a while to decipher the scrawled notes that I took when Greg spoke…

Anyone dealing with high-end deployments based on Exchange 2007 and Exchange 2010 will be all too aware of the need to manage incoming connections carefully. Typically, the solution involved a hardware-based load balancer (running on physical or virtual hardware) that terminated the incoming SSL connection and then sent it on to one of an array of Client Access Servers, which processed the connection and directed it to the correct mailbox server. Horror stories about attempting to use software-based solutions such as the late but not-at-all-lamented ISA Server to handle connections – and the utter failure that ensued because of various limitations (not the least being that ISA is a 32-bit application) – drove deployment teams to implement hardware-based systems such as F5 Networks’ BIG-IP, a solution for which I have a lot of respect. Since then we’ve seen the advent of virtualized load balancers suitable for low- to medium-sized deployments. Those made by Kemp Technologies seem to be quite popular among Exchange 2010 administrators.

Exchange 2013 greatly simplifies the area of load balancing. You will still need to deploy hardware-based load balancers in situations where high availability is required, but Exchange 2013 supports solutions such as Windows NLB and round-robin balancing to cater for lower-end deployments. All of this is because the Exchange 2013 CAS does not do the rendering and protocol handling that its predecessors did. Instead, the Exchange 2013 CAS simply proxies connections to the appropriate mailbox server, which does all the real work. The idea here is to break the version linkage that previously existed between CAS and mailbox, insofar as you couldn’t upgrade one without the other; version independence is a big theme for future versions of Exchange and, if all goes well, you’ll be able to upgrade different parts of the infrastructure in the knowledge that the new components won’t break anything running on the old bits.

Simplification is always good in computer technology, as complexity invariably leads to additional cost, confusion, and potentially poorer results. However, any change has consequences, and one of those that flows from the move to L4 is the loss of protocol awareness. When a load balancer terminates an incoming SSL connection at L7, it is able to sniff the packets and figure out what protocol the connection is directed to. Exchange has a rich set of protocols, including Exchange Web Services (EWS), Outlook Web App (OWA), ActiveSync (EAS), the Offline Address Book (OAB), and Exchange Administration (ECP), each of which has an endpoint represented as an IIS virtual directory. But when an L4 load balancer handles a connection, it sees only that it is going to TCP port 443 and the IP address for the external connectivity point (such as mail.contoso.com). Later on, the CAS will sort out the connection and get it to the right place, but that’s too late to have any notion of protocol awareness.
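To make the distinction concrete, here’s a toy sketch of the routing decision available at each layer. This is hypothetical code, not any load balancer’s real API – the pool names are invented, though the URL paths match the IIS virtual directories that Exchange publishes for each protocol:

```python
# Hypothetical illustration of L7 vs L4 routing decisions.
# Pool names are made up; paths correspond to Exchange's IIS virtual directories.

PROTOCOL_POOLS = {
    "/owa": "owa-pool",
    "/ecp": "ecp-pool",
    "/ews": "ews-pool",
    "/microsoft-server-activesync": "eas-pool",
    "/oab": "oab-pool",
}

def route_l7(request_path):
    """L7: after SSL termination, the request path identifies the protocol,
    so the balancer can pick a protocol-specific back-end pool."""
    for prefix, pool in PROTOCOL_POOLS.items():
        if request_path.lower().startswith(prefix):
            return pool
    return "default-pool"

def route_l4(dest_ip, dest_port):
    """L4: only the destination IP and port are visible -- every HTTPS
    connection to mail.contoso.com looks identical, whatever the protocol."""
    return "cas-pool" if dest_port == 443 else "reject"

print(route_l7("/owa/auth/logon.aspx"))           # owa-pool
print(route_l7("/Microsoft-Server-ActiveSync"))   # eas-pool
print(route_l4("192.0.2.10", 443))                # cas-pool: no protocol visible
```

The point of the sketch is simply that `route_l4` has nothing to branch on beyond the port, which is why protocol awareness disappears at that layer.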

The problem is that a target CAS might be sick. Worse again, it might be sick for only one protocol. Exchange 2013 managed availability attempts to automatically resolve issues like this by taking actions such as recycling an application pool or even rebooting a server. But an L4 load balancer sees a CAS as a whole rather than having the ability to deal with different protocols, some of which are healthy and some of which might not be so good. With L7, the load balancer would be aware that OWA is up but EAS is down on a specific target CAS and be able to redirect traffic as individual protocols change their status.
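A toy model of the same point (purely illustrative – the server names and health probes are invented, not any product’s API): an L7 balancer can keep per-protocol health per server, while an L4 balancer only gets one in-or-out verdict per server.

```python
# Illustrative only: per-protocol health as an L7 balancer could track it,
# versus the single whole-server verdict available at L4.

health = {
    "cas1": {"owa": True, "eas": False, "ews": True},   # EAS sick on cas1
    "cas2": {"owa": True, "eas": True,  "ews": True},
}

def l7_targets(protocol):
    """L7: route each protocol only to the servers healthy for it."""
    return [srv for srv, probes in health.items() if probes[protocol]]

def l4_targets():
    """L4: a server is in or out as a whole, so one sick protocol
    either takes the whole server out or goes unnoticed."""
    return [srv for srv, probes in health.items() if all(probes.values())]

print(l7_targets("owa"))  # ['cas1', 'cas2'] - OWA still served by cas1
print(l4_targets())       # ['cas2'] - cas1 dropped entirely
```

Either L4 outcome is worse than the L7 one: the balancer loses otherwise-healthy OWA capacity on cas1, or (if the health probe only checks port 443) keeps sending EAS traffic to a server that can’t handle it.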

You might not be too worried about this at all, as you might think that an essentially stateless CAS (for this is what the Exchange 2013 version is) won’t fail too often and that, anyway, if one protocol fails it’s likely to reflect a server-wide problem. There’s a certain logic in this position, but at the higher end it might become important to be able to exercise selective control over individual connections going to specific protocols.

One way of achieving selective control is to publish specific connectivity points for each protocol as part of your external namespace. In other words, instead of having the catch-all mail.contoso.com, you’d have a set of endpoints such as eas.contoso.com, ecp.contoso.com, owa.contoso.com, and so on. The advantage here is that the L4 load balancer now sees protocol-specific inbound connections that it can handle with separate virtual IPs (VIPs). The load balancer can also monitor the health of the different services that it attaches to the VIPs and make sure that each protocol is handled as effectively as possible. The disadvantage is that you have more complexity in the namespace, particularly in terms of communication to users, and you have to make sure that the different endpoints all feature as alternate names on the SSL certificates that are used to secure connections. None of this is difficult, but it’s different than before. What you gain from the work done to transition from L7 to L4, you lose (a little) on extra work and perhaps the cost of some extra certificates.
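As a back-of-the-envelope illustration (a hypothetical helper, not part of any Exchange or load balancer tooling), the namespace plan above boils down to generating one FQDN per protocol and making sure every one of them lands on the certificate as a subject alternative name:

```python
# Sketch: building a protocol-specific namespace plan.
# contoso.com and the protocol prefixes are placeholders from the example.

def endpoint_names(domain, protocols):
    """One FQDN per protocol; each gets its own VIP on the load balancer."""
    return [f"{p}.{domain}" for p in protocols]

def san_list(domain, protocols):
    """Every per-protocol endpoint must appear as a subject alternative
    name on the SSL certificate, alongside the catch-all name."""
    return [f"mail.{domain}"] + endpoint_names(domain, protocols)

for name in san_list("contoso.com", ["owa", "ecp", "ews", "eas", "oab"]):
    print(name)
```

Miss one name from the SAN list and clients connecting to that endpoint will throw certificate warnings, which is exactly the communication-to-users problem mentioned above.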

We haven’t yet seen much advice published by the vendors of load balancers to provide platform-specific guidance on this issue. I’m sure that it will provoke an interesting debate when the advice arrives!

Follow Tony @12Knocksinna

Posted in Email, Exchange 2013 | 3 Comments

Three short days in Iceland


I might just be the world’s worst tourist, especially when arrangements are made for me. But a pretty bleak choice exists if you live in Ireland and want to visit Iceland, namely, to sign up for an IcelandAir tour package or take on the hassle of making a pile of separate bookings and then having the joy of a transit through Heathrow or another major European hub airport. The problem is that no scheduled flights exist between Ireland and Iceland…

In any case, I’ve wanted to visit Iceland for quite some time. No great or wonderful reason underpinned this feeling; it just seemed like a good thing to do. And so my wife and I signed up for a package and found ourselves in Dublin Airport anticipating an enjoyable trip. Alas, after spending 48 minutes in a security queue that featured people taking all sorts of short cuts in an attempt to make their flights, I wasn’t so sure that it was a good idea. The Dublin Airport Authority exhibited all the management credentials that I have seen in small airports in India in terms of its ability to conduct security checks. Seeing that this happened in the “old” Terminal 1 rather than the brand-new Terminal 2, some suspicion passed through my mind that the cost-cutting approach of Terminal 1’s major tenant (Ryanair) had influenced the lack of staff, control, and throughput. However, this couldn’t be true, because Ryanair is ultra-efficient when it comes to doing things on time, even if it insists on informing you of its success in landing on time with a very annoying trumpet call.

The IcelandAir flight was just fine and landed on time in Keflavík Airport (KEF), some 40km away from Reykjavík, the location of our hotel. Bus transport was arranged as part of the package and we were ushered out by some guides, all of whom were determinedly cheerful in the pursuit of the schedule laid out on their clipboards.

We ended up at the Reykjavík Natura Hotel, owned by IcelandAir and part of their HQ complex. The hotel was some 2km out of the city center, positioned alongside Reykjavík’s domestic airport. All I can say about the hotel is that it was adequate and that the accommodation did not live up to the pictures on its web site. Our room was small but clean, containing nothing that would make you want to stay in it for any length of time. The restaurant and other facilities were just OK. I guess it was fine for three nights, but if I return to Iceland, I definitely won’t be staying at the Natura.

However, you don’t go to Iceland to do hotel reviews. We quickly left to walk downtown and explore what Reykjavik had to offer. The answer is that the city is compact so the center is easy to get around. A good variety of restaurants and shops are available, so it’s easy to spend a few hours mooching around and then find something good to eat.

Reykjavík cathedral

The next day we awoke to 100kph winds with gusts to 150kph. All tours had been cancelled because the combination of high winds and narrow Icelandic roads made tour buses dangerous. The prospect of going on a guided tour had seemed both limiting and expensive to me, so I had booked a car with Avis (using AutoEurope.com to get an excellent rate). Unfortunately the nearest Avis office was at the domestic airport, so I had a 25 minute trek against the wind to get there. Once at the airport, the very nice Avis representative upgraded our car to a Land Rover Discovery, probably because I was one of the few customers they had seen that day.

Equipped with one of the world’s best off-road vehicles, we set off to Thingvellir national park, about 50km away. Apart from the wind and the ten-degree wind chill factor that made the zero degree C temperature a tad cutting, the weather wasn’t too bad until we got to the park. At this point, the wind really started to blow snow into our faces – and whipped my glasses off and under a table about 7m away, which took some searching to find. Given the weather, there wasn’t really too much to see, so we set off again to head towards Geysir.

I admit that attempting to see a geyser in the kind of conditions that existed seems a little mad, but we were out and about and wanted to see the country. Unfortunately, the weather deteriorated close to a white-out due to drifting snow and we turned back to the city, stopping to admire the effect of the wind on Lake Thingvellir.

Wind blowing on Lake Thingvellir

The wind still blew on the following day but it wasn’t anywhere close to as bad. Off the dedicated explorers went again in an attempt to see more of Iceland, heading across the amazing lava fields to Krýsuvík. Our route took us across some stunning countryside over a gravel road, which we hadn’t quite expected to encounter. A number of 4x4s equipped with some amazingly large tyres passed us along the road – they clearly knew where they were going, while we were ambling along, gaping at the view.

Along the road to Krýsuvík

The hot springs, mud pots, and other geothermal activity at Krýsuvík come almost as a surprise. A small sign on the side of the road indicates that something might be seen if you follow a track towards the wisps of steam that can be seen rising. This leads to a couple of wooden cabins that probably sell souvenirs and food at busier times of the year, but then you’re on your own to walk around the places where the earth belches hot air and foul gases.

Geothermal activity at Krýsuvík

I imagine that such a setting would be impossible elsewhere in the world, where insurance liabilities preoccupy the guardians of natural sites. Sensibly, the Icelanders seem to have taken the view that if you play with the kind of hot water and steam emerging from the earth, you accept the consequences, whether in the form of a nice hot shower or a scalding burn.

There’s a limit to the amount of time that you can spend looking at hot water and steam, even if it comes with a sulfurous stench, so off we went again to see what could be found in the rest of the Reykjanes peninsula.

Truthfully, there’s not too much to be seen in Reykjanes unless you visit the delights of the Blue Lagoon, a tourist trap that every tour organizer in Iceland seems to want people to visit (possibly because of the high fees they earn by directing traffic there). It’s a perfectly nice place to drive through, unless you’re easily bored by lava fields and slow-paced fishing villages that make even slow-paced Irish farming villages look like hotspots of activity. More thermal activity can be seen, including the way that the energy from the hot springs is captured at Svartsengi so that it can be used to heat houses (I quite like the photo below and it’s now serving as my Windows 8 desktop background).

Geothermal steam rising in front of lighthouse near Svartsengi

Apparently, most of the heating in Iceland comes from geothermal sources and they use very little oil or gas for this purpose. The capture is so efficient that the temperature of the water captured declines by only 2 degrees Celsius as it travels by pipeline to Reykjavík, or so the guidebook said.

Speed limits in Iceland are quite low (90 kph maximum) and although there’s not a lot of other traffic, covering ground takes more time than you’d expect. We paused at the spot where geologists have determined that the European and American plates meet (I doubt that Disney will seek to include this in any theme park soon) and made our way back to the city.

Prow of the “Sun Ship” on Reykjavík promenade

Our final day allowed some time to have a walk on Reykjavik’s seaside promenade and around the city’s opera house. The storm over the previous few days had thrown up plenty of seaweed and stones onto the promenade but that didn’t stop lots of people enjoying the bright winter sunshine.  It was a nice way to finish up our trip to Iceland before driving back to the airport for an afternoon flight home.

Three days isn’t enough to get to know a country, even one that’s as small as Iceland. I would like to return, probably in summer, and take the time to travel to other parts of the island, including the more isolated parts in the north and east. I’m sure that better weather and more time will make an Icelandic trip a much more enjoyable experience all round!

Follow Tony @12Knocksinna

Reykjavík Opera House

Posted in Travel | 3 Comments