Forcing an Active Directory update with Exchange 2013 might not be such a good idea


My post of May 4, which explored whether the upcoming Exchange 2013 release should force companies to upgrade their Active Directory infrastructures to a modern version, generated a considerable number of messages. As you might recall, the question was originally debated at a spirited Q&A session at TEC in San Diego where the folks who were present reached a consensus that it would be acceptable if Microsoft declared that Exchange 2013 required Active Directory to run at Windows 2008 functional level. This approach seemed to offer many benefits, such as freeing up Microsoft engineering resources to work on new features instead of having to test Exchange against a complex matrix of older Active Directory levels. It also appeared sensible to refresh Active Directory as part of the planning process for the deployment of a new version of Exchange.

Paul Robichaux weighed in on the debate by coming down on the side of those who think that Exchange cannot force an update. Paul said that he thought “Microsoft should be working on the lowest-friction migration path possible”. It’s hard to disagree with this sentiment. Paul also didn’t think that my point about testing held water because so much of testing is automated today. I still think the point is valid because people do real work to plan, execute, monitor, and fix tests. If that work isn’t done, tests go wrong or fail to detect problems – and we have seen the effect of this in some of the roll-up updates Microsoft has released for Exchange 2010.

Lots of email was generated about different aspects of the debate. Quite a few described a situation, apparently common in many companies, where the folks who run Active Directory and those who are responsible for applications are disconnected when it comes to infrastructure updates. This seems like a throwback to the era when IT departments operated in silos, and I was surprised to see how often correspondents reported that the Active Directory managers did their own thing and didn’t really understand what applications needed. A lot of people noted that their IT infrastructures supported so many applications that it was almost impossible to test what effect an Active Directory upgrade might have. That’s quite a scary thought because it’s an inevitable fact of technology that sooner or later Microsoft will force customers to upgrade Active Directory. What happens then?

For example, some reported infrastructures that contain components (for instance, Samba or Netfiler) that depend on old authentication mechanisms such as NT LAN Manager V1, a very old protocol that has since been revised and largely replaced by more secure mechanisms such as Kerberos. Others told of firewalls between client computers and domain controllers that might cause authentication to break because the default RPC dynamic port range changes between Windows 2003 (1025-5000) and Windows Server 2008 (49152-65535).
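
If you’re worried about the firewall aspect, the change is at least easy to verify. A quick check on a Windows Server 2008 or later domain controller shows the dynamic port range actually in use (standard netsh syntax, shown here as you would run it from PowerShell):

    # Display the dynamic TCP port range that RPC endpoints (and therefore domain
    # controller traffic) will use on Windows Server 2008 or later.
    netsh int ipv4 show dynamicport tcp

    # The same check for IPv6, if your clients use it
    netsh int ipv6 show dynamicport tcp

Any firewall sitting between clients and domain controllers has to allow that range (or the range has to be constrained again with netsh), otherwise authentication and other RPC traffic breaks after the upgrade.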

One point repeated over and over was that Exchange works fine with Windows 2003 – if it isn’t broken, why should Exchange 2013 attempt to fix it? This point was reinforced by the salient fact that none of the improvements available when Active Directory operates at Windows 2008 functional level (fine-grained password policies, DFS-R SYSVOL replication, retention of last logon data, and so on) affect Exchange in any obvious way, and no version of Exchange supports the most obvious recent enhancement (read-only domain controllers). Essentially, Exchange treats Active Directory as a highly functional LDAP server that also happens to store a great deal of its configuration data. There’s no great difference between Windows 2003 and Windows 2008 when it comes to LDAP queries and updates. I think this is a fair point that prompts the question “what advantage could Exchange get from modern Active Directory versions?” Cue deafening silence…
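
For what it’s worth, checking where your own forest and domains currently sit takes only a moment with the Active Directory PowerShell module that ships with Windows Server 2008 R2. A minimal sketch, assuming you have at least one 2008 R2 domain controller (or RSAT) to run it from:

    # Report the forest and domain functional levels, e.g. Windows2003Forest,
    # Windows2008Forest, or Windows2008R2Forest.
    Import-Module ActiveDirectory

    (Get-ADForest).ForestMode
    (Get-ADDomain).DomainMode

If both values still say Windows2003, you’re exactly the kind of deployment this debate is about.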

Of course, one of the big reasons for upgrading is to maintain support. Windows 2003 extended support finishes in July 2015, while Windows 2008 reaches the end of its extended support in 2018. However, the support imperative runs into difficulties because Microsoft has done a pretty good job of ensuring that Active Directory can run in old modes on new versions of the operating system: you could keep running Active Directory on Windows 2008 R2 servers at Windows 2003 functional level well into the next decade and so secure peace of mind that this essential piece of your infrastructure is maintained until then.

Although the testing matrix for the Exchange development group would be simpler if they could eliminate support for older versions of Active Directory, the downside from their perspective is that forcing an upgrade might slow adoption of the new release. Customers tend to take their time before they get around to deploying a new version of Exchange. It can take anything from six months to two years before a company is happy that its infrastructure can support a new version, that users have the right clients, that the help desk is ready to support the deployment, and that the migration won’t impact the business. Throwing Active Directory upgrades into the mix would probably create a further delay for a myriad of reasons, from internal politics to cost-benefit analyses to the actual work necessary to perform the upgrade. Slowing an Exchange upgrade might also slow the adoption of other products such as new versions of Outlook and the other Office applications.

In a nutshell, I think the discussion can be summarized as follows:

  • There are technical advantages, such as longer support, in moving Active Directory to a recent version such as Windows 2008 functional level running on Windows 2008 R2 servers. However, there don’t seem to be many business advantages to such a move.
  • There are definite performance advantages in running domain controllers and global catalogs on modern hardware. In other words, it’s time to dump those old 32-bit computers (if you still have any that are still limping along).
  • Even applications like Exchange that make such extensive use of Active Directory cannot impose a requirement to upgrade Active Directory unless the engineering group creates a compelling case for such an upgrade.

So after all the argument and debate, the simple fact is that although there are a lot of “nice to have” benefits to running the latest and greatest Active Directory, Exchange 2010 runs fine with Active Directory operating at Windows 2003 functional level. Given the lack of evidence that the newest version of Active Directory delivers anything that Exchange 2013 could exploit, I anticipate that the same minimum functional level will continue to be supported for Exchange 2013. Time will tell.

Follow Tony @12Knocksinna


Another STC award for Microsoft Exchange Server 2010 Inside Out


The nice people at Microsoft Press have been in contact with me again to let me know that my book Microsoft Exchange Server 2010 Inside Out has now received an “excellence” award from the Society for Technical Communication (STC), the professional body for people who work with technical publications covering subjects from computers to science.

The notice I received from the Publisher of Microsoft Press said:

You’ll remember, I think, that your book Microsoft Exchange Server 2010 Inside Out won a regional award of Distinguished from the Society for Technical Communication, Puget Sound Chapter.

But, wait – it gets better :-) Your book has now won an award of Excellence in Technical Communication from the international STC – the highest award they bestow on any book. It will be displayed at the 2012 STC Summit (May 20-23), in Rosemont, Illinois, and will be viewed by hundreds of conference attendees.

We have not yet received the judges’ evaluations, but we’ll send those along as soon as possible. STC describes the award this way: “An entry that wins an award of Excellence consistently meets high standards in all areas. The entry might contain a single major flaw or a few minor flaws. The entry demonstrates an exceptional understanding of technical communication principles.”

Apparently this makes Exchange 2010 Inside Out one of the top 50 books published last year in terms of its technical content and design.

Clearly I’m very happy for the book to be recognized in this manner as it makes all the hard work done to write, edit, copy-edit, technical-edit, layout, design, index, and publish worthwhile. The folks at Microsoft Press really did an exceptional job with this book.

Now on to Exchange 2013?  Perhaps…

– Tony

Follow Tony @12Knocksinna


Ten years on: The HP-Compaq merger


On May 3, 2002 HP announced that it had completed the acquisition of Compaq. The new HPQ stock began trading on May 6 as the new company launched into action around the world.

For those of us involved in the merger, it certainly was an exciting time. Based on an assumption that stockholders would approve the merger agreement, the senior leadership from both companies assembled in San Francisco for the week immediately before the merger closed to consider topics such as “Building the 100 day plan”, “Organization Design and Structure” (ODS), and “Adopt and Go”. I ended up in the Hyatt Hotel beside San Francisco Airport and joined my Compaq colleagues in meeting our HP counterparts for the first time and forming the teams that would run the various parts of the new company. The first introductions were slightly strained, but the shared desire to make swift progress soon kicked in and helped us move forward.

Having gone through two major mergers (Compaq-Digital and HP-Compaq), I think that people selection is the hardest part of the process. ODS was a very structured process that forced managers to first define their goals for the new organization and then lay out what positions needed to be filled to achieve those goals. And then the real horse-trading began as the very natural tendency for ex-Compaq managers to advance cases for ex-Compaq people and vice versa on the HP side swung into full motion. There were many sidebar discussions to describe the strengths of various candidates. At times it was like speed dating. And the worst thing was that everyone knew that eventually they’d have to fill slots with people that they didn’t really know – and that some of the people that had done a great job for you in previous organizations would lose out to a candidate from the other company and would probably be made redundant.

There were safeguards in place to ensure that managers didn’t end up with a team selected exclusively from one company. My recollection is that whilst everyone acknowledged that a 50-50 selection to get to a balanced HP/CPQ team would be almost impossible to achieve, we all knew that 60-40 was attainable and that became the benchmark. In some cases, making the final decision about a slot to get to the desired balance was incredibly difficult.

One of the highlights of the pre-merger week was a session with Carly Fiorina. I know that there are many who think that Carly was a bad CEO for HP and lament her lack of regard for the “HP Way”, the almost mythical set of values operated by Bill Hewlett and Dave Packard, HP’s founders. In 2007-2008, I occupied an office in Palo Alto beside Bill and Dave’s old offices, which were preserved in the same state as when they had left HP. Almost every day visitors would arrive to look over the offices and examine the awards and trophies that HP’s founders had accumulated. One interesting fact is that reasonably soon after the merger – maybe six months or so – there were more new people working for HP than those who had been there before the merger. The erosion in “true-blue” HP people allied to the upheaval around the merger and the subsequent market pressure on HP to perform to justify the merger are factors that I think undermined the HP Way and weakened the connection that many employees felt to the company.

I don’t share a negative view of Carly because all of my dealings with her were positive, including the times when she would attend CTO meetings to convince the technology leadership of the company to be more inventive. I can also say that her performance in front of many doubting managers at the Hyatt Regency at the Embarcadero in downtown San Francisco was compelling. She was in jeans, clearly tired after flying in from the east coast where she’d been arguing the case for the merger with bankers and investors, yet she did an incredible job of firing everyone up for the period that lay ahead, including a dynamic and unscripted Q&A session where she gave the impression of being a CEO who had both a vision and the drive to accomplish that vision.

Ten years on, I think that lots more good resulted from the merger than bad. There’s no doubt in my mind that the PC-centric culture of Compaq would have eventually led it to bankruptcy. Perhaps it was my Digital background, but I thought that Compaq’s energy was badly focused and misdirected. On the other side, HP had some great things going on (mostly in the realm of a terrifically profitable printing business) but was woeful at PCs and industry-standard servers. I also think that HP’s services business was in a bit of a mess, especially on the consulting side. As it happened, the two sides came together reasonably well and soon jelled to become more productive and profitable. And although there have been some hiccups since 2002 (continuing CEO woes, the strange case of WebOS, and probably a failure to seize more of the high ground in the Linux space), HP has proven to be a company that leads in many places.

Challenges lie ahead for HP. Enterprise Services is still not as strong as it should be. The printing business seems to be in a slow decline. The PC market is under huge cost pressures as well as the morphing nature of the devices that people really want to use (Windows 8 tablets will be very important to HP). And HP’s cloud initiative is possibly a year or so behind where it could have been had some decisions to invest been taken earlier. But overall, I think the work done ten years ago to bring Compaq and HP together has proven to be very successful, even if the stock price isn’t where it should be (fair disclosure, I own no HPQ stock) and Meg Whitman’s new team has some heavy lifting still to do.

Upwards and onwards to the next ten years.

– Tony (CTO of Compaq Global Services before the merger, and thankfully selected to be CTO of HP Consulting afterwards)

PS. Fortune Magazine has a great article about the recent trauma @HP. It’s a good read.

Follow Tony @12Knocksinna


Should Exchange 2013 force an Active Directory upgrade?


One of the interesting debates at the Exchange Q&A session at TEC 2012 was the question whether the upcoming release of Exchange 2013 should force companies who want to deploy the new software to upgrade their Active Directory infrastructure to a level higher than Windows 2003. Specifically, the proposal was that the deployment of Exchange 2013 within an organization should first require the Active Directory forest to run at Windows 2008 functional level on Windows 2008 R2 servers.

I should be clear that Microsoft has not put this requirement on the table and that I have seen no formal press release or other communication from Microsoft that even hints that they might move along this path. However, private conversations with a number of Microsoft engineers reveal a certain frustration that so many customers operate Active Directory based on outdated software running on old hardware. After all, Windows 2003 is now pretty elderly and is rapidly approaching the point when it becomes unsupported. Lots of people run Windows 2003 domain controllers and global catalog servers on old 32-bit servers whose best days have long disappeared in the rear view mirror.

Exchange was the first major Microsoft application to take advantage of Active Directory with the release of Exchange 2000 in late 2000. This wasn’t altogether surprising because the first generation of Exchange was based on an X.500-based (loosely based, in the eyes of some) Directory Service that looked, felt, behaved, and generally responded very similarly to Active Directory. The advent of a fully-fledged, enterprise-quality Active Directory was good for Exchange because it could drop its own Directory Service and take advantage of a directory that was much more tightly integrated into Windows. The situation has persisted to this day.

The transition from Windows NT to Windows 2000 was slowed a tad by the need to plan for Active Directory. We learned a lot in those early days and soon became accustomed to dealing with forests and domains. Best practice slowly evolved after a few hiccups (such as the assumption that the domain is a security boundary) and the fears that administrators had about operations such as schema upgrades faded with time and familiarity.

Aside from the introduction of the Read-Only Domain Controller (RODC), which isn’t supported by Exchange, not much seems to have happened to Active Directory since then in terms of new functionality or dramatic new capabilities. Or so it seems on the surface. And perhaps it’s because Active Directory is so familiar (like a comfortable old shoe) that we’ve forgotten that it’s important to keep it fresh and updated to meet the needs of new applications and new operational imperatives, such as the need for increased automation.

I can’t quite work out why people would want to keep on running Windows 2003 domain controllers and global catalogs. Hopefully these are 64-bit systems rather than the antiquated 32-bit servers that Windows 2003 first ran on, but even so, the fact is that Windows 2003 is old and needs to be removed from corporate computing environments. Moving to a more modern platform (my recommendation is Windows 2008 R2) gives Active Directory a new lease of life on an operating system that is maintained and more secure than its predecessor. It also allows Active Directory’s functional level to be raised to take advantage of new features such as the recycle bin (something that should probably have been part of Active Directory from day one anyway).
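
For those who do make the move, the mechanics are simple enough. Here’s a minimal sketch, assuming a forest named contoso.com where every domain controller already runs Windows Server 2008 R2; note that the recycle bin requires the Windows Server 2008 R2 forest functional level and that enabling it cannot be undone, so treat this as an illustration rather than a runbook:

    Import-Module ActiveDirectory

    # Raise the forest functional level once all DCs run Windows Server 2008 R2
    Set-ADForestMode -Identity 'contoso.com' -ForestMode Windows2008R2Forest

    # Enable the Active Directory Recycle Bin for the forest (irreversible)
    Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' `
        -Scope ForestOrConfigurationSet -Target 'contoso.com'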

Overall, I think that it would be a good thing if Microsoft declared that the deployment of Exchange 2013 required a modern Active Directory infrastructure. Let’s face it, you can expect that Exchange 2013 will require a schema upgrade to accommodate new features; every version since Exchange 2000 has extended the schema, so there’s no reason to suspect that the new release will break the habit of a lifetime. That makes it a good opportunity to take a hard look at Active Directory and figure out how to improve and enhance your deployment at the same time.
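
Schema upgrades are also easy to track. The Exchange schema version that a forest currently carries is recorded in the rangeUpper value of the ms-Exch-Schema-Version-Pt attribute, so a quick before-and-after check around setup /PrepareSchema shows exactly what a new release changes (a sketch using the Active Directory module):

    Import-Module ActiveDirectory

    # Read the Exchange schema version stamped into the schema partition
    $schemaNC = (Get-ADRootDSE).schemaNamingContext
    Get-ADObject "CN=ms-Exch-Schema-Version-Pt,$schemaNC" -Properties rangeUpper |
        Select-Object -ExpandProperty rangeUpper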

Putting Windows 2003 functional level into Active Directory’s wastebasket will help Exchange too because it will reduce the complexity and number of test scenarios that the setup and deployment team has to work through. And if they’re relieved of the need to test deployment on outdated Active Directory infrastructures, the engineers should be able to use their time more gainfully to test new Exchange 2013 features.

I accept that some companies might have a problem if Microsoft requires Windows 2008 functional level as a prerequisite for Exchange 2013. So be it. Given the track record of every other major release of Exchange, I sincerely doubt that there will be a rush to deploy Exchange 2013 soon after general availability, so there’s plenty of time for those companies who have an issue (maybe there’s an application that depends on Windows 2003 or some form of now outdated authentication scheme that’s no longer supported) to sort things out and bring their infrastructure up to scratch.

The debate at TEC on this topic was spirited. At the end of the day, a large majority of the companies who were present saw no issue with Exchange 2013 forcing those who are stuck with old Active Directories to do the right thing and upgrade. You know it makes sense.

Follow Tony’s ramblings @12Knocksinna


Exchange Keynotes at “The Experts Conference”


Southwest 737 on final approach to San Diego airport pictured from the Marriott Hotel - Landing @SAN is usually a bonus when attending a conference in San Diego

One of the signs of a better conference is the quality of the keynotes. I’m not qualified to offer an opinion about the Monday morning keynote for the Exchange section of TEC 2012 as I was the speaker. You can download a copy of my keynote for your reading pleasure. Now that my stuff is out of the way, let’s focus on what Kevin Allison (Microsoft General Manager for Exchange) had to say during his Tuesday keynote, loosely titled “the state of Exchange”.

If you want, you can refer back to his comments at TEC 2011 (April 2011) and Exchange Connections (October 2011). Kevin is a regular speaker on the Exchange circuit whose next stop must be at MEC 2012 in Orlando next September. No doubt he will have a lot to say about Exchange 2013 then.

In some ways it’s a difficult time for a Microsoft executive to speak because we are currently right in the middle of the product doldrums at a point when the previous product generation (Exchange 2010) is showing some signs of age and the new generation (Exchange 2013) hasn’t yet been formally announced. Faced with such a situation, many executives elect to mutter some banalities and vague aspirations dressed up as product directions that convey little or no insight to the audience. However, to Kevin’s credit, he delivered a session that was full of information and insight as to the future directions of Exchange and Office 365, even if he declined to answer the inevitable question “what’s in Exchange 15?” (aka Exchange 2013). Microsoft is obviously keeping its powder dry for MEC, which Kevin said would be “full of technical information about Exchange 15”.

Kevin to Tony: "Are public folders really like cockroaches?"

As an introduction, Kevin’s responsibilities broadly break down into two parts. For on-premises Exchange, he owns:

  • Sustained engineering – think service packs, roll-up updates, and testing
  • Setup and deployment
  • Support tools

For Office 365, his focus areas are:

  • Removing technology blockers that prevent a smooth sales process (in other words, making sure that Office 365 delivers what customers need in order to want to buy)

  • Deployment of Office 365
  • Sustained engineering
  • Compliance

All in all, it’s quite an array of tasks over which to have oversight.

Kevin started by explaining three important factors that influence the way that Microsoft has thought about the future of Exchange. Mobility and the ever-swelling number of devices that can connect to Exchange through ActiveSync is clearly huge, with over 1 billion smartphone users expected in the near future, all of whom are potential consumers of Exchange. Then there’s the small matter of iPads, Android tablets, Windows 8 tablets, RIM BlackBerry devices, and so on.

Developing the topic, Kevin said that Anywhere Access is just an expected given today. He cited the growth in possible clients for Exchange in just the last five years, from the relatively small collection that typically connected to Exchange 2007 just after it launched to the array that can connect today. In one respect, ActiveSync has become a huge success for Microsoft, but the widely differing implementations across devices and even across different versions of device families are causing some management challenges, especially as companies succumb to user desire for “Bring Your Own Device” (BYOD) and the ability to plug just about any wireless-enabled gizmo into their network. Consumerization is truly something that companies absolutely have to sort out to the mutual benefit of both users and the business.
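
Exchange 2010 SP1 does at least give administrators some levers to pull here, in the form of an organization-wide default access level plus device access rules that allow, block, or quarantine device families. A rough illustration (the query string and recipient address are placeholders; use the values your own devices actually report):

    # Quarantine unknown devices by default and notify an administrator...
    Set-ActiveSyncOrganizationSettings -DefaultAccessLevel Quarantine `
        -AdminMailRecipients admin@contoso.com

    # ...then explicitly allow a device family that the business has blessed
    New-ActiveSyncDeviceAccessRule -QueryString "iPhone" `
        -Characteristic DeviceType -AccessLevel Allow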

Kevin noted that we’re at a point in time where three generations of workers need to be catered for and that this has never happened before. I guess because people retired earlier or otherwise didn’t have such long careers, only two generations in the workplace was the norm up to now. The real point here is that the newer generation of workers has been raised on IM and SMS as their preferred way of communication and therefore has a very different perspective on how and when to use email. To experience the difference, try explaining how faxes and telexes worked to your kids. The blank looks will tell the story. The issue for a product like Exchange is to figure out how to integrate new methods of communication in as elegant a fashion as possible to accommodate these changing communication habits.

The final big factor is the cloud. Kevin said that he expected 130 million enterprise customers to be “in the cloud” by 2014. He wasn’t specific about whether these folk would all be consumers of Microsoft cloud services or whether the figure covers all the services offered by different cloud providers. Interestingly, when Kevin refers to Office 365, he calls it “the Service”. Maybe this is a form of “the Matrix”…

Moving back to Exchange, Kevin spoke about some of the issues that companies consider when they think about moving to the cloud. Handing physical control of data over to Microsoft evokes many emotions. It also means that the data cannot be used for reporting (for example, analyzing the message tracking logs), a weakness in Microsoft’s service that Kevin acknowledged. However, he said that Microsoft’s view is that the data always remains the customer’s and is never owned by Microsoft. Office 365 data is managed regionally today; in other words, if you’re a U.S.-based customer, your mailboxes will be in a U.S. datacenter. This might change in the future to allow customers greater flexibility in deciding where their data should be stored.

Exchange Online follows a cadence of three-month, six-month, and annual updates to keep the service fully patched and as up-to-date with features as possible. When new features are introduced, everyone gets them at the same time. Some customers have difficulty with this aspect of the service. For example, if a new version of OWA is introduced it can have downstream consequences for on-site user support. Microsoft is figuring out how to allow customers to have greater choice and control over when (but not if) new features are deployed to their users. In passing, Kevin noted that the last major refresh of thousands of Office 365 Exchange servers took five days in total (four days patching, one day testing and monitoring), all done with a great deal of automation built using PowerShell and other tools. Exchange Online maintained its 99.9% SLA during this time and no one suffered any loss of access to their data. This is an impressive performance that demonstrates the importance of automation to the success of massive online operations.
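
Microsoft hasn’t said much about the tooling behind that refresh, but on-premises administrators can do the same kind of rolling maintenance on a far smaller scale with the scripts that ship in the Exchange 2010 Scripts folder. A rough sketch (server and DAG names are placeholders):

    # Run from the Exchange Management Shell on a DAG member that is about to be patched
    cd $exscripts

    # Move active databases off the server and block further activations
    .\StartDagServerMaintenance.ps1 -serverName EX01

    # ...install the roll-up update and reboot EX01 here...

    # Take the server out of maintenance so databases can be activated on it again
    .\StopDagServerMaintenance.ps1 -serverName EX01

    # Optionally rebalance active copies across the DAG afterwards
    .\RedistributeActiveDatabases.ps1 -DagName DAG01 -BalanceDbsByActivationPreference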

Casting his mind back in time, Kevin reflected that Microsoft’s decision to invest in what is now Exchange Online came about as a result of recognizing that Exchange administration was becoming increasingly complex and a real challenge for many customers (given all the new features in Exchange 2010, it’s even more complex now). It therefore made sense for the company that built the software to offer it as a service to customers. The current operations model is likely to evolve over time. For example, Microsoft might offer a service whereby they install hardware on customer sites and then do all the management across the network (a service that seems similar to those offered by companies such as Azaleos today).

Talking about migration, Kevin expressed some frustration that it can take some companies a long time to move to Office 365. The migration process can be long, drawn-out, and tiresome, with work required from Microsoft and the customer as well as potential assistance from third-party consultants. Tools from ISVs such as Quest or Binary Tree help, but it’s up to Microsoft to improve the migration process to make it much easier and much faster for customers to move to the cloud. One idea was to automatically create a tenant domain when an on-premises deployment of Exchange is done so that the customer has then “reserved” their place in Office 365 and can move from on-premises to hybrid or all-in-the-cloud as they like.

Kevin pointed to the need for high-fidelity migrations to ensure that all of the data that reaches Office 365 is fully functional and behaves in exactly the same way as it does on-premises. As an example, he compared Microsoft’s approach to that taken by Google, where calendar items are migrated but then have to be manually reinserted into user calendars to become functional calendar items again.

A video created to show the possibilities of future collaboration technology provoked the question of how much of what the video showed is possible today. Most audience responses were that we’re a while away from what was presented in the video. Kevin’s view was that much of the technology (such as automatic speech to text translation) existed but that a lot more work had to be done to achieve the necessary degree of integration to make everything work so smoothly together. And then there’s the small matter of creating and shipping production-quality versions of the devices that appeared.  Perhaps everything will come together in the next five years.

When questioned from the audience whether a future existed for on-premises versions of Exchange after Exchange 15, the response was that there will always be a certain percentage of customers who want to control the implementation of their messaging system and Microsoft will meet that need with new versions of Exchange. However, the question is what that percentage will be. It could be as low as 5% by 2016 or maybe as high as 25%. My feeling is that the cautious nature of large enterprises will make the percentage figure higher rather than lower, at least in terms of number of mailboxes. We shall see.

On another topic, Kevin naturally uses a Windows Phone, and I had the opportunity to compare the weight and general appearance of his Nokia Lumia 900 against the smaller Nokia Lumia 800 that I’ve been using since January. I didn’t think that the additional screen area compensated for the extra weight and size, and I’m quite happy with the 800. It just does the job and has gotten much better in terms of battery consumption since Nokia released the latest firmware.

I’m traveling home to Dublin today, just as the weather in San Diego picks up. All day Monday and Tuesday morning it did a pretty good job of resembling classic Irish April weather (cold, low clouds) rather than the sunny paradise San Diego represents itself to be.

– Tony

Follow Tony’s ramblings @12Knocksinna


April 2012 articles published on WindowsITPro.com


Here’s the digest of articles published on my WindowsITPro.com blog in April 2012.

FAST searching coming to Office 15 server applications (April 26) reports that the Office 15 wave of server applications (Exchange 2013, SharePoint 2013, and Lync 2013) will share a common enterprise search capability powered by the FAST technology acquired by Microsoft in 2008. For the first time you’ll be able to search across multiple repositories in one operation, something that will be popular for those responsible for compliance within large enterprises in particular.

Offline access through IE10; maybe an OWA feature in Exchange 2013? (April 23) speculates that Microsoft is about to add the ability to work offline to Outlook Web App in Exchange 2013. Some of this work is tied up with the march to HTML5 and it will be interesting to see just how pervasive Microsoft can make this feature across the browser set (IE, Safari, Chrome, Firefox) that they currently support for “premium” access.

Does Microsoft Explain Cloud Outages Better Than Google? (April 19) considered how Microsoft and Google handle customer communications when things go wrong in their cloud services and concludes that Microsoft probably does a better job of explaining the root cause of Office 365 outages than the somewhat secretive approach taken by Google when failures occur for Gmail or Google Docs.

Exchange’s monopoly and its importance to Microsoft (April 17) considered some of the comments made by a Venture Capitalist about the importance of Exchange to Microsoft and how this might play out over the next few years. Of course, a monopoly is not a good thing to have in place within a market…

Exchange 2013 to RTM in mid-November 2012? (April 13) Possibly one of the worst-kept secrets around the Microsoft campus is that Office 15 is locked and loaded to achieve RTM (Release to Manufacturing) status towards the end of 2012 with general customer availability in early 2013. I don’t see this schedule being compromised unless some horrible bugs come to light. Product schedules have a life of their own and gather momentum as they progress to RTM. We’ll just have to wait and see whether a brand-new version of Exchange is ready for deployment in January 2013 or thereabouts.

Exchange 2003 and Outlook 2003 enter the final two years (April 12) It’s time to put a stake through the heart of Outlook 2003 and Exchange 2003. At least, that’s what you might think if you’re worried about long term support. Seriously though, if you’re running this software, it’s time to make plans to upgrade. Unless of course you like life on the edge and believe that the software will run just fine without Microsoft’s imprimatur.

Dispelling myths and other half truths (April 10) This article got more traffic than any other in April. A blog post written by someone who knew Zimbra’s server (ZCS) well attempted to compare ZCS to Exchange and got so many things wrong that I simply had to say something (not that I patrol the Internet to locate erroneous blogs – that sounds very much like a task similar to cleaning the proverbial Augean stables). In any case, you can read the myths and my responses and see what you think.

The Finer Details of Exchange High Availability (April 5) The documentation available for Exchange has improved dramatically over the past few years and continues to get better all the time. However, so much is published online or available elsewhere that it’s sometimes hard to focus in on what’s important or to understand important details. Everyone knows that Exchange 2010 includes native high availability in the form of the Database Availability Group (DAG), but a number of really interesting details lurk in the undergrowth about what happens when things go wrong and Exchange has to activate a database copy and use transaction logs to make sure that all the data is present.

Immutability, Perry, and Exchange (April 4) This might also have been titled “All about the dumpster”. I took the chance to point people to a set of video interviews given by Perry Clarke, a Distinguished Engineer who’s been working in the Exchange development group since the late middle ages and probably now represents the technical conscience of the group. In this case, he’s talking about immutability and how Exchange 2010 protects data that might be needed for compliance purposes. Perry has made videos about other Exchange topics too and the series, all available online, is worth viewing.

Follow Tony’s ramblings @12Knocksinna


On the road again… to San Diego


This week I traveled from Dublin to San Diego to attend “The Experts Conference” (TEC). The conference organizers took care of tickets and I was routed via JFK on Delta rather than my normal choice (probably Aer Lingus to Chicago and onwards on United). The good news was that Delta surprised me with their service, which was better than I remembered or anticipated.

The other point of interest was the chance to see the Space Shuttle mounted on a 747 in a hangar at JFK. I took the photo below with the camera in my Nokia Lumia 800 out of the window of our plane as we taxied to the gate. I was surprised at the quality of the resulting image as I really didn’t expect much due to the motion of the plane, the intervening aircraft windows (never clean and usually with a curve), distance, and so on. It just goes to prove that the cameras built into phones these days are really quite good, which is one of the reasons why traditional “point and shoot” cameras are under pressure in the market.

Space Shuttle on a 747 at JFK airport (April 28, 2012)

The remainder of the journey to San Diego passed uneventfully, apart of course from the fact that my glasses disintegrated en route with both arms snapping away from the lenses. I put this down to the impact against an air bag last week when my car was rammed by a driver who went through a red light. In any case, my glasses were fixed by the very efficient team at Eyes on Fifth in San Diego. I should really remember to bring my eye prescription with me on trips as it’s impossible to get new glasses without an examination and the grinding of new lenses, which usually takes just too much time when you’re on the road. Yet another thing to add to my list of things to bring when traveling.

If you’re looking for a good read, you might consider Stumbling through the tulips: An American Family in Holland by Dan Martin, whose blog provides a weekly insight into the trials and tribulations of his family life in Switzerland. Dan, who calls himself “the world’s first baby-boomer”, is an interesting guy whom I worked with at Digital, Compaq, and HP. He was the project manager for the introduction of Microsoft Exchange at Digital and he and I didn’t quite see eye-to-eye on that topic way back in 1996. We managed to patch up our disagreement and have gotten on well since. Dan writes in a particularly quixotic manner and you might just enjoy his book if you’re looking for something in the travel domain.

On another topic, I couldn’t agree more with the opinion expressed by David Gewirtz in his article on ZDnet.com about the rip-off that consumers experience when they buy electronic accessories. David focused on HDMI cables and the extraordinary prices that stores want to charge compared to online sources. Now, I’m quite sure that a) it costs a reasonable amount to build, test, package, and ship a high-quality cable that will handle the demands of PlayStations, Xboxes, Hi-Def TVs and so on, and b) it costs more again to inventory, display, sell, and provide after-sales service for the same items. But with prices for most consumer electronics dropping in so many areas, it’s baffling that HDMI cables occupy such a lofty position on the pricing spectrum. As David says, “Most HDMI cables will work just fine. You don’t need to buy HDMI cables strung from the gold in Rapunzel’s hair.” Indeed.

Now back to the business in hand. TEC really begins today and I have a keynote to deliver and a number of sessions that I want to at least drop into, including “Writing your own Extensible Storage Engine (ESE/Jet Blue) application” by Brett Shirley from Microsoft. Given that ESE is Exchange’s database engine and not much information about its internal workings exists in the wild, the title is probably enough to scare people off and it will be interesting to see how many turn up. Brett insulted me about my defunct coding experience last night so I’ll just have to go along and heckle him today.

– Tony


The Citadel of St. Tropez


Port of St. Tropez

Most people who visit St. Tropez on the French Riviera probably think about the glamour side of the town – its up-market shops, the restaurants dotted around the port that can charge an extraordinary amount for an ordinary pizza, the yachts that broadcast the fact that their owners drip with cash, and the film stars and other well-known individuals who spend time in the town. And yes, St. Tropez offers all of these things once you manage to arrive in town after navigating slow and cluttered roads en route and find a place to park.

But if you’re planning a visit to St. Tropez, why not discover a somewhat unknown gem: the town’s Citadel, or fortified castle, which occupies a hill overlooking the town and the bay. The Citadel was built between 1590 and 1607 to protect St. Tropez against enemies that have varied over the years, but most often the Spanish or English. It’s a protected landmark that is currently being renovated. The main castle building is inaccessible, but all of the ramparts and surrounding out-buildings are available to be explored.

The town of St. Tropez viewed from the Citadel

Getting to the Citadel is simple. Go to the port and head upwards. St. Tropez is small enough so that you won’t get lost. Once you reach the park that surrounds the Citadel you’ll face a reasonably steep ascent up some rugged steps that will probably be an unpleasant but short climb in the midst of a Riviera summer.

Head of Louis XIV, the Sun King, on a cannon at the Citadel of St. Tropez

The entrance fee is EUR2.00 each and you’re free to wander around the buildings for as long as you like, subject to seasonal opening hours. Various cannon and other artifacts are to be found in the grounds, but I suspect that most will head over towards the sea to take in the breathtaking views over the Mediterranean.

View of the Riviera from the ramparts of the Citadel of St. Tropez

Apart from the views, my favourite parts of the Citadel were the walled garden and amphitheatre. I imagine that a concert held in the Citadel on a warm summer’s night must be a very pleasant event to attend. Or, as you can see in the photo, it’s a nice place to stretch out for 40 winks and allow the day to just pass on by…

Amphitheater in the Citadel of St. Tropez

Walled garden in the Citadel of St. Tropez

So if you can tear yourself away from the other attractions offered by St. Tropez, take an hour or so to climb the hill to explore the Citadel. Kids will enjoy the adventure and adults will enjoy being able to work up a thirst for some of the excellent rosé wine that is produced around St. Tropez.

– Tony

Follow Tony @12Knocksinna


A billion? Maybe good value if they’re good patents


At first glance, the news that Microsoft has agreed to pay AOL more than $1 billion for a trove of 925 patents (plus access to 300 patents kept by AOL) might seem to be an expensive purchase to some. Microsoft then offset some of their expenditure by selling access to part of the portfolio to Facebook for $550 million. Facebook, which was interested in the AOL patent collection, has agreed to purchase 650 patents and patent applications from the portfolio acquired by Microsoft and to license the patents retained by Microsoft.

I have no idea what’s covered by the patents that have been traded, but aside from selling a good proportion of the AOL patents to Facebook, I think that the overall deal could be reasonable business, even if it’s just a matter of Microsoft equipping itself with some extra protection against the patent trolls that search out fundamental patents and use them as weapons to extract cash from companies who might infringe the patents in some way. Often companies pay out quickly when faced with the prospect of expensive and protracted lawsuits that can drag on for years. Having an extensive selection of patents, in this case gathered from companies like Netscape and CompuServe as well as AOL itself, provides a certain degree of protection against lawsuits because a broad and deep patent portfolio can be used to retaliate against any attempt to sue. The broader and deeper the portfolio, the more likely it is that a patent exists that can be used to countersue. For this reason, patent trolls typically don’t like taking on companies such as Microsoft, IBM, or HP that hold tens of thousands of patents, but it does happen.

There are good and bad points about having a large patent portfolio. The biggest downside is the sheer cost of maintaining the portfolio as fees have to be paid to keep the patents current. Apart from fees, there’s the small matter of lawyer and other personnel expenses incurred to keep track of patents, including the need to examine and defend incoming claims. Old and obsolete patents have to be trimmed from the portfolio when they are no longer useful, new patent ideas have to be examined and tested to establish whether the ideas are in fact patentable, and if this is the case, patent applications have to be created in the form required by patent examiners and submitted for review. It can take many years before a patent is actually granted. Two to three years would be deemed fast for the U.S. Patent and Trademark Office (USPTO) to make a decision, while four to five years is average.

It’s worth mentioning that a huge portfolio does not necessarily mean that all of the patents are useful at either a technical or commercial level. Indeed, in my time working with patent lawyers, the rule of thumb that I often heard cited was that only 1-2% of a patent portfolio is interesting for one of the reasons described below, up to 10% are useful in terms of licensing or other purposes, but the other 90% are marginally useful if at all. A company like HP or IBM will have many patents that were interesting and useful ten years ago but aren’t at all useful today. Think of the patents dealing with floppy disks, for instance.

Apart from avoiding the unwanted attentions of patent trolls, large patent portfolios offer three major advantages. First, you have the chance to make cross-patent licensing agreements with other companies. Essentially, this means that you can access the technology described in the patents owned by the other company and that both companies agree that they will not sue the other if the technology is used in a product. It’s common to find that the largest IT companies have many cross-patent agreements in place that are renewed every three to five years.

The second advantage is the ability to block competitors by denying them the ability to use specific technology. A good patent is one that describes fundamental technology that is then built on and used by many products. Think of the patents that describe the gestures used to control tablet and smartphone devices, such as the “slide to unlock” patents that Apple has been granted by the USPTO and that now form the basis of lawsuits against companies such as HTC. US Patent 7,657,849 claims:

“A device with a touch-sensitive display may be unlocked via gestures performed on the touch-sensitive display. The device is unlocked if contact with the display corresponds to a predefined gesture for unlocking the device… The performance of the predefined gesture with respect to the unlock image may include moving the unlock image to a predefined location and/or moving the unlock image along a predefined path.”

Sounds like a description of the way you’d expect any high-end smartphone to be unlocked today! The patent was filed by Apple in December 2005 and granted in February 2010.

Patents that control fundamental methods are very attractive because every company that wants to work in a particular space is likely to want to implement a feature that might infringe the patent and therefore opens that company up to litigation and potentially highly expensive fees.

The third advantage is gained through lucrative licensing fees. If you hold a patent that everyone wants to use then you can charge for the privilege of using your technology. The best thing about this revenue is its high margin, usually higher than even the best software margins. Companies that take good care of their patent portfolio can extract huge profits through licensing fees.

No one writes a check for a billion dollars lightly. And although it seems like a lot of money to pay for 925 patents, I imagine that Microsoft did a lot of due diligence to ensure that they’re getting something valuable in return. In this case, that something means some patents they can use immediately, most likely for defense purposes, and some that they can license or sell on, which is exactly what they’ve done with Facebook.

It seems like Microsoft has chosen to retain some 275 of the AOL patents. I assume that these are of reasonable quality and of direct relevance to what Microsoft wants to do in the future. But in any case, even if the patents are not really interesting in an immediate commercial sense, given that they do come from the likes of Netscape, CompuServe, and AOL, the patents will at least describe part of IT’s history. And that’s always nice to preserve.

– Tony

Follow my ramblings @12Knocksinna


Exchange 2010 SP2 RU2: Deep-dive into what’s in this roll-up update


On Monday, April 16, Microsoft released roll-up update 2 (RU2) for Exchange 2010 SP2. There was nothing strange or unusual in this process because it follows Microsoft’s practice of issuing cumulative roll-up updates for Exchange at regular intervals. On the same day, Microsoft (or rather, the splendidly named Customer Experience Team (CXP) within the Exchange product group) released RU7 for Exchange 2007 SP3. Microsoft has to keep Exchange 2007 updated in the same way as the current release to ensure that its quality remains high, support costs stay under control, and an eye is kept on interoperability with the upcoming Exchange 2013 release, expected towards the end of 2012. No doubt more details will be revealed about interoperability requirements in the future as the Exchange 2013 software winds its way through the release process.

As described in KB2661854, Exchange 2010 SP2 RU2 contains 43 separately documented fixes (by comparison, Exchange 2010 SP2 RU1 contains 58 fixes). The EHLO blog post called your attention to the following:

  • KB2696913 You cannot log on to Outlook Web App when a proxy is set up in an Exchange Server 2010 environment
  • KB2688667 High CPU in W3WP when processing recurrence items who fall on DST cutover
  • KB2592398 PR_INTERNET_MESSAGE_ID is the same on messages resent by Outlook
  • KB2630808 EwsAllowMacOutlook Setting Not Honored
  • KB2661277 Android/Iphones stuck with 451 during Cross forest proxy in datacenter
  • KB2678414 Contact name doesn’t display company if name fields are left blank

I thought that it would be interesting to take a look at all of the fixes to see whether any critical or especially complex problems are addressed in Exchange 2010 SP2 RU2. Below you can find the full list of fixes, complete with pointers to the relevant Microsoft KB article, Microsoft’s description of the bug, and some (hopefully pertinent) comments from me. I’ve organized the 43 fixes into a set of categories that I think makes it a little easier to understand where improvements have been made. To begin, here’s a high-level view of what I consider to be the most important or interesting updates:

A couple of the fixes address infinite loops that occur under certain conditions. However, given that Exchange 2010 has been in production for nearly three years now, these seem to be very much edge cases that should not affect the vast majority of servers in operational conditions.

Users of Outlook 2011 for Mac will be happy to see that there are some fixes for Exchange Web Services to match the recently released Office for Mac 2011 SP2 update.

Another set of fixes addresses various problems with ActiveSync, such as the fix to ensure that if a device updates its operating system (for example, an iPhone moving from iOS 4 to iOS 5), that fact is recorded in the list of connected devices maintained for the user account in Active Directory and can therefore be reported accurately when an administrator runs the Get-ActiveSyncDeviceStatistics cmdlet. Another fix addresses the CAS proxy problem that emerged after Microsoft released Exchange 2010 SP2 RU1, when it turned out that CAS servers running different versions didn’t care for each other’s traffic very much.
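
If you depend on that data, a simple inventory along the following lines shows what each device currently reports about itself (scope the mailbox query to whatever subset makes sense for your organization):

    # List the operating system and user agent recorded for every ActiveSync partnership.
    # After the fix, a device that moves from iOS 4 to iOS 5 should show the new value here.
    Get-CASMailbox -ResultSize Unlimited -Filter { HasActiveSyncDevicePartnership -eq $true } |
        ForEach-Object { Get-ActiveSyncDeviceStatistics -Mailbox $_.Identity } |
        Select-Object Identity, DeviceType, DeviceOS, DeviceUserAgent, LastSuccessSync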

My favorite fix is the one that helps the organization that’s been struggling to distribute its 2GB-plus Offline Address Book (OAB). Based on my knowledge of other large organizations, my guess is that this organization has more than 750,000 mailboxes, give or take a hundred thousand. They’ll be happy that Microsoft has now addressed the problem that caused the OAB distribution mechanism to have a brain fart when confronted with such an amount of data.

Another interesting limit that comes to light is that the LDAP query underpinning a dynamic distribution group (DDG) is restricted to 32,000 characters. A query of that length would be a fantastically complex LDAP construct that must have taken enormous care to code. I can’t think of such a complex query, but I’m sure that someone else can.
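
If you’re curious how close your own organization comes to that ceiling, the LDAP form of each group’s filter is exposed as a property, so measuring it is a one-liner:

    # Show the length of the LDAP filter behind each dynamic distribution group,
    # which is the string that the 32,000-character limit applies to.
    Get-DynamicDistributionGroup -ResultSize Unlimited |
        Select-Object Name, @{Name='FilterLength'; Expression={ $_.LdapRecipientFilter.Length }} |
        Sort-Object FilterLength -Descending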

I imagine that most if not all of these fixes have already found their way into Exchange Online in Office 365 through the regular update mechanism used by Microsoft to introduce fixes and new features into their cloud service. This usually happens without users being aware of the change, as Microsoft deploys updates by moving mailboxes off Office 365 servers, taking those servers out of service, and then updating each box completely before reintroducing the upgraded server into the production pool.

Remember that each roll-up update represents a cumulative set of fixes. When you deploy the latest RU, you’re effectively deploying the most up-to-date software that Microsoft can provide that includes all previous fixes from all previous roll-up updates. Remember too that you can only apply a roll-up update after you’ve successfully installed the underlying base software. In other words, you need to deploy Exchange 2010 SP2 before you can apply Exchange 2010 SP2 RU2.
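
A quick way to confirm what a server actually runs: Get-ExchangeServer reports the service pack level but not the roll-up, so the usual trick is to check the file version of ExSetup.exe, whose build number changes with each roll-up:

    # Service pack level for every server in the organization
    Get-ExchangeServer | Format-Table Name, AdminDisplayVersion, Edition -AutoSize

    # On an individual server, from the Exchange Management Shell, the build number
    # of ExSetup.exe reveals the roll-up that is installed
    Get-Command ExSetup.exe | ForEach-Object { $_.FileVersionInfo.FileVersion }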

In a nutshell, I don’t see anything earth-shattering in the list of fixes. Exchange 2010 SP2 RU2 appears to be very much a “clean up and make good” release that should be worth a fast deployment, assuming that you do your own due diligence to make sure that none of the fixes reveal an unanticipated problem in your production environment. That means that you should test the new software by installing and running it on servers that mimic your real-life operating environment. If SP2 RU2 works well there, you’ll know that it will work well in production.

Have fun!

– Tony

Follow Tony’s ramblings @12Knocksinna

ActiveSync issues

  • KB2661277: An ActiveSync user cannot access a mailbox in an Exchange Server 2010 forest. Comment: Two Exchange organizations are connected with a trust relationship. An ActiveSync user attempts to access a mailbox in the other organization and the attempt fails because the CAS server logs an incorrect protocol version (in other words, it thinks it can’t do what it’s being asked to do).
  • KB2678361: The user-agent information about an Exchange ActiveSync device is not updated in an Exchange Server 2010 environment. Comment: If a user updates the client O/S on an ActiveSync device, that information is not synchronized back to Exchange and the data reported by the Get-ActiveSyncDeviceStatistics cmdlet (which depends on Active Directory) is incorrect.

Calendar issues

  • KB2519806: A meeting request that is sent by an external user or by using a non-Microsoft email system is stamped as Busy instead of Tentative in an Exchange Server 2010 environment. Comment: Not a horrible bug, but annoying if you depend on free/busy data to figure out when you have free time.
  • KB2649499: Updates for a meeting request are sent to all attendees directly in an Exchange Server 2010 environment. Comment: Changes made on an ActiveSync device to a meeting request whose attendee list is subsequently changed using Outlook cause updates to go to all attendees, rather than giving the originator the option to send the update only to those who were added to the meeting. Kind of esoteric, but important to those who live or die by their calendar.
  • KB2694289: A resource mailbox does not forward meeting requests to delegates after one of the delegates’ mailboxes is disabled in an Exchange Server 2010 environment. Comment: You’d expect that messages would be forwarded to the remaining resource delegates after one of their number has been disabled… it doesn’t happen unless this bug is fixed.

Outlook issues

  • KB2592398: Email messages in the Sent Items folder have the same PR_INTERNET_MESSAGE_ID property in an Exchange Server 2010 environment. Comment: Messages created using Outlook template files inherit the same value in the Internet_Message_ID MAPI property. Probably not going to be seen in most deployments.
  • KB2601301: Customized contact objects revert to the default form after a public folder database replication in an Exchange Server 2010 environment. Comment: Again, a pretty esoteric bug that occurs when contact objects stored in public folders that use a custom form are replicated to another database and are then displayed using the standard form.
  • KB2636883: Returned message items can disappear from the search results view when you use Outlook in online mode in an Exchange Server 2010 environment. Comment: Items that are found in a search and are subsequently marked as “read” disappear from the search results. Most Outlook users work in cached Exchange mode, so they won’t have experienced this bug.
  • KB2673087: Error message when you try to copy the Inbox folder to another folder in Outlook in online mode in an Exchange Server 2010 environment. Comment: The RPC Client Access service has a bug that prevents Outlook (and other MAPI clients) from copying the Inbox folder to another folder in online mode. It only happens when rules exist for the Inbox. I’m kind of surprised that it has taken this long for such a bug to be revealed – this must demonstrate the relative lack of people who run Outlook in online mode.
  • KB2678414: The display name of a contact in the address book is empty in an Exchange Server 2010 environment. Comment: If you create a contact, insert a value into the company field, and don’t put anything into the full name field, the resulting display name for the contact entry is blank. In other words, Exchange doesn’t assume that you want to use the company name as the full name.

General system administration issues

2625450 – You cannot generate an OAB file that is larger than 2GB in an Exchange Server 2010 environment. OK, I’m impressed. It must be a massive organization that hit the 2GB limit for their Offline Address Book and uncovered this bug. Not likely in 99.995% of Exchange 2010 deployments, but nice to know that things will work if your company grows to be much larger than it is now.

2632201 – MAPI_E_INVALID_PARAMETER errors occur when a MAPI application receives notifications in an Exchange Server 2010 environment. Fixes an internal bug where applications register with the Store for notifications about updates that occur at the root folder level.

2641753 – An email message from an Exchange Server 2003 user is forwarded incorrectly to an external recipient of an Exchange Server 2010 user mailbox. Exchange 2010 treats ANSI strings in messages created on servers running non-English versions as ASCII, and problems (duplicate messages, etc.) occur when messages are forwarded from Exchange 2003 users.

2652730 – You encounter failures when you run the Test-ECPConnectivity cmdlet to test Exchange Control Panel connectivity in an Exchange Server 2010 environment. A bug in the Test-ECPConnectivity cmdlet causes problems when it attempts to connect to a Client Access Server.

2657103 – CPU resources are used up when you use the Set-MailboxMessageConfiguration cmdlet in an Exchange Server 2010 environment. An infinite loop causes the cmdlet to write a configuration file endlessly, consuming CPU along the way.

2661294 – An email address policy does not generate the email addresses of recipients correctly in an Exchange Server 2010 environment. Address policies that generate addresses starting with “sa”, “sb”, or “sc” followed by hex numbers of less than five digits don’t work (Exchange treats them as a special pattern). I can’t quite work out why anyone would want to use such an address policy, but I guess someone does!

2664761 – DPM protection agent service may stop responding on Exchange Server 2010 servers that are protected by System Center DPM 2010. A deadlock occurs because two threads are contending for the same Exchange 2010 diagnostic file, causing headaches for DPM.

2677847 – The Microsoft Exchange File Distribution service consumes large amounts of memory in an Exchange Server 2010 environment. The OAB replication handler consumes a large amount of memory during OAB synchronization if the MSExchangeFDS service is running on a computer that also has the CAS role installed.

2664365 – Certain mailbox statistics properties are not updated when a user uses a POP3 or IMAP4 client to access a mailbox in an Exchange 2010 environment. Logons that connect using POP3 or IMAP4 don’t update the LastLogonTime, LastLogoffTime, and LastLoggedOnUserAccount properties, so the Get-MailboxStatistics cmdlet can’t display the information.
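
If you want to verify that POP3 and IMAP4 logons now update these properties once the fix is in place, Get-MailboxStatistics will show them. A sketch with a made-up mailbox name:

# Check the logon-related properties for a (hypothetical) mailbox
Get-MailboxStatistics -Identity "Jane Smith" |
  Format-List LastLogonTime, LastLogoffTime, LastLoggedOnUserAccount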

Multi-organization and federation issues

2644920 – The Get-FederatedDomainProof cmdlet fails in an Exchange 2010 SP1 environment. Cryptographic problems get in the way of forming a trust between Exchange 2010 SP1 and Microsoft Federation Gateway.

2660178 – “More than one mailbox has the same e-mail address” error message when you try to manage a mailbox in a tenant organization in an Exchange Server 2010 SP1 Hosting mode environment. Well, hosting mode is now deprecated so this shouldn’t be of concern. The problem happens because ECP uses a global scope when it checks for users rather than looking within just a single (hosted) organization.

2672225 – A user in a trusted account forest cannot use the EMC to manage an Exchange Server 2010 SP2 server. The SID of the user attempting to run EMC is corrupt or invalid and EMC can’t load, so it can’t manage Exchange…

Client Access Server issues

2694280 – Whatif switch does not work in the Set-MoveRequest or Resume-MoveRequest cmdlets in an Exchange Server 2010 environment. A small problem in coding… However, the WhatIf switch is probably not hyper-critical for these cmdlets (there’s a quick example of its use after this table).

2696913 – You cannot log on to Outlook Web App when a proxy is set up in an Exchange Server 2010 environment. The infamous inter-CAS proxying bug caused by invalid versions, introduced in Exchange 2010 SP2 RU1, is fixed so that it will no longer occur.

2665806 – Error message when you open an RTF email message that has inline attachments in an Exchange Server 2010 environment. If you import data from a PST using the New-MailboxImportRequest cmdlet that includes RTF messages with inline attachments, Exchange deals with the attachments as if they were OLE objects. Of course, they are not, so the user can’t see the attachments.
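
For the WhatIf fix mentioned above, the idea is simple enough: the switch should report what the cmdlet would do without actually doing it. A sketch using a hypothetical mailbox that already has a move request:

# With the fix, -WhatIf should report the intended change without applying it
Set-MoveRequest -Identity "Jane Smith" -BadItemLimit 10 -WhatIf
Resume-MoveRequest -Identity "Jane Smith" -WhatIf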

Exchange Web Services (EWS) issues

2556766 – Slow performance when you create many contacts by using Exchange Web Services (EWS) in an Exchange Server 2010 environment. Caused by EWS issuing twice as many LDAP requests as necessary, but the effect is probably only seen when Global Catalog servers are already stressed.

2630808 – A user can log on to a mailbox by using Outlook for Mac 2011 unexpectedly in an Exchange Server 2010 environment. Outlook for Mac 2011 is based on EWS. A bug in Exchange meant that the user agent string provided by Outlook for Mac was not recognized by Exchange, and so disabled users could continue to log on. There’s an easy workaround, but it’s good to resolve these interoperability issues.

2641249 – Error message when you use the “Folder.Bind” method in an Exchange Server 2010 environment. Binding to a folder with EWS fails when the folder is corrupt for some reason. New error handling deals with the problem more elegantly.

2681464 – An EWS application crashes when it calls the GetStreamingEvents operation in an Exchange 2010 environment. The EWS Managed API sends back a compressed stream to the EWS application when the request of the GetStreamingEvents operation times out. However, the EWS application cannot handle the compressed stream and crashes when it tries to parse it.

Outlook Web App (OWA) issues

2635223 – A hidden user is still displayed in the Organization information of Address Book in OWA in an Exchange Server 2010 environment. Hidden users were revealed by Outlook Web App (OWA) if they were direct reports of a manager in the GAL. Basically, when Exchange created the list of direct reports for OWA to display, it ignored the fact that the user was hidden. Although the hidden user wouldn’t show up in the GAL, being revealed to those who browse through organizational detail is a bad thing because it could give away something like a new hire that hadn’t yet been generally announced.

2644144 – A read receipt is not sent when a receiver does not expand a conversation to preview the message by using OWA in an Exchange 2010 environment. Messages sent using OWA that require a read receipt don’t generate the receipt if the recipient reads the message using the OWA reading pane.

2649679 – Text in tables is displayed incorrectly in the Conversation view in Outlook Web App in an Exchange Server 2010 environment. The CSS file used by OWA to display messages in conversation view doesn’t do a good job of handling tables!

2663581 – The OK button is not displayed when you change your password in Outlook Web App by using Firefox in an Exchange Server 2010 environment. A problem in the OWA code stops it from displaying the OK button when prompting the user with “Your password has been changed. Click OK to logon with the new password”.

2685996 – Error message when a user who does not have a mailbox tries to move or delete an item that is in a shared mailbox by using Outlook Web App Premium. If a user account doesn’t have an associated mailbox, OWA isn’t able to create the necessary security context to allow the user to move or delete items in a shared mailbox, even if that user has Full Access permission for the mailbox.

2688667 – W3wp.exe consumes excessive CPU resources on Exchange Server 2010 Client Access servers when users open recurring calendar items in mailboxes by using Outlook Web App or EWS. The problem occurs when OWA or EWS clients attempt to open recurring events after a DST change. The change makes the time for subsequent occurrences invalid, so an infinite loop occurs when clients attempt to access the calendar items.

2694473 – File name of a saved attachment is incorrect when you use OWA in Firefox 8 in an Exchange Server 2010 environment. The OWA code for Firefox 8 uses a hard-coded, quoted value for attachment names instead of the name provided by the user. So it’s just wrong.

2696905 – Day of the week is not localized in MailTips in Outlook Web App in an Exchange Server 2010 environment. A localization error causes English language text to appear in MailTips based on days of the week, so you end up with situations like German text surrounding the word “Monday”.

Public folder issues

2636387 – Event ID 3022 is logged and you cannot replicate a public folder from one Exchange Server 2010 server to another. Hmmm… the description says that the problem occurs because “the size of a certain property, which belongs to the public folder, exceeds the size limit.” No detail is provided as to what the errant property is, so we shall have to remain in the dark here.

2645587 – An external email message is not delivered to mail-enabled public folders and you do not receive NDR messages in an Exchange 2010 environment. Messages sent to multiple addressees, some of which are public folders, will not be delivered to the PF if the message contains one or more invalid addressees. Exchange stops delivery once it hits an invalid address and never gets as far as attempting delivery to the PF. No NDR is sent, so it’s as if the message vanished into a black hole.

Transport issues

2693078 – EdgeTransport.exe process crashes in an Exchange 2010 environment. The Transport system has problems handling messages with subject lines formatted in the GB2312 or CHINESEBIG5 character sets that cause the process to crash.

2694414 – The update tracking information option does not work in an Exchange Server 2010 environment. A bug in the Transport system suppresses tracking information so Outlook never receives it.

2694474 – Incorrect delivery report when you send an email message to a recipient who has configured an external forwarding address in an Exchange Server 2010 environment. Delivery reports don’t work so well when a mailbox is configured to forward mail to an external address and copies of forwarded messages are not retained in the mailbox. This is because message tracking looks in the mailbox rather than checking to see whether the message was transferred to the external recipient.

2696857 – EdgeTransport.exe process crashes without sending an NDR message when you send a message to a distribution group in an Exchange Server 2010 environment. Messages sent to a dynamic distribution group (DDG) based on an LDAP filter that contains more than 32,000 characters (a very complex filter) crash the process and you don’t get an NDR. This problem also affects the scenario where you use the DDG as a delivery restriction for another group. Essentially, the limit for an LDAP filter is blown!
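
If you’re curious whether any of your dynamic distribution groups come anywhere near that limit, a rough check of the LDAP filter that Exchange generates for each group is easy enough (a sketch, not a supported diagnostic):

# Report the length of the generated LDAP filter for each DDG
Get-DynamicDistributionGroup |
  Select-Object Name, @{n='LdapFilterLength'; e={$_.LdapRecipientFilter.Length}}

Few organizations will get anywhere close to 32,000 characters, but it’s a cheap check.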