Quick but important survey about Exchange 2010 licensing


Paul Robichaux caused some ripples in the calm waters of Microsoft licensing when he wrote about a question that he had received from a Windows IT Pro Magazine reader. The reader was upgrading an Exchange 2003 deployment to Exchange 2010 gradually by introducing a new Client Access Server (CAS) only to be told (presumably by his local Microsoft office) that he had to immediately purchase an Exchange 2010 Client Access License (CAL) for each of his 300 users!

The logic offered was: “Since all incoming mail must go through the Exchange 2010 Client Access Server, it uses Exchange 2010. Therefore, the new CAL is required.”

If I were to be picky, I’d say that the logic is flawed because incoming mail certainly flows through a hub transport server but never through a CAS. Now, if the statement was that all incoming client connections must go through the CAS, that would be a different and far more accurate position. In any case, Paul asked Microsoft to cast some light into the darker and lesser-known corners of their licensing architecture and to justify why they would seemingly penalize people who are attempting to do what Microsoft badly wants – to move from Exchange 2003 to Exchange 2010.  As I understand it, discussions continue…

In any case, Paul has created a survey that he would like your help with. The idea is to collect some data to help argue the case with Microsoft and hopefully persuade them to take a more reasonable approach to the need to buy Exchange 2010 CALs. Please consider completing the survey – no personal data is collected.

– Tony

Posted in Exchange, Exchange 2010 | Tagged , | Leave a comment

Chatting about Exchange 2010 SP2 and Office 365 with Matt Gervais


Last week, Matt Gervais, site editor of SearchExchange.com, tracked me down in France to chat about my views about Exchange 2010 SP2, Office 365, virtualizing Exchange, and whether Exchange administrators are threatened by the cloud. Matt captured our conversation in a podcast, which you can now download from SearchExchange.com.

Now that I’ve updated you all, it’s time to go back to the pool…

– Tony

Posted in Exchange 2010, Office 365 | Tagged , , , | Leave a comment

Wandering down through France


It was time to head back from Ireland to France. This time we decided to take the ferry from Rosslare to Roscoff on the basis that the route was shorter and we’d spend less time cooped up on the boat being a captive audience for Irish Ferries. We’d also have a chance to drive through Brittany and a change of scene is always welcome as Normandy is well-trodden ground for us.

The M.V. Oscar Wilde left Rosslare at 16:00 local and arrived in Roscoff on time at 10:30 local, an elapsed voyage time of 17 and a half hours (there’s an hour difference between Ireland and France). This is a couple of hours shorter than the alternate route to Cherbourg so it’s the right decision if you don’t like the rolling motion of the ferry. The voyage passed without incident. Despite some wind-whipped white-topped waves outside Rosslare ferryport, the ferry departed smoothly in some nice sunshine that offered the chance for us to bask on the sundeck. There was a good crowd embarked and this made reservations in the “nice” restaurants difficult to obtain, so we went with the crowd and ate in the Left Bank self-service. All I can say about the food is that it filled both a gap in time and some space in our stomachs. No more…

Roscoff is a much smaller town than Cherbourg and the route southwards is predominantly on single-carriageway roads until you get close to Rennes. By comparison, the roads from Cherbourg towards Caen and then south to Le Mans are all dual-carriageway as they form part of a “Euro-route”. As the speed limit is typically at 110 kph or higher and trucks don’t act so much as mobile blocks, it’s easier to make ground faster from Cherbourg.

In any case, we pressed on towards Tours. For the first time, the car was equipped with a “télépéage” badge from Vinci. This is a transmitter similar to those used in many other countries that you stick to the windscreen of the car to allow you to pass through special toll booths. The newer booths allow rolling transits of up to 30 kph but even those that force you to come to a complete halt are usually far quicker to get through than the booths that serve cars that pay tolls with cash or credit cards, especially those on popular tourist routes where people unfamiliar with the French autoroute system frustrate everyone else with their relative slowness at navigating the tolls.

By their nature, the French don’t like queuing as much as the Anglo-Saxons, and any slowness at toll booths is likely to be met by expressions of disgust, beeps on the horn, or passionate reminders that one should engage one’s brain into gear before attempting to operate any complex mechanism such as a toll payment system. At least, that’s the nice translation of what is normally said.

You don’t have to be a French resident to get a télépéage badge. We use the “temps libre” version, which means that Vinci sends us the badge free of charge and only charges a monthly EUR2.00 fee for the months that we actually use the badge. Apart from that, you are charged whatever the normal toll fee is for the section of autoroute you use – there’s no discount for using télépéage. Bills are sent monthly and collected via direct debit from your bank account. However, as there is seldom any queue past a couple of cars for the télépéage booths, the big benefit is to be able to save time by avoiding the long queues that often form at peak times at tolls throughout the autoroute network.

Although it’s more than possible to do so, driving the 1,350km from Roscoff to the Côte d’Azur isn’t something that most people will do in one day. We elected to stop in Vierzon (roughly half-way) and booked a room at the Arche Hotel. I usually use Booking.com to find hotels and had chosen the Arche Hotel on the basis of its location and that it offered a free underground car park (too many people traveling through France suffer car break-ins when en route to a destination) and free wifi. The reviews about the hotel were OK too.

Evening view from Arche Hotel

Some who stay in the Arche Hotel will be offended by the decoration, which appeared to be a melange of American movie posters, pictures of 1950s movie stars, and French antiques. It made a pleasant change from the bland ordinariness that afflicts so many other hotels today. The room was OK and had a nice view over the Cher river. As we arrived on a Sunday, the hotel restaurant wasn’t open so we went looking for food in the town. We didn’t have far to go as we found an excellent pizzeria called La Scala just around the corner. The restaurant was busy with many locals coming in to eat (always a good sign) and we passed a pleasant couple of hours people-watching, including looking at the fishermen who were trying their luck in the river just outside the doors of the restaurant.

Early morning view of the bridge over the Cher

We set off again early next morning and had a good drive down via St. Etienne, Valence, Orange, and Aix-en-Provence without meeting any real “bouchons” (literally the French word for a cork in a bottle – figuratively, a traffic jam). The weather improved steadily as we drove south and was at a comfortable 26 degrees C as we entered the Var.

We’re now settled in the small village of Flayosc and intend to stay here for a lot of the summer. Internet access makes the world much smaller and we are well connected to Ireland and other contacts around the globe. Possibly too well connected for our health and sanity, but we shall test that theory over the next while.

– Tony

Posted in Travel | Tagged , , , , , | Leave a comment

Office 365: Launch on June 28


Nope - I wasn't invited. Were you?

The invitations have been printed and Steve Ballmer is making the final touches to the PowerPoint that he’s scheduled to present at Microsoft’s formal launch of Office 365. The grand event takes place in New York City on June 28 before a carefully-assembled audience of industry observers, journalists, and others.

All of this is good news. The software is obviously ready – I’ve been a very content user for several months now and think that Exchange Online and SharePoint Online (the parts that I use) are very solid. There’s no reason to believe that Lync Online is any different. The only issue I see is the different administrative interfaces that are scattered across the product suite. There’s no collective or cohesive interface that brings the three products together into an integrated whole, something that’s no doubt indicative of the fact that each product is developed by a different team of engineers who don’t really seem to work to a common plan. I hope that Microsoft address this issue in the future.

Apart from my small gripe, which won’t be visible to any user and is totally unimportant to the vast bulk of the population who might use Office 365, is there anything that might cause you not to consider taking the plunge to become a fully-fledged cloud-based organization? Well, there are a couple of niggling doubts that creep into my mind, some provoked by remarks made by Group Manager Kevin Allison at The Experts Conference (TEC) in Las Vegas in April. You can read my report from the keynote here.

Kevin has a very interesting job as he’s responsible for the development of both the on-premises and cloud versions of Exchange plus the operation of Exchange Online. In short, the buck very much stops at his desk. As such, anything he says should be taken seriously. Two of the points Kevin made during his keynote at TEC are pertinent to this discussion. The first is that Microsoft has a considerable lead time (five months) between the forecast of customer demand and the point when the necessary infrastructure is deployed in their datacenters to meet that demand. Everything goes smoothly if the sales motion with customers is aligned with the deployment of capacity (servers, software, operations oversight, network, power, cabling, cooling, etc.) needed to meet the resulting demand.

Companies pay a lot of attention to predicting likely customer demand and tracking results against those predictions but the potential always exists that demand could surge and surpass the ability of the provider to deploy capacity to meet that demand. My view is that there are a lot of companies that currently run Exchange 2003 who are prime candidates for early movement to Office 365 simply because it’s an easier and possibly cheaper option than a traditional on-premises upgrade to Exchange 2010. If this feeling is correct, then Microsoft might be faced with the welcome but difficult problem that occurs when demand is too high and sufficient capacity cannot be put in place to meet that demand. Success has some downsides!

The second factor that Kevin discussed is that currently there is a relatively small number of people available to help customers move to Office 365. If you look for a consultant today who has experience of large-scale Office 365 (or BPOS) deployment, you might have to wait before someone is available. This scarcity of resource is caused by the knowledge gap that always occurs when new technology shifts happen in the market. Expertise and experience is particularly important for the larger, more complex projects but is also a critical factor for small to medium companies that can typically take an easier migration path to Office 365. The knowledge gap will close over time as Microsoft trains more people and independent consulting companies beef up their own level of expertise around Office 365, but a shortage exists today. Again, this won’t be a problem if the predicted demand matches capacity but could be an issue if Office 365 is tremendously successful. In other words, if you think that you want to move to Office 365 soon, you should make sure that you have secured the right level of expertise available to solve issues such as migration, interoperability, monitoring, single sign-on, security, and privacy to allow the project to proceed as planned.

The last point I’ll make as we head to the formal launch of Office 365 is to repeat the very good advice offered by Mark Minasi during his keynote at Spring Connections 2011 when he emphasized that it is in the interest of every company to understand exactly how much it costs them to provide email today and how good the quality of service is when measured by the business. If IT departments don’t understand their costs and whether they are meeting the needs of the business, they are in no position to make an intelligent decision about whether Office 365 offers real advantages in terms of cost, access to new technology that makes a real difference to the company by enabling solutions to real business problems, and long-term usefulness and flexibility when compared to an on-premises deployment of Exchange 2010.

– Tony

Posted in Cloud, Exchange 2010, Office 365 | Tagged , , | Leave a comment

Has an email disclaimer any legal effect?


On May 7, I posted about the Economist article “Spare us the email yada-yada” that asserted that email disclaimers have no legal effect. Financial Times columnist Lucy Kellaway subsequently waded into the debate on June 5 with her column that described a memo sent by the new Philips CEO Frans van Houten to his worldwide employees. The memo is pretty typical of those beloved by senior executives who wish to set a “tone” for the company and lay out some grand plan as to how the company will live up to the aspirations contained in the memo. According to the disclaimer, Lucy commits an offence by sharing the content because she’s not one of the intended recipients (as if company executives never shared their grand visions with journalists).

Of course, many similar memos contain little more than bland pontifications that add exactly zero value to the average employee. However, the thoughts of the wise and senior executives must be protected and that’s why the technology community has been forced by lawyers around the world to make sure that all outgoing email has some legal mumbo-jumbo appended.

Most disclaimers proudly assert the company’s ownership of the email and threaten that fire and brimstone will descend upon the unwitting head of those outside the intended recipient list who have gotten hold of a copy (obviously by illegal means). And if fire and brimstone isn’t sufficient, anyone who is so unwise as to not immediately delete any and all copies of the said message and erase all knowledge of the content from their brain will be assailed by the combined forces of all the legal professionals that the company can muster. In short, if you so much as open a message and glance upon its content to even figure out why you might have received the email, it will eventually lead to a lifetime of penury, your family will be sold into slavery to pay the legal fees, and some pretty nasty stuff will happen to you as the legal pros extract due compensation for your grievous sin.

Or so the legal professionals who drafted the six-paragraph disclaimer hope. And I guess it’s true that there are situations when the disclosure of confidential information that comes into the possession of someone who shouldn’t receive it will break laws and result in penalties. For example, if you received a memo detailing the draft quarterly results of a public company some days before those results were due to be published and you then shared those results via Twitter or your blog – or sent them on to your stockbroker with a request to sell or buy shares in the company – then it’s true that you’ve probably committed an illegal act that will be viewed seriously by the regulatory authorities in most countries. But let’s face it – the vast majority of email that circulates within a company contains little real value to anyone except the recipients and can probably be immediately deleted without the company falling into chaos, so adding the three-paragraph disclaimer is really the equivalent of strapping a chastity belt onto the most unloved person in the harem (yes, I know that could be construed as a politically incorrect statement, but I feel that the analogy is apt).

Even improvements made in products such as Exchange 2010 where transport rules can produce more graphically intense disclaimers, incorporate Active Directory information, ignore encrypted messages, and not stamp disclaimers on the replies in message threads don’t address the foundational point that only some messages really need to be protected. Applying multi-color disclaimer text to a message may cheer up the IT administrator and prove their mastery over HTML commands, but it’s like slapping lipstick on a pig: nicer to look at but still useless.
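
To see what those transport rule improvements look like from the Exchange Management Shell, here’s a minimal sketch of a disclaimer rule. The rule name, scope, and HTML text are invented for illustration rather than recommended values; the cmdlet and parameters are the standard Exchange 2010 ones for transport rule disclaimers.

# Sketch only: append a simple HTML disclaimer to mail leaving the organization
New-TransportRule -Name 'External disclaimer (example)' -Enabled $true -SentToScope 'NotInOrganization' -ApplyHtmlDisclaimerLocation 'Append' -ApplyHtmlDisclaimerText '<p style="font-size:9pt;color:gray">This message is intended only for the named recipients.</p>' -ApplyHtmlDisclaimerFallbackAction 'Wrap'

The Wrap fallback action tells Exchange to attach the original message to a new message carrying the disclaimer if the original cannot be modified (because it is encrypted or signed, for example).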

The only real way of achieving real protection over the content of email is to deploy a system that allows selective and granular access to the content and the operations that recipients can perform after they receive messages. Active Directory Rights Management Services (AD RMS) can actually do this by allowing senders to select from templates that clearly define what recipients can do with messages after they receive them: operations such as forwarding, replying, and printing can be allowed or denied.

AD RMS isn’t the first software that attempts to solve the problem of protecting sensitive content. I can recall products that did much the same for email in the 1990s. However, all of these products seem to run into similar problems:

  • It’s extra software to deploy and maintain (cost, testing, and time implications)
  • Specific clients may be required (a real pain if you have to deploy to thousands of desktops)
  • Protected messages may not be accessible or function properly outside the boundaries of the organization
  • Asking users to selectively protect confidential information is not always the best way to ensure compliance. Human beings forget or make mistakes all the time and some information will invariably leak. Security is not the favorite topic of most users!

Microsoft actually tried to do something about the last point with the combination of AD RMS, Outlook 2010, and Exchange 2010 through the introduction of Outlook Protection Rules. These rules are based on AD RMS templates and work on the basis that Outlook checks outgoing messages for specific recipients (individuals or groups) that are covered by Outlook Protection Rules. If these recipients are detected, Outlook automatically stamps the message with the AD RMS template specified in the rule. Sounds good – and it works, but only if you have deployed the complete infrastructure of AD RMS, Outlook 2010, and Exchange 2010. For now, Outlook Protection Rules remain an interesting exercise in computer science for most companies.
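
For the record, creating an Outlook Protection Rule is a one-line affair in the Exchange Management Shell. The rule name and recipient below are invented for illustration; the template name must match one published in your AD RMS deployment (“Do Not Forward” is the one Exchange makes available by default once IRM is enabled).

# Sketch only: automatically rights-protect messages addressed to a sensitive group
New-OutlookProtectionRule -Name 'Protect mail to Legal (example)' -SentTo 'Legal Department' -ApplyRightsProtectionTemplate 'Do Not Forward'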

The net result of all of this collaborative effort by legal and IT professionals to protect email through disclaimers is that the text generated simply occupies terabytes of useless and duplicated space in email databases across the world. Not really a good situation to be in but that’s where we are until someone is successfully sued for ignoring the warnings contained in an email disclaimer and the principle becomes respected throughout the world. Somehow I don’t see that happening anytime soon, but I have been known to be wrong before.

– Tony

For more information about AD RMS and Exchange 2010, see pages 1072-1080 (chapter 15) of Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk. The book is also available in a Kindle edition. Outlook Protection Rules are described on pages 1080-1081.

Posted in Exchange, Exchange 2010 | Tagged , , , | 1 Comment

Filers versus Pilers


Many moons ago, good filing habits were deemed to be an important part of office life. Documents had to be correctly placed in the right folder and carefully deposited in a file cabinet to allow the office to operate properly. Everything was neatly arranged, easily found, and everyone was happy.

Email began to become more common in offices from the mid 1980s onwards. System designers anticipated that users would employ the tried and trusted methods for filing paper documents with messages and created email servers and clients that allowed users to set up hierarchies of folders. Various extensions and additions to email servers such as ALL-IN-1 shared drawers (1992-1993) allowed documents and messages to be held in shared repositories. Much the same approach to folders and filing was taken in PC LAN-based email systems and subsequently appeared in Exchange 4.0 (1996). Filing continued unabated.

And then Google introduced Gmail. All of a sudden, users had massive quotas to store messages and there was no reason to delete messages any longer. But the real impulse to transform users from filers to “pilers” was provided by Gmail’s user interface. Folders were largely eliminated in favour of views; messages could exist in many different views which could be used to identify particular sets of messages extracted from massive piles held in the mailbox. Basic views such as Inbox, Sent Items, and Trash made Gmail look and behave somewhat like a traditional email client but it never delivered the same experience as the more structured and traditional approach embodied in clients such as Outlook.

Piling isn’t all bad. It’s certainly a less disciplined approach to mailbox organization than when messages are carefully moved into folders or deleted as they are read but that doesn’t matter too much if the client provides intelligent access to mailbox contents, including great search capabilities. And, as its proponents point out, piling email instead of filing email saves a great deal of time.

Unsurprisingly given Google’s heritage, Gmail certainly succeeds with search, but only its inventors could have loved its original user interface. Gmail’s basic window on information organized messages into conversations, grouping messages from the same topic together and presenting them as a unified whole. New users often liked conversations but conversations were a bit of a culture shock for hard-core filers. Google eventually acknowledged that some people don’t like viewing email through conversations when they introduced the ability to turn off conversations and use views that order messages according to the time that they arrived into the mailbox. Google has tweaked Gmail in many other ways but its user interface is still not as elegant as other web-based email clients. I sometimes wonder whether the impending arrival of Office 365 and its much more elegant Office Web App interface will put any commercial pressure on Google to either improve the web interface for Gmail or its somewhat haphazard support for Outlook.

Although one can quibble about its appearance, there’s no doubt that Gmail has exerted a huge influence over the development of email. Consumers enjoyed massive mailboxes for years before most corporate users could even think about moving away from restricted 500MB limits; conversation views are now supported by many other email clients and servers, including Exchange 2010; but most of all, Gmail made it acceptable to discard the filing habit and become a piler, someone who simply left messages where they arrived or exited the mailbox and never bothered to move or copy messages into other folders. The three-folder mailbox (Inbox, Sent Items, Trash) became the de facto method of operation for new users as they took habits learned from consumer email systems into the workplace.

In terms of Exchange, a number of recent technical advances have contributed to easier piling. First, email servers support higher item counts in folders before performance degrades. All email products have a number of very important folders that are the focus for the majority of user operations. Exchange refers to them as “critical folders”; they include the Inbox, Sent Items, and Deleted Items. The guidance offered by Microsoft for the maximum number of items in a critical folder has changed dramatically over the past three versions (see this EHLO post for a PowerShell script to locate folders with high item counts – a simpler one-liner in the same spirit appears a little further down):

  • Exchange 2003: 5,000 items
  • Exchange 2007: 20,000 items
  • Exchange 2010: 100,000 items

The thought of one hundred thousand messages in the Inbox is a horrific prospect for a filer and a triumph for a piler!
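
As promised above, here’s a rough-and-ready Exchange Management Shell one-liner to spot a piler’s problem folders in a single mailbox. The mailbox name and the 20,000-item threshold are invented examples, not recommendations:

# Sketch only: list folders in one mailbox that hold more than 20,000 items
Get-MailboxFolderStatistics -Identity 'John.Doe@contoso.com' | Where-Object {$_.ItemsInFolder -gt 20000} | Sort-Object ItemsInFolder -Descending | Format-Table Name, ItemsInFolder -AutoSize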

The main reason driving the big increase in the number of items that Exchange supports in a critical folder is the set of changes made to the Exchange database schema. Microsoft tweaked the schema in Exchange 2007 and carried out the first major overhaul since 1996 in Exchange 2010. It’s actually surprising that it took Microsoft so long to improve matters, as the way people work with email has changed so dramatically over the 15 years that Exchange has been available. When we deployed Exchange 4.0 in 1996, servers typically supported fewer than 200 users, mailboxes had quotas of between 50MB and 100MB, and a heavy day saw the arrival of 15-20 messages. And of course, Microsoft hadn’t yet embraced the Internet, so Exchange operated in a world where SMTP was just one of the protocols to which Exchange could connect. X.400 was far more important, if only because the Message Transfer Agent (MTA) was built on X.400.

The second major factor is better search facilities. Outlook clients operating in cached Exchange mode can use Windows Desktop Search (WDS) to search items held in the offline store (OST) and of course, Exchange has its own content indexing facility to make sure that clients that don’t perform local searches (such as Outlook Web App) can find information easily. Google always emphasized easy searching as a big feature of Gmail and it’s fair to say that all major email servers available today see search as a fundamental part of their feature set. Indeed, Exchange 2010 indexes content as mailboxes are moved from database to database to ensure that searches are possible as soon as the attributes are flipped in Active Directory to point the client to the new mailbox location.
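
If you want to check that content indexing is actually switched on and working for your own databases, a couple of standard Exchange 2010 cmdlets give a quick sanity check (the mailbox named here is an invented example):

# Sketch only: confirm indexing is enabled per database, then test search for one mailbox
Get-MailboxDatabase | Format-Table Name, IndexEnabled -AutoSize
Test-ExchangeSearch -Identity 'John.Doe@contoso.com'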

The third influence is a combination of better laptop storage and more intelligent clients. Laptop disks have become larger over the years but never really improved in speed. This wasn’t a huge issue when mailboxes were small as the resulting OSTs were also reasonably sized. The OST file structure isn’t particularly efficient and performance traditionally suffered as soon as file sizes went over 1.5GB, or roughly the size of an OST equivalent to a 1GB mailbox (the overhead of the OST format can be up to 50% of the online mailbox). Indeed, the maximum file size of an older ANSI-format OST file is 2GB whereas newer Unicode-format OST files can be a maximum of 50GB (see TechNet for more recommendations about deploying OST files with Exchange). If you feel that you need to configure larger PST or OST files for use with Outlook 2010, you can follow the instructions contained in this KB article.
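
As a sketch of what that KB article covers: Outlook 2010 takes its Unicode OST/PST size caps from per-user registry values expressed in megabytes. The registry path and value names below are quoted from memory and the 60GB/55GB figures are arbitrary examples, so check the KB article itself before relying on any of this:

# Sketch only: raise the Unicode OST/PST hard limit and warning threshold (values in MB)
New-Item -Path 'HKCU:\Software\Policies\Microsoft\Office\14.0\Outlook\PST' -Force
New-ItemProperty -Path 'HKCU:\Software\Policies\Microsoft\Office\14.0\Outlook\PST' -Name 'MaxLargeFileSize' -Value 61440 -PropertyType DWord -Force
New-ItemProperty -Path 'HKCU:\Software\Policies\Microsoft\Office\14.0\Outlook\PST' -Name 'WarnLargeFileSize' -Value 56320 -PropertyType DWord -Force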

Given the tendency towards large mailboxes, it’s obvious that some improvements needed to be made else we would all have been soon struggling with performance after the initial delight of receiving a 5GB or 10GB mailbox. With these OST sizes even a 7200 rpm laptop disk will experience some delays, so there’s no substitute for a solid state disk (SSD) if you’re interested in big OST files. The good news is that the price of SSD drives has been coming down so they are now reasonably cost-efficient. In fact, given that the real price of laptops has decreased dramatically while performance has increased over the last decade, it makes sense to spend a little extra money on a fast drive to be able to exploit the true potential of the computer.

Outlook 2010 is better able to deal with large OSTs too. I don’t have any evidence that would stand up in court to prove this point, but it is my experience that the 64-bit version of Outlook 2010 running on Windows 7 provides much smoother performance with large OSTs than its Outlook 2007 or Outlook 2003 (both 32-bit) predecessors.

In summary, it doesn’t really matter now whether you are a piler or a filer as today’s technology will hide the flaws of either approach. Instead of expecting users to behave in a certain manner, attention is now focused on how to manage the information that users accumulate in an intelligent manner. That’s why the ability for administrators to set policies that control how content is managed automatically by servers is important; the really interesting thing for me is to see how more automation can be incorporated into software over the next few years to help people organize, locate, and utilize their information better. Auto-tagging seemed like a great example of what I’m looking forward to but unfortunately its implementation was flawed in Exchange 2010 RTM and Microsoft withdrew the code in Exchange 2010 SP1. Hopefully, auto-tagging and other software will appear in future versions to help us all keep our ever-swelling mailboxes under some sort of control!

– Tony

For more information about the changes made to the database schema in Exchange 2010, see chapter 7 of Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk. The book is also available in a Kindle edition.

Posted in Email, Exchange, Exchange 2010, Office 365 | Tagged , , , | Leave a comment

Hotmail’s ActiveSync support


For as long as I have been using an iPhone, I have been frustrated by the inability of Hotmail to support a modern email access protocol. Hotmail seemed to be fixed in the mists of time at a point where POP3 was deemed to be sufficient to meet the needs of users. And while POP3 does a reasonable job of allowing clients to access an inbox, that’s about the height of its capabilities. In particular, synchronization of messages read or deleted on the iPhone seemed to be beyond POP3. The result was that I had to process messages twice – once when they appeared on the iPhone, and again later on to clean up my mailbox.

IMAP4 support would have been a solution. Other email systems such as Gmail use IMAP4 to good effect to support clients like the iPhone, but there was no sign that Hotmail would ever support this protocol, so I lived with the frustration.

And then I found out about Hotmail’s support of ActiveSync. This is not news as Microsoft made ActiveSync available for Hotmail in August 2010, but I must have been asleep when that particular announcement crossed the wires. ActiveSync is usually associated with Exchange but in essence it’s just an email access protocol that works over HTTP. ActiveSync is a more modern protocol than IMAP4 and Microsoft has been very successful in licensing the protocol to companies such as Apple, Google, Nokia, and Motorola (amongst many others), so enabling ActiveSync support for Hotmail allows clients running on any ActiveSync device to download messages, contacts, and calendar information. Filters and searches work too. For more information, you can view a video about ActiveSync support for Hotmail.

In any case, configuring ActiveSync to access Hotmail on an iPad or iPhone is easy.

  1. Go to Settings
  2. Select Mail, Contacts, and Calendar
  3. Select Add Account
  4. Select “Microsoft Exchange” as the new account type (this really means that the device will use ActiveSync as its synchronization protocol rather than attempting to get to Hotmail via Exchange)
  5. Enter your email address in the Email field. This will be something like “John.Doe@Hotmail.com”
  6. Leave the Domain field blank.
  7. Enter your email address in the Username field.
  8. Enter your Hotmail password in the Password field.
  9. Enter a description (something like “Hotmail via ActiveSync”).
  10. Click Next. The device will attempt to connect and should come back with a request for you to enter a server name. Enter “m.hotmail.com” and click Next.
  11. The device will verify the connection and then ask what you want to synchronize. The options are Mail, Contacts, and Calendar. The default is to synchronize Mail only as you might create duplicates if you synchronize the other two data types with Outlook via iTunes.
  12. Click Save to complete adding the account.
  13. Go back to Mail and check your Inbox. The device will make an ActiveSync connection and download any new messages. Older messages are not synchronized.

And that’s all there is to it. Nice, simple, effective, and liberation from POP3. Pity it took me so long to catch up with the news…

– Tony

Posted in Email, Technology | Tagged , | 2 Comments

Musing on searching


The publication of the post by the Exchange team to reveal the secret registry setting that allows multi-mailbox searches to interrogate more than 25,000 mailboxes got me thinking. First, I thought that the era of registry hacks was over for Exchange. But on reflection I don’t think that we are on our way back to the bad old days of Exchange 2000 and Exchange 2003 when Microsoft published copious registry hacks to influence the way that the software operated and figuring out just what had been changed on a server became a real problem for support professionals.

Of course, these weren’t the first versions of the product to use secret registry settings and the standard was set by the famous “Squeaky Lobster” hack that you had to input to reveal advanced performance counters on an Exchange 5.5 server. Exchange 2000 introduced a huge variety of new features. The administration interface lagged somewhat and, unlike today, the developers were not allowed to introduce new UI in a service pack. So they enabled features and tweaks through registry hacks. The disease rapidly spread throughout Exchange to a point where I doubt that even the most devoted Exchange nerd could keep up.

The regime of a new Vice President of the Exchange group changed things and we don’t have so many registry settings to tweak today. Of course, you can argue that registry settings have been replaced by obtuse XML-formatted configuration files such as those used by the Mailbox Replication Service (MRS) or the transport service. This is true, and XML configuration files suffer from the same fatal flaw as registry settings in terms of being server-specific and not friendly to the needs of a distributed environment. They also suffer from the problem of language and debugging in that it is all too easy to make a mistake when you edit one of Exchange’s configuration files. The product doesn’t include an intelligent editor for these files, possibly because it’s the developers’ way of saying “hands off – don’t edit this”, so most administrators resort to Notepad and make changes on the “suck it and see” principle. Sounds very much like editing the registry…
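
To give a flavour of what these files contain, here’s a fragment in the style of the MRS configuration file (MSExchangeMailboxReplication.exe.config). I’m quoting the attribute names from memory purely to illustrate the hand-editing problem, so treat the fragment as illustrative rather than as a reference:

<!-- Illustrative fragment only: MRS move-throttling settings expressed as XML attributes -->
<MRSConfiguration
    MaxActiveMovesPerSourceMDB="5"
    MaxActiveMovesPerTargetMDB="2"
    MaxActiveMovesPerSourceServer="50" />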

In any case, returning back to multi-mailbox discovery searches, it’s a nice thing to know that administrators in large organizations can bring servers to their knees by launching searches that span 100,000 mailboxes and gather tens of gigabytes of data, possibly even dragging all that data across the network to the default discovery mailbox that’s still located on the first Exchange 2010 mailbox server installed into the organization. Clearly not a good thing to do and indicative of the need for planning before the deployment and use of multi-mailbox searches.

What other issues might affect these searches? Here are a number of tips that you might like to bear in mind.

  • The UI for discovery searches is not revealed by the Exchange Control Panel (ECP) unless your account holds the Discovery Management RBAC role. Obvious, but often overlooked… There’s no way to execute searches from the Exchange Management Console (EMC), so this is one piece of functionality that is unique to ECP. If you don’t like using ECP, you can manage searches from the shell: New-MailboxSearch creates a new search, Get-MailboxSearch returns details of searches, Set-MailboxSearch updates search criteria, Start-MailboxSearch starts a search, and Remove-MailboxSearch removes the search criteria from the arbitration mailbox (see below); a sample appears later in this post.
  • Mailbox searches depend on the content indexes that Exchange populates as items arrive into mailbox databases. Even though Exchange 2007 supports content indexes, you can only search data hosted on Exchange 2010 mailbox servers. This means that you have to complete your migration from Exchange 2003 or Exchange 2007 before discovery searches are really feasible. Of course, you can short-circuit the process by moving the mailboxes that are involved in a discovery action to Exchange 2010 servers.
  • Discovery searches can find items in the Recoverable Items folder (aka the “dumpster”) or those on retention or legal hold because these items are held in folders that are invisible to users but are indexed.
  • Exchange can search message properties (for example, subject, addressee list) very effectively because these data are available in the mailbox databases. Attachments have to be made discoverable to Exchange before their content can be incorporated into the indexes. Microsoft makes the Office 2010 filter pack available to allow you to install the IFilters necessary to index Word, Excel, PowerPoint, Visio, and so on and the pack must be installed on all mailbox servers (for content indexing) and transport servers (to allow transport rules to examine content in en-route messages). These filters cover the vast bulk of documents circulating in corporate environments with the glaring exception of PDF. Adobe has an IFilter available for PDF but some have reported better results with the version available from Foxit Software. You know you have problems with IFilters when searches report a high number of unsearchable items (the properties of these items will be searched – the item is unsearchable if its content is inaccessible). Of course, in this context, a high number is linked to the total number of items searched. If you search 10,000 mailboxes it’s probably acceptable to have 250 unsearchable items (but still a good idea to understand what these items are) while 2,500 unsearchable items might be problematic.
  • Determining the effectiveness of your search parameters is not easy. Exchange will report the mailboxes that it scanned and the number of hits that it generated but it’s hard to understand whether you have found the desired information until you look through the captured items. Clearly you need to experiment with search criteria (Exchange uses the AQS syntax for searches so you can construct very complex and precise searches) to home in on the right material and it may take several attempts until you know you have the right search. Exchange allows you to test search criteria without capturing any data and that’s absolutely the way to proceed until you know you’re looking in the right place. After that, you can decide to capture either deduplicated or all data. A deduplicated search captures the first instance of an item no matter how many mailboxes in which it is found. An “all-in” search captures each and every instance of an item. Obviously, it’s the nature of email that many items occur in multiple mailboxes so a deduplicated search (introduced in Exchange 2010 SP1) captures far less data.
  • As mentioned above, the first Exchange 2010 mailbox server installed into the organization hosts the default discovery mailbox. The mailbox is disabled but visible through the admin tools with the name “Discovery Search Mailbox”. This mailbox is used to store the copies of items recovered by searches so it has a large 50GB quota. It can be moved to another server if appropriate or you can create additional search mailboxes for use with specific investigations. To create a new discovery mailbox, use a command like this:

New-Mailbox -Name 'Discovery Mailbox for ABC Investigation' -Discovery -UserPrincipalName 'ABCDiscoveryMailbox@contoso.com' -Database 'MB2'

Note that I’m careful to assign the new discovery mailbox in a specific mailbox database. Ideally, this database should be close (in network terms) to the databases that contain the mailboxes that will be searched to minimize the amount of network traffic generated when discovered items are captured and stored in the discovery mailbox. Remember that if the discovery mailbox is in a database that has copies, Exchange will need to replicate the search results to all servers that host database copies, so a big search can have a very real impact on many aspects of system performance.

New discovery mailboxes are immediately available as a target for search results but they are not automatically accessible to the members of the Discovery Management role group. This is by design as the intention is to allow for the separation between the work done by the people who create and execute searches and those who review the gathered results. You have to specifically change the permissions on the newly-added discovery mailbox to make it available to those who have the authority to review the material captured there. Discovery searches can turn up huge masses of confidential business and personal data so it’s obviously critical to keep close control over the users who can access discovery mailboxes. It’s also a good idea to agree guidelines with your legal advisors as to how long the results of discovery searches should be kept as obviously you don’t want confidential material being kept for longer than it should be.
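
To pull the threads together, here’s a hedged sketch of granting a reviewer access to the discovery mailbox created above and then defining and starting a search that targets it. The names, query, and dates are invented for illustration; the cmdlets are the standard Exchange 2010 ones mentioned earlier.

# Sketch only: let a named reviewer open the discovery mailbox
Add-MailboxPermission -Identity 'Discovery Mailbox for ABC Investigation' -User 'Jane.Reviewer@contoso.com' -AccessRights FullAccess -InheritanceType All

# Sketch only: define a search against a group of mailboxes, then start it
New-MailboxSearch -Name 'ABC Investigation' -SourceMailboxes 'Sales Department' -TargetMailbox 'Discovery Mailbox for ABC Investigation' -SearchQuery '"Project ABC" AND contract' -StartDate '01/01/2011' -EndDate '06/30/2011'
Start-MailboxSearch -Identity 'ABC Investigation'

Get-MailboxSearch will then report progress and the number of hits as the search runs.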

Exchange 2010 stores the metadata (the parameters used to describe the search) for searches in a hidden system mailbox called “SystemMailbox{e0dc1c29-89c3-4034-b678-e6c29d823ed9}”. Thankfully, you won’t have to type that name too often. You can see this mailbox listed with this command:

Get-Mailbox -Arbitration

Overall, I like the structure that Microsoft has established in Exchange 2010 for multi-mailbox searches. I don’t like the tools available to analyze the effectiveness of searches or to review the results that are captured in the discovery mailboxes. Hopefully Microsoft will improve matters in future releases.

– Tony

For more information about multi-mailbox discovery searches, read chapter 15 of Microsoft Exchange Server 2010 Inside Out (pages 1033-1049), also available at Amazon.co.uk. The book is also available in a Kindle edition

Posted in Exchange 2010 | Tagged , , , , , , , , , | 5 Comments

Thoughts on Microsoft Certified Master accreditation


On May 16, Microsoft announced that the exams that lead to accreditation as a Microsoft Certified Master (MCM) for Exchange 2010 will be available in centers worldwide. This development, similar to the announcement about MCM for SQL made in late 2010, interests me from multiple perspectives – as a commentator on all aspects of Exchange, as someone who informs others about the technology through seminars and writing, and as someone who was involved with Microsoft in the early days of Exchange accreditation.

Some history is appropriate to set context. The training that Microsoft made available for the first generation of Exchange (4.0 to 5.5) was truly awful. It was very much of the “click, click, click, complete – and have a nice day” variety as it never attempted to bring the trainee much past the surface veneer of the product’s capabilities. The training was appropriate for people who wanted to run a 20-user server but that was about the limit of its usefulness. Given that a large part of the target base ran PC LAN-based systems such as cc:Mail and MS-Mail, the focus for training wasn’t altogether surprising.

Things got even worse when Microsoft released Exchange 2000 in 2000. This was the first version to be “enterprise ready” and it had a much tighter connection to Windows 2000 with its dependency on the Active Directory. More stuff could go wrong and did go wrong but the available training from Microsoft remained weak in terms of the enterprise. This forced companies such as Compaq to build their own training to equip consultants with sufficient knowledge to take on the challenge of large-scale deployments. I was intimately involved in the project to deliver Windows 2000 and Exchange 2000 academies, which Compaq also made available to its customers.

To their credit, the Exchange development group knew that things were not good. It was in their interest to increase the amount of knowledge in the field because it stopped customers and consultants from doing avoidable silly things that later developed into support problems that were expensive to fix. Discussions began between Compaq and Microsoft as to how to improve matters. Microsoft planned to start the famous “Ranger” training program to build a team of very high quality troubleshooters who could work on the most complex of deployments. At that time, Compaq declined to become involved with Ranger. There were many reasons for this, including the fact that we had invested very heavily in training Compaq consultants and had developed a set of best practice and deployment aids that were unique in the industry. Compaq didn’t see that it was in our commercial interest to share this information with competitors such as Avanade, IBM, and Microsoft Consulting Services. Remember, this was at a time when there wasn’t such a massive amount of information available about technology on the Internet so consulting companies guarded their information jealously.

Microsoft proceeded to roll out their Ranger training program from 2002 onwards as part of the preparation for the introduction of Exchange 2003 (code name “Titanium”). This program was tough and in-depth and required successful candidates to come to Redmond for a four-week immersion in Exchange before passing a series of written and practical exams. The cost of the course was an issue too, even for large companies, but it was manageable for those who chose to attend because they saw the value in becoming one of the few who attained Ranger status.

The original information received by Compaq (or more correctly, HP following the merger in May 2002) in June 2002 described the Ranger program as follows:

Ranger Summary:

The goal of the Exchange Ranger Program is to increase Exchange product knowledge in the field by bringing Exchange field and partner experts to Redmond for comprehensive Exchange training. The Ranger program is a multi-faceted initiative to help increase deployment success and customer satisfaction, while decreasing PSS support costs.

Five rotations will go through the training prior to RTM.

  • Train 12 consultants per rotation to a very high level of competence with Exchange 2003
  • Final Qualification is a lab-based exam in which consultants will have to solve 11 common customer problems and one rare problem in 3-4 hours.
  • Follow-through with weekly conference calls and visits to Redmond to track effectiveness and create a closed feedback loop.

The training consists of 4 phases:

Selection

  • Phone Screen
  • Interviews
  • Orientation

Qualification

  • 4 Weeks Intensive Training
  • 7am-7pm 6 days/week
  • end-to-end messaging
  • 1 test/lab exam per week
  • Failing 2 tests=washout

  • Week 1: 6 days, 7am-7pm (Sunday off)
  • Week 2: 4 days, 7am-7pm (Friday, Saturday, and Sunday off)
  • Week 3: 6 days, 7am-7pm (Sunday off)
  • Week 4: 6 days, 7am-7pm

Daily Schedule:

  • 4 hours class
  • 5 hours lab
  • 3 hours breaks and eating

Labs are open on weekends

You’ll notice that time for eating is thankfully built into the schedule! Although the current training events might not be so onerous, all the evidence suggests that the same focus exists on presenting a mass of in-depth quality material that attendees have to assimilate during their stay before being tested to achieve accreditation. Some have compared the experience to the proverbial drinking from a fire hose, except that the drinking lasts a lot longer than normal.

Attending an MCM rotation isn’t an easy decision to make. Apart from the hefty training fee charged by Microsoft (a significant barrier to entry for some unless their company agreed to pay) plus costs for travel, accommodation, and “lost opportunity” (billing hours), forcing people to come to a single worldwide location for an extended period is pretty disruptive on home life. On the flip side, it’s also disruptive for the Exchange group as they have to provide program managers and other subject matter experts to deliver the training. But even more important, it limits the number of people who can go through the training and hope to achieve accreditation. In short, there was no good way to scale up the accreditation program to deal with the likely demand as the number of Exchange deployments grew in the market.

No one wants to weaken the worth of an accreditation by reducing standards. I think it was for this reason that Microsoft chose to keep the program focused around the product immersion in Redmond as it evolved into MCM. The move away from the original Ranger concept was part of the effort to more closely align the original initiative by the Exchange development group with the overall training architecture developed by Microsoft Learning. Like most education programs, the architecture had a broad base of people who passed exams in various technologies, the MCSEs, then those who possessed very focused and high-level accreditation in specific technologies (MCMs), with the top of the pyramid being those who successfully passed through the Microsoft Certified Architect (MCA) process.

These developments have occurred since Microsoft introduced the MCA accreditation in 2005. However, MCM (Exchange), which I think switched over from Ranger sometime in 2008, continued to couple the in-depth training with the accreditation and the number of people who have achieved the accreditation remains small. I don’t have the exact numbers but I imagine that they are in the small hundreds, which isn’t a lot given the hundreds of millions of Exchange mailboxes in use daily.

The big news is therefore that people who consider themselves experts in Exchange 2010 can now put themselves forward for examination at test centers around the world to see whether they can attain the status of MCM (Exchange). I think this is a good thing as it’s always positive when barriers are removed from progress. However, I wonder whether many will be successful at taking the MCM (Exchange) tests without having the benefit of the three-week immersion in Redmond. There’s a natural suspicion that the tests might be focused on the content taught in Redmond and might not therefore be as viable for those who have not been immersed. There’s also the fact that all of us have different strengths and our work tends to favour our strengths – being forced into subjects during three weeks of training can help close those gaps. For example, I tend not to do very much with Exchange 2010 Unified Messaging (UM) because I don’t have a background in telephony and the subject has never interested me all that much. On the other hand, folks such as Paul Robichaux (who teaches potential MCMs on UM), live and breathe this stuff. I know I couldn’t pass an exam on Exchange 2010 UM without investing time and effort to swot up on the material, so I’ll probably never be an MCM.

Is an MCM-qualified individual an advantage to a project? Probably – as long as they gel with the other members of the project team and don’t attempt to force their “MCM view of the world” down other peoples’ throats. Remember that the value of a consultant is limited to 50% of the overall project; the other 50% comes from people who know the company, are closer to the business objectives, and understand how office politics work.

Is holding MCM accreditation a positive step in someone’s career? Again the answer is probably. In this case, it depends whether the accreditation is considered as a stepping stone to future development rather than the zenith of technical achievement. Technology has a nasty habit of evolving and it evolves faster now than at any time in history. The other trend to keep in mind is that best practice is only “best” at the time when it is formed; practice develops and mutates over time as experience with technology is gained, so a statement that everything is done according to “best practice” deserves investigation to determine what the best practice is based upon.

Overall, I recommend the MCM (Exchange) program to both companies and individuals as a source of well-founded knowledge and as a good thing for personal development. We’ll have to see how successful people are in their attempts to attain the accreditation without traveling to Redmond – time will tell!

– Tony

Posted in Exchange 2010, Training | Tagged , | 4 Comments

HP releases videos about E5000 messaging system


Last February, Paul Robichaux, Brian Desmond and I were invited by HP to come to Cupertino to look over the HP E5000 messaging system. Over two days we had the chance to quiz Dean Steadman, the HP product manager, and Jeff Mealiffe, from the Exchange development group (Jeff specializes in topics such as performance and scalability), about how HP and Microsoft had worked together to build the first appliance-like server for Exchange 2010, aka “DAG in a box”. You can read all about the E5000 plus how we went about making some videos in this post. In any case, HP has now released a series of six videos that they have produced from the raw footage that they shot in February. It’s taken a long time to create these masterpieces, a delay that is surely accounted for by the need to make Paul, Brian, and myself look pretty. Or at least, prettier than we are in real life. This was a real challenge for some, despite the best efforts of a professional make-up artist.

Paul is made up with little perceived effect

The six videos are:

  • Introduction to HP E5000 Hardware, a walk through of the hardware engineering that’s incorporated into the E5000 to make it as resilient as possible.
  • HP E5000: Complete and Optimized: a roundtable discussion of why HP and Microsoft felt that the time was right to build an appliance for Exchange and how the two companies collaborated to make the E5000 happen.
  • HP E5000: Simple and Cost Efficient, describing the support setup for the E5000 (essentially, HP is the single point of contact for the hardware, Windows, and Exchange 2010).
  • HP E5000: Resilient/Highly Available, figuring out whether it’s possible to really build an appliance around a Windows server application like Exchange that still needs some tender loving care.
  • HP E5000: Large Low Cost Mailboxes. A roundtable discussion about whether people really need more than a standard 100MB mailbox (it was good enough for my father, it should be good enough for me…)
  • HP E5000: Installation & Startup, looking at how the E5000 is configured, including the wizard that HP has built to remove as much of the mundane processing as possible and make installations really easy.

Apart from the MVP series, HP has released other videos about the E5000. I thought that the one featuring Dean Steadman, the product manager for the E5000, was reasonably interesting. There are other more marketing-like videos available on YouTube. HP has also used selected cuts from the videos at events such as Spring Connections in Orlando, when I had the opportunity to see myself frighten away many attendees who ventured near the HP stand. Such is life.

Frightening people at Connections

Have fun with the videos!

– Tony

Posted in Exchange, Exchange 2010 | Tagged , , , , | 3 Comments