Visiting Omaha Beach and some Wi-Fi woes


Omaha Beach

Those of you who know me will realize that I am a bit of a history buff, so it should come as no surprise that the last night on a recent trip to France found us in Vierville-sur-Mer en route to the Cherbourg ferry. Vierville-sur-Mer is a very small village, but it’s famous (in some places) because of its location on Omaha Beach, possibly the most famous of the five assault D-Day beaches in 1944. Given the recent 70th anniversary of the D-Day landings, it seemed like an appropriate place to stop.

Large parts of Normandy are rural and the area around Omaha Beach is lacking in good hotels. The British and Canadian beaches (Gold, Juno, and Sword) are better served because of the larger villages and towns within easy reach, including Bayeux and Caen. By comparison, the U.S. Army landed in the countryside, and so it remains today.

We were happy to end up in the Hotel du Casino, which occupies a spectacular position overlooking the western end of Omaha Beach. Looking to the west, you can see Pointe du Hoc, while the whole beach unfolds to the east toward the American cemetery at Colleville-sur-Mer.

Hotel du Casino

The U.S. National Guard memorial is just outside the door of the hotel and is built on top of a “Type 677” casemate containing an 88mm cannon that did horrible damage on D-Day. One of the five “draws” (D-1) that led through the cliffs fronting the beach goes past the door of the hotel up to the village proper. Here stand many stone houses that were fought over on D-Day and a 12th-century Norman stone church whose bell tower, destroyed that afternoon, was rebuilt after the war.

Vierville-sur-Mer church tower

Vierville is the location of the “Dog” and “Charlie” landings and was the most heavily defended part of Omaha Beach. The devastation depicted in the opening of the film “Saving Private Ryan” was based on the experience of the 29th Infantry Division and the huge casualties that it incurred in the opening hours of June 6. The film wasn’t shot on Omaha Beach; Curracloe Beach in County Wexford, Ireland, stood in for it. Having walked Curracloe too, I can report that the two beaches are indeed very similar.

The book “Omaha Beach: D-Day, June 6, 1944” by Joseph Balkoski is a great resource for explaining the sequence of events as they happened on D-Day. An equally good companion volume covers the landings at Utah Beach. Looking at the surviving fortifications today, it’s easy to realize how the enfilade fire from machine guns and artillery was able to wreak havoc on the landing troops as they made their way across soft sand to the relative safety of the shale and sea wall at the high tide mark.

Another thing that you soon notice is the speed of the advancing tide. A decision had been taken by the Allies to land at low tide so that the landing craft and other small boats that took troops in would be able to see the mined obstacles set by the Germans. But the tide here rises terrifically quickly and soon advances to the sea wall, meaning that wounded soldiers had to be rescued and moved or left to drown. It’s a sobering thought.

The hotel was fine except that its Wi-Fi system stubbornly refused to work, which meant in turn that I got no work done. What made it worse was the authentication system that the hotel used to issue usernames and passcodes to connect to the non-functioning Wi-Fi system. I couldn’t help wondering exactly how many passers-by would really attempt to steal their bandwidth and whether it wouldn’t just be easier if an open Wi-Fi system were offered. In any case, it stayed down and I stayed off the air.

We returned to Ireland on the MV Oscar Wilde on the Irish Ferries route between Cherbourg and Rosslare. Weirdly, the Wi-Fi system on the ferry also refused to co-operate and further impacted my ability to work. I ended up trying to help the chief engineer and chief purser to figure out what the problem might be. The system used on the ferry is designed for maritime vessels and controlled via satellite from Norway, so local tweaking of servers and the like was out of the question. The most I could do was identify its total failure to allocate IP addresses or a DNS server to connecting clients. Interestingly, one of the crew was able to connect and I was able to clone their IP settings and connect my PC. The network worked but it just wasn’t servicing clients. Isn’t that always the way?
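For the curious, the kind of quick diagnosis I ran can be sketched with a few standard Windows PowerShell networking cmdlets (available in Windows 8 and later); the interface alias and addresses below are purely illustrative, not the ferry’s actual settings:

```powershell
# Inspect what (if anything) DHCP handed out. No IPv4 address and no
# DNS server listed here is what pointed to a server-side failure.
Get-NetIPConfiguration -InterfaceAlias "Wi-Fi"

# Clone a working client's settings as static values
# (illustrative addresses only)
New-NetIPAddress -InterfaceAlias "Wi-Fi" -IPAddress 192.168.10.50 `
    -PrefixLength 24 -DefaultGateway 192.168.10.1
Set-DnsClientServerAddress -InterfaceAlias "Wi-Fi" `
    -ServerAddresses 192.168.10.1

# Confirm that name resolution and connectivity now work
Resolve-DnsName www.irishferries.com
Test-NetConnection www.irishferries.com -Port 443
```

Cloning another client’s IP address is, of course, only a diagnostic trick: two clients sharing one address will conflict, so it proves the network path works but isn’t a fix.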

Follow Tony @12Knocksinna

Posted in Travel | Tagged , , | Leave a comment

Dealing with Inbox Clutter – with or without vacations


The news that Daimler has decided to implement software that deletes email received by users when they are on vacation might make other companies believe that such a course is a very good thing. After all, you’re not supposed to be thinking about work when you are on vacation. Right?

I’m not so sure. But then again, I am someone who attempts to operate a “zero latency inbox.” In other words, email is processed as soon as possible after it arrives by being assigned to a folder for later processing, ignored, or deleted. The idea is to prevent an accumulation of messages in the Inbox. I admit that this technique was much easier to use in the days when not quite so much email circulated, but it’s something that I have done for thirty-odd years now.

Arriving back from vacation to find hundreds or thousands of messages demanding attention in an Inbox can be depressing. In fact, it can be hard to know where to start working through the backlog. Which messages are most important? Which contain information about problems that have long since been solved, or that has been superseded by subsequent developments, or notices of meetings and other events that happened in the interim? Not to mention the detritus found in all mailboxes: auto-replies, out-of-office messages, service items, and so on.

Microsoft is attempting to solve the problem in two ways. First, they have the People View feature that is now implemented in Outlook Web App (OWA) for Exchange Online (Office 365), but not yet for on-premises Exchange. The idea is that Exchange identifies the correspondents that are most important to a user and then filters messages that arrive from those people so that they show up in a set of special views displayed by OWA. Behind the scenes, an agent analyses inbound traffic to the Inbox and figures out the most important correspondents on a regular basis. Thus, when you arrive back from vacation, OWA can show you neatly sorted lists of messages that should be important to you.

I’ve been using People View for three months or so and find it an interesting and worthwhile feature that will assist people who receive a lot of email. However, I consider the “Clutter” feature much more interesting because it will discard a lot of the rubbish (the clutter) that obscures really important messages. Clutter does this by using a mailbox assistant to learn about your email habits. For example, if you always respond to messages from “Jane” and tend to ignore those that come from “New company announcements”, then Clutter knows that Jane is important to you and that her messages should be prominently displayed whereas it’s safe to shuttle company announcements off into a holding folder. Those messages can still be accessed but they are removed from view until required.

Of course, when People View and Clutter are operating in tandem, they will be able to provide a filtered list of important messages from important correspondents and hopefully reduce the hundreds of messages that arrive during a vacation (or even overnight) into more digestible chunks of prioritized communications.

Clutter won’t be with us until the end of 2014 and the signs are that this will be a cloud feature that might not make it to an on-premises version anytime soon, for such are the complexities involved in the machine learning algorithms that attempt to make sense of user behaviour.

Neither feature will reduce the stress that some people feel when their mobile devices birr, whirr, or buzz to announce the arrival of new messages. Turning off the device is one solution but that can be hard when a single mobile device is used as the fulcrum of private and business communications. The temptation always exists to have a quick look at what’s happening back at the office, the latest state of inter-departmental politics, who is stabbing whom in the back (in the nicest possible way), or what deep and dark plot is being hatched against competitors. Some business email is actually fun and informative, but most can be depressing when you’re supposed to be on vacation.

I’m not sure that Daimler is right to suppress all email for holidaying employees. I’m sure that they made the decision in the best possible way after some in-depth studies, employee questionnaires, and so on. But I wonder what will happen the first time some executive misses an important opportunity because software quashed their email…

Follow Tony @12Knocksinna

Posted in Cloud, Email, Exchange, Office 365, Outlook | Tagged , , , , , , , , | Leave a comment

The need for a cloud Plan B


For most important decisions that you take in life it’s wise to have a Plan B. Something that you can do to reverse course should the unexpected occur and you need to revert to a previous position. And so it is with the cloud, an option that is being embraced with increasing conviction by many companies for different reasons, with perceived cost savings being high on that list.

But, I wonder, as I see companies rush to move workload to Office 365 or Google Apps, or applications to Azure or Amazon Web Services, what is their Plan B? What, for instance, will they do if ISPs struggle to cope with an ever-increasing volume of Internet traffic? I don’t think that the Internet is “full” or that a shortage of IPv4 addresses will cause real problems, at least in the short term, but it is conceivable that network infrastructures could come under real strain in different parts of the world over the next few years.

And then there’s support, which remains the Achilles heel of all cloud services. From a business perspective, I understand why customers are forced through the interminable question and answer routine from first-tier support. From a technical perspective, I understand why it is necessary to be so careful to avoid making any change within a datacenter for fear of knock-on effects on other tenants. But I fear that the combination creates a situation where customers are frustrated at the speed of problem resolution and the perceived quality of the interaction with support personnel.

I’m sure that the cloud support personnel who work the phones are frustrated too. They are constrained about what they can actually do, even when they work at the second tier and handle escalated calls. Because every tenant is unknown to the support agent, each call begins with an awful lot of information gathering, much of which seems unnecessary to the customer. And if that information doesn’t reveal a solution in a problem database, some inspired guesswork is all that can be done before a call eventually winds its way through to engineering.

Support doesn’t always flow smoothly in an on-premises environment, but at least all of those involved know the details of the environment and are aware of the background and history of the systems that are involved. A degree of closeness exists with on-premises debugging that simply cannot happen when dealing with cloud support, which can then lead to frustration and dissatisfaction.

The continuing debate about government snooping and what traffic three-letter agencies might or might not be intercepting and examining is also not helping to increase confidence and trust in cloud systems. Microsoft has just issued a document titled “10 reasons to trust Microsoft in the cloud” where the largest section is devoted to the strong stance they have taken over government access to data. I guess this proves how concerned Microsoft is on this point.

So it’s possible that despite the delivery of a solid SLA over the last few years, the promise of ever-changing or “evergreen” technology, and the flexibility of cloud systems in terms of their ability to deal with workload peaks, some companies will decide that their Plan B is to revert to on-premises servers.

The question then is: how do you move off the cloud? Microsoft and Google provide all sorts of help to move work to the cloud but little is said (naturally) about the reverse. In fact, Microsoft has an advantage in this area because of the work that they have done in Exchange 2010 and Exchange 2013 to facilitate hybrid connectivity. Moving email back on-premises is largely a matter of transferring mailboxes back, ensuring that mail flow is handled by on-premises servers, and then breaking the connection.
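In a hybrid configuration, an off-boarding move is essentially the same remote move request used to go to the cloud, run in the opposite direction. A minimal sketch, assuming a working hybrid setup and run from a PowerShell session connected to Exchange Online (server names, database, and mailbox are illustrative):

```powershell
# Credentials for an account with rights in the on-premises organization
$cred = Get-Credential

# Pull a cloud mailbox back to an on-premises mailbox database
New-MoveRequest -Identity "jane@contoso.com" -Outbound `
    -RemoteTargetDatabase "DB01" `
    -RemoteHostName "hybrid.contoso.com" `
    -RemoteCredential $cred `
    -TargetDeliveryDomain "contoso.com"

# Track progress of all pending moves
Get-MoveRequest | Get-MoveRequestStatistics
```

The mailbox move is only one part of the job: MX records, Autodiscover, and directory synchronization all need attention before the hybrid connection is finally broken.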

Even after moving Exchange back on-premises, a solid case can be made to retain the Office 365 tenant to maintain flexibility and future options (you wouldn’t want some other company to come in and take over your tenant) and perhaps to handle other workload such as Lync.

Of course, Exchange’s hybrid connectivity only deals with one part of Office 365 and separate (manual) arrangements would have to be made to transfer SharePoint and Lync workload back to on-premises servers, should you decide to take this action.

Don’t take this as a definitive statement on how to move email off Office 365. Clearly, if you decide to make the change, you’ll do the necessary planning and testing to ensure that the move goes smoothly.

No manager likes to be forced to reverse course on a strategic IT decision. So much energy, time, and political capital have been expended to move to the cloud that reverting to on-premises servers will exhibit all the signs of a failure, and no one likes to be associated with those kinds of projects. However, sometimes you have to bite the bullet and do what is right for the company.

Moving to the cloud is absolutely the right course for many companies to take and I remain a happy Office 365 user. But choice is good and flexibility is great, and having a solid Plan B in your pocket is a great comfort – and absolutely something that you need to consider.

Follow Tony @12Knocksinna

Posted in Cloud, Exchange, Exchange 2013, Office 365, Technology | Tagged , , , , , , | 1 Comment

Retention policies: the fat-busters of the Exchange world


Exchange messaging records management (or MRM for those who deal in acronyms) has long been an interest of mine, if only because it seems to me that it makes a heap of sense to have some intelligence that processes user mailboxes to clear rubbish out on a regular basis. Not that all mailboxes contain such rubbish, you know. Clearly mine does not and is well known in select circles for such cleanliness, which is why I was well-qualified to speak about retention policies at the Microsoft Exchange Conference (MEC) last April.

We live in a world of bloated mailboxes crammed full of items that really should be kept only for as long as it takes to locate the “delete” key. It amuses me just how much of a mailbox is occupied by out-of-office notices, non-delivery receipts, copies of service messages informing the recipient that some long-gone event is imminent, and so on. Not to mention the proliferation of content caused by that fateful decision so long ago to make Outlook’s default option to include the text of a preceding message in its reply. How many petabytes of crud have built up in mailboxes? No one knows, but it’s a bit like those fat balls that accumulate in public sewers – a mess that is only dealt with when it causes a problem.

In any case, the introduction of retention policies in Exchange 2010 was a very welcome step forward in addressing the problem. Of course, it was Microsoft’s second attempt at MRM because they had shipped MRM 1.0 in Exchange 2007. That didn’t work out so well because MRM forced users to move items into folders. MRM 2.0 as used in Exchange 2010, 2013, and Exchange Online (Office 365) is so much better because it automates the clean-up process and does everything in the background. Following my previous analogy, like water flushing away in a well-maintained sewer, the Exchange Managed Folder Assistant (MFA) scours mailboxes to remove old and unwanted items to keep mailboxes and databases as clean as possible.

Administrators have to do some work to set up an MRM framework. Retention policies have to be defined and then assigned to mailboxes. The retention policies are built from tags and these have to be considered in terms of what folders they should control and what action should be taken. In most cases, it’s sufficient to define a policy that contains a set of tags to control the major default folders (Inbox, Sent Items, Deleted Items) so that these folders are cleaned up by removing items after 30 days or so, some personal tags to allow users to mark items for long-term retention, and a default tag that applies to the rest of the mailbox to clean up aging items that have been stuffed into folders and forgotten.
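As a sketch, the framework described above might be built with the standard Exchange Management Shell cmdlets; the tag names, retention periods, and mailbox identity here are illustrative, not recommendations:

```powershell
# Tags for the major default folders: remove items after 30 days
New-RetentionPolicyTag -Name "Inbox - 30 days" -Type Inbox `
    -RetentionEnabled $true -AgeLimitForRetention 30 `
    -RetentionAction DeleteAndAllowRecovery
New-RetentionPolicyTag -Name "Sent Items - 30 days" -Type SentItems `
    -RetentionEnabled $true -AgeLimitForRetention 30 `
    -RetentionAction DeleteAndAllowRecovery

# A personal tag that users can apply for long-term retention
New-RetentionPolicyTag -Name "Keep for 5 years" -Type Personal `
    -RetentionEnabled $true -AgeLimitForRetention 1825 `
    -RetentionAction DeleteAndAllowRecovery

# A default tag covering the rest of the mailbox
New-RetentionPolicyTag -Name "Default - 365 days" -Type All `
    -RetentionEnabled $true -AgeLimitForRetention 365 `
    -RetentionAction DeleteAndAllowRecovery

# Bundle the tags into a policy and assign it to a mailbox
New-RetentionPolicy -Name "Standard MRM Policy" -RetentionPolicyTagLinks `
    "Inbox - 30 days", "Sent Items - 30 days", `
    "Keep for 5 years", "Default - 365 days"
Set-Mailbox -Identity "Jane Smith" -RetentionPolicy "Standard MRM Policy"
```

MFA then processes the mailbox on its normal workcycle; `Start-ManagedFolderAssistant -Identity "Jane Smith"` forces an immediate run if you want to see the effect of a new policy straight away.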

To get the ball rolling, Microsoft provides a default MRM policy in Exchange 2010 and Exchange 2013. This policy is there to support the deployment of archive mailboxes and is applied automatically to a mailbox when it is enabled with an archive. The logic is good because the effect of the retention policy is to move items from the primary mailbox into the archive as soon as they are aged out by the default tag contained in the policy. The effect on installations that deploy the retention policy without realizing its impact can be “interesting.”

The use of retention policies is covered in all of the Office 365 plans (the spreadsheet accessed through this link is a very good resource for knowing what’s covered by a particular plan). In an on-premises environment, you don’t need to own enterprise CALs to use the default MRM policy. This is because the policy is there to support archive mailboxes, which are also covered by the standard Exchange CAL. However, enterprise CALs are required as soon as you begin to define and assign your own retention policies. This isn’t usually a big thing because most companies have bought enterprise CALs for other purposes, like deploying custom ActiveSync policies.

But if you’re in a situation where you only have standard CALs and want to use retention policies, you can modify the default MRM policy to fit your own purposes. This includes adding or removing tags from the policy, changing the retention period or retention action for tags, or disabling tags. In the past this was not thought to be possible, probably due to a misunderstanding of the licensing terms and some confusion about what was and was not covered by the standard CAL. But Microsoft has recently updated the TechNet documentation to explicitly state that you can modify the default MRM policy to your heart’s content.
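For example, adjusting the default policy might look like this; “1 Month Delete” is one of the tags shipped in the Default MRM Policy, while “Keep for 5 years” stands in for a tag of your own, so check what actually exists in your organization before running anything similar:

```powershell
# See which tags the default policy currently contains
Get-RetentionPolicy "Default MRM Policy" |
    Select-Object -ExpandProperty RetentionPolicyTagLinks

# Change the retention period of one of the default tags
Set-RetentionPolicyTag "1 Month Delete" -AgeLimitForRetention 60

# Disable a tag without removing it from the policy
Set-RetentionPolicyTag "1 Week Delete" -RetentionEnabled $false

# Add a custom tag to the policy, preserving the existing links
$policy = Get-RetentionPolicy "Default MRM Policy"
Set-RetentionPolicy "Default MRM Policy" -RetentionPolicyTagLinks `
    (@($policy.RetentionPolicyTagLinks) + "Keep for 5 years")
```

Because the policy object keeps a complete list of its tag links, reading the current set and writing back the amended list (rather than overwriting it) avoids accidentally dropping tags that users already depend on.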

Note that the default MRM policy is automatically assigned to new Exchange Online mailboxes as they are created. This is different to Exchange on-premises where the policy is only assigned when an archive is enabled. The logic here must be that assigning a retention policy from scratch makes sure that mailboxes stay under some form of control, even if their users are unaware of the fact (and administrators too, if you hadn’t learned about this). An archive mailbox is not automatically created for Exchange Online users, so MFA ignores the tags in the policy that have a retention action of “move to archive.”

For more information on how to design, deploy, and debug retention policies and MFA, see chapter 11 of Exchange 2013 Inside Out: Mailbox and High Availability.

Follow Tony @12Knocksinna

Posted in Cloud, Email, Exchange, Exchange 2010, Exchange 2013, Office 365 | Tagged , , , , , , , | 3 Comments

Fast track Office 365 migrations by Microsoft pose some challenges for partners


Microsoft’s announcement that they intended to accelerate the on-boarding (migration) of customer seats to Office 365 through the “FastTrack” program surprised some attending the Worldwide Partner Conference (WPC) last month.

Apparently the idea is that Microsoft will hire (or has already hired) between 250 and 600 new employees to take on the “on-boarding” workload involved in moving customers over from their existing email systems to Exchange Online. The new service will swing into motion sometime toward the end of the year and will focus on closing the gap between the purchase of Office 365 subscriptions and when those subscriptions are actually used.

Apparently some understandable frustration exists within Microsoft that considerable effort often goes into winning an Office 365 deal only for it to stagnate for a substantial period before going live. Despite the upward trajectory for Office 365, the feeling is that things should be going better and that a won but unimplemented deal is always prone to being overturned by a competitor – in this case, potentially a move to Google Apps. Providing an accelerator to deals should move things along faster, or so the theory goes.

The announcement surprised partners who traditionally viewed this kind of work as their bailiwick. With or without Microsoft assistance, a partner would win the deal and then work with the customer to progress through design, planning, preparation, and migration. Given the success of Office 365, migration projects are a lucrative source of income for third party consultants. And that up-to-now reasonably predictable income stream might have underpinned the investment made by partners to build their capability to deliver Office 365 services.

Helping customers to move to Office 365 faster makes perfect sense from a Microsoft perspective. In the early days of Office 365, migrations tended to be more problematic than today. Years of experience, tips and techniques, and better software – both Office 365 itself and the migration utilities sold by third parties – means that migrations are often not as technically challenging as before. Migrations are still boring and mundane work but at least the work is well understood.

But migrations to any new messaging platform can encounter lurking potholes, largely because there’s usually a period when old and new systems run alongside each other. Small companies of fewer than a couple of hundred users can move over a weekend, but once larger numbers are involved, the need to move mailbox data from on-premises servers across the Internet to Office 365 slows the pace. All of which means that complications like directory synchronization come into play. In short, migrations can be an extended and messy business.

It seems that Microsoft will offer their new services to customers who have more than 150 seats to move to Office 365. According to an August 15 report, Microsoft’s team will be able to migrate users from Exchange 2003 or later versions, Lotus Domino 7.0.3 and later, and IMAP-accessible systems such as Gmail.

It’s unclear whether some of the Microsoft engineers will ever go onsite or if they will always operate from call centers. Details of the exact onboarding tasks that Microsoft will take care of are not yet available but are likely to include the most straightforward and easily scripted processes, such as mailbox moves and user provisioning. However, as reported, quite a lot of work will remain:

Customers will be responsible for the migration of client-side data — including .pst files, local Outlook settings and local contacts — and post-migration support… Many Microsoft partners offer these types of services.

To make such a venture possible, I imagine that Microsoft will follow a very precise playbook that outlines exactly how to prepare to execute a migration. Some new tools to automate steps can be anticipated. Any deviation from the playbook in terms of the characteristics of a customer’s on-premises system or the data to be migrated will possibly mean that it will fall outside the terms of this service and have to be referred to a partner. For example, if substantial pre-work is required to update the on-premises environment before a migration can start, that work is likely to be left to the customer or farmed out to a partner.

Exchange 2003 and Exchange 2007 servers are probably the most common email servers found in the target base of customers considering a move to Office 365 and not all of these systems will have been well maintained over the years. I can see some challenges in figuring out basic stuff such as making sure that sufficient network capacity exists to allow mailboxes to be moved. It will be interesting to see how Microsoft validates a customer environment before starting any real on-boarding work. Sometimes it takes the eyes of a skilled human to detect lurking problems in an IT system and that won’t happen when everything is done from a call center.

And you’ll notice no mention of SharePoint and Lync migration or co-existence with on-premises versions: this effort is all about getting Office 365 moving by accelerating the transition of email to the cloud. Given that a lot of small companies don’t use SharePoint or Lync, providing an email-centric migration service is the most effective course to take. According to reports, Microsoft told partners at WPC that this initiative will allow the partners to focus on more complex migration projects where more customized interaction (and therefore billing) is required to enable customers to move. In other words, instead of doing ten small migration projects that can now be handled by a Microsoft call center, a partner can focus on two or three more “interesting” projects and devote the same amount of effort to those engagements.

To ease the pain for partners, Microsoft will continue to fund Office 365 engagements through deployment funds that cover $15,000 for the first 1,000 seats in a project plus $5/seat afterwards up to a limit of $60,000 per customer. These funds cover migration work that cannot be done from a central point.

Continuing funds will help, but partners really need to focus on the big picture and ask themselves whether their business should center on any activity that can be automated and transitioned to a call center. Long-term success is better gained through high-value, high-knowledge activities. Perhaps this Microsoft move will be sufficient to persuade partners who have made a decent business from Office 365 on-boarding that they should concentrate on other aspects of Office 365, like SharePoint deployment, enterprise Lync, or even making sense of Yammer?

Follow Tony @12Knocksinna

Posted in Cloud, Office 365 | Tagged , , | 10 Comments

Thoughts about Microsoft’s unified technology conference


As expected, Microsoft’s announcement that they would run an uber-conference focused on the enterprise to replace TechEd and the individual Exchange, Lync, SharePoint, Project, and MMS conferences did not receive universal approval. The new conference takes place in Chicago next May and, if TechEd is anything to go by, should attract a crowd in the region of 15,000. Perhaps more, especially if they are excited by the promises made by Julia White when she announced that the event will deliver:

  • “Clearer visibility into Microsoft’s future technology vision and roadmap
  • Unparalleled access to Microsoft senior leaders and the developers who write the code – many of whom will present and engage with you and answer your questions
  • A broader range of learning opportunities across all of Microsoft’s technologies, including actionable best practices from industry experts
  • Deep community interaction with the top technology professionals and industry peers in structured and informal settings
  • Epic after-hour festivities for you to unwind and turn up the fun!”

I’m sad to see the Microsoft Exchange Conference (MEC) fall by the wayside once again, after just two events following its relaunch in September 2012. The MEC organizers ran two very good conferences and managed to satisfy the demands of the Exchange community with a well-balanced mixture of both Microsoft and independent content delivered with humor and passion.

But MEC is no more and the hashtag #IamMEC is replaced by #MECisgone. Such is life.

Much as I loathe the notion of attending an uber-conference with all its attendant pressure for hotels, spaces in session rooms, seats for meals, long queues at registration, problems with stressed Wi-Fi networks, and a crowded tradeshow, I see advantages for both attendees and Microsoft in the new approach.

The advantages for Microsoft are more obvious. Instead of splitting their resources across six conferences, they focus on just one. I assume that the budgets for most Microsoft conferences are cost-neutral as attendee fees match the costs of each event, so running one large conference instead of five or six smaller events shouldn’t make much difference there.

The fact that Microsoft won’t have to ship engineers, marketing people, and other staff from conference to conference during the year should result in big savings. For instance, it was silly to have MEC and TechEd run in Austin and Houston within six weeks of each other in April and May. Time is the most precious resource, after all, and this decision should free thousands of weeks of people time for more productive activities. Like improving product quality, creating new features, or better communication with customers. And because it is a single conference, it should be possible to ensure that the best speakers with the deepest technical knowledge are available.

The hope is that Microsoft will reinvest some of the time saved by not running the now-canned conferences into the planning that is necessary to deliver compelling content for the various groups that will flock to the new event. A quick look at the likely groups finds:

  • Exchange
  • SharePoint
  • Lync
  • Windows Server (Management)
  • Project
  • Cloud
  • Azure

You’ll notice that I don’t include Office 365 in the list. This is because the Office 365 services (Exchange Online etc.) can be addressed in the “base” technology tracks. In reality, outside of the services, the remaining stuff that you might discuss about Office 365 is licensing details and other mundane matters that really don’t light up a conference agenda.

Coordinating so many tracks (which could be then represented by multiple subsidiary tracks) in a coherent agenda that allocates the right rooms to the right speakers at the right time is a difficult task. After all, you don’t want a session covering an arcane (but important) detail of a product to be delivered by a barely coherent speaker to an audience of two in a large conference hall when people cannot get into other more popular sessions. I imagine that some sessions from popular speakers will need to be repeated as well, which increases the scheduling complexity.

Although some will look for dedicated content, others will like the fact that a single conference provides coverage of the complete Microsoft enterprise technology stack. It should be possible to cherry pick great content across the complete spectrum to update yourself for different tasks, such as planning the migration of an infrastructure to Windows 2012 R2 complete with upgrades for all of the applications, monitoring framework, third party products, and even home-grown applications.

Attendees will be happy if the schedule is well organized and contains lots of compelling content delivered by speakers who know their stuff. The fact that new (“Wave 16”) versions of the Office servers are due toward the end of 2016 helps in this respect, as people will be interested to find out what new features have been included in the servers and how they will coexist with existing technology.

Apart from the prospect of dealing with crowds (for everything), the biggest attendee issue I have heard voiced is that not everyone will be able to attend a single conference. Take the average IT shop where a number of people divide up the responsibilities for different technologies. In the past, these people would have each been able to take a week off to attend the conference that best matched their area of competence. Now they will have to decide who should attend the mega-conference and figure out how to share the information gained there. On the other hand, Microsoft does an excellent job of sharing conference material through MSDN’s Channel 9, so those who are left behind can be confident that they’ll have access to a lot of the content presented at the conference without being there.

It’s too early to know whether the new technology conference will be a solid replacement for the set of conferences that it has displaced. But I’m positive for now and will remain so unless the agenda is a mess and Microsoft takes too many shortcuts. I don’t think they will, but I will be watching.

Follow Tony @12Knocksinna

Posted in Technology, Training | 2 Comments

Exchange Unwashed Digest – July 2014


Another month has gone by and it was a busy one for my “Exchange Unwashed” blog on WindowsITPro.com. I published thirteen posts covering everything from the thorny question of just how many paid subscribers Microsoft has for Office 365 to some lingering system registry entries for long-departed Exchange 2013 servers. Here’s what happened in July 2014:

Office 365 by the numbers – an ever-increasing trajectory (July 31): Microsoft does not publish details of how many subscribers connect to Office 365 so you have to make some educated guesses as to how they’re doing. The most recent data was a report of a $2.5 billion annual run rate given at the recent Worldwide Partner Conference (WPC). Plugging that into an Excel worksheet indicates that Office 365 now has 9.53% of the installed Exchange base. But the trajectory is upward and the installed base is moving. The question is how quickly it will move over the next few years and how much will end up in the cloud. Recent legal debate and U.S. rulings that Microsoft has to provide copies of email kept on Office 365 servers in Ireland won’t help reassure non-U.S. customers that their data is safe in the cloud, so it will be interesting to track how things evolve over the next few months.
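The back-of-envelope math behind an estimate like this can be sketched as follows. Every input is an assumption chosen for illustration (Microsoft publishes none of them apart from the run rate), so treat the outputs as indicative only:

```python
# Back-of-envelope estimate of Office 365 subscribers from the
# reported $2.5 billion annual run rate. All inputs other than the
# run rate are assumed values for illustration, not Microsoft figures.

ANNUAL_RUN_RATE = 2.5e9             # reported at WPC
AVG_REVENUE_PER_USER_MONTH = 8.0    # assumed blended $/user/month across plans
INSTALLED_EXCHANGE_BASE = 300e6     # assumed total Exchange mailboxes worldwide

# Annual revenue divided by annual revenue per user gives subscribers.
subscribers = ANNUAL_RUN_RATE / (AVG_REVENUE_PER_USER_MONTH * 12)

# Express those subscribers as a share of the assumed installed base.
share = subscribers / INSTALLED_EXCHANGE_BASE

print(f"Estimated subscribers: {subscribers / 1e6:.1f}M")   # → 26.0M
print(f"Share of installed base: {share:.1%}")              # → 8.7%
```

Varying the assumed revenue per user or the size of the installed base moves the result substantially, which is why any such percentage should be read as a trend indicator rather than a precise figure.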

Yammer – a technology still looking for a solution? (July 29): I have been doing my best to use Yammer over the last few months to gain an insight into its value. So far I haven’t found too much, at least not over my existing tools. But Microsoft is working hard to integrate Yammer with their other Office servers so things might get better. Then again, they might not…

Lingering entries for long-deleted servers (July 24): Exchange 2013 is pretty good at cleaning up after itself, but it does leave references to long-departed servers in the system registry on servers. And this causes Managed Availability to have a little fit. Just a little one.

The changing nature of email NDRs (July 22): A heck of a lot of non-delivery notifications are issued by email systems daily. The question is whether you can adjust the content of the NDRs to make the recipients better understand why the email didn’t get through. This post looks at some NDRs from common email systems and concludes which is the best.

Challenges await as Microsoft dumps MEC, TechEd (and other conferences) for a mega-event (July 21): It looks like many of us will be heading to Chicago, IL in the first week of May 2015 to attend Microsoft’s mega-IT technology event. The sad thing is that this event replaces the likes of MEC, which I enjoyed immensely. The good thing is that it puts a bullet through TechEd, which I did not. We’ll just have to wait and see how good the new event is and whether Microsoft can deal with the organizational and logistical challenges that await.

Protect users by suppressing Outlook’s conflict resolution reports (July 17): Outlook generates conflict resolution reports when it encounters a problem with an item, such as when two clients operate on the same item at roughly the same time. Normally things go smoothly and Outlook resolves the conflict, but then it has to tell the user that a conflict existed and what was done to resolve the issue. And the problem is that most users don’t care and don’t understand the report. So do everyone a favor and suppress the reports. Outlook will still do its thing and users won’t be bothered. It’s a win-win all round.

Outlook Apps – a new approach worth considering (July 15): The history of Exchange APIs is studded with many failures. Things like CDO Routing Objects – a nice idea but badly implemented and poorly supported. Now we have Outlook Apps, which really seem to be a nice thing because they work across multiple platforms and come with a lot of example code to get you going. Worth looking at!

Microsoft Learning insults Exchange professionals with simply awful video (July 14): I received a lot of feedback after I posted this note about a video that I thought was really bad, mostly because it was a very unprofessional way (IMHO) to treat an important topic. Thankfully Microsoft Learning agreed and they took the video down. See what you think.

Inconsistent searches all too commonly seen in Exchange (July 10): Last week a ZDNet contributor posted a note explaining why he had junked Outlook after many years and had moved to Gmail. The poor search facilities available in Outlook and the inconsistent results that are achieved when searching online and offline were part of his problem. The good news is that Microsoft is aware that they need to do better with search. This can’t happen soon enough.

Codename “Oslo” is now the Delve product (July 9): Much hype accompanied the announcement that Microsoft was working on a product designed to make better sense of all of the information floating around corporate IT systems, especially in places like Outlook mailboxes and SharePoint document libraries. The new product was codenamed Oslo. Now it’s Delve and it’s coming to Office 365 users by the end of 2014. It will be interesting to see how Microsoft resolves issues like privacy and inadvertent disclosure of information when Delve appears.

Security Design Change for Office 365 Public Folders Causes Inbound Email to NDR (July 8): Microsoft said that they were committed to providing better up-front information about upcoming changes to Office 365 through their online roadmap, which is really very good. But then we’ve had a series of small but irritating changes that no one seemed to realize should have been communicated. This one was a good change in that it increased security for mail-enabled public folders and restored parity with Exchange 2010, but it broke stuff for customers without warning, which is never good.

Two recent Microsoft changes affecting Office 365; one reversed, one partially (July 3): Another set of unannounced changes. The first was when Microsoft took over domains operated by No-IP.com and caused mail routing to break for legitimate customers. The second was a botched attempt to introduce new administrative roles within Office 365. The No-IP situation is now resolved and the new roles have been withdrawn – but they’ll be back.

Directory flaw led to Exchange Online outage (July 1): The 7-hour Exchange Online outage on June 24 was caused by the directory infrastructure failing to keep pace with authentication requests. The issue revealed a problem that Microsoft is fixing and it’s also a warning for on-premises customers – if Active Directory goes bad, lots of strange and screwy things happen with Exchange. Period.

I can’t pretend that I will be as productive in August. Vacation beckons.

Follow Tony @12Knocksinna

Posted in Cloud, Email, Exchange 2013, Office 365, Technology | 3 Comments