Gmail as Gaeilge

I was charmed to learn yesterday that Google has added Irish (Gaeilge) to the list of supported languages for Gmail (the local news report is here). Not that I speak much Irish, even after having it drummed into my skull for 14 years at primary and secondary school (teaching techniques were primitive in the 1960s and 70s). But a small bit of grá (love) for the language lingers like it does for many other Irish people, even if we are only ever moved to attempt to use the language when the need arises to confuse foreigners (and ourselves, if the truth be known), mostly in bars and similar settings abroad. So off I went to select Irish as the language of choice for Gmail…

Selecting Irish as the preferred language for Gmail

After saving the selection, Gmail reinitialized and I was back in my school days, figuring out what all the translated terms meant. As you can see, it’s only the dates and text strings in the user interface that are translated – no attempt is made to translate the content of messages as they arrive, but perhaps that’s the next trick for Google. Some interesting things can be learned from the translation. For instance, what do you think this means?

Iontach, níl turscar ar bith anseo!

Well, we discover here that “turscar” is the Irish for “spam” because the equivalent in English is “Hurray. No spam found here!”

Gmail as Gaeilge

I’ve been down this path before. Microsoft launched an Irish language pack for Windows XP in 2005. I downloaded it and upgraded the computer that our kids used at the time (no one had individual laptops or iPads then) and said nothing. Cries of anguish erupted when they found that Windows had been given an Irish makeover! It’s actually very interesting to see how long it takes people to locate common user interface components when new labels are applied. After a few days the cries of pain were too much to endure and I removed the Windows language pack to revert to English.

A Windows 7 language pack is available for Irish and one is also listed in the set available for Windows 8.1. Somehow I don’t think I will repeat my previous experiment, but I will leave Gmail in Irish to see how I get on, at least for a little while.

Follow Tony @12Knocksinna

Posted in Cloud, Email

A brief history of Exchange Time Management

It might just be me, but the sounds of bitter complaints by users whose calendars have been thrown into confusion by some combination of user error, Exchange server bug, and client mess-up appear to have quietened recently. At least, my mailbox is less full of sad tales of corrupted meetings or failed appointments. Things must be improving.

Office systems that include time management functionality have been around for a very long time and we have enjoyed the vicissitudes of their malfunctions for most of that time. Digital Equipment Corporation’s ALL-IN-1 Office Automation system offered scheduling features in 1982. Its V2.0 release was updated at the behest of the White House in 1984 to allow for minute-by-minute appointments for President Reagan and his senior staff.

Thirty years ago we had a much simpler environment to manage. Few people had email and all used wired terminals to connect to their accounts on timeshared computers. It was much easier to code the basics of time management such as setting up a meeting, sending out notices of that meeting, and handling the responses.

The first versions of Exchange used Schedule+ calendaring to maintain backward compatibility with Microsoft Mail. Once the first versions of Outlook (1997) came along to replace the original “Exchange Viewer” client, Exchange provided native time management. We were still in the era of wired networks, and the new challenge was interoperability – not only between different email and calendaring systems, but even between different versions of Exchange. The different views of what an appointment or meeting was across various email servers contributed to some angst. The introduction of the iCalendar format has helped to solve the interoperability issue between different electronic calendars, but only relatively recently.

User expectations of time management were also growing as features such as delegate access were added. Multiple access to a single calendar is absolutely a wonderful idea from a user perspective but it creates many challenges, especially when you factor in different clients. Performance is also problematic when clients have to open multiple calendars. The worst example I have encountered is an administrator who was expected to manage the calendars of sixteen managers. Suffice to say that she had plenty of opportunities to brew coffee as Outlook struggled to cope with the load.

Mobile clients have a natural affinity for calendars. If you are on the road, you need to know what is in your calendar, and a mobile client that has no access to the calendar is hamstrung. Early BlackBerry devices certainly could get to user calendars, and then cheerfully proceeded to wreak havoc on calendar items. We’re still seeing corrupt calendar items (“bad items”) dropped by the Mailbox Replication Service when mailboxes are moved on Exchange 2010 and 2013 servers. To be fair to RIM, some of the protocols used by Exchange were either difficult to understand (like MAPI) or badly documented. Microsoft now publishes full specifications for the different protocols and interfaces used by Exchange and that seems to have helped.

Telling someone about a protocol is one thing. Expecting them to use it properly is another. ActiveSync (EAS) is the best example. The EAS protocol documents are available to developers to help them work out how mobile clients should interoperate with Exchange. But a long history exists of mobile clients messing up user calendars, most notably the splendid efforts of several versions of Apple iOS to impose its will on the server, as in the infamous “calendar hijacking” issue in late 2012.

iPhone and iPad devices account for a huge percentage of the mobile devices found in Exchange deployments, a fact that possibly influenced Microsoft’s decision to buy Acompli and its excellent iOS client over Thanksgiving. Apple and Microsoft have worked hard to improve the client-server communications between iOS and Exchange and, apart from a recent hiccup with iOS 8.1, the updates have generally made things better. Because of the work done on the server, the improvement is generally felt across the spectrum of mobile clients. Sure, bumps will happen along the road, but it is true that mobile clients are much better today at dealing with complex calendaring operations than they were in the past.

Thirty years of computer-based time management has taught us a lot. In terms of Exchange, I think you need to keep three basic principles in mind as you design or execute deployments in order to minimize the risk of things going wrong. These are:

  • The newer the client, the better. Generally speaking, the most recent versions of Outlook and mobile clients are better at dealing with all aspects of calendaring.
  • The newer the server, the better. Microsoft has poured effort into bullet-proofing Exchange so that malfunctioning clients can’t impact the server. Most of this work has happened since Exchange 2010 SP2. You benefit by using Exchange 2010 SP3 (with the latest roll-up update). Exchange 2013 and Exchange Online also include code to keep the demands of clients under control.
  • Be sensible in user placement. The basic idea is to keep users who work together on the same server. If someone needs to manage calendars, all the mailboxes for those calendars should be on the same server. You create risk once you distribute operations. For instance, having an administrator located on Exchange Online attempting to manage multiple on-premises calendars is a recipe for disaster. Sure, it will probably work. But note that word…

Going back to ALL-IN-1, even though we had hard-wired terminals connected to central computers, we had problems with time management at that point too. I got a support call to go to the White House once but never made it. All because I was an “alien.” Oh well…

Follow Tony @12Knocksinna

Posted in Cloud, Email, Exchange, Exchange 2013, Office 365

Exchange Unwashed Digest – November 2014

The normal stream of new features, updates, and bugs flowed across the “Exchange Unwashed” desk during November. Some of the bugs were puzzling, others were infuriating, but everything was interesting – at least to me.

A clash between S/MIME and transport rules (Nov 27): Data Loss Prevention (DLP) was a big new feature in Exchange 2013 that has now been extended to SharePoint Online. DLP depends on special forms of transport rules to ensure that messages containing sensitive data don’t get outside an organization. As it happens, S/MIME messages might stop DLP working and so cause messages to be blocked. I never realized that this might happen until it did…

OWA functionality gap widens between Office 365 and Exchange 2013 (Nov 25): If you’re a user of Office 365, you’re probably aware that lots of new features have been popping up recently (People View, Clutter, and Groups). All great stuff, but these features aren’t in on-premises Exchange and might not ever feature in an on-premises release. Other UI tweaks have also popped up in Outlook Web App (OWA) that aren’t yet in on-premises Exchange, all of which underlines the growing functionality gap that has appeared between the cloud and on-premises versions.

Office 365 Groups problem exposes the seamy side of evergreen software (Nov 20). Another Office 365 story, this time one exploring how the rush to get new features to users and fulfill the promise of “evergreen” technology can have a downside. Office 365 Groups are nice (apart from the total lack of management features) but a security weakness was discovered after they were made available to users. The developers moved to fix the problem but didn’t communicate and the net result was that folks lost access to documents stored in the OneDrive for Business sites used by Groups. All a bit of a mess…

FAQ: Answers to common Office 365 Clutter questions (Nov 18). I really like the new Clutter feature now available in OWA for Office 365 but it’s clear that lots of people are struggling to understand how they might use it. So I put together this FAQ based on my own experience and a discussion with the developers. See what you think.

OWA updates in Office 365 help with Chrome showModalDialog issue but no joy for on-premises versions (Nov 13). Google shipped Chrome 37 and broke OWA and EAC, which use the showModalDialog method for common operations like attaching files to messages. Google hasn’t done much to help since September, but Microsoft has updated OWA (but only for Office 365) to remove the use of the method. It’s a help but overall still a mess.

Microsoft pulls Exchange updates to fix installer problem (Nov 11). We were all set to welcome Exchange 2013 CU7 when news emerged that Microsoft had pulled the update because a security fix might affect OWA. Given the criticism that has been leveled at Microsoft about product quality, the call was a good one. We now await CU7 sometime in December…

Clutter arrives to impose order on Office 365 mailboxes (Nov 11). This is the post to welcome the initial appearance of Clutter in Office 365 tenant domains that had signed up for “First Release.” Between this and the FAQ referred to above, you should get a pretty good view of how useful Clutter might be to you.

Forcing Exchange’s Admin Center to use a specific language (Nov 7). You know what it’s like. It’s a winter afternoon and you have nothing much to do, which leads you to think that you want to perform Exchange administration in Welsh or German or some other language. And then you find out that EAC will allow you to do just that. Which is nice…

Active Directory schema extensions now a feature of every Exchange update (Nov 5). There was a time when the thought of a schema update would reduce an Active Directory administrator to a gibbering wreck. But that was in 2000 or thereabouts. We’re more mature about these updates now. After all, just wait for a few months and a new schema update will come along…

iOS 8 ActiveSync problem causes out-of-date meetings (Nov 3). Ah yes, the sheer quality of iOS mail app upgrades is a source of wonder and bemusement. Wonder because Apple keeps on screwing up the ActiveSync connection to Exchange; bemusement because there’s no good reason why bugs keep on appearing in this part of iOS. There’s no conspiracy here at all. Apple and Microsoft are good friends. Move on please…

December brings us the holidays and all that good feeling toward friend and foe. I hope that the month brings a similar mix of events to report. Actually, I’m pretty sure that it will!

Follow Tony @12Knocksinna

Posted in Cloud, Exchange, Office 365

Brick backups and Exchange – not recommended

I was recently asked whether a company should invest in brick-level backup for Exchange 2010. This request came as quite a surprise because I hadn’t run into anyone who was interested in this kind of technology for a number of years. Curiosity got the better of me so I agreed to have a look into the issue.

The first order of business is to understand why the requirement for brick-level backup might exist. Generally it’s because a company wants to be able to take selective backups of one or more mailboxes rather than a complete database. I could understand why this would be the case in the days when we used the NT Backup utility or third-party products to back up to slow tapes. Even though the databases were much smaller than those that are often encountered with Exchange 2010 or Exchange 2013, backups could be excruciatingly slow. In addition, tapes failed, usually right at the end of the backup or, even worse, in such a way that an administrator might not notice the problem.

Neither Exchange 2010 nor Exchange 2013 supports tape backups, at least not using Windows Server Backup. Databases are now backed up to disk using Volume Shadow Copy Service (VSS)-based utilities and operations proceed much faster than before. Indeed, many commentators have made the case that databases protected by sufficient copies (at least three, preferably four) to maintain adequate redundancy in Database Availability Groups (DAGs) do not require a traditional backup regime. It’s undeniable that company policy might require backups to be taken for purposes such as long-term offsite storage, but the case can be made that the advent of the DAG has created a whole new environment within which to consider how and when to take backups.
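Of course, if you rely on DAG copies for protection, it’s worth verifying regularly that the redundancy you’re counting on actually exists. A minimal sketch (the database name DB01 is a placeholder for one of your own):

Get-MailboxDatabaseCopyStatus -Identity DB01 | Format-Table Name, Status, CopyQueueLength, ReplayQueueLength

Healthy copies with low copy and replay queue lengths are the signal you want to see before deciding that traditional backups can be scaled back.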

Does the ability to take faster and less frequent backups eliminate the need for brick-level backups? Well, another thing to factor into the equation is the small matter of support. Microsoft’s policy is pretty straightforward.

There are several third-party backup programs that can back up and restore individual mailboxes, rather than whole databases. Because those backup solutions do not follow Microsoft backup guidelines and technology, they are not directly supported.

Microsoft is obviously not in favor of brick-level backup solutions. The reason why this stance exists might well be that Microsoft doesn’t provide any supported method of performing a brick-level backup or, possibly even more critical, a restore operation that extracts information from a brick-level backup and merges the data seamlessly into a user’s mailbox. It’s easy enough to restore a complete mailbox using information contained in a complete database backup but much more complex to consider all of the issues that might crop up when faced with the problem of restoring data back into a mailbox containing some information. For example, how do you fix up a calendar meeting request so that it accurately contains all of the attendee responses? How do you handle conflicts when data in the mailbox differs from that in the backup? And so on.

Because Microsoft doesn’t provide software vendors with supported methods to perform brick-level backups and restores, they have to come up with all sorts of innovative methods to access Exchange databases. For example, although I have no hands-on experience of the product, the online description of iBackup for Exchange indicates that it uses the Export-Mailbox and Import-Mailbox cmdlets (replaced by New-MailboxExportRequest and New-MailboxImportRequest from Exchange 2010 SP1 onwards) to write to and read from intermediate PSTs. iBackup goes on to say that a brick-level backup “is not a method for the complete backup or recovery of the Exchange database.” Quite so.

I emailed the PR contact for iBackup to ask them what methods they endorse and received no response. In any case, although using cmdlets is certainly a supported method to access database content, I suspect that it would be horribly inefficient and slow if you had to process more than a handful of mailboxes. Indeed, as noted by Acronis, brick-level backups can be 20-30 times slower than a full database backup.

It’s also good to ask the question whether the desired functionality can be achieved using standard product features. For example, if all you need to do is to take a backup of a single mailbox, why couldn’t you use the New-MailboxExportRequest cmdlet to export everything out to one or more PSTs? And if you need to restore content back from a database, perhaps you can use Exchange’s ability to mount a recovery database using a backup copy and then recover the data from it using the New-MailboxRestoreRequest cmdlet. It seems like this is what many vendors do, albeit with a nice wrapper around the cmdlets.
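To make the point concrete, here’s a sketch of both operations using the standard cmdlets (the mailbox name, share path, and recovery database name are all placeholders, not recommendations):

# Export a single mailbox to a PST on a network share
New-MailboxExportRequest -Mailbox 'Kim Akers' -FilePath '\\FS01\PSTExports\KimAkers.pst'

# Restore content for one mailbox from a recovery database (RDB1)
# previously created and mounted from a backup copy
New-MailboxRestoreRequest -SourceDatabase RDB1 -SourceStoreMailbox 'Kim Akers' -TargetMailbox 'Kim Akers'

Both requests are processed asynchronously by the Mailbox Replication Service, so progress can be tracked afterwards with Get-MailboxExportRequest and Get-MailboxRestoreRequest.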

Don’t get me wrong – I think that there are good solutions to examine in this space and innovation does exist. For example, Veeam Software offers an interesting utility called Veeam Explorer for Exchange, part of their Backup and Replication suite. You can use this tool to open Exchange databases and recover information from individual messages to complete mailboxes. A 30-day free license is available from Veeam to allow you to test the software in your own environment to make sure that it meshes with your operational processes and whatever regulatory regime your company might operate under. As with any utility that has the ability to open and extract information from user mailboxes, you also need to pay attention to privacy and ensure that access is controlled at all times. Veeam is run by Ratmir Timashev, who previously ran Aelita Software before selling it to Quest (subsequently sold to Dell), so he knows the Exchange market. I am more positive about buying software from companies led by people who have a strong track record in the industry.

The company that asked the original question eventually decided to stick with a standard backup regime. I don’t pretend that brick-level backup is a bad concept. In fact, I’m sure that it has its place and can add value in the right circumstances. It’s just that it didn’t satisfy requirements in this case: Microsoft’s lack of support was the deciding factor, though I think the salient fact that most of what they wanted to do could be accomplished using standard product features had an influence too. This just goes to show that a solution designed to solve a problem in one environment isn’t necessarily as valid or as useful given current technology. Hasn’t this always been the case?

Follow Tony’s ramblings on Twitter.

Posted in Exchange, Exchange 2010, Exchange 2013

ePublishing for Technology: a new book on Exchange 2013 High Availability

Time is both the greatest enemy and greatest friend of technical books. I know that seems like a statement which makes little sense, but truth lurks in these words.

We all know that technology now evolves at an ever-increasing cadence. The upshot is that the traditional publishing cycle struggles to keep up. In the past, an author would have time to consider several betas of a new product and then the final version before settling down to write text that (after technical and copy editing) would be accurate and valid for a couple of years. The publishers were happy because the investment they made in bringing a book to market could be recouped over that period; authors were happy because the hundreds of hours of work required to create the text would be compensated for through royalty payments.

The cloud has had a terrific effect on all of us, mostly positive, as new features and functionality are revealed every week. But this makes it really difficult for authors who write about technology because their text ages dreadfully quickly, even as the first printed copies of books appear.

Take Exchange 2013 for example. Paul Robichaux and I declined to write our “Exchange 2013 Inside Out” books based on the first (RTM) version because history had taught us the wisdom of waiting for at least six months to see how a new server functioned when revealed to the harsh judgment of customer deployments. Even though some kudos can be gained through first-to-market status, books rushed out to coincide with the first availability of a new product are invariably flawed, and in the case of Exchange, they can be horribly flawed.

So we worked away in the background to create and hone content, going through the exacting editorial process managed by Microsoft Press to ensure that the books were as good as a team of technical reviewers, copy editors, indexers, design artists, and series editors can deliver. We eventually ended up with material that is up to date with Exchange 2013 CU2, but that’s five cumulative updates ago!

A lot has happened since CU2 appeared. I would argue that the content of Exchange 2013 Inside Out: Mailbox and High Availability and Exchange 2013 Inside Out: Connectivity, Clients, and UM are still valuable resources because although some details have changed since Paul and I stopped writing in September 2013, the concepts and general descriptions of technology have not. Some of the content could be rewritten now because we have more knowledge about a topic or Microsoft has made decisions that affect how we might describe things. Modern public folders are an example as the scalability issues that have forced Microsoft to focus on some reimplementation and tuning in this area were not known when I wrote that chapter and I would definitely have some different advice to offer today.

Still, the books are valuable resources and have largely stood the test of passing cumulative updates as long as you treat them as a starting point for understanding Exchange and supplement what you find in the Inside Out series with information published since Microsoft released Exchange 2013 CU2.

Which brings me to “Deploying and Managing High Availability for Exchange 2013”, a new eBook authored by a high-powered trio of very experienced Exchange MVPs: Paul Cunningham (“Exchange Server Pro”), Michael Van Horenbeeck (“Van Hybrid”), and Steve Goodman (all-round nice guy and co-host of the regular UC Architects podcast). That’s a pretty good line-up of talent to focus on a topic like High Availability.

Spread over 210 pages of content plus a useful 43-page lab guide, the book addresses the following areas:

  • Client Access server High Availability
  • Mailbox Server High Availability
  • Transport High Availability
  • High Availability for Unified Messaging
  • Managing and Monitoring High Availability
  • High Availability for Hybrid Deployments

The best thing about the book is its practical nature. The content is approached from the perspective of an administrator who needs to get things done and there are lots of examples included to show you what commands need to be executed to perform different tasks.

The interests of the authors shine through too. Paul has long been a dedicated fan of Database Availability Groups (DAGs), so the coverage of how to put a DAG into operation is detailed and exact. Michael’s interests cover hybrid connectivity (obviously), but also the murky world of Managed Availability, so there’s plenty on that topic. And I suspect that Steve had something to say about certificates and their proper use within an Exchange deployment.

You can buy an electronic (PDF or EPUB format) copy of the book here. The cost is a very reasonable $34.99 (check the site for a discount). That might seem high for an eBook, but consider how much you have to pay for an hour of a consultant’s time and it makes perfect sense to acquire some knowledge by buying a book.

No book is perfect and I am sure that people will find points on which they disagree with the authors in this book. But that’s missing the point. A book about technology should never be deemed to be the last word on a subject, especially when dealing with servers that are deployed into a huge variety of different on-premises environments where one implementation differs from the next. It is the role and responsibility of an administrator to accumulate knowledge from books like this and then put that knowledge to work by placing it in context with the operational environment and business needs of their company. This book provides a lot of useful information that will help people immediately but it is important that readers surround the knowledge contained in the book with their own experience, background, and opinions.

And because no book is perfect, it’s good to know that this eBook can be updated pretty quickly if new information comes to hand. For example, the thinking around DAGs evolved significantly with the introduction of the simplified DAG in Exchange 2013 SP1. It will evolve again when Microsoft allows witness servers for multi-site deployments to be located in Azure early next year. And so on.

I believe that the future for technology books is not in the printed form. Sure, we will continue to have some books that are suitable for printing, but I think that the vast bulk of the market for books covering commercial application servers like Exchange will soon be in electronic format. Given the release cadence, it just makes sense.

Follow Tony @12Knocksinna

Posted in Exchange, Exchange 2013

Creating a new address list for Exchange Online (Office 365)

A question was posed in the Exchange IT Pro group of Microsoft’s Office 365 Network:

“Is there any way I can add folders to directory on Outlook for example add a folder “staff” for users to click it and all the staff come up. I have added a picture. Thanks :) By the way I’m using Office 365 Exchange Online.”

The accompanying screen shot showed the People section of Outlook Web App (OWA) and all indications were that the request wanted a way to create a new “Staff” entry under the “Directory” tree on the left-hand side of the screen. By default, this tree includes entries for “All Rooms”, “All Users”, and so on.

My reply was that a new address list would serve the purpose, assuming that you could create a recipient filter to isolate the mail-enabled objects that you wanted to display when the list was accessed. However, a few differences exist between creating an address list for Exchange Online and for Exchange 2013 (documented in pages 345-349 in chapter 7 of “Microsoft Exchange 2013 Inside Out: Mailbox and High Availability”), so here’s a brief overview of what needs to be done.

First, like everything else in Exchange, RBAC controls access to the cmdlets needed to work with address lists. As it happens, the “Address Lists” role is not assigned to any administrative role group by default, so the first task is to assign the role to a suitable role group. Follow these steps:

  1. Open the Exchange Administration Center (EAC) in Office 365
  2. Click “permissions”
  3. Click “admin roles”
  4. Select the role group that you want to amend. I chose “Organization Management” as it is the usual role group used by tenant administrators. Click on the pencil icon to edit the role group.
  5. Add the “Address Lists” role to the set of roles included in the group and save.
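If you prefer PowerShell to EAC, the same assignment can be made with a single command (a sketch, assuming the Organization Management role group chosen above):

New-ManagementRoleAssignment -Role 'Address Lists' -SecurityGroup 'Organization Management'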

Adding the Address Lists role to Organization Management

The next step requires PowerShell because EAC does not include an option to allow you to create a new address list. Start PowerShell and connect to Office 365 (use these commands if you don’t already have them in your PowerShell profile). When you connect, RBAC will load all the cmdlets that you are allowed to run into the PowerShell session, including the Address Lists cmdlets.
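For reference, the standard commands to connect a PowerShell session to Exchange Online look like this (a sketch; you need Internet access and tenant administrator credentials):

$Cred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' -Credential $Cred -Authentication Basic -AllowRedirection
Import-PSSession $Session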

Next, run the New-AddressList cmdlet to create the new list. You need to provide two pieces of data – the name of the address list as seen by users and the recipient filter used by Exchange to extract items from the directory for display in the list. The example shown below is a very simple filter that extracts user mailboxes whose “StateOrProvince” property is set to “Ireland.”

New-AddressList -Name 'Ireland Users' -RecipientFilter {((RecipientType -eq 'UserMailbox') -and (StateOrProvince -eq 'Ireland'))}

Running the New-AddressList cmdlet

Normally, after a new address list is created with on-premises Exchange 2010 or Exchange 2013, you would run the Update-AddressList cmdlet to update the list. You don’t have to do this for Exchange Online (the cmdlet is not available to you), possibly because any update activity that could soak up a lot of system resources is handled behind the scenes.
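For the record, the on-premises step that Exchange Online handles for you is a single command (using the example list created above; Exchange Online administrators can skip this):

Update-AddressList -Identity 'Ireland Users'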

In fact, there is a twist in how Exchange Online handles pre-existing recipients that should be included in custom address lists. Essentially, because Exchange Online does not make the Update-AddressList cmdlet available to administrators, any pre-existing recipient whose properties match the query for a new address list will not be evaluated for list membership. Evaluation only occurs when a recipient object is created or updated, so if you create a new address list and recipients already exist that should be in the list, you have to update those recipients to “force” Exchange Online to include them in the list. For more information, see Greg Taylor’s EHLO post on the topic.
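One way to “touch” those pre-existing recipients is to re-stamp a property on each mailbox that matches the filter. A sketch (the choice of custom attribute and value is arbitrary, not a requirement):

Get-Mailbox -Filter {StateOrProvince -eq 'Ireland'} | Set-Mailbox -CustomAttribute15 'AddressListRefresh'

Writing a property causes Exchange Online to re-evaluate each recipient against the address list filters, after which the mailboxes should show up in the new list.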

The new address list shows up in OWA

After a couple of minutes, you should be able to go to OWA, access People, and see the new address list under “Directory” – and better still, if the recipient filter works and the right information has been populated for the objects you want to display, you will see a populated list. The last point is important – address lists can only work if they can find objects according to the filter criteria you specify. If an object is missing some value then it won’t be found. For instance, if a user doesn’t have “Ireland” in their “StateOrProvince” property, then they won’t appear in the “Ireland Users” view.

Follow Tony @12Knocksinna

Posted in Cloud, Office 365

Should we spend less time discussing software bugs?

Is there a vague possibility that the technical community spends too much time complaining about software bugs? This rather startling proposition came up during a discussion with Paul Cunningham, of ExchangeServerPro fame.

Paul remarked that we seem to spend a lot of time finding bugs, reporting them, checking for workarounds, and describing them in blogs and other social media, all of which takes away from the normal work of IT professionals. He then wondered whether “it had always been this way?”

The simple answer is that it hasn’t. Although we might consider current software releases to be flawed, previous software was equally if not more flawed. For example, Exchange 2000 was a perfectly horrible release that combined with Windows 2000 and the first iteration of Active Directory to deliver a horrific deployment experience for anyone who had to upgrade from Exchange 5.5 and Windows NT 4.0. But the flaws (and there were many) of Exchange 2000 never received the kind of full-on publicity that Exchange 2013 gets today.

In fact, ten or fifteen years ago, news coverage tended to concentrate on features rather than problems. Limited communications and restricted outlets for discussions forced correspondents to focus on new functionality rather than bugs. Magazines weren’t interested in publishing in-depth articles about flawed features – it took them far too long to get an article through the publishing process and a fix might have been made available before the article appeared. And anyway, advertisers weren’t interested in publications that discussed problems – they wanted articles about new and exciting technology and “how-to” features to explain how to put the technology into action. Selling print ads was very important then.

Today, it’s all about page views as online magazines, blogs, forums, and other media jostle for prominence in a crowded Internet-centric marketplace. News appears fast and bad news is good for page views, so many articles are devoted to the exposure of flaws in products from mobile phones to laptops to enterprise software. Reports are written up in articles and blogs and flash messages are sent to the world via Twitter, Facebook, Yammer, and Google+ to let people know that new information is available. The news is subsequently explored, interpreted, updated, discussed, and generally dissected ad nauseam.

Some of the exposure is good. For example, letting people know about potential security holes is absolutely the right thing to do. Calling companies to account when they let flawed and incomplete products into the market also delivers a service to the industry as it enables customers to make better buying decisions. But I sometimes wonder if those of us who write about IT sometimes make far too big a deal of bugs.

All software has bugs and all products will eventually expose their bugs to users. The question is whether the bugs that are discovered are important or not. I think that the best articles on bugs provide an analysis of why the bug has appeared, what it means in practice, and whether any workarounds exist. These articles help users to understand the impact of a bug in the context of their IT operations and decide what action they should take. Perhaps they can proceed with deployment or maybe they have to wait for a hot fix or software update. In either case, the decision is made in a state of knowledge.

I despair when I see a blizzard of “me too” posts appearing like a rash across blogs, all faithfully reporting the discovery of a new bug without adding an iota of intelligence or analysis to the debate. Too many people rush to publish without thinking about why a bug has appeared and what its impact might really be.

But software vendors don’t help themselves when they push software out that contains obvious bugs that should have been picked up during testing. These are the worst kind of bugs because customers expect that vendors will test their products thoroughly before release and depend on the quality of the testing to know when a product is ready for deployment. No customer can hope to test a product in the same way or depth as a major software vendor as they don’t have the staff, the tools, or the expertise. So we need products to be released when they are ready and validated through testing, not to meet an arbitrary date set by senior management or marketing.

Perhaps we should spend less time discussing some bugs (important bugs will always be visible) and more time thinking about the best use of technology to solve business problems. That, after all, is the real aim of the IT game.

Follow Tony @12Knocksinna

Posted in Technology