A peek into the future from Exchange’s General Manager


One of the joys of attending a conference as a speaker is the chance to listen to others once your session is over. My keynote at TEC was at 8AM on Monday morning, so I had most of Monday and Tuesday to pick other sessions to attend before I had to depart for my flight to London and then back to Dublin. An excellent variety of speakers was on the agenda. Exchange sessions by Greg Taylor, Ross Smith, Paul Robichaux, Scott Schnoll, Lee Mackey and Paul Bowden competed in my mind with Active Directory sessions by Guido Grillenmeier and Brian Desmond. I stayed with the Exchange track most of the time and enjoyed the vast majority of the sessions that I attended.

Tuesday’s keynote for the Exchange track was given by Kevin Allison, the General Manager responsible for the development and support of Exchange for both the on-premises and cloud platforms.

Kevin started by reviewing the current adoption rate of Exchange 2010, which he estimated to be roughly a year ahead of Exchange 2007 in terms of customer deployments. This didn’t come as a surprising revelation because Exchange 2007 was the first release in a new generation of the product and marked a significant change from the deployment and management techniques used for Exchange 2003. By comparison, Exchange 2010 builds on the architecture established by Exchange 2007 and is therefore a more familiar target for customers. In addition, Exchange 2010 has a more compelling range of functionality to convince customers to upgrade, not least the Database Availability Group (DAG).

Kevin said that most Exchange 2003 customers are now engaged in planning activities to upgrade their infrastructure. Of course, they now have a big choice to make: opt for a traditional on-premises deployment or move into the cloud with Office 365. He remarked that he has seen a different attitude to cloud deployments, where customers expect the mechanics to move much faster than traditional projects. Most large companies can take up to a year to plan, prepare, and deploy a new version of Exchange, whereas customers who sign up for Office 365 seem to expect that they can sign the contract on Monday and start to move mailboxes on Friday. It is true that new customers can go through a rapid onboarding process and be up and running on Office 365 in a matter of hours, but it is neither feasible nor practical to commence a cloud deployment so rapidly if you have to migrate thousands of mailboxes, even if the end target is to have all mailboxes eventually in Office 365 rather than using a hybrid approach where some mailboxes remain on-premises.

Factors that slow down cloud deployments include the need to plan for co-existence and to have high fidelity in data exchange between on-premises and cloud (for instance, you’d like users to be able to see free/busy information from both sides). Other major time soaks include preparing mailboxes to be moved (for example, making sure that your Active Directory is ready to synchronize with the cloud) and the network-constrained task of moving user mailbox data from on-premises servers across the Internet to servers running in Microsoft’s datacenters. No one has yet invented a method to transfer gigabytes of mailbox data in seconds! Mailboxes typically move in batches of a few hundred and each batch has to be prepared, moved, and verified. And then there’s the small matter of preparing users for a mailbox move.
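
To give a flavour of the mechanics, here’s a minimal sketch of how one batch of onboarding moves might be queued through the Mailbox Replication Service in a hybrid deployment, run from the Office 365 side against the on-premises endpoint. The host name, credential, and CSV layout are invented for illustration, so treat this as an outline rather than a recipe:

    # Queue a batch of moves against the on-premises migration endpoint
    $OnPremCred = Get-Credential contoso\MigrationAdmin
    Import-Csv .\Batch01.csv | ForEach-Object {
        New-MoveRequest -Identity $_.Alias -Remote `
          -RemoteHostName 'mail.contoso.com' `
          -RemoteCredential $OnPremCred `
          -TargetDeliveryDomain 'contoso.mail.onmicrosoft.com' `
          -BatchName 'Batch01'
    }

    # Verify progress for the batch afterwards
    Get-MoveRequest -BatchName 'Batch01' | Get-MoveRequestStatistics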

Interestingly, Kevin acknowledged that human resources are a very real limit. Microsoft simply doesn’t have all of the people that might be required to help customers move to Office 365 if a large part of their installed base makes the decision to move to the cloud. Planning and executing moves are human-intensive processes and some of the expectations about timelines that might be set in the sales cycle are unattainable.  However, on the upside, the need for help to ensure successful Office 365 deployments is a huge opportunity for consultants and resellers who might otherwise feel that Microsoft is taking away work that surrounds traditional deployments.

Kevin revealed that 65 million Exchange Online users are currently deployed across 8 datacenters using a single infrastructure, with over 6 million active users logging on daily. The figure for total users is made up from several sources, including Microsoft’s own internal users, Live@EDU, and customers.

Enterprise mail users put ten times the load on cloud infrastructure that consumers do. This isn’t surprising because consumers might generate five or six messages daily and never use some of the extended features of Exchange, whereas enterprise users connect with a variety of devices, are perpetually communicating, and use all manner of features that are simply uninteresting to a consumer, such as mailbox delegates or calendar scheduling assistants. As Office 365 rolls out, Microsoft will have more enterprise users to deal with, and the proportion of active users will grow, as will the demand that users exert on the infrastructure, which creates all manner of interesting challenges in maintaining Service Level Agreements (SLAs). Consumers don’t tend to care very much about SLAs (if they even realize that such a thing exists) but enterprise customers care deeply about the capability of the service provider to deliver a high quality of service according to the contract that was signed.

Kevin remarked that Microsoft’s need to keep such a massive infrastructure going requires a huge amount of ongoing work to identify and isolate faults more quickly through monitoring and event recording. The good thing is that the lessons that Microsoft is learning from running their cloud datacenters are incorporated into the code base used by on-premises servers. Kevin offered search as an example, as it wasn’t originally able to deal with hundreds of thousands of items held in the kind of very large mailboxes that are common today. Indeed, the temptation offered by 25GB cloud mailboxes is that no one will ever delete or file any message, so you end up with massive inboxes that become a single large searchable repository.

Enterprise customers are gaining from the work done in the cloud to remove the architectural and scaling limits that are encountered when dealing with millions of mailboxes. Further gains come from understanding how to make DAGs work well and cope with disaster recovery (the current deployment uses “pods” of two paired datacenters where mailboxes are stored in databases with four copies, two copies held in each datacenter) and from the tuning of components such as the Mailbox Replication Service (MRS), which is used to move mailboxes both from on-premises to the cloud and internally as databases are rebalanced.  Another interesting aspect is how Microsoft is amending their datacenter deployments so that they have truly resilient infrastructures where failures in other components such as DNS can’t have fundamental effects on Exchange Online.

Provisioning hardware into datacenters is a difficult exercise in capacity planning. Microsoft has to predict hardware requirements five months out to make sure that sufficient capacity is available to handle customer onboarding. Indeed, if a large proportion of the current installed base decides overnight to move to the cloud, Microsoft might have to turn customers away because they can’t provision servers quickly enough. Success has many different aspects!

Kevin observed that the experience that Microsoft has gained through the operation of Exchange Online has been invaluable in that it has helped them to realize where Exchange has become complex and needs to be improved to remove obstacles to deployment and operation. New wizards are being introduced to make tasks simpler for administrators and to hide complexity, while tools like the Exchange Remote Connectivity Analyzer help to debug and resolve issues. Tooling is critical going forward because the network exerts a huge influence over availability in the cloud and administrators need tools from Microsoft to understand the effectiveness and quality of the service being delivered from the cloud.

In terms of what’s next for Exchange, Kevin briefly talked about features that will appear in Exchange 2010 SP2 later this year. Three major updates were discussed:

  • Address Book Policies (ABPs), also known as “GAL segmentation”. This feature is described for Exchange 2007 in a white paper, but Microsoft knew that the approach taken (ACLs) would break in the Exchange 2010 architecture, which indeed happened in Exchange 2010 SP1. ABPs follow the same route as other policies (OWA, ActiveSync, etc.) applied to mailboxes in that an administrator can create policies that establish what objects in the GAL can be viewed by a user. The default policy is equivalent to today’s GAL – you can see everything. But administrators can narrow things down by establishing policies that might restrict a user to only being able to see GAL objects that belong to a specific department or country and then apply that policy to mailboxes using EMC or PowerShell (see the sketch after this list). In effect, address book policies create virtual views into the GAL that administrators can amend to meet company requirements. See the Exchange team blog for some more information.
  • Device overload: Microsoft acknowledges that it’s difficult for administrators to know how well mobile devices work with Exchange, given the mass of clients that can connect to the server. Recent issues have occurred in which recurring meetings were deleted by some clients, a problem that more interoperability testing might have surfaced. A new ActiveSync testing lab is being established to help improve interoperability between devices produced by different vendors, including RIM, Apple, Microsoft, and Google (Android).
  • Hybrid co-existence, aka rich co-existence. Kevin noted that “We do not expect large customers – over 2500 seats – to have everyone in the cloud all at once.”  A hybrid deployment therefore requires the on-premises Exchange organization to be tailored so that it can share data effectively with Office 365. Today, some 46 individual settings have to be changed to make rich co-existence work well. Exchange 2010 SP2 includes a wizard that will reduce the number of settings that require administrator intervention to 6, so the process of establishing co-existence will be much simpler.
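
To make the ABP discussion above a little more concrete, here’s a minimal EMS sketch of how the cmdlets might look. SP2 hasn’t shipped yet, so treat the cmdlet names and parameters as assumptions that may change; the Sales objects and mailbox name are invented examples:

    # Create a policy that confines a user's view of the GAL to Sales objects
    New-AddressBookPolicy -Name 'Sales ABP' `
      -GlobalAddressList '\Sales GAL' `
      -AddressLists '\Sales Users' `
      -OfflineAddressBook '\Sales OAB' `
      -RoomList '\Sales Rooms'

    # Apply the policy to a mailbox; the user then sees only the Sales view
    Set-Mailbox -Identity 'Jane Smith' -AddressBookPolicy 'Sales ABP'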

The session finished with a Q&A session. Kevin was asked about the future of email, which he acknowledged is hard to know because of the changing face of communication. He noted that Facebook and SMS updates are used far more commonly than email by kids today and wondered how they would communicate in 10 years’ time. Figuring out how to keep email relevant and useful is a real challenge that occupies the Exchange team as they plan future releases. Given the way that the product has evolved from the days when messages were an average 4KB in size and we thought that a 100-user server was large to the point where Exchange supports tens of millions of users in the cloud, I think that they have a fair chance of being successful.

– Tony


Office 365 reaches public beta


On April 18, Microsoft announced that Office 365 is now in public beta. This is a big thing because it marks the last major hurdle before Office 365 becomes publicly available, an event that is expected later this summer, just in time to become a huge factor in the decision-making process that companies running Exchange 2003 have to go through to determine their best path forward.

Why focus on Exchange 2003? Well, for one thing it is the version that the majority of the Exchange installed base runs today. For another, it’s old and approaching the end of its formal support life. The choices that face Exchange 2003 customers are:

  • Migrate to Office 365
  • Migrate part of the company to Exchange 2010 and part to Office 365; this is sometimes referred to as a “hybrid” deployment and requires extra planning to put the necessary federation components in place to allow a high degree of interoperability between the two sides.
  • Migrate all of the company to Exchange 2010

You’ll notice that the much-loved word “migrate” features in all of these options. There’s no getting away from the fact that work will be required to upgrade any email infrastructure running Exchange 2003 to a future platform. That infrastructure includes clients (Office 365 doesn’t support Outlook 2003 so that’s a factor to consider), hardware (no matter what option is taken, some servers will remain), and network (a small but important point as the network team rejigs the network to cope with the demands of Internet-centric activity). All migrations need careful planning and lots of hard work to be successful and anyone who tells you that moving to Office 365 will be easy is smoking strange but compelling material that is also mind-altering.

I’m not sure that companies running Exchange 2007 or Exchange 2010 will rush to Office 365 in the same manner. They already run modern versions of Exchange and there’s no pressing need to change because of a looming support deadline. These companies have the luxury of waiting to see how Office 365 performs in the harsh glare of production.

I think that Microsoft will strike a rich vein of interest in Office 365 from certain categories of customers:

  • New companies – why go to the bother of setting up an infrastructure when you can have Microsoft do all the work?
  • Small to medium companies that struggle to keep IT running (maybe because they have too many applications and too few people); it just makes sense to hand over email to Office 365 and gain SharePoint and Lync as added bonuses.
  • Other companies that are willing to cede control over email to use a standard platform where they are “one amongst many”.  In essence, they trade the flexibility of a dedicated on-premises deployment for the promises extended by email in the cloud, albeit having to accept the constraints that arise from the use of a standard platform. This transition has occurred before as other utilities attained maturity – the move to use a standard electric socket for instance.

If I am right, then Microsoft will face the challenges that flow from success such as the pressure to deploy hardware within their datacenters, to help customers prepare for migration, and to perform the actual migrations. A huge amount of mailbox data may be about to flow into Microsoft’s datacenters!

Office 365 isn’t for everyone. Custom platforms are still best when you want control and I don’t see the end of on-premises Exchange anytime soon. For example, anyone considering Office 365 has to be very sure that the complete ecosystem that surrounds Exchange can move into the cloud. Can applications that interface with Exchange, such as a PeopleSoft HR application that automatically provisions a new mailbox when a new employee is hired, continue to work? Will home-grown or third-party applications that work with Exchange data continue to function? Will all of the departments in the company be able to use Exchange as effectively as before? Email is pervasive within companies and the lesson of every migration involving Exchange is that great care has to be taken to understand the full scope of the migration rather than just looking at a simple upgrade of server technology.

I’ve been using Office 365 for a couple of months now and can report that it is a solid offering. In fact, I wonder how the folks in Mountain View will respond as Google Apps now looks pretty tired and dated, not to mention handicapped by a user interface that only its inventors could appreciate.

An understated but massive advantage that Microsoft has in its competitive efforts against Google Apps is the close working association between Office 365 and Office, something that you appreciate immediately: configuring Outlook 2010 to connect to Office 365 takes just a few seconds, and the user experience thereafter is exactly the same as if you were connected to a mailbox on an on-premises Exchange 2010 server. Gmail just can’t compete here. As an email system Gmail gets the job done but putting it alongside Office 365 is now like comparing a Fiat 500 to a Maserati. Both cars get you from A to B, but I know the one I’d prefer to drive.

Google is smart and capable so it will be interesting to see how they up their game in future versions of Google Apps – competition is a wonderful factor in the encouragement of innovation.

I had a great time discussing these and other factors in the decisions that surround Office 365 at “The Experts Conference” in Las Vegas today. I imagine that these conversations will continue over the next few months. Expect more developments in the Office 365 story soon!

– Tony


Scaling connections with Exchange 2010


It’s an undeniable fact of being an author of a book on a technical topic that you cannot cover everything in the number of pages that a publisher allocates. Sometimes this causes you to cut material that you think is perfectly good but is less interesting in the grand scheme of things. On other occasions the page limit provides a useful excuse for not covering something that should really have been in the book. Such is the case for the addition of Kerberos authentication for MAPI clients in Exchange 2010 SP1, which I missed out when I wrote Microsoft Exchange Server 2010 Inside Out.

The gap in our collective knowledge was plugged on TechNet. However, that nugget of information was probably overlooked in the mass of new data released around Exchange 2010 SP1 so it’s a good thing that the redoubtable Ross Smith IV has now blogged on the topic.

Enabling Kerberos authentication for MAPI clients is not for everyone. For one thing, it only works for domain-joined Outlook clients that connect inside the corporate firewall and does nothing to help with scaling for Outlook Anywhere connections, if that’s your interest. The need for an alternative to NTLM authentication arises from the fundamental change made in the Exchange 2010 architecture when the MAPI endpoint was moved from the mailbox server to the Client Access Server (CAS).

Lots of goodness results from this change, not least the huge transformation of a mailbox database into something that is truly portable between Exchange mailbox servers to provide the foundation for the Database Availability Group (DAG) and the whole high availability story that Microsoft now proudly proclaims. However, as Ross points out, the relocation of the MAPI endpoint has an effect on Outlook clients that previously connected to Exchange 2007 with Kerberos (a client-side setting) as the Exchange 2007 mailbox server that serviced the connection no longer acts as the MAPI endpoint. Instead, the connection is handled by a CAS server, which is highly unlikely to have the same name as the previous mailbox server. In any case, the database that holds the mailbox that Outlook is interested in might now be managed within a DAG and who knows what mailbox server it is currently running on! Everything continues to work because Outlook will revert to NTLM authentication when its attempt to use Kerberos fails, but then you can run into some scalability issues. You are unlikely to see these issues in test environments and indeed, may not encounter them in production unless the connectivity load overtaxes the infrastructure.
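
Ross’s post walks through the full procedure. In outline, you create a shared Alternate Service Account (ASA) credential, deploy it to the members of the CAS array with the script supplied in the Exchange scripts directory, and register the namespaces that clients use as SPNs against the account. A minimal sketch, in which contoso.com and the EXCH-ASA computer account are invented examples (check Ross’s post and TechNet before trying this against anything real):

    # Push the ASA credential to every member of the CAS array (run from the
    # scripts directory on an Exchange 2010 SP1 server)
    .\RollAlternateServiceAccountPassword.ps1 -ToArrayMembers outlook.contoso.com `
      -GenerateNewPasswordFor 'contoso\EXCH-ASA$'

    # Register the shared namespaces as SPNs on the ASA so that Kerberos
    # tickets are issued for the array name rather than an individual server
    setspn -S http/outlook.contoso.com 'contoso\EXCH-ASA$'
    setspn -S http/autodiscover.contoso.com 'contoso\EXCH-ASA$'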

An example is in order. It comes from an Exchange 2007 deployment but serves the purpose of illustration. Our project was somewhat ground-breaking as it involved using Outlook Anywhere to connect some 90,000 clients across the Internet to servers in a hosted datacenter – all of the traditional costs involved in outsourcing deals of running large network pipes between customer and hosting company were eliminated using this approach. It’s very similar to what happens when you connect to something like BPOS or Office 365 but we were a little ahead of the game in that many of the scalability limits that have subsequently been discovered and worked around were still unknown.

In any case, we had deployed an array of CAS servers behind another array of ISA servers to handle the incoming connections. All worked well until we approached a load broadly equivalent to 30,000 clients. At this point we went into meltdown, servers failed, and clients failed to connect. A war room involving the customer, Microsoft, and the hosting company swung into action and many attempts were made to resolve the issue. Additional ISA and CAS servers were installed, different protocols were isolated and routed to specific CAS servers, all manner of debugging techniques were used and crash dumps examined – all to no avail.

The problem persisted for nearly ten days until someone noticed that incoming authentication requests were not being handled smoothly by the domain controllers. The default number of secure channels assigned to handle NTLM authentication (as used by Outlook Anywhere connections) is 2 and this proved totally inadequate for our purposes. The number is controlled by the MaxConcurrentAPI value (this blog provides a good insight, but there are many other war stories that can be found using your favourite search engine).

Increasing the value of MaxConcurrentAPI (I could never understand why this setting bears such a name) in the system registry on the CAS servers to 4 cured our problem and allowed the war room to disband. The lesson that I took away from the event was that it is really hard to predict how high volumes of client connections will be handled by an infrastructure and that there are precious few tools to help test connection load. Changes subsequently made by Microsoft have helped. For example, the original Outlook 2007 client generated multiple unused and unwanted connections that stressed the infrastructure so Microsoft removed these connections in Outlook 2007 SP2. And now we have Exchange 2010 SP1 providing another way to avoid secure channel overload by using Kerberos authentication instead.
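
For reference, the value lives under the Netlogon parameters key in the registry. A minimal sketch of the change we made, assuming the standard key location (Netlogon has to be restarted to pick up the new value, so test before touching a production CAS server):

    # Raise the number of concurrent NTLM authentication calls that Netlogon
    # will pass through to a domain controller (we went from 2 to 4)
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters' `
      -Name MaxConcurrentApi -Value 4 -Type DWord
    Restart-Service Netlogon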

We live and learn through experience. I think most of the major connectivity challenges for large-scale Exchange deployments are now well understood and documented. Now that I’ve said that, I’m sure there will be some who inform me that I know nothing as there are some brand-new sparkling connectivity sink-holes for us to fall into…

– Tony


Coupons for Exchange 2010 Maestro training now available


I guess there are coupons for just about everything these days so I shouldn’t have been surprised to see our marketing partners at Penton Media releasing coupons for the Exchange 2010 Maestro training events that Paul Robichaux and I are running in San Diego, London (England), and Greenwich CT.  Click on this link to view a video of me explaining why I think Maestro training is a good idea.

And if you’re interested in signing up to attend one of our seminars, head over to the Facebook page for the events to reveal the codes and then go off to our Maestro site to get more information about what we’re doing.

Now I shall head off to the splendidly named “The Experts Conference” (TEC) where I am keynoting next Monday morning. Click here to see the full agenda for the Exchange stream. Should be an interesting event as there are lots of great people lined up to attend. It will be interesting to see how TEC compares with TechEd (big, flabby, and too much marketing) and Connections (small, personable, but needing some extra voom). Expect a full report in due course. And if you’re at TEC, you can contact Paul, Brian Desmond, or myself for a different coupon code… it may not do anything different, but it will at least come from one of the protagonists!

I’ll be doing a signing of my Microsoft Exchange Server 2010 Inside Out book at 6pm on Tuesday. O’Reilly is one of the TEC sponsors and has provided 25 copies of my book for free distribution on a first-come, first-served basis. Other O’Reilly (or Microsoft Press) authors will be there too – so for example you could pick up a signed copy of Active Directory: Designing, Deploying, and Running Active Directory from Brian Desmond if you’re really fast. Other books that are available include Windows Server® 2008 Terminal Services Resource Kit by Christa Anderson, and I think that Anil Desai will probably have the MCTS (Exam 70-643): Configuring Windows Server 2008 Applications Infrastructure self-paced training kit along to sign and give away. Finally, Dux Raymond Sy will be there to sign SharePoint 2010: Best Practices for Upgrading and Migrating. Quite a library!

I’ll only be at the book signing (located at the foyer outside the Red Rock Ballrooms) for 15 minutes or so as I will have to go to the airport for my flight to London and then back to Dublin, so if you want a book please come along early!

– Tony


Doing the right thing on Twitter and Facebook


I realize that many of my friends outside the U.K. won’t be habitual readers of “The Telegraph” so might have missed the opportunity to update their social skills through the release of Netiquette: Debrett’s Guide to Twitter and Facebook. Of course, they also probably don’t know that Debrett’s has a long history of telling the British upper class how to behave in all manner of circumstances and has provided guides to such esoterica as how to make polite conversation during dinner (when your dinner partners are boring) and the social hierarchy of the British nobility. Their background and history surely gives them unique competence to comment on the social gaffes that can afflict anyone who communicates through Twitter or Facebook.

Posting messages when sober seems like an excellent starting point, especially if you care to comment about your employers. Being polite in 140 characters seems excellent guidance and it is important to maintain standards in spelling and grammar too. After all, there’s no excuse for falling into the mire of text speak just to appear “cool” or to impress some teenagers. The guide also warns about those who post just a tad too often as they might become boring. Becoming boring was never a good thing, even in the days of extended dinners over brandy and cigars, and Oscar Wilde would fully appreciate the advice to keep contributions snappy and to the point.

One point of etiquette that’s missing in the articles is guidance for appropriate profile photos or what should stay private and not be shared for any reason. For example, is it socially acceptable to pose holding a blow-up doll with a cigar dangling from your lips, as featured in a photo that an acquaintance of mine memorably used to jazz up their Facebook account some years ago?  Or how often should one broadcast one’s location to the world (and who really cares?). Or perhaps whether it is appropriate and respectful to keep one’s head down during a talk at a conference or seminar to monitor incoming tweets and posts? Believe it or not, it is possible to notice the dipping heads, flying fingers, and lit screens from a podium when speaking at a conference. These days, instead of applause, you know when you’ve made a good point in a talk when the audience focuses on their smartphones in a race to broadcast to the world, which is kind of weird.

Another interesting article from the Telegraph suggests twenty types of tweets that should never be posted as they add little value or simply clutter up the information flow. Amongst the tweets to be avoided are frequent updates about sporting events, “what did I miss” questions, one-word tweets, and the vapid “I’m bored” update to the world. I suspect that few avid tweeters will take any notice of this advice and some will continue to pollute the Internet with contributions that could be erased immediately with no loss to anyone.

I am sure that Debrett’s will continue to expand their attempts to bring etiquette to the online world. If you want to stay up to date with their tweets, you can find them at @Debretts or read their musings online.

While tweets and Facebook updates are relatively new methods of communication, we’ve needed help in communicating electronically since the first email was sent between two PDP-10 computers in 1971. If you are looking for further help to refine how to communicate properly, you might consult Netiquette: Internet Etiquette in the Age of the Blog or How to Practice “Netiquette” – Being Courteous Online (UK Kindle only). On Amazon.com, I found recent titles such as Netiquette: A Student’s Guide to Digital Etiquette (2010) or NETiquette (On-Line Etiquette): Tips for Adults & Teens: Facebook, MySpace, Twitter! Terminology….and more. It seems like this is a reasonably busy area for commentary so clearly there are all manner of online gaffes currently happening that need to be eradicated!

– Tony


Microsoft reveals the truth about single-role servers


In the April 8 post on the Exchange team’s blog, a clear direction is given that single-role Exchange servers are not the preferred starting point for designs. In fact, a rather bold statement is made:

“… always start design discussions with multi-role, and that is the recommended solution from the Exchange team.”

There’s a lot of good information and recommendations in the post that I totally agree with and some that I don’t. For example, I don’t agree with the notion that you should start with RAID-less JBOD direct-attached disk configurations simply because most shops don’t have the time, energy, or operational efficiency to monitor JBOD disks and take action when failures occur – and they will. It seems to make a lot more sense to plan for a degree of robustness in the solution up-front and take advantage of all the smart technology that companies such as EMC, NetApp, and HP include in their disk controllers today. Sure, you can go cheap-and-cheerful with JBOD if you like and get the warm glow that results from a successful deployment, but be prepared for “interesting times” when disks fail over the course of a server’s lifetime. Of course, if you’re a consultant who parachutes in to do a design and departs immediately upon payment, you don’t need to worry about long-term operational robustness, but that’s getting away from the point I originally started to discuss.

When Microsoft introduced Exchange 2007 way back in 2006, they made a big fuss about the wonders of single-role servers and the splendid code isolation that they had achieved by giving administrators the chance to install just the code required to do the job – and no more – on their Exchange servers. Less code was exposed to hackers and speedier performance was assured because excessive instructions and data couldn’t get in the way. The horrible mess of Exchange 2000 and Exchange 2003, which of course are multi-role servers and come equipped with all the code necessary to do whatever task is demanded of them, was discarded in a wonderful embrace of the notion of “less is better”.

I just wonder what’s happened in the five years since to make Microsoft recant and realize that multi-role servers are actually very flexible and the right option to begin with for all deployments. Of course, the default installation mode for Exchange 2007 and Exchange 2010 has always been to offer to install a multi-role MBX/HT/CAS server, so maybe the fuss and bother about the joy of single-role has been so much smoke and mirrors? As outlined in my own post of March 1, I suspected as much…

No, you say. Microsoft would never do such a thing to their faithful community of Exchange administrators. So there’s got to be another reason. I suggest that the answer is encapsulated in another sentence in the blog post:

“In Exchange 2007, we did not support the Client Access or Hub Transport roles on clustered servers, so there was no way to have a high availability deployment…”

The penny drops! Single-role deployments are useful and valuable in some specific circumstances, usually encountered in very large deployments, but the notion that single-role is wonderful is so much marketing: powerfully pungent brown bovine emissions thrown out to disguise the fact that Exchange 2007 could only ever aspire to be a partially highly-available application because the HT and CAS roles can’t be installed onto a server that operates in a Cluster Continuous Replication (CCR) or Standby Continuous Replication (SCR) configuration. The situation is very different with Exchange 2010 because the Database Availability Group (DAG) supports multi-role servers as well as dedicated mailbox servers, so you can include all of the necessary servers in a single highly-available entity (in Exchange terms anyway – there are other parts of the infrastructure that also need protection before you achieve true high availability).

Now that we’ve cleared up the confusion, we can consider what happens from this point. According to their blog (second only to TechNet in terms of accuracy, clarity, and insightfulness) Microsoft’s best practice is now firmly focused on multi-role servers. We can therefore anticipate that this trend will continue and that future engineering efforts will support this position.

Single-role servers are no longer needed unless you have a specific purpose for them. For example, virtualizing HT and CAS servers has always seemed a pretty good idea to me because these server roles are essentially stateless and it seems practical and logical to isolate these servers on virtualized machines for large deployments. But for smaller deployments built around a few servers in a DAG, follow the excellent recommendations of the EHLO post, keep everything simple, and go with multi-role. You know it makes sense.
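
If you want to act on that recommendation in an unattended installation, a minimal sketch of the setup command for a multi-role Exchange 2010 server looks something like this (run from the installation media, and check the setup documentation for the prerequisites that apply to your environment):

    # Install the Mailbox, Hub Transport, and Client Access roles together
    .\setup.com /mode:Install /roles:Mailbox,HubTransport,ClientAccess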

– Tony

For more information about how to configure and deploy Exchange 2010 SP1 servers, including how to approach many tricky design problems, see Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk. The book is also available in a Kindle edition.


Catalan triumph in Barcelona


It’s been a big rugby weekend in Europe as the Heineken Cup quarter-finals were played in Ireland, England, and Spain. Despite Spain’s relatively low status in world rugby, it hosted two games, both moved from France. The Olympic stadium in Barcelona, where I was the TMO (television match official), was the location for the Perpignan vs Toulon game yesterday (April 9) while Biarritz and Toulouse met in San Sebastian on Sunday (and Toulouse won in extra time). Moving the games recognized the deep affiliation that Perpignan has with the region of Catalonia, while Biarritz has connected with its Basque roots six times over the last few seasons by hosting Heineken games in San Sebastian.

Our game was a blast. 55,000 people (the famous FC Barcelona soccer club sold some 15,000 tickets and Toulon brought 10,000 fans) assembled in the stadium at the top of Montjuïc to attend. Suffice to say that the traffic was crazy and most who came by car found that they had to park quite a distance away and then had to climb in 26°C heat to get to the stadium. The views over Barcelona were spectacular but probably not enjoyed by those who struggled to the top.

The Olympic stadium was built for the 1992 games. It is an attractive bowl with the pitch surrounded by a running track. The well-watered and immaculately tended pitch is normally used for soccer and is a little short for rugby with the last few metres of the in-goal area covered in artificial turf. As is often the case when soccer grounds are used for rugby, the posts were not as high as you’d normally find for games involving top-class teams.

Irish Refereeing team in Barcelona: Tony Redmond, Trevor Collins, Dudley Philips, Alain Rolland, Peter Fitzgibbon, Leo Colgan

The first half was disappointing with typical cup rugby being played as both teams probed for weaknesses. The half finished with Toulon ahead by 11-6 after scoring a block-down try late in the half. Perpignan had struggled with their discipline and had two yellow cards in the half (dangerous play and a high tackle) and Toulon looked well positioned to win.

The second half was completely different as Toulon conceded ground, position, and penalties under ferocious pressure from Perpignan. I had one try to decide upon, swiftly followed by another when I was asked to look at the conversion following the try. The posts were low enough for the ball to pass over the top and make the assistant referees uncertain whether the conversion was good. I’ve never had to arbitrate on a kick before and the task was made a little more difficult by the “interesting” angles that French TV cameras used to track the ball’s progress. However, the balance of probability overwhelmingly indicated a good kick and that was the verdict.

Perpignan eventually won 29-25. The scoreline doesn’t tell the full story as Toulon nabbed a try at the death. They’ll be disappointed that the experience in their ranks couldn’t help them to weather the storm of Perpignan pressure and they’ll also be unhappy at the way that their weak discipline resulted in a penalty count of 10-4 against them. Some of the penalties were truly silly and not what you’d expect from professional rugby players.

Great joy ensued at the end of the game as the stadium erupted in a sea of red and yellow and noise as the Perpignan team did a lap of honour. They now take on Northampton in the semi-final in three weeks and have a sporting chance of making the Heineken Cup final for the second time (they lost to Toulouse in 2003 in Dublin).

Alain Rolland gives his H-Cup jersey to a young boy in the crowd

We eventually made it back to the dressing room, but not before Alain Rolland received great applause as he came to the entrance down into the changing areas. This reflects his status as a top international referee and the affection of the French for any referee who communicates to players on the pitch in French. Alain stripped off his H-Cup jersey and handed it over to a boy in the crowd – I’m sure this was a great souvenir of the occasion for the recipient.

Perpignan did a good job as hosts after the game and we passed a pleasant hour or so in a reception room deep in the stadium before retreating to our hotel. This was the Hilton Diagonal Mar, a nice hotel reasonably close to the ground. Like all the other major hotel chains, Hilton hotels can look much the same anywhere and this one is very much like any other business hotel, with marble everywhere being the rule of the day. It also boasts a staggeringly expensive rate for Wi-Fi: EUR29 for 24 hours. The Wi-Fi connection is provided by Swisscom but it’s clearly special as well as expensive as I was unable to connect over the two days.

On the upside, the Hilton’s staff were a pleasure to deal with, especially the staff in the dining room who fed us late on Friday night. On the downside, there were many details in my room that deserved more attention. For example, white paint drips all over the strip that separated the carpet from the marble tiles near the bathroom, a noticeable hole in the wall where the door handle had made an impression, exhausted batteries in the TV remote control (surely something that’s easy for housekeeping to check daily!), and worst of all – no walk-in shower in a bathroom that measured approximately 4.5m x 2.5m. I can’t think of the last top-grade hotel in a major city that had just a shower in the bath…  Call me picky, but with so much space to play with I cannot fathom why the architect didn’t include a walk-in shower… clearly it was much more important to have that slit window!

What! So much space and no walk-in shower?

In any case, the hotel did its job well and we were a happy bunch leaving there to get back to BCN airport for the flight home to Dublin. Now back to the real job…

– Tony


The changing need for backups


New technology and different business requirements are seminal events that cause technologists to ponder the well-held tenets of their trade. The need for Exchange backups is one such instance. The technical developments are the elimination of streamed backups in Exchange 2010 (only VSS-based backups are now supported), allied to the availability of features such as archive mailboxes, the march towards 50GB primary mailboxes, and a range of new compliance and discovery features that are built into the product. These developments create a terrain sufficiently different from what has gone before to question whether traditional backup strategies are still appropriate for Exchange.

Backups serve many purposes. The first and most obvious is to provide a warm blanket feeling for an administrator who knows that their data is safe because it’s been copied to some media that can be taken off-site and is available to be restored if a hardware problem occurs. The second is as a source for point-in-time (PIT) recovery should the need arise. The classic example here is to recover some information that a user has subsequently deleted, perhaps in an effort to cover their tracks. The requests to recover data from backups usually come from legal sources (internal or external) in the form of discovery actions. Other benefits exist such as the ability to satisfy audit requirements by removing backup media to a remote location, but the two purposes outlined above are the most common.

The need for a company to respond to an email discovery action is far more common today but these requests are not new; their popularity simply reflects the growth of email as a method for business communication that has supplanted the letter, telex, and fax. The first criminal investigation that I was aware of where backups were required was a U.K. Serious Fraud Squad inquiry into some financial offences in 1989. In this case, the investigators required backup tapes to be restored on a VAX/VMS system so that the ALL-IN-1 accounts of the people that they were interested in could be reviewed. There was no notion of dumpsters or single item recovery. Items of interest were printed off and provided to lawyers, who eventually took the decision whether something was important.

I was also briefly involved in an inquiry in Australia around 1996 that looked at the circumstances surrounding the crash of a small commuter airplane some years before. In this case, the QC (Queen’s Counsel – lawyer) who led the inquiry wanted to know whether the operating airline had applied the necessary maintenance procedures to an airplane that had crashed. The request from the investigators was to review email sent by 30 users over a period of 3 months, later reduced by the QC to a list of ten users for four weeks. Once again, daily backups had to be restored from tape to a VAX/VMS system to search for interesting ALL-IN-1 messages in the target accounts and a vast amount of 132-column wide line printer paper was consumed to capture information for legal review.

While backups serve to make data available on a PIT basis, it’s an indisputable fact that taking and storing backups is an expensive business in terms of people cost, media, time, and storage. The sad fact is that most of the data held in user mailboxes is simply useless in terms of business value and richly deserves to be consigned to the byte wastebasket as soon as it’s sent. Think of the interminable to-and-fro interchanges between users discussing the issues of the day (often badly, never insightful, usually dreadful) coupled with the great dross of read receipts, NDRs, calendar acceptances, and all the other rubbish that clogs up mailboxes. None of this needs to be retained but all of it is lovingly preserved on backup tapes.

Some companies attempt to address the problem of data preservation by decreeing that users should delete all email after a month. The efforts of these companies are, sadly, invariably undermined by the single salient fact that users can create and populate PSTs to their heart’s content unless administrators exert control over PSTs through GPOs. Even then, users are very good at circumventing the best attempts of administrators to force them to do anything that a user doesn’t want to do, and few companies have ever succeeded in implementing a watertight retention policy that users comply with all of the time.
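
For anyone curious about what that control looks like, here are the registry values that sit behind two commonly-used Outlook GPO settings (Outlook 2010 paths shown; in practice you’d deploy them through Group Policy with the Office ADM/ADMX templates rather than by script, so treat this as a sketch):

    # Prevent users from adding new PSTs to their Outlook profile
    $key = 'HKCU:\Software\Policies\Microsoft\Office\14.0\Outlook'
    New-Item -Path $key -Force | Out-Null
    Set-ItemProperty -Path $key -Name DisablePST -Value 1 -Type DWord

    # Allow existing PSTs to be opened but stop new data from being added
    New-Item -Path "$key\PST" -Force | Out-Null
    Set-ItemProperty -Path "$key\PST" -Name PSTDisableGrow -Value 1 -Type DWord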

In fact, if a user is intent on committing some sort of offence, they will take steps to remove all traces of their actions from their online mailbox and will store anything they want to keep in a PST, safely tucked away from the view of the administrator and immune from discovery. Or, if they need to keep a copy of email “just in case”, a user might print it off and keep it safely hidden in a file cabinet. The advent of litigation hold in Exchange 2010 helps, but only after the point where an administrator runs Set-Mailbox -LitigationHoldEnabled $True (on an Exchange 2010 server). Everything else before the hold is established is forgotten, if not forgiven.

Other companies cope by saying that all mail should be retained and deploy sophisticated archiving systems that “scrape and stub” to remove messages into the archive and leave stubs pointing to the archive behind in the mailbox. Effective, but costly and prone to the same effects of user-driven movement of sometimes important information into PSTs. And of course, keeping data is a double-edged sword because if the data is available it can be discovered and is therefore both a potential defense and problem for a company. In fact, you can argue that having an online archive is far worse for a company than having the data available on backups because backup tapes tend to be recycled after a set period (30, 60, or whatever number of days seems appropriate) after which the data is inaccessible. On the other hand, an online archive remains fresh and discoverable by lawyers who care to look unless you deploy retention policies to remove items automatically after they reach a certain age.
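
Such an age-based clean-out is easy enough to express with the Exchange 2010 retention policy cmdlets. A minimal sketch with invented names (items removed this way stay recoverable for the deleted item retention period):

    # Create a tag that removes anything older than a year from the mailbox
    New-RetentionPolicyTag -Name 'Delete after 365 days' -Type All `
      -AgeLimitForRetention 365 -RetentionAction DeleteAndAllowRecovery

    # Bundle the tag into a policy and apply it to a mailbox
    New-RetentionPolicy -Name 'Age-based clean-out' `
      -RetentionPolicyTagLinks 'Delete after 365 days'
    Set-Mailbox -Identity 'Jane Smith' -RetentionPolicy 'Age-based clean-out'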

To a certain degree, things are much easier now, especially with the new litigation hold and discovery search features introduced in Exchange 2010. PSTs remain the great unwashed (or rather unwanted) of the discovery fraternity as their existence (and the data that they hold) is usually outside the immediate control of central administrators. The good news is that I hear of some third-party software vendors who are busy figuring out solutions to the discovery and control of PST content through intelligent agents that can seek PSTs – even those located on PC local drives – and apply policies to the items held in the PSTs, including the ability to delete items over a certain age or move them into archive mailboxes on a server. We’ll see how these solutions work when they are released and are used in the harsh light of production!
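
As an aside, the discovery search feature mentioned above is exposed through EMS as well as through the Exchange Control Panel. A minimal sketch with invented names; the results are copied to the discovery mailbox that Exchange 2010 creates during installation:

    # Search two mailboxes for project-related mail and copy the results
    # to the default discovery mailbox for review
    New-MailboxSearch -Name 'Project X review' `
      -SourceMailboxes 'Jane Smith','John Doe' `
      -SearchQuery 'Project X' `
      -TargetMailbox 'Discovery Search Mailbox'
    Start-MailboxSearch -Identity 'Project X review'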

Given all of this, it’s interesting to hear that Microsoft IT has eliminated backups in favour of running four database copies within a Database Availability Group configured with a 30-day single item recovery (SIR) period. In other words, Microsoft has deployed sufficient database copies within its DAGs to eliminate the need for backups that would traditionally protect against hardware and software failure. Features such as single page patching take care of page-level corruption and block-mode replication means that database copies are as up-to-date as possible. The 30 day SIR period allows users to recover items that they have deleted in error without resorting to frantic appeals to administrators to restore backup tapes. After 30 days users are plumb out of luck, but let’s face it, if you “remember” that you deleted something important after 31 days, that item might just not be as important as you think. Interestingly, Microsoft IT has eliminated the use of lagged database copies because they don’t see the value of these copies.
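
On an individual mailbox, the settings that underpin this approach are straightforward to apply. A minimal sketch mirroring the Microsoft IT configuration on a single (invented) mailbox:

    # Enable single item recovery and keep deleted items for 30 days so that
    # users can retrieve their own mistakes without a restore from backup
    Set-Mailbox -Identity 'Jane Smith' -SingleItemRecoveryEnabled $True `
      -RetainDeletedItemsFor 30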

The approach taken by Microsoft IT won’t meet everyone’s needs. It’s too stark for some, too cutting-edge for others. It’s also true that it is tremendously easy to embark on such a radical approach when you have the collective wisdom of both the Windows and Exchange development groups close at hand should anything go wrong. However, the sheer fact that Microsoft IT uses this mechanism is sufficient to provoke the question whether others should do the same – or use a modified version of the approach. After all, Microsoft IT doesn’t always get it right and sometimes their deployment techniques are artistic rather than practical (for others). Anyone remember the seven-node WolfPack cluster that Microsoft deployed with Exchange 2003 where four nodes were active, two used for administrative activities, and one for backup? How many other similar deployments occurred: zero. How many people wanted to do the same: many… Makes you think!

The bottom line is that backup strategy deserves to be reviewed as technology evolves. It needs to be efficient, effective, and meet the company’s business needs. Simply keeping to a tried and trusted approach may give administrators a warm feeling that their data is being protected, but it’s possibly not the best way to proceed in a world where Exchange 2010 SP1 is available.

– Tony

Want to read more compelling and insightful information about Exchange 2010 SP1? Well, get yourself a copy of Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk. The book is also available in a Kindle edition. Other e-book formats for the book are available from the O’Reilly web site. And if you want to argue the case in person, come along to one of the Exchange 2010 Maestro Seminars that we’re running in 2011. Your brain may be fried, but you’ll have fun.


Review: Spring 2011 Connections


The Spring 2011 Connections event took place at the JW Marriott Grande Lakes complex in Orlando from March 27-30. I like this event because it’s a one-stop opportunity to get a broad overview of what’s happening in different aspects of the Microsoft-centric industry – ASP.NET, Silverlight, SharePoint, and of course, Exchange, where solid speakers such as Michael B. Smith, Jim McBee, and Mike Crowley share their experience and knowledge of what really works and what doesn’t when software is exposed to real-life deployments.

You can argue that TechEd does the same job and that’s certainly true, but I find TechEd to be too big, too dominated by Microsoft, and far too marketing-driven to be as valuable as it once was in the era when technical insight could only be gained by attending events like TechEd to listen to developers and others “in the know” who went past the words written in the technical manuals to explain how software and hardware really worked. Of course, today’s environment is much different as there’s a mass of technical information available online for anyone to interrogate, if they have the time to search (and discard the dross that masks the good stuff). TechEd seems to have lost its mojo in the last while and that’s the reason why I feel that I don’t need to attend it any more.

Connections has been going for nearly a decade now and it’s the only really independent event of its type that I know of. The splendidly named but much smaller “The Experts Conference” (TEC) prides itself on the depth of its technical content and I am looking forward to my first visit to TEC in Las Vegas later this month. TEC originated as an event run by NetPro to support their specialized business around Active Directory tools; today TEC is run by Quest and has a role in supporting their much wider Windows-centric business. It will be interesting to see how Quest runs TEC and how independent the sessions contained in the agenda are (I can promise that my keynote will be independent!). I hope to have a positive experience and be able to compare and contrast TEC with Connections then.

Getting back to Connections, Mark Minasi’s keynote was easily my favourite session. Mark is a very experienced speaker who normally talks about the minutiae of Windows, especially aspects of networking that you’d really prefer not to know anything about but should understand if you want a smooth deployment. In this case, Mark took an interesting approach to everyone’s topic du jour by looking at the promises of cloud technology through an economist’s eye.  Mark’s background and experience working as an economist in Washington DC, allied to his knowledge of Windows Server, give him the credentials to comment. As usual, Mark included a number of crowd-pleasing one-liners such as “numerological proctology”, which he used to describe some of the accounting gymnastics advanced by cloud supporters to convince companies that the cloud will deliver great financial benefits if only they’d take the plunge. He called articles on cloud computing “a weapon of mass destruction in the hands of a CIO”, which I thought was a pretty good way of describing the havoc that can ensue if any technology is embraced without due diligence and alignment with business needs.

After enjoying the zingers that Mark launched against flaky economics, the audience was left with a great piece of advice when he told IT departments that they should have a solid grasp of the essential metrics that describe the services that they provide to their customers today, together with data that validates the cost of those services. I think that this is very wise because there’s no way that you can have an intelligent discussion about cloud technology without knowing how well the in-house IT department works today and how much it costs. Mark also noted that if IT departments don’t know their costs and can’t measure how well they serve their customers (in the eyes of the customers rather than through an over-inflated opinion formed by the IT department), then the discussion about cloud technology will be dominated by cost and savings data produced by those who want to sell the technology, and that might not lead to a good outcome for either the business or the IT department.

Apart from the sessions, I like being able to use a compact-sized event like Connections to catch up with people to understand better what’s happening in the industry. The trade show was busy and people seemed to be doing business, even if some had to do so in the shadow of the video about the E5000 messaging system described here that was playing on continuous loop on the HP stand. I took the chance to meet with a number of companies that I have been helping with technology strategy, but can’t really say any more on that topic for obvious reasons.

Not even the best make-up artist can do much with this face...

On a slightly more public note, I had the chance to speak with journalists such as Matt Gervais of SearchExchange.com and recorded an interview with Richard Campbell of RunAs Radio. I also linked up with my friends at Penton Media to make a video pitch for the Exchange 2010 Maestro 3-day seminars that Paul Robichaux, Brian Desmond, and I are running in San Diego, London, and Greenwich during 2011. Penton also hosted a very nice get-together for the authors who write for Windows IT Pro magazine.

I don’t pretend that Connections is perfect because it’s certainly not. However, it’s a nice size and it’s run by nice people who take care of their speakers and do their absolute best to make sure that everyone gets value from their attendance. I enjoyed the Orlando event thoroughly and am looking forward to Fall 2011 Connections (Las Vegas, starting on October 31) plus the first European Connections event in Germany in June. All part of the rich tapestry of life…

– Tony


ALL-IN-1 trivia quiz from 1985


ALL-IN-1 was a forms-driven Office Automation (OA) product released by Digital Equipment Corporation (DEC). Its origins were in software called the Charlotte Package of Office Systems Solutions (CP/OSS) written in 1981-82 by DEC Software Services in Charlotte, NC. The software was subsequently released by DEC as ALL-IN-1 1.1 in late 1982, which is when I first made its acquaintance after I was appointed the first VAX/VMS System Manager for the Bank of Ireland in Dublin.

The trivia quiz presented here comes from 1985 and is based on ALL-IN-1 V2.0, a fundamental rewrite of the product that was a joint effort between the corporate engineering teams in Reading, England, Spitbrook, NH, and Charlotte. I suspect that the details will only enthuse a few who worked with ALL-IN-1 in those days. However, it’s interesting to see that some of the concepts used by ALL-IN-1 still survive in email servers today, some 26 years later. For example, the heart of ALL-IN-1 was a database that shares some broad similarities with today’s Exchange database (both used the concept of single instance storage until Exchange discarded this in Exchange 2010), and ALL-IN-1 offered script processing to run commands when you didn’t want to be bothered with the menu structure – think of running some PowerShell cmdlets instead of clicking through EMC. And of course, you could combine the ALL-IN-1 functions into scripts just like you can with PowerShell today, including the ability to invoke other products (in ALL-IN-1’s case, products such as DATATRIEVE, a powerful query and reporting language that was very advanced for its time; indeed, any VMS program was accessible through DCL!).

ALL-IN-1 competed against products such as IBM PROFS and IBM DISOSS at its inception. It later competed against early PC LAN-based email products such as Lotus cc:Mail and Microsoft Mail, where it was handicapped by its lack of PC clients. DEC attempted to rectify this deficiency in 1989 by purchasing a DOS-based PC product called OATmail, renamed PC ALL-IN-1. This turned out to be a disaster because its script-based fetching and sending of email was rudimentary and coarse when compared to a Windows-based product. DEC compounded the error in 1992 by launching its own Windows client, TeamLinks, to support the UNIX-based DEC MailWorks email server. TeamLinks eventually supported ALL-IN-1, but it should have done so from the start. Such is the wisdom of hindsight.

ALL-IN-1 is still in use by some customers today. That's an impressive record for software first conceived 30 years ago. I documented much of its development in two books, “ALL-IN-1: A Technical Odyssey” (1991) and “ALL-IN-1: Managing and Programming V3.0” (1992), and in two more on TeamLinks (1993 and 1994). All are now well out of print, but I'm delighted to have copies on my bookshelf.

ALL-IN-1 also provided the basis of the one and only U.S. patent that I have ever been granted. In this case, I was the co-inventor of a method for automatically sorting and prioritizing email. Think of inbox rules today! It's interesting to see that the patent has been cited by 47 other patents, but I don't think that DEC ever made very much of it. We treated the sorter as an interesting exercise in artificial intelligence at the time; the volume of email sent and received in those days was nothing close to what we handle today, so the need for automatic assistance in sorting new messages as they arrived in an inbox was far less obvious.
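
For a sense of how that old idea surfaces in Exchange 2010 today, here's a minimal sketch using the New-InboxRule cmdlet from the Exchange Management Shell; the mailbox name, rule name, sender address, and target folder are all made up for illustration:

# Create a server-side rule that files mail from a particular sender;
# every name below is hypothetical.
New-InboxRule -Mailbox "TRedmond" -Name "File project mail" -From "projectlead@contoso.com" -MoveToFolder "TRedmond:\Projects"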

The quiz was originally given to engineers and others who wanted to test their knowledge about ALL-IN-1. I guess it must be equivalent to some of the menu-driven certification tests that exist today. Ah, those were the days…

– Tony

**************************************

ALL-IN-1 V2 Developers’ Pre-Test

**************************************

> Prepared by the Charlotte FAC

> 12-Apr-1985

INTRODUCTION

This test is a pre-requisite to taking the ALL-IN-1 V2 Developers’ Training course. It can be used in general prior to embarking on learning about ALL-IN-1 internals, to assure yourself that you have gotten the concepts of ALL-IN-1, so that the internals which support them will make sense.

The test is designed to test knowledge of

o  using ALL-IN-1 (user interface),

o  writing applications in ALL-IN-1.

In addition, the test exercises skills you as a developer will need to ferret out a problem, trace behavior, etc. in ALL-IN-1.

> Most questions are multiple choice/one correct answer. Some sections of the test have different question types, and are flagged as such.

> The test presumes => ALL-IN-1 V2.0 <= as distributed (i.e., not customized or modified at site) unless otherwise stated.

> You should probably allot yourself a full working day to work on the test, unless you are an ALL-IN-1 guru. The latter sort should require half an hour. [Computers, cyborgs, and other silicon life-forms will need 50-60 nanoseconds, excluding the trick questions.] In any case, your time will be well spent, from the knowledge that you gain by working on this test.

> Recommended preparation for the test:

o  Reading/familiarity with the ALL-IN-1 DocSet, esp. Users Guide, and Applications Programmers Ref. (APR).

o  Working the exercises in Appendix A of the APR.

> Materials for taking the test:

o  the ALL-IN-1 Documentation Set,

o  a running ALL-IN-1 V2 system.

————————————————————————

** ALL-IN-1 Usage **    [USR]

1.  At an ALL-IN-1 menu choice field, how do you exit temporarily to the subprocess (VMS DCL), while still remaining in the ALL-IN-1 session?

a. Press {GOLD/7}

b. Enter “EXIT{CR}

c. Enter “${CR}

d. Press {CTRL/Y}.

2.  To display the FMS Named Data of the currently displayed form,

a. Press {GOLD/N}

b. Press {GOLD/V}

c. Enter “<OA$VIEW_NAMED_DATA

d. Enter “FD E ND{CR}

3.  To display the full text of all the currently stacked ALL-IN-1 error- and status-messages (with their message ID’s),

a. Enter “<OA$VIEW_MESSAGES

b. Press {DOWN}

c. Press {GOLD/W}

d. Enter “<GET OA$MSG_VIEW_ALL = 1

4.  To invoke the INTERRUPT menu while in the WPS-PLUS or DPE editor,

a. Enter {GOLD I}

b. Enter {GOLD K} then INT{CR}

c. Enter {GOLD *} then {GOLD I}

d. Enter {CTRL/Z} then “XLATE OA FORM INTERRUPT^Z

5.  To invoke a UDP which you have written,

a. Enter “FC UD{CR}

b. Enter “GOLD U{CR}” and then “name_of_your_UDP{CR}

c. Enter “<DO [your_user_dir]your_udp.SCP{CR}

d. Enter “<SCRIPT your_udp{CR}

6.  If a menu has additional options on a further menu, to access those options (and display the menu),

a. Enter “M{CR}

b. Enter “MORE{CR}

c. Press {GOLD/A}

d. Press {GOLD DOWN}

e. (a) or (c)

7.  To switch into HARDCOPY mode,

a. Press {GOLD/H}

b. Enter “<MODE C{CR}

c. Enter “<GET OA$HARDCOPY = 1{CR}

d. Enter “$ SET TERM/DEVICE=LA100

8.  Which is NOT a part of the User Profile record?

a. Your working day’s start and end time.

b. Privilege to enter ALL-IN-1 functions interactively.

c. Whether you have a printer port on your terminal.

d. Your system-wide nickname.

9.  Which of the following will NOT complete or exit an ARGUMENT or ENTRY form?

a. Pressing {CR}

b. Pressing {EXIT SCREEN}

c. Pressing {GOLD F}

d. Pressing {GOLD Z}

10.  Where in a form’s Named Data do form qualifiers appear?

a. On the .TYPE line.

b. After the menu options, if any.

c. With the CHOICE field, or first enterable field.

d. On the .FILE line.

11.  How do you provide for Recognition for a field on a form?

a. Include a /RECOG= field qualifier for that field in the Named Data.

b. Include a key-definition for “.GOLD L” in the form.

c. All enterable fields have built-in Recognition automatically.

d. Invoke the function <OA$VAL_RECOG in the field-definition.

12. How do you create a data-file for an ENTRY form?

a. One is automatically created, when you first invoke the form.

b. Use the “<CREATE form_name” function.

c. Use the “$ CREATE/FDL=form_fdl” DCL command.

d. Contact your system manager.

13.  How do you invoke a MENU form “ZEBRA” as an ARG form?

a. You cannot do this — the form-type is defined statically in the .TYPE line of Named Data.

b. With the <FORM function — “<FORM ZEBRA ARG

c. With the arg-form-type function —  “<OA$FORM_ARG ZEBRA

d. Either b. or c.

14.  Where in general can you NOT invoke an ALL-IN-1 function?

a. From the subprocess, via DCL “$ WRITE OAMAILBOX” command.

b. From within a form’s Named Data.

c. From within a script.

d. None of the above

15.  To cause the GOLD/keypad-6 key to perform certain ALL-IN-1 function(s) when pressed,

a. Add a keypad-definition line to the ALLIN1 .CLD file.

b. In the form’s Named Data, define

NAME=".GOLD KP 6"    DATA="all_in_1_function(s)"

c. Interactively or in a script, invoke the function

<OA$KEYDEF "{GOLD KP 6}" "all_in_1_function(s)"

d. Edit the form DEFAULTKE.FRM from the form-library OA$LIB:MEMRES.

16.  Status (0 or 1) is by convention returned by ALL-IN-1 functions in the symbol

a. $STATUS

b. R0

c. OA$STS

d. OA$STATUS

17.  If WOMBAT.SCP is a DO-script residing in the TXL, you could list it on the terminal by entering

a. <OA$TXL_LIST WOMBAT,DO

b. <OA$TXL_LIST WOMBAT.SCP

c. <LIST TXL WOMBAT

d. <DO/LIST WOMBAT

18.  Where will ALL-IN-1 look for the DO-script DOCCREATE, as invoked by the WP menu Create option?

a. OA$LIB:DOCCREATE.SCP

b. in the TXL (section file OA$LIB:A1TXL.TXL)

c. OAUSER:DOCCREATE.SCP

d. any of the above: location of the script is site and user dependent

19.  To print a File Cabinet index, the following MERGE boilerplate is used:

a. PI.BLP

b. FCDIR.SCP

c. DOCDIR.TXT

d. FCDIR.BLP

20.  What form-type is the form invoked by the WP SEL option?

a. ARGUMENT

b. SELECT

c. ENTRY

d. MENU

21.  Which DSAB is used by MAIL (Electronic Messaging, ALL-IN-1 style) to construct an index via the Index option? (HINT: The FOR function is employed to do this.)

a. CAB$

b. DOCUMENT

c. OA$TXT_ASCII

d. WPINDX

————————————————————————–

** ALL-IN-1 V2.0 Applications Development **    [APP]

1.  All of the following are A1 V2 form types EXCEPT:

a.  MENU

b.  ARG

c.  SELECT

d.  DATA

e.  CALC

2.  Which of the following form qualifiers are found on all standard ALL-IN-1 full-screen menus?

a.  /CLEAR, /MENU, /HARD

b.  /TITLE, /CHOICE, /DATE

c.  /USER, /HARD, /QUERY

d.  /BEGIN, /MAIL, /RESET

3.  To give the user access to menu options not defined on the current menu screen, you would use:

a.  .KEEP

b.  /NEXT

c. .MORE or /MORE

d.  /CAPTIVE

e.  /OPTIONAL

4.  When using an entry form, “ADD”, “CHANGE”, “DELETE” and “INQUIRE” are valid parameters for which form qualifier?

a.  /TYPE

b.  /SELECT

c.  /MODE

d.  /RECORD

e.  /WALL_STREET

5.  To force a user to enter only a legal value into a field, you would use which field qualifier?

a.  /VALID

b.  /CHOICE

c.  /LIST

d.  /NO_ILLEGAL

6.  Which ALL-IN-1 function performs list processing, combining a form data-file with a list of entries?

a.  MAKE_FILE

b.  MERGE

c.  MIX

d.  MODIFY

e.  MUNGE

7.  Several DCL functions are also available as ALL-IN-1 functions.  Which of the following list is a DCL function but NOT an ALL-IN-1 function?

a.  LOGICAL

b.  APPEND

c.  STOP

d.  SPAWN

8.  In Word Processing, you have a current WPSPLUS document.  If you use the ‘S’ option, what happens?

a.  The document is selected

b.  The document is sent to the scratch pad

c.  The document is sent as an attachment to a mail message

d.  The document is sent as the text of a mail message

e.  The document is sent to Siberia

9.  From the electronic mail menu, you choose the option “SEL” to select a new mail message.  A form appears at the bottom of the screen asking you to enter the folder, title and number of the document you wish to select.  This form is a(n)

a.  Menu form

b.  Entry form

c.  Argument form

d.  Select form

e.  Chloro form

10.  The File Cabinet CREATE function creates a new document by making:

a.  an entry in DOCUMENT.DAT

b.  a new VMS text file

c.  an entry in DOCDB.DAT

d.  an entry in SDAF.DAT

e.  an entry in the network nodes table

11.  To add to your file cabinet a new document that has as its text file the text file of the current document, you would use the function:

a.  COPY

b.  CAB REFILE

c.  CAB CROSSFILE

d.  CAB SHARE_DOCUMENT

e.  CAB XEROX_DOCUMENT

12.  Five kinds of activities are maintained in the ALL-IN-1 Time Management sub-system.  Which one of the following is not one of them?

a.  REMINDERS

b.  ACTION ITEMS

c.  TO-DO LIST

d.  TASKS

e.  MEETINGS

f.  APPOINTMENTS

13.  To reschedule a meeting from Time Management:

a.  Use the “R” option which calls “CAL MOVE MEETING”

b.  Use the “C” option which calls “CAL CHANGE MEETING”

c.  Use the “E” option which calls “CAL RESCHEDULE MEETING”

d.  Use the “D” option which calls “CAL DELETE MEETING”

14.  You can change the addressees of a mail message by editing:

a.  The mail header

b.  The record in DOCDB.DAT

c.  The file “TO.TMP”

d.  The text of the message

e.  a or d above

15.  The two kinds of scripts in ALL-IN-1 are:

a.  Application and Flow Control

b.  Internal and External

c.  Iterative and Recursive

d.  Script and Do

e.  Dialog and Narrative

f.  Italic and Spencerian

16.  To test the user’s input from a CBI script, you would use:

a.  .JUDGE

b.  .INPUT

c.  .PROMPT

d.  .GET

e.  .TEST

17.  Match the special symbol name with its meaning:

__ a.  ALL-IN-1 Username                        1.  OA$CURDOC

__ b.  VMS filename of current document         2.  OA$DATE

__ c.  11-APR-1985                              3.  OA$SEL_KEY

__ d.  Number of records selected               4.  OA$CURDOC_FILENAME

__ e.  Folder/Number of current document        5.  OA$DATE_FULL

__ f.  Send a line to Datatrieve                6.  OA$SEL_COUNT

7.  OA$USER

8.  OA$DTR

9.  OA$FOLDER

18.  After editing the named data of a form, before testing out that form, you must do a:

a.  OA$FBT_COMPILE

b.  NEWLIB

c.  CLOSE_PRIOR

d.  WAIT

e.  Feasibility Study

19.  The Named Data of a form in ALL-IN-1 is NOT used to specify

a. menu options

b. form processing

c. keypad definitions

d. user access lists

e. associated data files

f. field processing

————————————————————————–

** ALL-IN-1 Symbols **  [SYM]

1. Which of the following is a special symbol?

A. #AI_TODO

B. $FILE_1

C. OA$GT_MENU_CHOICE

D. OA$CURMES

2. In which Bliss module are special symbols predefined for ALL-IN-1?

A. OA

B. OAINI

C. OAGBL

D. OASYM

3. Which of the following characters is used to identify Permanent Symbols?

A. ‘@’

B. ‘$’

C. ‘#’

D. ‘%’

4. How is a symbol processed if it begins with the character “@”?

A. The contents of the symbol is taken as a symbol-expression and itself evaluated.

B. The symbol points to a form field, whose contents are then fetched.

C. The contents of the symbol is a hidden DCL command which initiates an action in the subprocess.

D. The symbol is taken as a mail distribution list.

5. References to fields or symbols can include substring specifications.

What value is returned by “OA$DATE_FULL:3:3” if the full value of the symbol is: “12-MAR-1985”?

A. MAR

B. AR-

C. -MA

D. None of these

————————————————————————–

** FDL’s **     [FDL]

1.      Within the context of ALL-IN-1, what is the purpose of an FDL file?

a. As a reference to all the fields in a data file

b. To create new data files for the base ALL-IN-1 system and for users

c. for ALL-IN-1 to read when a data file is accessed

d. a and b

2.      Where are the base FDL files for ALL-IN-1 stored?

a. OA$FDL

b. OA$LIB

c. OA$DATA

d. OA$BUILD

3.      What FDL file could be used to create the user’s “username”.CAL file?

a. OA$LIB:USERNAME.CAL

b. OA$DATA:USERNAME.FDL

c. OA$LIB:CALENDAR.FDL

d. none of the above

4.      Which of the following information is contained in an FDL file?

a. the default size of the data file (number of blocks)

b. the default location of the data file

c. the field names of the data file

d. a and b

————————————————————————–

** File Cabinet **      [CAB]

1.      What is the file access method behind the File Cabinet Facility?

a. DBMS

b. RMS

c. mostly RMS, but some Datatrieve files

d. Rdb

2.      What do the letters in DAF stand for?

a. Direct Access File

b. Disc Access File

c. Document Attribute File

d. none of the above

3.      What function should be called to put a new document onto the data base?

a. CREATE

b. CABINET BEGIN

c. CABINET CREATE

d. NEWDOC

4.      What is the name of the Data Set Access Block controlling most of the File Cabinet’s actions?

a. CAB$

b. SUPER

c. DOCDB

d. CAB$ATTRIBUTES

5.      What function will delete from the File Cabinet the contents of a folder?

a. CABINET DELETE

b. CABINET DELETE_DOCUMENT

c. CABINET DELETE_FOLDER

d. CABINET JANITOR

6.      On what file(s) is the field AUTHOR kept?

a. OAUSER:DOCDB.DAT

b. OAUSER:DAF.DAT

c. OA$DATA:DAF.DAT

d. b or c

7.      On what file(s) is the field FORWARDABLE kept?

a. OAUSER:DOCDB.DAT

b. OA$DATA:PROFILE.DAT

c. OA$DATA:DAF.DAT

d. a and c

8.      Is there a way (within ALL-IN-1) to recover a document deleted via CABINET DELETE_DOCUMENT?

a. yes – it is the CABINET RESET_DELETE function

b. no

c. yes – it is the CABINET REFILE_DOCUMENT function

d. yes – it is the CABINET REORG_DOCDB function

9.      Documents being shared between users are stored where?

a. they stay in the originator’s disc area (i.e. [.DOCn] or [.MSG])

b. they are in the area pointed to by OA$SHAREn

c. they are stored in OA$DATA:PENDING.DAT

d. you can’t share documents in V2.0 of ALL-IN-1

10.     Which DSAB should be used to reference the fields on the DAF for a particular document?

a. CAB$

b. CAB$ATTACH_ATTRIBUTES

c. CAB$DAF

d. CAB$ATTRIBUTES

————————————————————————–

** Subprocess **        [SUB]

1.  Which of the following mailboxes are used to communicate between ALL-IN-1 and the ALL-IN-1 subprocess?

a. OAMAILBOX    b. A1MAILBOX    c. SUBMAILBOX   d. DCLMAILBOX

e. SYS$INPUT    f. SYS$OUTPUT   g. MAILBOXA1    h. USMAILBOX

2.  What is the purpose of OAINI.COM?

a. create symbols in the subprocess

b. create logicals in the subprocess

c. open TXL’s in the subprocess

d. a and b above

e. all of the above

3.  What is the search order that ALL-IN-1 follows when looking for OAINI.COM?

a. A1TXL, OADATA, OALIB, OAUSER

b. OAUSER, OALIB, OADATA

c. OALIB, OADATA, OAUSER

d. OADATA, OALIB, OAUSER

e. OAUSER, OALIB, OADATA, A1TXL

4.  Which mailbox is used to communicate from the subprocess to the ALL-IN-1 main process?

a. OAMAILBOX    b. A1MAILBOX    c. SUBMAILBOX   d. DCLMAILBOX

e. SYS$INPUT    f. SYS$OUTPUT   g. MAILBOXA1    h. USMAILBOX

5.  OAMAILBOX and DCLMAILBOX pass which of the following between the subprocess and the ALL-IN-1 main process

a. records

b. status information

c. return codes

d. messages

e. all of the above

6.  What is the correct syntax for a write to OAMAILBOX?

a. WRITE "xxxxxxxxxxx" TO OAMAILBOX

b. WRITE OAMAILBOX "OA xxxxxxxx"

c. WRITE "xxxxxxxxxxx" OAMAILBOX

d. WRITE "OAMAILBOX xxxxx" TO A1

7.  What is the correct syntax for a read from DCLMAILBOX?

a. READ OA FROM DCLMAILBOX

b. READ FROM DCLMAILBOX

c. READ DCLMAILBOX FROM A1

d. @DCLMAILBOX:

8.  After executing the following command procedure from the subprocess,

$ WRITE OAMAILBOX "OA GET $TERM=OA$TERMINAL\LOGICAL OA$LIB"

$ @DCLMAILBOX:

$ WRITE SYS$OUTPUT "You are working with a ''TERM', and the current"

$ WRITE SYS$OUTPUT "ALL-IN-1 library directory is ''RESULT'"

which of the following are true?

a. DCL symbol RESULT contains the translation of OA$LIB

b. DCL symbol TERM contains the string value of OA$TERMINAL

c. DCL symbol TERMINATOR contains the string value of $TERM

d. The ALL-IN-1 symbol $TERM contains the string value of OA$TERMINAL

e. The ALL-IN-1 symbol $RESULT contains the translation of OA$LIB

9.  The COMMAND function

a. is used from the subprocess to invoke a command procedure

b. is used from the ALL-IN-1 main process to invoke a command procedure in the subprocess

c. can be passed up to 16 parameters in the following format

COMMAND com-file [/OUTPUT=file_spec] [P0 … P15]

d. remains in the subprocess

10.  The DCL function can be invoked from which of the following

a. from the choice field of a menu by entering a “$” sign

b. from the choice field of a menu by entering “DCL”

c. from any field on a form by entering GOLD “$”

d. from the editor by entering GOLD “$”

e. all of the above

11.  The OA$DCL special GET destination symbol:

<GET OA$DCL = "command"

can be used for which of the following?

a. to store the text of “command” in symbol OA$DCL

b. to check to see if the subprocess is active by returning a “Y” or “N” value on line 24

c. to execute a DCL command in the subprocess and then remaining in the subprocess

d. to execute a DCL command in the subprocess and then returning to the ALL-IN-1 main process

————————————————————————–

** Message Facility **  [MSG]

1.  Which of the following files contains the text of ALL-IN-1 messages?

a. OA$BUILD:OAMESS.TXT

b. OA$MESS:MESSAGE.TXT

c. OA$BUILD:OAMESS.MSG

d. OA$LIB:MESSAGES.TXT

2.   Messages generated by ALL-IN-1 and the VMS facilities that it calls are stored

a. in a scrolled region of forms OA$HELP_TOP and OA$HELP_BOTTOM

b. in a scrolled region of form OA$GOLD_W

c. in an ALL-IN-1 message buffer

d. in the named data of form DEFAULT

3.  Which is not a special symbol containing a part of a message?

a. OA$MSG_FAC   Facility prefix of primary message

b. OA$MSG_ID    Message ID of primary message

c. OA$MSG_AUX   Message text for the auxiliary message

d. OA$MSG_SEV   Severity code of primary message

e. OA$MSG_TEXT  Message text for the primary message

4.  What must be done to display the contents of the ALL-IN-1 message buffer?

a. Press GOLD M

b. Press GOLD E

c. Press GOLD O

d. Press GOLD W

————————————————————————–

** Initialization **    [INI]

1.  Which of the following cannot be specified on the ALL-IN-1 command line?

e.g.,   ALLIN1/USERNAME=MRMAN4/NOINIT/TERM=VT102

a. /TERMINAL=xxx

b. /EDITOR=xxx

c. /[NO]DATATRIEVE

d. /[NO]DEFER_SUBPROCESS

e. /[NO]FMS

f. /SCRIPT=xxx

2.  When the user specifies /NOINITIALIZE on command line, what happens?

a. the initialization process is skipped

b. the user comes up in hardcopy mode

c. the user is prompted, “ENTER CMD: ”

d. all of the above

3.  During initialization, the user’s profile record is read.  What file must be opened?

a. OA$LIB:PROFILE.DAT

b. OA$DATA:PROFILE.DAT

c. SYS$SYSTEM:SYSUAF

d. OA$LIB:A1UAF

————————————————————————–

** DATATRIEVE **        [DTR]

1.  In what manner can DATATRIEVE not be accessed?

a. from a menu choice field with <DTR  (and interactive privilege)

b. from the editor by entering GOLD M and invoking the DTR option

c. from line 24 after entering GOLD keypad 7 and “DTR” (and command priv)

d. from named data of a form with GET OA$DTR="command"

e. from a menu choice field by entering GOLD D

————————————————————————

** TEXT DSABs and Document Processing **       [TXT]

*********************************************************************

*  NOTE: The following questions are optional, and do not count as official pre-test questions — they are designed to prime you to deal with Document Processing issues (e.g., as presented in the INTERNALS course). As a user or applications integrator for Document Processing facilities in ALL-IN-1, you should already be familiar with the areas below, but not all the material is readily found in the APR or Users Guide.  *

*********************************************************************

1. What is a TEXT DSAB?

2. How many text editors are available in the base ALL-IN-1 product?

3. What is a special data type and how many of them are supported in ALL-IN-1 V2.0?

4. How many text DSAB types are defined in V2.0?

5. What is a DEAD key sequence?

6. What does the term “standard formatting” mean?

7. What happens when you display a SIXEL picture on a VT125 terminal?

8. What ALL-IN-1 function (or functions) is used to implement the R (read) option on the WP (word processing) menu?

9. What two conditions must be met before a document can be printed to the terminal printer port?

10. What does the response {GOLD W} do in the WPS-PLUS editor?

11. Can you cut text from one WPS-PLUS document into another?

12. What video attributes are supported by the TEXT DSAB facility?

13. What ALL-IN-1 function (or functions) is used to implement the P (print) operation to an LQP02 printer on the terminal printer port?

14. Why can’t a template document be used in the CD (convert document) function in the File Cabinet sub-application?

15. What does the name TDE stand for?

[End of Pre-Test]
