Travel challenges in professional refereeing


Being part of the refereeing team that works professional matches sounds interesting to anyone who follows professional sport, and it is a genuine privilege to be part of massive sporting events such as a Heineken Cup final. However, like most things in life, there can be a downside. Travelling to games in the European winter (and four of the six Heineken Cup pool weekends are played in the middle of winter) can be problematic, and that’s just what happened this past weekend.

My assignments this week were in Montpellier and Perpignan for games on Thursday night and Saturday afternoon. In line with the ERC (European Rugby Cup, the organization that runs both the Heineken and Amlin Cup competitions) directive that refereeing teams (referee, two assistant referees, and a TMO) have to be “in country” the day before a game, the team for the Montpellier game met at Dublin Airport on Wednesday afternoon. The directive is a good one because it ensures that enough time is available to make alternative arrangements should travel problems occur, whether by switching referees around to cover games or by getting a team to its game via a different route.

We checked in for our Air France flight to CDG (Paris). The first sign of trouble was when the incoming flight was late, so our outgoing flight was delayed. This rapidly turned into a cancellation due to heavy snow falling in Paris. We now had a problem, because travel to the rugby centres in the south of France usually involves a transit through CDG unless a direct flight is available to somewhere like Carcassonne, Nice, or Toulouse. Unfortunately the airlines cut back their service to airports in the south of France in the winter, so the trek through CDG is often mandatory.

CDG is a fine airport but… flights from Ireland arrive at a stand that requires a ten-minute bus transit to reach the terminal, and connections are often tight. If anything goes wrong, such as a delay out of Dublin, the bus not turning up to meet the flight, or a long queue for passport control, you can really struggle to make a connection. Some of my refereeing colleagues pride themselves on their ability to navigate CDG quickly to make impossible connections, but thirty years of business travel make me wary of these feats and I prefer to avoid CDG whenever possible.

The situation worsened when Air France cancelled all remaining flights from Dublin to CDG for Wednesday. Our solution was to take the 18:05 Lufthansa flight to Frankfurt to connect to the 22:15 to Nice and then drive from Nice Airport to Montpellier. This seemed viable because both flights went through terminal 1 in Frankfurt and we had a good hour to make the connection.

Alas, it was not to be. The incoming plane was late and Frankfurt was having its own weather challenges with snow. The result was that we didn’t board until 18:45 and then sat waiting for a take-off slot until 19:50. Our chances of making the connection were receding rapidly, even though Lufthansa assured us that all flights were delayed in Frankfurt.

Our Airbus A320 arrived over Frankfurt to find that the airport was closed. The plane began to circle, as the airport authorities had indicated that it would reopen in about an hour. More snow put a halt to that plan and we diverted to Düsseldorf, landing there at 23:45 to find an almost empty airport. The Lufthansa staff who were on duty couldn’t help with rerouting, but they did an efficient job of dealing with 180 people who needed hotel rooms for the night.

We ended up in a Holiday Inn Express in Düsseldorf (some other passengers, including Stephen Fry, joined us in the hotel) and began to call the Lufthansa emergency line to see how we could make progress on Thursday morning. After 30 minutes of listening to Lufthansa’s best propaganda, we gave up and decided to take a 4:30am taxi back to the airport to be at the ticket desk when it opened. Two hours’ sleep is not the greatest preparation the night before a game, but that’s all that was available…

The ticket desk opened at 5:15am with quite a queue of harassed passengers waiting for help. We were fortunate to talk to a very efficient Lufthansa agent who quickly got us seats on our preferred flight, the 7:20am to Lyon. We then walked down to Hertz and hired a car to allow us to drive to Montpellier.

From Lyon it took about 3 hours (including a stop) to drive the 310km to Montpellier, so we reached our hotel by 13:00. The game was to kick off at 20:45, so we had sufficient time to grab some sleep and a meal and to make the normal preparations to referee the game, in which Montpellier beat Bourgoin 39-14. None of the spectators were aware that the refereeing team had had an “interesting” journey to get to the game, which is exactly the kind of performance you want to deliver in a professional match.

Tired but extremely competent – Olly Hodges, Dudley Philips, and David Wilkinson before the Amlin Cup Montpellier v. Bourgoin game

Friday saw my three colleagues for the Montpellier game return to Ireland while I planned to drive to Perpignan to collect three other colleagues for Saturday’s game between Perpignan and Leicester Tigers. Sod’s Law was in effect again as the team’s flight from Paris Orly to Perpignan was cancelled, probably because Air France was repositioning planes disrupted by the bad weather. In any case, they switched to a flight to Montpellier and I collected them there at 3pm before driving to Perpignan.

The game between Perpignan and Leicester Tigers was an epic contest, typical of many of the high-quality and passionate encounters that occur in the Heineken Cup. In this case, the first half finished with a sequence of nine scrums over ten minutes on the Leicester 5m line. A less experienced referee than Alan Lewis (he was refereeing his 69th Heineken match and is the record holder) would have reacted to the noise of the crowd and the pressure exerted by the players and awarded a penalty try after the second penalty. Alan gave a fine demonstration of a referee’s understanding of how the scrum can be a real physical and mental contest between 16 players. He understood the enormous pressure that Perpignan put on the Leicester pack and the steps that Leicester took to resist. At no time did Perpignan get the essential “nudge” on to move the scrum forward in such a way that they could demonstrate that they would have scored but for Leicester collapsing or otherwise illegally stopping their progress. Without this evidence there was no way to award a penalty try, despite the loud demands of commentators, players, coaches, and the crowd.

Some might also ask why the two hookers were yellow-carded midway through the series of scrums. The answer is simple: they were asked to stop their respective front rows from going down and were warned as to the consequences (this was after several penalties and resets). They didn’t comply, so the referee had to follow through with the sanction or all hell would have broken loose.

Others might query why so many scrums occurred after 40 minutes had elapsed and the first half “should have” ended. The answer is clearly found in law, where 5.7 (e) states: “If time expires and the ball is not dead, or an awarded scrum or line-out has not been completed, the referee allows play to continue until the next time that the ball becomes dead.” In this case, each reset scrum is merely a continuation of the current phase of play; the penalties awarded for scrum offences (or the scrum option taken by the non-offending team) extend that phase, and the ball is not dead until the scrum is successfully concluded.

On-field refereeing team for Perpignan v. Leicester Tigers: Brian MacNeice, Alan Lewis, Simon McDowell

As it happened, Perpignan eventually tired of scrummaging and moved the ball to the left to allow the winger to score after collecting his out-half’s precise kick. The first half ran for 47 minutes and 56 seconds of playing time and took 51 elapsed minutes before it ended 13-9 to Perpignan. The noise and obvious displeasure of the crowd were enormous as the refereeing team left the pitch at half-time; they wanted that penalty try and couldn’t understand that it would have been unfair to Leicester had Alan folded. Truly a lesson in the art of refereeing worth noting for any up-and-coming official.

The second half was much better as both teams played rugby instead of messing around at scrums. At the end Perpignan won 24-19 and Leicester were happy to escape with a losing bonus point. We were happy to escape to the airport for a fast transit to Paris (Orly), where we stayed overnight in the Novotel Gare de Lyon before flying back to Dublin on Sunday morning.

Four days, four flights, four different hotels, 500km of driving and two fine matches. The outcome was good even if the start was pretty awful. This weekend it’s off to Toulon for the Heineken Cup game between Toulon and London Irish. Let’s hope that the weather stays fair and it’s smooth sailing…

– Tony


Kindle edition of Microsoft Exchange 2010 Inside Out available now!


I’m delighted that my Microsoft Exchange 2010 (including SP1) Inside Out book published by Microsoft Press is now available in a Kindle edition. The book is also available at Amazon.co.uk and other country-specific sites. Other e-book formats (ePub, MOBI, PDF) for the book are available from the O’Reilly web site.

Of course, those of you who prefer the tactile experience of reading paper copies of books can get your copy of Microsoft Exchange 2010 Inside Out here. I still prefer paper books but I appreciate the utility of keeping electronic copies of reference material in easily-accessible formats on electronic book readers.

And before anyone asks me, I don’t understand Amazon’s pricing strategy. It seems weird that the paper book is priced at $37.79 while the Kindle version costs $46.45! Interestingly, a tweet from Forbes.com Tech News on 15 December revealed that ebooks now represent 84% of units and 76% of revenue for O’Reilly’s online sales from their web site, so I guess people are happy to buy ebooks, even if they come at a premium.

– Tony


Exchange 2010 SP1 Store Driver throttling


The Store Driver is a very important Exchange component. Running on all hub transport servers, its function is to deliver inbound messages to mailbox databases. Unlike in Exchange 2003 and previous versions, all messages go through a hub transport server, even if they are sent between two mailboxes in the same mailbox database.

Exchange 2010 SP1 includes new code to throttle, or manage, connections from the Store Driver to mailbox servers as it delivers messages, in such a way that Exchange can isolate and limit the effect of any faults that occur. Most of the time, you’ll be unaware that throttling applies to the Store Driver. However, if you receive non-delivery reports (NDRs) that cite problems from “STOREDRV” (the Store Driver), your Exchange hub transport servers might be hitting a limit. For example, an entry in Microsoft’s Exchange 2010 forum reports this error:

432 4.3.2 STOREDRV.Deliver; recipient thread limit exceeded

As a quick search for “STOREDRV 432” reveals, the Store Driver issues a 432 response when it encounters other problems too, so the important detail here is the text reporting that the “recipient thread limit exceeded”.

In most cases, the problem seems to be encountered with mailboxes that receive very heavy traffic, such as those used for journaling. As you probably know, Exchange allows you to capture journal reports to any valid SMTP address. This facility allows you to direct journal traffic to a third-party journaling application, but if you send journal reports to an Exchange mailbox (perhaps as an interim step before the reports are moved to an archiving application), it’s possible that the Store Driver will hit a throttle limit as it attempts to deliver messages concurrently at times of peak demand. By default, Exchange 2010 SP1 only attempts a single concurrent delivery to a mailbox to ensure that all of the mailboxes in a database receive a similar quality of service. This is what you’d normally want, but you can see how a journal mailbox might have many messages directed towards it concurrently.

The solution is to update the EdgeTransport.exe.config XML configuration file on the hub transport server to add two new keys that force Exchange to allow more than one concurrent delivery to a mailbox. Any text editor can be used to update the configuration file, but it is best to update a test server first and measure the effectiveness of the change before you introduce it into production.

It is also best to make the update to all hub transport servers in the site that supports the journal mailbox so that the Store Driver has the same behaviour on all servers.

The new keys and their values are:

<add key="RecipientThreadLimit" value="2" />

<add key="MaxMailboxDeliveryPerMdbConnections" value="3" />
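For context, these keys belong in the <appSettings> section of EdgeTransport.exe.config. A minimal sketch of roughly how the edited section might look (the other keys already present in your file stay as they are):

<appSettings>
   <!-- existing keys retained -->
   <add key="RecipientThreadLimit" value="2" />
   <add key="MaxMailboxDeliveryPerMdbConnections" value="3" />
</appSettings>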

You should restart the Exchange Transport service after making the change to force the Store Driver to respect the new setting.
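If you prefer the shell to the Services console, something along these lines does the job on each hub transport server that you update (MSExchangeTransport is the short name of the Microsoft Exchange Transport service):

Restart-Service MSExchangeTransport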

As documented on page 397 in chapter 7 of Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk, the throttling parameters respected by the Store Driver in Exchange 2010 SP1 are summarized below.

Hub transport server to mailbox server:
  • Concurrent delivery to a mailbox: only one message can be delivered concurrently to the same mailbox. This limit is in place to ensure that all the mailboxes in a database receive a similar speed and quality of service.
  • Concurrent delivery to a database: only two messages can be delivered concurrently to the same mailbox database. This limit prevents a problem with one database from affecting all the other mailbox databases on a server. The problem may be caused by an underlying hardware issue, such as a failing disk or storage controller, or by a software bug. Exchange may reduce this limit automatically if it senses that the performance of a mailbox database is lower than normal.
  • Concurrent delivery to a mailbox server: only twenty-four messages can be delivered concurrently to the same mailbox server. A hub transport server can deliver to any mailbox server in the site. This limit prevents a problem with one mailbox server (that might be experiencing heavy load) from affecting delivery to all other mailbox servers in the site.
  • Concurrent delivery per message across all mailbox servers: only twelve concurrent deliveries per individual message can be attempted across all mailbox servers in the site. This limit prevents a corrupt or expensive message (such as one with a number of large attachments or several hundred recipients in the message header) from absorbing all available connections and therefore stopping or slowing down delivery from a hub transport server to the mailbox servers within a site.

Mailbox server to hub transport server:
  • Concurrent submissions to all hub transport servers: the maximum number of connections is calculated as five times the number of available mailbox processor cores. This limit controls how many concurrent submissions can be made from a mailbox server to all hub transport servers in the site.
  • Concurrent submissions from a mailbox database: only four connections can be made concurrently from a mailbox database.
  • Concurrent submissions from a single mailbox server: a hub transport server will only process at most twelve concurrent submissions from any single mailbox server.
  • Concurrent submissions to a single hub transport server: a mailbox server will only make a maximum of fifteen concurrent submissions through any single hub transport server.

Site-wide limits:
  • Hub transport servers to mailbox servers: the upper limit for the number of connections from hub transport servers to mailbox servers across the site is 20 times the number of processor cores. For example, if there are two hub transport servers in the site, each equipped with four processor cores, the limit is 160 connections.
  • Mailbox servers to hub transport servers: the upper limit for the number of connections from mailbox servers to hub transport servers across the site is five times the number of processor cores. For example, if there are six mailbox servers in the site, each equipped with four processor cores, the limit is 120 connections.

Connection limits imposed by Exchange on the Store Driver


Tweaking the Mailbox Replication Service configuration file


The Mailbox Replication Service (MRS) is an essential component of any Exchange 2010 deployment as it controls the movement of mailboxes between databases. MRS runs on all Exchange 2010 Client Access Servers (CAS).

MRS gets involved in migrations from Exchange 2003 or 2007 to Exchange 2010 because moving mailboxes is the only practical way to get user data into Exchange 2010 mailboxes. You could export and then import mailbox data, and this may be done in situations where no connection is available between Exchange 2010 and the legacy servers. Even so, MRS will be involved, because Exchange 2010 SP1 introduces the New-MailboxImportRequest cmdlet to replace the Import-Mailbox cmdlet, and MRS manages the work done by New-MailboxImportRequest, an example of how the influence of this service has spread within Exchange.

MRS is a “black box” service in that there’s no management utility provided in Exchange 2010 to control how it works or to tweak its performance. You can create, modify, view, and clear mailbox move requests through EMC but that marks the end of management interaction with data used by MRS. In fact, all you’re doing here is working with the Active Directory attributes for mailboxes that are being moved or the XML data that describe the move requests that Exchange stores in system mailboxes within databases. You’re not really doing much with MRS at all.

It’s likely that this situation exists by design. The view of the Exchange developers is probably that system administrators don’t have sufficient knowledge or data to make informed changes to a service that runs in the background, and that providing a management interface would open the door to mistakes that compromise Exchange’s ability to move mailboxes.

I think this is a reasonable stance to take – for now. MRS is new to Exchange and it will take time for administrators to become used to how it works. However, over time, you’d hope that a management interface will be added to EMC to allow administrators to control how MRS works and remove the black box label.

Behind the scenes, MRS operation on a CAS is controlled by an XML-format configuration file called MSExchangeMailboxReplicationService.exe.config that you’ll find in the \bin directory (for more information, see page 708 in chapter 12 of Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk). You can edit the contents of this file with any text editor, but be aware that editors like the esteemed Notepad are totally unaware of XML formatting requirements, so it is all too easy to make a mistake and screw up a setting. For this reason, always take a copy of any Exchange 2010 configuration file before you edit it.
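A quick way to take that copy from the shell, assuming the standard ExchangeInstallPath environment variable created by Exchange setup (adjust the path if your installation differs):

$config = Join-Path $env:ExchangeInstallPath 'Bin\MSExchangeMailboxReplicationService.exe.config'
Copy-Item $config "$config.bak"    # keep a backup before editing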

MRS reads in its configuration file when the service starts. The two areas of the file that we are most interested in are the section that describes the default, minimum, and maximum values for each setting (see below) and the section (MRSConfiguration) that determines the current settings for MRS on this server (next figure).

MRS configuration file values

 

Although it is interesting to know what settings are active for MRS, I would not rush to make changes without knowing exactly why you want to make the change and what the desired outcome is. The Exchange developers made one significant change for Exchange 2010 SP1 when they reduced the MaxActiveMovesPerTargetMDB setting from 5 to 2. This setting controls the maximum number of active moves that an MRS instance can direct to a target database, and the reduction is designed to reduce the pressure that a mailbox database can come under as it processes incoming mailbox content. Remember that Exchange 2010 performs content indexing as mailbox contents are transferred to ensure that a “full client experience” (marketing-speak meaning that searches are possible) is available after the mailbox pointer is switched in Active Directory to the target mailbox. Considerable CPU and I/O resources are therefore consumed to process a new mailbox, and if MRS were allowed to direct more than a few moves to a single database, the risk exists that the disk holding the database would become very busy and less able to service other client requests.

Viewing the MRS configuration settings that are active for a server

The MaxActiveMovesPerTargetServer setting is set to 5 to ensure that MRS can’t swamp a target server with masses of incoming mailbox moves. You might be surprised that these settings are set reasonably low, but the logic is good because it’s more important for a server to maintain good client responsiveness than to become preoccupied with mailbox moves. Remember that these settings only apply to the MRS instance running on a single CAS. If you have more than one CAS in an Active Directory site, it’s possible that a target server or target database will be asked to handle more than five incoming mailbox moves (for the server) or two (for a database), because apart from knowing which MRS instance is handling a given move request, each MRS instance processes mailbox moves independently.
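For illustration only, here is a rough sketch of how these two settings appear among the attributes of the MRSConfiguration section discussed above (the real file carries many more attributes and the exact layout may differ between builds, so check your own file rather than pasting this in; the values shown are the SP1 defaults):

<MRSConfiguration
   MaxActiveMovesPerTargetMDB="2"
   MaxActiveMovesPerTargetServer="5" />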

It’s possible that you might want to increase these values, but only when a good reason exists. For example, let’s assume that you are in the middle of a migration project and the Exchange 2010 servers aren’t doing much work yet because mailboxes haven’t been moved across from Exchange 2003 or 2007. In this scenario it would be reasonable to increase the settings that control maximum moves per server and per database to accelerate mailbox moves. However, once you’ve reached a condition where the Exchange 2010 mailbox servers have picked up the majority of the user load, you should throttle back the settings to the recommended values. See this post about batching mailbox move requests and this one about how to clear out mailbox move requests once they are complete.

You’ll have to restart the Mailbox Replication service to make any change to a setting effective. Remember that MRS runs on every CAS, so if you want a setting to be effective for MRS throughout the organization, you’ll have to apply the change to each and every CAS. In addition, you will have to check that the change is required and then reapply it (or not) following software upgrades (maybe not for a roll-up update, definitely required for a service pack).
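From the shell, that restart looks something like this (MSExchangeMailboxReplication is the short name of the Mailbox Replication service); repeat it on each CAS that you update:

Restart-Service MSExchangeMailboxReplication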

Follow Tony @12Knocksinna


Where did all this snow come from?


Ireland isn’t used to snow but we’re getting it in bucket loads at the moment (at least in Dublin). Not only is the quantity a surprise, it’s also the fact that snow is falling in late November and early December. Must be evidence of all of that global warming that the scientists keep on talking about…

In any case, we encountered even more snow when Deirdre and I travelled to Scotland. The trip was to visit the famous Laahs family in Linlithgow and to be the TMO for the Scotland v. Samoa rugby match in Aberdeen. Scotland started off cold and got steadily colder and whiter as we approached Aberdeen. Fortunately the ground staff did a great job of clearing the pitch and the game went on as planned. Scotland eventually won 19-16 with a last-minute penalty, much to the disappointment of the Samoans.

We stayed at the Thistle Caledonian hotel in Aberdeen. The hotel is in a marvelous old building but the room we had (204) was pretty small and the bathroom rather cold. Its location (Union Terrace) is in the middle of Aberdeen and the hotel is worth considering if you have to visit.

Sunday saw us making our way back southwards. The roads were very passable until we were past Dundee, at which point the A90 became like a skating rink. It took us eight and a half hours to cover 135km from Aberdeen before the police directed us into Perth because the M90 was closed due to drifting snow. Fortunately we had somewhere to stay in Perth as we were able to go to the house of a friend’s mother, who did a great job of raising our spirits with food and a good bottle of red wine.

Things didn’t look good initially on Monday as the motorway remained closed. There’s only one road south from Perth, and until it opened we remained stuck. A small break in the weather and a slight thaw allowed the road to reopen around noon, and even though the snow had started to fall again we decided to head south at 2pm.

Deirdre trudges through the snow in Perth

The M90 from Perth to the Forth road bridge proved to be a one-lane winter wonderland. It was always passable and conditions didn’t seem as bad as they were reported on the radio. It was amusing to see quite how many snowmen had been built in the closed outer lane. After two hours we reached Linlithgow to rejoin the Laahs. Meanwhile the snow continued to fall…

Snowy fields near Linlithgow

Tuesday dawned bright. The snow accumulation made the landscape extremely pretty and Christmas-like, but it also made it difficult to move the car. A plough came by to rescue the situation and open the road home. We escaped at 11am and reached Stranraer after a good run via Glasgow to catch the 5:30pm ferry to Belfast, arriving there at 8:35pm.

The road to Dublin was cold but no problem until we got past Drogheda. At least, it was no problem to us – we later discovered that the police had closed the northbound carriageway because they had intercepted four men with an explosive device. Thankfully this didn’t stop our progress south. Some snow and ice between Drogheda and Dublin Airport made the journey “interesting” for a while but we passed on to the final hurdle presented by icy conditions along the M50 Dublin orbital motorway. We reached home at 11:20pm after some 1,300 km and lots of horrible weather.

Today we have more blizzards in Dublin. There’s no desire to go out and we shall just wait for the weather to pass before we take to the road again.

– Tony


Batching Exchange 2010 mailbox moves


My post from November 22 about clearing mailbox move requests prompted some questions about auto suspending moves. This is new functionality introduced in Exchange 2010 as all mailbox moves performed in earlier versions occur synchronously. In other words, an administrator decides to move a selected mailbox and the chosen management tool goes ahead and runs the code to copy mailbox contents to the new location. The tool is either a management console (in Exchange 2003 or 2007) or the management shell (in Exchange 2007).

Exchange 2010 manages mailbox moves through the Mailbox Replication Service (MRS), which runs on every Client Access Server (CAS). All of the MRS instances running within a site monitor the system mailboxes in the databases that belong to their Active Directory site to “discover” new move requests. Once a move request is discovered, an MRS instance takes responsibility for its processing. A carefully orchestrated sequence then occurs to connect to the source and target database, enumerate the contents of the source mailbox, create the new mailbox, and move the contents from the source mailbox to the target database. These phases represent roughly 95% of the total work done to move a mailbox; the remaining processing is to perform a last incremental synchronization to identify and move any content that has been created in the source mailbox since MRS began to copy items followed by updating Active Directory to switch the user object to point to the new mailbox location. All of the work is done asynchronously and Exchange 2007 and 2010 users can continue to work while their mailbox contents are transferred.

When MRS reaches the 95% mark, it checks to see whether the administrator has marked the move request to be “auto suspended”. When you auto suspend a mailbox move, MRS will hold the request and not proceed with the final synchronization and Active Directory update until an administrator resumes the move request. The idea is that administrators can have Exchange perform the vast majority of mailbox move processing, perform a final check to ensure that everything has gone OK (for example, that a high number of bad items were not encountered), and then release the suspended requests to allow them to complete. You might, for instance, have a batch of mailbox moves occur over a weekend, check the move history for the requests early on Monday morning, and then resume the suspended requests before users arrive to begin work.

Mailbox move requests are created with EMC or EMS. In either case, the New-MoveRequest cmdlet is run. If you want to auto-suspend a mailbox move, you’ll have to create the request through EMS, as EMC doesn’t offer the necessary user interface to allow an administrator to select this option. Neither EMS nor EMC offers the choice to schedule a mailbox move to start at a particular time, a curious deficiency that I hope Microsoft will address in a future version of Exchange.

Here’s some example code that we could run to move my mailbox:

New-MoveRequest -SuspendWhenReadyToComplete -SuspendComment 'Move Queued by Admin on 24-Nov-2010 10:00' -BatchName 'Mailbox Batch 1' -BadItemLimit 5 -Identity 'Tony Redmond' -TargetDatabase 'DB1'

The important things here are the SuspendWhenReadyToComplete and BatchName parameters. The BatchName parameter is optional; it allows you to group mailbox move requests together in a convenient form for later processing. The really important item is the SuspendWhenReadyToComplete parameter, because it is what tells MRS to pause the mailbox move before completion.

After MRS begins to process the mailbox move, you can check its progress with the Get-MoveRequest cmdlet. You’ll see that the request progresses from “Queued” to “InProgress” (where MRS spends most of its time as it copies items from the source to the target database) and eventually goes into an “AutoSuspended” status. This marks the point where MRS pauses to await an administrator’s decision.
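If you want more detail than the basic status, the Get-MoveRequestStatistics cmdlet reports per-move progress. A quick sketch (the properties selected here are simply the ones I find most useful; choose whatever suits you):

Get-MoveRequest -MoveStatus 'InProgress' | Get-MoveRequestStatistics | Select-Object DisplayName, StatusDetail, PercentComplete, BadItemsEncountered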

Migration projects are where the batch name parameter comes into its own. Remember that any migration from a legacy version of Exchange to Exchange 2010 can only be performed with mailbox moves. The reason why is complicated, but for simplicity’s sake it can be reduced to the massive difference in internal database structures and schema between Exchange 2010 and earlier versions. In any case, if you want to migrate 5,000 mailboxes, you’ll be creating 5,000 mailbox move requests, and it’s likely that you’ll batch them in groups of users that you want to move together to Exchange 2010. The groups might be departments, users in a similar job, or people who work in the same location.

Let’s assume that you can identify the mailboxes in a particular group with a filter that you apply to the Get-Mailbox cmdlet. You can then pipe the discovered set of mailboxes to the New-MoveRequest cmdlet and stamp them with a common batch name. Then, after all the move requests have reached “AutoSuspended” status and you’ve had the chance to check them out, you can then release them all with one command. Here’s an example of how you might find mailboxes for a group (in this case, any mailbox whose “Office” attribute is set to “Dublin”) and queue them all for moving:

Get-Mailbox -Filter {Office -eq 'Dublin'} | New-MoveRequest -BatchName 'Dublin Users' -SuspendWhenReadyToComplete -TargetDatabase 'DB1' -BadItemLimit 5

To find all the move requests that have been auto-suspended, we could then run this command:

Get-MoveRequest -MoveStatus 'AutoSuspended' -BatchName 'Dublin Users'

And when we have checked out the move requests and are ready to proceed, we can pipe the output of the Get-MoveRequest cmdlet to the Resume-MoveRequest cmdlet.

Get-MoveRequest -MoveStatus 'AutoSuspended' -BatchName 'Dublin Users' | Resume-MoveRequest

MRS will then complete processing and the users will be able to access their brand new Exchange 2010 mailboxes.

– Tony

For more information about the Mailbox Replication Service and how mailbox moves work in Exchange 2010, see Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk.


Clearing out mailbox move requests


After you’ve run Exchange 2010 for a while, you’ve probably moved a few mailboxes around and have accumulated some completed mailbox move requests.

When it begins to move a mailbox, the Mailbox Replication Service (MRS) stamps the user object for the mailbox with six attributes that it uses to indicate that the mailbox is being moved and to track the progress of the move. To reduce the interaction with Active Directory, only one of these attributes (the status) is updated during the move. The statistics for the move, such as the number of items moved, bytes moved, and so on, are maintained in the XML-coded move request that MRS updates in the system mailbox of the target database. Data used to construct the move report is also stored in the same place.

Once a move is completed, MRS copies the move report from the system mailbox to a hidden folder in the user’s mailbox. You might assume that the attributes are also removed (or cleaned up) in Active Directory, but this doesn’t happen. I believe the logic here is that it’s up to the administrator to decide when the move request is no longer required, at which point they can explicitly remove it. The logic is valid, but given that most administrators don’t realize that move requests are silently piling up, the end result is an accumulation of move requests, just like calcium forming in a kettle in an area that enjoys hard water.

The first time an administrator is likely to realize that move requests are retained and not cleaned out is when they attempt to move a mailbox that was previously moved by MRS. At this point, they’ll be told that a move request already exists for the mailbox and that they have to clear it before they can proceed. The Exchange Management Console (EMC) attempts to bring the existence of a move request to the attention of administrators by displaying a different icon for the mailbox, but as you can see below, it’s all too easy to miss this subtlety.

Spot the difference in the icons

Of course, EMC provides a separate section to allow administrators to work with move requests, so you can switch over there to discover what mailbox move requests are known to Exchange. As shown below, it’s easy to select all the move requests and then click “Clear Move Request” in the action pane. EMC will then clear the AD attributes from the selected mailbox objects and you’ll be able to create new move requests for them.

Getting ready to clear mailbox move requests

Of course, you don’t have to do this with EMC, as Exchange 2010 provides a comprehensive set of cmdlets to work with mailbox move requests. The Get-MoveRequest cmdlet reveals the set of known move requests, and we can filter the command so that EMS only reports those with a completed status.

Get-MoveRequest -MoveStatus Completed

The other status values you can look for include the following:

  • CompletedWithWarning: Although the mailbox was moved successfully, something happened during the move that an administrator needs to check by reviewing the move report.
  • CompletionInProgress: Exchange is finalizing the move by updating Active Directory to reflect the new location for the mailbox.
  • Failed: Whoops… not a good sign. Time to check the move report to discover what happened to stop the mailbox from moving (see the example after this list for one way to retrieve the report). Hopefully it’s something simple, such as the mailbox quota for the target database being too low to accommodate the mailbox that you want to move. Another common problem is when some “bad items” are encountered in the source mailbox. These are items that Exchange considers to be corrupt or incomplete for some reason (often a bad MAPI property). You can force MRS to continue processing and ignore the bad items by setting a reasonable value (something higher than zero and lower than 10) for the BadItemLimit parameter when you create a move request. MRS is able to resume failed moves at the last checkpoint. Note that if you state a bad item limit higher than 50, you have to specify the AcceptLargeDataLoss parameter when you create the move request with the New-MoveRequest cmdlet. This is because MRS won’t write corrupt items into the new mailbox, so each bad item is dropped. You may just want to know what these dropped items contained! On the upside, they may be old calendar items that the user won’t care about. On the downside, they might be items relating to the latest corporate strategic plan…
  • AutoSuspended: MRS has moved the bulk of the mailbox to the target database and is now paused waiting for an administrator to release the move request and allow MRS to perform a final synchronization to bring the new mailbox completely up to date and then switch the mailbox in Active Directory. Effectively, the move request is suspended before it enters the “CompletionInProgress” phase. You can resume processing with the Resume-MoveRequest cmdlet. You can instruct MRS to auto-suspend a mailbox move by passing the SuspendWhenReadyToComplete parameter to the New-MoveRequest cmdlet.
  • Queued: The move request is queued and awaiting an MRS instance in its home site to “pick up” the request and take responsibility for its processing.
  • InProgress: MRS is busy moving content from the source database to the target database to build the new mailbox. MRS spends most of the time processing a mailbox move in this status.
  • Suspended: The move request has been suspended for some reason with the Suspend-MoveRequest cmdlet and is waiting for an administrator to release it. A move request can only be suspended before it reaches the final completion phase as once Exchange starts to update Active Directory, it’s too late to go back.

Once we have obtained a set of mailbox move requests, we can clear them with the Remove-MoveRequest cmdlet. To be most effective (and if you’re sure that you won’t remove any request that you want to keep), we can pipe the objects found with Get-MoveRequest to Remove-MoveRequest as follows:

Get-MoveRequest -MoveStatus Completed | Remove-MoveRequest

It’s a royal pain to have to perform this kind of housekeeping that Exchange should really be able to do automatically. I fully appreciate that some administrators may want to keep some mailbox move requests for an extended period, but it should be possible to have a parameter for the New-MoveRequest cmdlet that allows administrators to state an expiry time for a successful move request. Better still, maybe a new organization-wide configuration setting that applies to all successful move requests, with the ability for an administrator to override the organization-wide setting so that a move request can be retained for a particular mailbox. Something like this would be good.

To set a 30 day automatic expiry for successful move requests:

Set-OrganizationConfig -DefaultExpirySuccessfulMR 30.0:00:00

To define that a move request does not automatically expire:

New-MoveRequest -DoesNotExpire $True

Of course, the official health warning is that I have no idea whether these enhancements will ever appear in a future version of Exchange. Christmas is coming and you never know what the Exchange developers have up their collective sleeves. We wait in hope…

Tony

Follow Tony @12Knocksinna

For more information about the Mailbox Replication Service and how mailbox moves work in Exchange 2010, see Microsoft Exchange Server 2010 Inside Out, also available at Amazon.co.uk.


Busy with rugby in Limerick and Scotland


This time of year is a busy representative period for rugby in the Northern hemisphere, with touring teams from New Zealand, South Africa, Australia, Fiji, Argentina, and Samoa visiting to play against Ireland, England, France, Wales, and Scotland. On Tuesday, November 16, I was at Thomond Park in Limerick to TMO the game between an understrength Munster side and Australia. The game was played in dreadful conditions, with cold driving rain throughout and a wind that howled and swirled. Even the TV truck where I was positioned rocked in some of the violent gusts! All in all, just the weather that Munster would have hoped for when taking on any touring team, especially one more used to playing in warm, balmy conditions.

No decisions were referred to me by Bryce Lawrence, the referee from New Zealand, so I had to content myself with considering just how cold everyone on the pitch was getting as Australia failed to cope with the conditions and could only reach half-time at 6-6 after playing with the wind. Munster slowly turned the screw in the second half and gave a master class in how to keep a touring team miserable. Munster eventually, and deservedly, won 15-6.

We stayed in the Strand Hotel on the Ennis Road in Limerick. This is a reasonably new hotel constructed on the site of the old Jurys Hotel. The rooms and other facilities are excellent. Apart from a quick breakfast en route back to Dublin, I didn’t try their restaurant, but it looked OK too.

On Wednesday, I had the opportunity to watch the second episode of the TG4 (Irish-language TV station) documentary about the history of Irish rugby (available on the TG4 player, though possibly only in Ireland). The documentary is called “Gualainn le Gualainn” (literally, “shoulder to shoulder” in English – see this page for an explanation of the Irish word “gualainn”) and explores the development of rugby in Ireland from its earliest days (Trinity College founded the first club in the 1850s and the IRFU followed in 1873). The narration is in Irish, but many of the contributors speak in English and there are English subtitles, so it’s easy to follow what’s going on.

This week’s episode covered the period from the 1950s to the mid 1970s and included events that I remember well, including the first game I attended at Lansdowne Road (Ireland v. Australia in 1967), the surreal game between Ireland and South Africa in 1970, complete with a massive police presence and a huge contingent of demonstrators against apartheid, and England coming over to play in 1973 after Scotland and Wales had declined to fulfill their championship fixtures in 1972 because of the “troubles”. The fact that a crowd burnt down the British embassy in Dublin just before they were due to travel probably affected their decision…

In any case, the documentary is very good and I was thrilled to see some footage of my grandfather, JC Conroy, in his role as president of the IRFU in 1972-73, filmed in the old committee box in Lansdowne Road beside then-premier Jack Lynch, applauding England as they took the field. Good stuff!

On Saturday, November 20, I was at Murrayfield to be TMO for the Scotland v. South Africa game. As Ian McLauchlan, the president of the SRU, remarked at the dinner for the teams that evening, the rain that poured down during the game was just what Scotland had hoped for. They played much better than they had the previous weekend, when they were beaten 3-47 by the All Blacks, and never gave South Africa any space. The final 21-17 score was probably about right for a game that was never a classic, even for rugby purists.

I’ll be back to Scotland next weekend for their game against Samoa in Aberdeen. The weather is predicted to be cold rather than wet. Samoa has played well against Ireland and England over the last two weekends and it will be interesting to see how they fare against the Scots.

Now back to technology for a few days…

– Tony


Thoughts on lagged database copies


One of the best things about delivering training to smart people is the questions that they pose after you introduce a topic. During the recent Exchange 2010 Maestro seminars that Paul Robichaux and I delivered in Boston and Anaheim, I took the lead in talking about the Database Availability Group (DAG) and the deployment options that are now available to Exchange 2010 administrators. Some of the questions that were raised then caused me to consider the value of lagged database copies to a DAG, which then provoked this blog post.

Consultants and other commentators often consider the use of a lagged database copy within a DAG for Exchange 2010 deployments. Typically, once there are more than two passive database copies, thoughts turn to the creation of a lagged copy to provide the ability for a “point in time” recovery should the need arise. Possibly people want to use new features; possibly they are influenced by the comments of others. Let’s explore the topic a little more.

The best thing about a DAG is that you can achieve resilience against failure by creating multiple copies of databases that Exchange will keep up to date through log shipping. However, some published advice exists that the second passive copy should be lagged. For example, Symantec’s page titled “Best practices for Backup Exec 2010 Agent for Microsoft Exchange Server” advises “If you can make more than one passive copy, the second passive copy should use a log replay delay of 24 hours”.

Of course, we are still learning about the best and most effective practices for DAG designs and it’s natural that people would want to use one of the new DAG features in their deployments. I think that there are a number of points that you need to consider before you deploy a lagged database copy into production.

First, what is a lagged database copy? A lagged database copy is one that is not updated by replaying transactions as they become available. Instead, the transaction logs are kept for a certain period and are then replayed. The lagged database copy is therefore maintained at a certain remove from the active database and the other non-lagged database copies.

The primary reason to use lagged database copies (7 or 14 days are common intervals) is to give you the ability to go back to a point in time when you are sure that a database was in a good state. By delaying the replay of replicated transaction logs into a database copy, you always have the ability to go to that copy and know that it represents a point in the past when the database was in a known condition. Two mailbox database copy properties govern how lagged copies function. You can set these properties with the Set-MailboxDatabaseCopy cmdlet, or set them when you create a new copy with the Add-MailboxDatabaseCopy cmdlet (see the example after the list):

  • ReplayLagTime: the time span governing the delay that Exchange applies to log replay for a database copy. Setting this value to zero means that Exchange replays new transaction logs as soon as they are copied to the servers that host database copies. The intention is that you have the chance to keep a copy in a state slightly behind the active copy so that, if a problem occurs on the active server that results in database corruption, you can stop replication and prevent the corruption from reaching that copy. Typically, DAGs that use a lagged copy are configured so that two or three database copies are kept up to date and one (usually in a disaster recovery site) is configured with a time lag. The maximum replay lag time is 14 days.
  • TruncationLagTime: the time span governing the delay before log truncation. Again, you can set this value to zero to instruct Exchange to remove transaction logs immediately after their content has been replayed into a database copy, but most sites keep transaction logs around for at least 15 minutes to ensure that they are available to bring a database copy up to date should an outage occur. The maximum truncation lag time is seven days.
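As referenced above, here is a hedged sketch of creating a lagged copy with a seven-day replay lag (the server name is purely illustrative):

Add-MailboxDatabaseCopy -Identity 'DB1' -MailboxServer 'ExServer3' -ReplayLagTime '7.00:00:00' -TruncationLagTime '0.00:15:00'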

We have to realize that a lagged database copy can occupy a large amount of storage. Apart from the normal requirement to provide storage for the database itself, you must assign space for all the transaction logs generated during the lag period, and this could be significant for a busy database that supports hundreds or thousands of mailboxes and generates many gigabytes of transaction logs daily. The transaction logs for a lagged database copy contain transactions that are not yet committed to the database; Exchange only replays them into the copy when the lag period expires, so if you have a lag period of 7 days, Exchange has to keep 7 days’ worth of transaction logs.
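As a purely illustrative calculation: a database that generates 10 GB of transaction logs a day and is protected by a 7-day lagged copy needs roughly 10 GB × 7 = 70 GB of extra log space for that copy, on top of the space for the database copy itself.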

Executing a smooth and stress-free recovery is the big issue that I see with lagged copies. Microsoft provides no user interface to recover data from a lagged database copy. The steps required to bring a lagged database copy online as the active copy are reasonably straightforward, but they are manual and depend on a reasonable degree of knowledge on the part of the administrator. You can mount a lagged database as a recovery database if all you need is to recover one or more specific mailboxes to a point in time, but this operation is not well documented, so expect to practice it before attempting it in production. If you decide that a point-in-time restore is required for a complete database (a pretty catastrophic situation) and make a lagged database the active copy, you force a reseed of all the other database copies. This is a further impact on service delivery.

The need to assign and manage sufficient storage is reasonably simple to address. The lack of a wizard or other GUI to guide an administrator through the use of a lagged database copy in recovery operations is more serious. Few companies have staff who are experienced in this kind of interaction with a DAG (it will come with time), so if the time ever comes when the lagged database copy is required, there’s a fair chance that all hell will break loose and panic will ensue before people figure out what to do. It should be an interesting conversation with Microsoft support:

Administrator: “Hi, I need to bring a lagged database copy back online because (insert reason here)”

Microsoft support: “Interesting… hang on a moment… (pregnant pause)”

Administrator: “Hallo, is anyone there?”

Microsoft support: “I’m just checking our support tools to see how best to proceed…”  (the story evolves from this point and everyone is happy in the end)

If this discussion causes you some concern, what can you do? I think there are two routes worthy of investigation. Expanded use of the enhanced “dumpster” in Exchange 2010 is an obvious solution for recovery of individual items. In other words, keep more data in the dumpster just in case someone needs to recover an item, and hope that you are never asked to recover a complete mailbox to the state that it was in at a point in time. If you are asked, you need to restore the database from a backup (you’re still taking backups – right?), run ESEUTIL to fix the database and allow a clean mount, mount the restored database as a recovery database, and then use the Restore-Mailbox or New-MailboxRestoreRequest (available from SP1 onwards) cmdlets to recover data into a PST that you can then import into the user mailbox or provide to the user.
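For the recovery database route, a hedged sketch of the SP1 cmdlet mentioned above (the database name is illustrative, and Restore-Mailbox offers a similar route on the RTM release):

New-MailboxRestoreRequest -SourceDatabase 'RecoveryDB1' -SourceStoreMailbox 'Tony Redmond' -TargetMailbox 'Tony Redmond' -TargetRootFolder 'Recovered Items'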

Recovery of complete databases is a different matter. My recommendation is that you should invest in storage or backup technology that incorporates strong recovery capabilities. Some storage offers very good snapshot recovery capabilities so that recovery is a matter of selecting the appropriate snapshot and recovering from it; some backup products provide similar capabilities. Your choice will be dictated by personal preference, previous deployment history, and your knowledge of how strong support personnel are within your company. In other words, you’ll select the best tool for the job to fit the unique circumstances of your Exchange 2010 deployment.

I’m sure that others will have their own views on the topic. For now, I just can’t see how I could recommend the deployment of lagged database copies. Comments are more than welcome…

– Tony


RTM of Microsoft Exchange Server 2010 (SP1) Inside Out


I see from the Microsoft Press blog that my Microsoft Exchange Server 2010 Inside Out book, also available at Amazon.co.uk and other fine booksellers, has now been Released To Manufacturing (RTM’d).

Book cover of Microsoft® Exchange Server 2010 Inside Out

RTM is a term that is normally applied to software or hardware products. It’s usually the point when a development team is satisfied that their code has reached the necessary levels of quality and functionality for the product to be deployed by customers. I guess RTM can be (and is) applied to books, especially when Microsoft is the publisher.

Amazon.com is still quoting December 1 as the likely availability date for printed books. Microsoft Press says that an eBook will be available next week. The price quoted on O’Reilly’s web site for the different formats seems higher than Amazon but O’Reilly may have the eBook as an exclusive for a while until the Kindle version is generated (my thoughts, not necessarily based on any great knowledge of how and when eBooks are generated).

Clearly it’s nice to reach this point in the publication cycle and I’d like to thank all who helped to create the content for the book in one way or another. Publishing a high-quality book truly requires a team effort from all concerned. Of course, now I have to think about what to do for an encore… so Microsoft, when is the next version of Exchange (E15?) due to be released?

– Tony
