Last September, Microsoft dropped a bombshell when it announced the end of development for the Threat Management Gateway (TMG) product, alongside its decision to cease production of on-premises anti-virus products. The problem for the Exchange community was that TMG had become the de facto choice as a reverse proxy deployed alongside Internet-facing Client Access servers to handle inbound client traffic.
Since the original announcement, Microsoft has done its best to reassure customers and explain that TMG support remains in place until April 2015. In a nutshell, although no more TMG licenses can be bought, you can continue to run TMG alongside Exchange 2007, 2010, and 2013 until support expires.
But thinking about the situation after a thought-provoking discussion with Greg Taylor of Microsoft, I wonder whether the function served by TMG and ISA Server, its immediate predecessor, is focused on the needs of the past rather than the present. If you go back to a time when Outlook Anywhere started to popularize HTTPS connectivity instead of running MAPI RPCs over a VPN, the target infrastructure was Windows 2003 servers and Exchange 2003 SP2. External threats abounded as hackers attempted to penetrate past corporate firewalls to attack unhardened internal systems, including Exchange.
So it was logical to deploy multiple levels of protection, starting at the firewall and passing through servers that performed tasks such as packet inspection before any traffic was allowed to reach an internal server. The approach worked, and it served IT well for as long as IT exerted strict control over networks, devices, and servers.
The same conditions do not exist today. On the plus side, the latest versions of Windows and application servers like Exchange are more secure than they were in the past, thanks to customer pressure to drive improvement and to changes in Microsoft’s engineering practices that enforce “secure by design”. On the downside, infrastructures have to cope with connections coming in from a multitude of device types, not all of which are “approved”, because of the popularity of BYOD.
The latest versions of Exchange demand nothing more than TCP port 443 to be open on corporate firewalls before clients can connect. The question then is what additional processing needs to happen before a sanitized traffic stream from the firewall hits an Exchange server. As it turns out, the answer is “not much”, largely because Windows and Exchange can protect themselves against suspect packets and because the latest generation of firewall-cum-load balancer products can do much more than simply block inbound traffic. If this assertion is true, what value does a product like TMG or UAG deliver? And is that additional product even required to maintain a secure environment?
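To make the claim concrete, reviewing a firewall for an Exchange deployment can be reduced to checking the inbound rule set against a minimal allow-list. This is a hypothetical sketch: the allow-list (443 for client HTTPS, 25 for SMTP mail flow) and the legacy rule set are illustrative, not taken from any real configuration.

```python
# Minimal allow-list for a modern Exchange deployment (illustrative):
# 443 for client HTTPS traffic, 25 for SMTP mail flow.
ALLOWED_INBOUND_PORTS = {443, 25}

def audit_rules(inbound_ports):
    """Return the set of inbound ports that fall outside the allow-list."""
    return set(inbound_ports) - ALLOWED_INBOUND_PORTS

# A rule set left over from a legacy deployment: the RPC endpoint mapper (135)
# and plain HTTP (80) would both be flagged for closure.
legacy_rules = [443, 25, 135, 80]
print(audit_rules(legacy_rules))  # flags 135 and 80
```

Anything the audit flags is a candidate for removal; the point of the article's argument is that, for current Exchange versions, the allow-list really can be this short.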
Strong opinions will no doubt be voiced on this topic. Security professionals take their job very seriously and abhor anything that might expose a company to risk. But in defence of advocating the heresy of passing traffic direct from firewall to Exchange, I point out that some in the security community have long considered erecting strong barriers and depending on them for protection against network threats to be a fool’s errand. The Jericho Forum, part of the Open Group, has led the charge to encourage the development of systems that can function without risk as part of the Internet, without the kind of traditional barriers that have been erected to date. To get an insight into their work, you could do worse than reviewing a presentation called “The business case for removing your perimeter” given at the RSA conference in April 2008. It makes interesting reading.
I was responsible for HP’s security strategy during the 2004-2007 period. When I worked in that role I had the chance to debate the changing nature of security with members of the Jericho Forum. I always thought that they had interesting but maybe impractical ideas. Now it seems that their thinking might have been a little ahead of its time. Perhaps it is now appropriate to ask the question whether the now-traditional approach should be applied to protecting modern versions of Exchange and other Windows applications that are built to consume and filter HTTP traffic.
Security traditionalists and those who worry about protecting infrastructures against penetration will probably still argue that strong barriers have to be maintained. Their concerns should be taken into account when any security strategy is constructed, as threats evolve and flex all the time – and drive an entire industry dedicated to protection against malware, trojan horses, viruses, and the like. At the end of the day, the decision as to how to deploy and protect servers depends on the security requirements and profile of individual companies, but I think it’s worth considering how the attack surface of modern Windows servers differs from that of their predecessors, and whether this influences your protection strategy.
Follow Tony @12Knocksinna
Removing the perimeter is a major paradigm shift and surely not easy to implement. The current situation, with customers having two cascaded perimeter networks (Public DMZ/Private DMZ), does not make things easier. But you have pointed out very clearly that there is an entire industry making a profit out of this situation. Instead of adding more and more complexity until we are no longer able to maintain our security policies, we need to re-think the security approach.
The fact that the presentation was given back in 2008 is really interesting.
An appropriately hardened Exchange 2003 server can function just as securely as 2007/10/13 using SSL, with the sole client communication method being SSL. I have not recommended TMG in Exchange deployments since the introduction of RPC/HTTP(S) in Ex2003 and support for SSL. There are only two ports that need to be opened for Exchange: 443 and 25. Broadly speaking, nothing else should be required. It’s been my experience that inflexible Infosec mandates are usually the reasoning behind publishing Exchange with TMG. Practically, there is little additional benefit to be had by deploying TMG – which I won’t belabour here – especially given the maturity of security products, the multi-layered approach that is generally deployed for Exchange security, and MS’s ‘secure by default’ approach that started with Ex2007. Granted, I will miss TMG, as it’s been a great product for other uses such as OCS/Lync/Windows Proxy etc. But for Exchange I won’t miss it at all.
@Thomas Removing the perimeter is a bit too edgy (no pun intended), and there are very few Enterprise businesses who would even entertain the thought. But what is often misunderstood is that the DMZ is NOT a secure area. That being said, most businesses, if not all, could function without it. At the end of the day it’s just another network, subject to the same constraints as your secure LAN. Whether it provides an additional layer of security is a thoroughly moot topic. Personally I would be more than happy to deploy an infrastructure without a DMZ. Certainly for the SMEs I work with, that’s exactly what I do. But the methods and logic used by Infosec are unlikely to change, primarily because of outdated concepts, and the fact they look too far down the OSI model, applying the same standards, instead of at the application layer where the business logic is actually occurring.
Security doesn’t stop at the communication protocol. It is not for nothing that within four months of Windows Server 2012’s release there is almost 1 GB worth of updates, many of them security fixes. God knows how many unplugged holes exist – it may be worse than Swiss cheese. I wouldn’t sleep comfortably knowing that a hacker can directly inject code into my bank’s buggy IIS server when such attempts could be stopped at the perimeter by a reverse proxy.
But that’s just me.
I still think adding another barrier such as TMG can greatly improve security. If there’s a security hole in IIS you may be immediately vulnerable if your Exchange servers are exposed to the Internet directly. TMG runs different code and is therefore not affected by the same problem, and at least you can keep anonymous users out.
And of course it allows you to better control who and what can be accessed from the outside. You may allow everybody to access OWA internally but restrict it to certain folks from the Internet. Or you may require a different level of authentication from the Internet (e.g. two-factor). You may be able to work around all these issues, but a reverse proxy makes these things a lot easier.
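The kind of differentiated policy described here can be sketched as a simple decision function of the sort a reverse proxy evaluates per request. Everything below is hypothetical – the networks, URL paths, and authentication labels are illustrative, not drawn from TMG or any real product.

```python
import ipaddress

# Hypothetical "internal" address ranges for this sketch.
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("192.168.0.0/16")]

def required_auth(client_ip, path):
    """Decide what authentication a reverse proxy should demand for a request."""
    internal = any(ipaddress.ip_address(client_ip) in net
                   for net in INTERNAL_NETS)
    if path.startswith("/owa"):
        # OWA: single-factor inside the LAN, two-factor from the Internet.
        return "single-factor" if internal else "two-factor"
    if path.startswith("/ecp") and not internal:
        # Admin panel: never published externally in this sketch.
        return "deny"
    return "single-factor"

print(required_auth("10.1.2.3", "/owa"))     # single-factor
print(required_auth("203.0.113.7", "/owa"))  # two-factor
print(required_auth("203.0.113.7", "/ecp"))  # deny
```

The point is not the code itself but that the pre-authentication logic a dedicated reverse proxy adds is a thin policy layer; whether that layer belongs in a separate product or in the firewall/load balancer is exactly the question the article raises.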
I have to say that in the SMB space, deploying on-premises Exchange servers without TMG or a DMZ has been the de facto standard for a long time. In fact, the cost savings of the old SBS would be negated completely if you had to tag on a TMG license (back when you could). We don’t have these servers directly connected to the Internet; there is a firewall (or Next-Generation Firewall, as the new marketing term-du-jour goes) that does address translation plus stateful and deep packet inspection, intrusion detection, etc. Having been responsible for these systems for hundreds of customers for years, I can’t think of any issues that have come from this arrangement.
Is this as secure as a deployment with a DMZ and a reverse-proxy-protected system? Of course not. But is it significantly less secure? Probably not, in my opinion. The whole TMG requirement has always seemed like security theater to me. Other than some pre-authentication, TMG didn’t seem to do much but add complexity and cost, then pass the traffic on. And now that MS has end-of-lifed the TMG product, how much do you trust it to keep up with new threats compared to a dedicated security company?