What follows is a long-winded background to this question: What is the industry best practice (or what recommendations would you give) for securing outbound traffic in an enterprise environment?
Background
We have a pretty typical enterprise environment: Linux and Windows hosts on unroutable IPv4 addresses behind a firewall/router/proxy. Among other things, these hosts run our application and database servers for our company's core service, which we develop in-house. The database servers have been locked down pretty extensively. Their addresses do not route (even with NAT/PAT) out of the internal network.
The application servers, and the servers that build and prepare the applications, on the other hand, require some connectivity to the outside world. These hosts need to gain access to Internet resources for a variety of reasons:
- integrate with public or private services,
- pull libraries from open-source repositories,
- download platform updates,
- get resources for ad-hoc troubleshooting,
- transmit statistics to affiliate monitoring systems,
- possibly other uses not yet identified.
These Internet resources can usually be identified by a host name or domain name, but they can rarely be referenced by a stable destination IP address. The resource may also be a specific subset of a larger service, such as an application within a domain (graph.facebook.com) or a path on a host (google.com/a/company). We would like to identify each resource as specifically as possible so as to avoid being overly permissive.
Our goal is to maintain a secure network. In particular, we want to:
- prevent or limit data exfiltration by a clever adversary who has gained unauthorized access to a system,
- monitor and account for activity originating from inside the network.
Our focus is on traffic originating from inside the network and terminating outside our secure environment. Furthermore, we aim to keep the permissions as tight as possible, specifying the source based on a class of host and the destination based on the resources that class of host requires.
Whatever we ultimately use, we would like to mechanize the process of granting access as part of the host provisioning process. Another requirement is that application development should be minimally impacted; the solution should look as much as possible like a simple TCP/IP network for authorized communication.
To this end, we have a few proposals on the table, some of which we've tried and some of which we may evaluate:
DMZ Firewall
The firewall sits in the DMZ and routes traffic at the network layer based on a whitelist of allowable destination IP addresses. Traffic that doesn't match a whitelist rule is simply discarded.
Advantages
- Applications can be written naturally without any consideration for the network.
- Supports transports other than HTTP/HTTPS.
Limitations
- Low granularity - a single IP address can serve many applications and almost always serves multiple paths on a host.
- It's sometimes possible to resolve a hostname to a list of IP addresses using DNS and mechanically update that list (see the sketch after this list), but the updates don't happen in real time.
- Rejected traffic is indistinguishable from a faulty network: because packets are silently discarded, failures are time-consuming, taking 15-60 seconds (or longer) for a connection attempt to time out.
- Mechanization of the firewall is undesirable due to potential for abuse/failure.
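To make the DNS-based mechanization idea concrete, here is a minimal sketch (Python) of resolving each whitelisted hostname and emitting per-address firewall rules. The hostnames, chain names, and iptables rule layout are placeholders for illustration, not anything we actually run.

```python
#!/usr/bin/env python3
"""Sketch: resolve whitelisted hostnames and emit iptables rules.

All names below are hypothetical. Note the staleness problem: the
rules capture only the DNS answers observed at generation time.
"""
import socket

# Hypothetical whitelist, keyed by class of host.
WHITELIST = {
    "app-servers": ["graph.facebook.com", "repo.maven.apache.org"],
}

def resolve_ipv4(hostname):
    """Return the set of IPv4 addresses the name currently resolves to."""
    infos = socket.getaddrinfo(hostname, 443, socket.AF_INET, socket.SOCK_STREAM)
    return {info[4][0] for info in infos}

for host_class, names in WHITELIST.items():
    for name in names:
        for ip in sorted(resolve_ipv4(name)):
            # One ACCEPT rule per resolved address; a real tool would
            # diff against the live ruleset rather than regenerate it.
            print(f"iptables -A FWD-{host_class.upper()} -d {ip} "
                  f"-p tcp --dport 443 -j ACCEPT  # {name}")
```

Even run frequently from a scheduler, this only narrows the window in which the rules are stale; it cannot eliminate it, which is the real-time limitation noted above.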
HTTP Proxy Server
The proxy server, like the firewall, resides in the DMZ with connectivity to both the internal and external networks. All outbound traffic must pass through the proxy server, which allows only traffic matching a whitelist of authorized resources specified by URL or partial URL. The proxy operates at the HTTP/HTTPS protocol level.
Advantages
- Robust definition of allowed destinations.
- With a little host configuration, works naturally for many applications.
- Rejected traffic can usually be identified quickly.
Limitations (some apply specifically to our Stingray appliance)
- Applications must be aware of the proxy server and direct traffic to it (see the sketch after this list).
- Some libraries require special handholding (or even bug fixes) to work properly in this environment. It's often difficult to tell in advance which libraries will be affected.
- Rules cannot differentiate easily based on source host class.
- Difficult to mechanize.
- Only works for HTTP/HTTPS.
- The network environment of the proxy can differ from that of the host, leading to difficult-to-diagnose situations. For example, the application host can resolve a hostname that the proxy cannot; the proxy then returns a 404 response that is difficult to distinguish from a 404 from the intended resource.
- The proxy cannot inspect encrypted traffic, and therefore cannot filter it any better than a firewall can.
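To illustrate the first limitation, here is a minimal sketch (Python standard library) of what proxy-awareness looks like from the application side. The proxy address is a placeholder that provisioning would have to inject.

```python
#!/usr/bin/env python3
"""Sketch: an application explicitly routing through the DMZ proxy.

The proxy URL is a placeholder; provisioning would inject the real one
(e.g., via environment variables or configuration management).
"""
import urllib.request

PROXY = "http://proxy.dmz.example.internal:3128"  # hypothetical address

# Each application (or library) must be configured like this; libraries
# that ignore proxy settings are the ones needing special handholding.
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
)

# A whitelisted URL succeeds; a rejected one fails fast with an HTTP
# error from the proxy instead of a long TCP timeout.
with opener.open("https://graph.facebook.com/", timeout=10) as resp:
    print(resp.status, resp.reason)
```

Note that for HTTPS the client issues a CONNECT through the proxy, so the proxy sees only the destination host and port, never the full URL; that is exactly the encrypted-traffic limitation in the last bullet above.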
Advanced Perimeter Security Device
We are currently considering a device such as the Palo Alto Enterprise Perimeter. Like the others, this device would filter traffic, but it performs deep inspection at the application layer: it can inspect HTTP headers and manipulate traffic accordingly, and it can even intercept and decrypt SSL/TLS traffic.
Advantages
- It's a commercially-supported, feature-rich approach.
- Deep packet inspection provides plenty of detail to apply fine-grained permissions.
- Applications conversing in plain text need not be aware of the device.
Limitations
- For us, it's a new investment and yet another appliance to learn/configure/manage.
- If SSL inspection is enabled, it breaks the chain of trust. Applications properly configured for high security will balk or fail, and so must account for the specialized environment (see the sketch after this list).
- It's an unknown quantity. It's unclear if it will have the interfaces that would enable the mechanization we desire.
- A new capital expenditure.
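The chain-of-trust issue is easy to demonstrate. The sketch below (Python, with a placeholder path for the appliance's CA certificate) shows both halves: a strictly configured client rejecting the re-signed chain, and the provisioning step that would be required to make it succeed.

```python
#!/usr/bin/env python3
"""Sketch: why SSL inspection breaks strictly configured clients.

An inspecting device re-signs server certificates with its own CA, so
verification against the public roots fails. The CA file path below is
a placeholder for wherever provisioning distributes the appliance cert.
"""
import ssl
import urllib.request

URL = "https://graph.facebook.com/"

# The default context trusts only the public roots: behind an
# inspecting device, this connection fails certificate verification.
try:
    urllib.request.urlopen(URL, context=ssl.create_default_context(), timeout=10)
except ssl.SSLCertVerificationError as exc:
    print(f"strict client balks at the intercepted chain: {exc}")

# Provisioning must instead add the appliance's CA to the trust store.
ctx = ssl.create_default_context(cafile="/etc/pki/inspection-ca.pem")  # placeholder
urllib.request.urlopen(URL, context=ctx, timeout=10)
```

Applications that pin a specific certificate or public key will still fail even with the appliance CA trusted, which is the "balk or fail" case above.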
The Question
What is the industry best practice (or what recommendations would you give) for securing outbound traffic in an enterprise environment? Based on the details given above, is our stance too aggressive (or too lenient)? We're about to invest in one of these solutions (or maybe another) by developing tools to mechanize our processes, so any thoughtful advice on the best approach will be most appreciated.
Nice post. What counts as too aggressive or too lenient is hard to say in general; YOU should decide that. All three proposals are fine, but they differ in approach, usability, price, etc.
I work for a company that passes VISA/Mastercard security certification (PCI) every year, and everything depends on what you do and what risks you face. No company is without risk; yours may be minimal or insignificant, but risk is always present. Maybe an HTTP proxy is enough for you, and you are not worried about people who can use HTTP tunnels or HTTP-based remote applications (like Skype or TeamViewer). Maybe you don't need application control, or 802.1X certificate-based authentication at the Ethernet level, on a machine with full-disk encryption that requires a special USB key at every boot, where that key comes from one of twenty 10-inch-thick steel safes opened by two split passwords changed six hours ago and known only to two people, delivered by two security specialists with two guards and four remotely controlled cameras, all of it 300 m underground. What is applicable and sufficient for you is, again, your decision.
If your employees are security experts and bad guys, able to use sophisticated tools and hide from cameras, there is no way to control them just by watching their traffic and packets; they can still hide and tunnel wherever they want, so you should consider other measures too (I guess the Palo Alto Enterprise Perimeter can do this, if you need it that badly and are willing to pay USD 1 million for it).
All your proposals are OK; there is nothing wrong with using any of them in an enterprise.
I also recommend taking a look at SIEM alerting products (SolarWinds SIEM, Trustwave SIEM, IBM Q1 Labs QRadar). Maybe you would rather watch the situation than restrict it in fine detail.