A client needs .NET Framework 4 installed, and left unpatched, so that the software running on their Windows Server 2008 R2 box functions properly.
This raises the questions:
- When is it acceptable to keep a vulnerable framework in place because the application is not compatible with the latest framework version?
- What are best practices in this situation?
Perhaps this question should be in SE: Information Security?
The "best-practice" is generally not to run software that requires you to use a vulnerable framework (or, say an older version of Java), but that is not always an acceptable answer, unfortunately. For most businesses (those that aren't technology or IT-based, at least), IT serves business needs, and not the other way around, so you don't always get to use that answer and toss a system away because it's a security risk.
In these scenarios, the only thing you can really do is contain the vulnerable server as much as possible:

- Limit user access as much as possible.
- Whitelist the bare minimum of IPs or IP ranges in the firewall (see the sketch after this list).
- Run anti-virus on the server if feasible, raise the logging level, and devote time to actually checking the logs.
- Off-load as much as you can, and if possible put a proxy that isn't vulnerable in front of the server to accept client connections. For example, stand up an RDS server, configure the vulnerable server to accept network connections only from the RDS server, and have users reach the application through RDS.
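As a rough illustration of the firewall lockdown, here is a minimal sketch using `netsh advfirewall`, which ships with Server 2008 R2. The port (8080), the RDS host address (10.0.0.25), and the admin subnet (10.0.1.0/24) are placeholder assumptions; substitute whatever your application and network actually use.

```
rem Minimal sketch: block everything inbound by default, then whitelist only
rem the RDS host and a management subnet. Addresses and ports are placeholders.

rem Default-deny all inbound traffic on every firewall profile.
netsh advfirewall set allprofiles firewallpolicy blockinbound,allowoutbound

rem Allow the application port only from the RDS server (placeholder IP).
netsh advfirewall firewall add rule name="App from RDS only" dir=in action=allow protocol=TCP localport=8080 remoteip=10.0.0.25

rem Keep RDP open from the admin subnet (placeholder range) so you can still manage the box.
netsh advfirewall firewall add rule name="RDP from admin subnet" dir=in action=allow protocol=TCP localport=3389 remoteip=10.0.1.0/24
```

The point of the default-deny posture is that anything you forget to whitelist stays closed, which is the safer failure mode for a host you already know is vulnerable.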
And, of course, be sure to CYA: inform management, in writing, that you've identified a security risk with this system, so that the decision to continue using it (and any security breaches that might arise) is their responsibility, not yours.