As a follow-up question to "How Do i host multiple servers on HyperV with only a few public IP Addresses", I am now trying to figure out where to put the ISA/TMG server. Should it be virtualized, listening on an external IP and passing traffic to an internal network, or should it run in the host partition? The last time I played with ISA/TMG it was a physical box with other machines behind it, which makes me lean towards the virtual option: give it 2 public IPs and let it sort out the rest, with 1 IP for the box itself for management. Which way should it work?
TMG as a VM works well, and there is a TechNet video that describes various scenarios using TMG; just google "Virtualize your ISA or Forefront TMG servers". As always with these things, there is best practice, good practice, and stupid practice.
Best practice, as espoused in the video, is to have your virtualised TMGs on hardware used specifically for perimeter duties. Obviously this won't work if you've only got one server. However, segmenting your network as the video recommends is easily doable in a single-server scenario, as long as you have at least 3 physical network connections.
One will become a virtual/physical network (i.e. virtual with external access) connecting the TMG VM to the internet (or your gateway router) - this must not be used for hardware management. In the Hyper-V settings I name this network 'black' because it is unsafe and leads to the big outside.
One will be a virtual network (i.e. no external access) connecting the TMG VM to your target VMs - again, not used for hardware management. I name this network 'blue' because it is notionally safe and it links all the VMs together.
One is a physical network (not used at all by Hyper-V) reserved purely for the hardware management LAN. I name this 'management', and best practice is to keep it and the host off the internet, or to let the host onto the internet only for updates.
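As a rough sketch of setting those up, assuming the Hyper-V PowerShell module (Server 2012 or later - on 2008 R2 you'd make the same choices in Hyper-V Manager) and made-up adapter names (check yours with Get-NetAdapter), driven from a small Python script:

```python
# Sketch only: create the 'black' and 'blue' virtual switches by shelling
# out to the Hyper-V PowerShell module. Adapter and switch names are examples.
import subprocess

def ps(command: str) -> None:
    """Run a PowerShell command and raise if it fails."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# 'black': external virtual switch bound to the internet-facing NIC.
# -AllowManagementOS $false keeps the host partition off this network.
ps('New-VMSwitch -Name black -NetAdapterName "Internet NIC" -AllowManagementOS $false')

# 'blue': private virtual switch -- guest VMs only, no host or external access.
ps('New-VMSwitch -Name blue -SwitchType Private')

# 'management': nothing for Hyper-V to do -- leave the third physical NIC
# unbound from any virtual switch and plug it into the management LAN.
```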
Logically it looks like this:
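```
internet / gateway router
        |
   [ black ]   external virtual switch (no host access)
        |
     TMG VM
        |
   [ blue ]    private virtual switch (VMs only)
        |
 other guest VMs

management LAN --- third physical NIC --- host partition only (no virtual switch)
```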
So, when you assign virtual network interfaces to your VMs, the TMG VM gets two (black and blue) and all the other VMs get blue only.
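Continuing the sketch above (VM names are placeholders), the adapter assignment looks something like this:

```python
# Sketch only: two NICs for the TMG VM, one 'blue' NIC for everything else.
import subprocess

def ps(command: str) -> None:
    """Run a PowerShell command and raise if it fails."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# TMG VM: 'black' towards the internet, 'blue' towards the other guests.
ps('Add-VMNetworkAdapter -VMName TMG01 -SwitchName black')
ps('Add-VMNetworkAdapter -VMName TMG01 -SwitchName blue')

# Every other VM gets a single adapter on 'blue' and uses the TMG VM's
# blue-side address as its default gateway.
for vm in ["WEB01", "SQL01"]:
    ps(f'Add-VMNetworkAdapter -VMName {vm} -SwitchName blue')
```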
It's pretty straightforward; the main complexity is what you will encounter anyway when trying to understand how Hyper-V lets you co-opt network connections as 'virtual switches'. I've set up the same arrangement as above, with a 'black' network and a 'blue' network spanning a multi-server cluster, and found it works brilliantly. The basic leap of faith is trusting that a nasty on the black network can't leap over via the host to the blue network. Hyper-V has been rated EAL4+ for security, so it's fair to say that is improbable unless your host is drastically compromised.
I would probably make it a VM so that if you get another host you have HA for the service.