I'm trying to set up a dedicated server with ESXi installed on it.
I was given 2 blocks of public IPs:
Block A. 1 IP = X.X.X.204, used as the ESXi VMkernel port IP (with the gateway pointing to X.X.X.1)
Block B. 8 IPs = Y.Y.Y.232-239 (233 to 238 usable), routed to the server via X.X.X.204
So I can connect to the ESXi host just fine (with the vSphere client) using the X.X.X.204 IP.
Then I ask myself: what network configuration should I use for the VMs I'm installing?
and I answer: I should use one of the Block B IPs.
Then I ask: what gateway IP should I give them?
I reply: ...hmm... not sure... maybe I could take one of the Block B IPs, assign it to the ESXi host as a new VMkernel port (that's the only way I see to give the ESXi host another IP), and then have the VMs use that IP as their gateway...
and I further add: if ESXi knows "how to communicate" with both blocks, and it only has one gateway set up (X.X.X.1, on VMkernel0), then it should definitely work.
But unfortunately it does not. It's as if ESXi doesn't do IP forwarding between the two networks.
Can anyone point me to a solution? Is it a matter of ESXi configuration, or am I missing something? Thanks!
Your assessment is correct: VMware ESXi doesn't do IP forwarding. The best solution would be to keep the one IP you have now for the VMware management interface and use a second network adapter, connected to the appropriate network, for the other IPs. Otherwise, you are going to have one heck of a time bootstrapping ESXi far enough to install a VM that will act as a router for you.
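If you did go the router-VM route, the guest side is just standard Linux forwarding. A minimal sketch, assuming a Linux guest with two vNICs (the interface names, port-group layout, and addresses are placeholders, and this only works if your provider's routing of Block B actually lands on an address the router VM owns):

    # Inside the Linux guest acting as the router (example values only)
    # eth0 faces the provider/Block A side, eth1 faces a port group for the Block B VMs
    echo 1 > /proc/sys/net/ipv4/ip_forward   # enable IP forwarding in the guest
    ip addr add Y.Y.Y.233/29 dev eth1        # router VM takes one usable Block B address

The other VMs on that port group would then point their default gateway at Y.Y.Y.233.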
It's been a while since I've mucked with ESX, but I believe it's possible to run everything off of one NIC. It isn't really good practice (management traffic is usually segregated onto its own NIC when it can be helped), but as far as I know it's still doable.
There are probably easier ways to accomplish this, but they're likely less sustainable and a bit more hackish. Hopefully I'm giving you accurate information - I'd need to consult the documentation to be sure, though.
The solution I'd likely go with would mean you'd need, at a minimum, VLAN tagging on the switchport your ESXi host is connected to. That lets the service console IP sit on one subnet while the VMs use the other IP range. You'd need your network administrator to set up the X.X.X.204 network as the native VLAN so you keep your service console access. The Y.Y.Y network would then be presented to ESXi as an 802.1q tagged VLAN, which you'd configure either on the vSwitch or on the VM port group (I forget offhand). You could then apply that VLAN to the VMs' NICs and give them IPs on your Y.Y.Y.232-239 subnet.
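If you end up tagging on the ESXi side, here's a rough sketch of the port-group setup from the console or vCLI, assuming the classic esxcfg tools, vSwitch0, and a placeholder port group name and VLAN ID (the same can be done from the vSphere Client's vSwitch properties):

    # Add a port group for the Y.Y.Y range and tag it with the 802.1q VLAN ID
    # ("VM Network B" and VLAN 20 are placeholders)
    esxcfg-vswitch -A "VM Network B" vSwitch0
    esxcfg-vswitch -v 20 -p "VM Network B" vSwitch0
    esxcfg-vswitch -l    # list vSwitches and port groups to verify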
Hope this helps!
If you only have one NIC then it's a no-brainer: set the ESXi box to have an IP of X.X.X.204 and give your VMs the Y.Y.Y.232-239 addresses. Do you know if the address ranges are on different VLANs? If so, you'll need to make the uplink a trunk and create two port groups on your vSwitch, one per VLAN; otherwise it's really straightforward.
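For the guest side, a static configuration inside a Linux VM might look something like this. The /29 mask follows from the .232-.239 block, but the gateway for Block B is whatever your provider designates, shown here only as a placeholder:

    # Inside a Linux VM (example values; substitute the real Block B gateway)
    ip addr add Y.Y.Y.233/29 dev eth0
    ip route add default via <Block-B-gateway>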