I had been trying for weeks to figure out the right network configuration for sharing a range of public IPs with KVM virtual machines running on my server, with little luck, but with the help of the friendly ServerFault community, I've managed to make it work. You can find my working setup below:
My ISP routes all traffic to 192.168.8.118 (so that needs to be the primary IP of eth0), but I have 192.168.239.160/28 at my disposal.
Here's /etc/network/interfaces on the host machine:
# Loopback device:
auto lo
iface lo inet loopback

# device: eth0
auto eth0
iface eth0 inet static
    address 192.168.8.118
    broadcast 192.168.8.127
    netmask 255.255.255.224
    gateway 192.168.8.97
    pointopoint 192.168.8.97
    # This device acts as gateway for the bridge, so provide a route.
    up ip route add 192.168.8.118/32 dev eth0 scope host

# device: br0
auto br0
iface br0 inet static
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0
    address 192.168.239.174
    broadcast 192.168.239.175
    netmask 255.255.255.240
    gateway 192.168.8.118
    # Create and destroy the bridge automatically.
    pre-up brctl addbr br0
    post-down brctl delbr br0
    # Our additional IPs are allocated on the bridge.
    up ip route add to 192.168.239.160/28 dev br0 scope host
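As a quick sanity check (not strictly part of the setup), the bridge and routes can be inspected after ifup br0 with standard tools:

brctl show         # the bridge should exist; VM tap devices appear as ports
ip addr show br0   # br0 should carry 192.168.239.174/28
ip route show      # 192.168.239.160/28 should point at br0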
I have configured a virtual machine like this:
sudo ubuntu-vm-builder kvm precise \
--domain pippin \
--dest pippin \
--hostname pippin.hobbiton.arnor \
--flavour virtual \
--mem 8196 \
--user mikl \
--pass hest \
--bridge=br0 \
--ip 192.168.239.162 \
--mask 255.255.255.240 \
--net 192.168.239.160 \
--bcast 192.168.239.175 \
--gw 192.168.239.174 \
--dns 8.8.8.8 \
--components main,universe \
--addpkg git \
--addpkg openssh-server \
--addpkg vim-nox \
--addpkg zsh \
--libvirt qemu:///system ;
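Once the build finishes, the domain can be managed with the usual virsh commands (nothing here is specific to ubuntu-vm-builder):

virsh start pippin      # boot the freshly built domain
virsh dumpxml pippin    # print the generated libvirt XML
virsh console pippin    # attach to the serial console, if one is defined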
If I inspect the virtual machine's XML definition, its network interface is defined like this:
<interface type='bridge'>
  <mac address='52:54:00:b1:e9:52'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
When I (re)start the virtual machine, /var/log/syslog receives these lines:
Jul 20 03:13:02 olin kernel: [ 4084.652906] device vnet0 entered promiscuous mode
Jul 20 03:13:02 olin kernel: [ 4084.686388] br0: port 2(vnet0) entering forwarding state
Jul 20 03:13:02 olin kernel: [ 4084.686394] br0: port 2(vnet0) entering forwarding state
My server is running Ubuntu 12.04 64-bit with kernel 3.2.0-26-generic (from Ubuntu). I'm running libvirt-bin 0.9.8-2ubuntu1 and qemu-kvm 1.0+noroms-0ubuntu13.
iptables on the host machine is currently set up to allow all traffic (to eliminate that as a problem source), and I have enabled forwarding of both IPv4 and IPv6 traffic.
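For completeness, an "allow everything" policy plus forwarding amounts to something like this (a sketch; I haven't reproduced my exact firewall script here):

iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F                                # flush any existing rules
sysctl -w net.ipv4.ip_forward=1            # enable IPv4 forwarding
sysctl -w net.ipv6.conf.all.forwarding=1   # enable IPv6 forwarding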
When I log in to the guest via SSH from the host, I have no internet connection inside the guest OS. The guest's /etc/network/interfaces looks like this:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.239.162
    netmask 255.255.255.240
    network 192.168.239.160
    broadcast 192.168.239.175
    gateway 192.168.239.174
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 8.8.8.8
    dns-search pippin
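With that in place, the basic connectivity checks from inside the guest are (standard tools, shown for reference):

ping -c 3 192.168.239.174   # can the guest reach its gateway on the bridge?
ping -c 3 8.8.8.8           # does traffic get forwarded out to the internet?
ip route show               # the default route should point at 192.168.239.174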
Now it works
The configuration outlined above actually works as I want it to. Refer to the edit history if you want to see my earlier attempts.
If you're bridging, you shouldn't need to configure anything related to the VMs' IP addresses on the host machine. Just configure them to connect to the bridge, and configure the IPs within each VM in the usual way. Bridging joins networks together at the Ethernet layer, where IP addresses don't matter; from your ISP's standpoint it'll look like you have several computers plugged into a switch that's connected directly to the ISP.
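For illustration, in the pure-bridging setup the host's stanza would look roughly like this (a sketch, not the working config above: bridge_ports eth0 enslaves the NIC, and the host's primary IP moves onto the bridge):

auto br0
iface br0 inet static
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    address 192.168.8.118
    broadcast 192.168.8.127
    netmask 255.255.255.224
    gateway 192.168.8.97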
But if your ISP is routing traffic to the host's .118 address, you'll need to configure your VM host to act as a router and forward the VMs' traffic. To do that, remove the bridge_ports eth0 line from your interfaces file, add the route with ip route add to 192.168.239.160/28 dev br0, and enable forwarding with echo 1 > /proc/sys/net/ipv4/ip_forward. In the VMs, you'll need to configure 192.168.8.118 as the default gateway, and add a route that says 192.168.8.118 is directly reachable via eth0, as sketched below. (That's the VM's eth0, which connects to the host's br0.)

In neither case should you be adding the VMs' addresses directly to the br0 interface. In the bridging case, you want the VMs and not the host to answer ARP requests for those addresses, and in the routing case, you want the host to understand that when it receives a packet for one of those addresses, it needs to be routed somewhere else, not delivered locally.

You don't want to allocate the IP address of your VMs to the br0 interface of your host; that would just make that address belong to the host, not the VM.
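For the routed setup, those guest-side routes would look something like this (a sketch with iproute2; the persistent equivalents would go in the guest's /etc/network/interfaces):

ip route add 192.168.8.118 dev eth0      # the host is reachable on-link via eth0
ip route add default via 192.168.8.118   # then route everything through the host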
However, your VMs will need a gateway address to route all outbound packets to. I'd recommend allocating one IP in that /28 to your host, and configuring your VMs to use that IP as their default route. The first or last available IP in a subnet is a reasonable choice for a gateway address; for 192.168.239.160/28 that would be .161 or .174 (the latter is what the working config above uses).
Have you enabled IP forwarding? E.g., uncomment the following in /etc/sysctl.conf (one or both, for IPv4 and/or IPv6):
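On stock Ubuntu these lines ship commented out; from memory they are:

net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

After uncommenting them, apply with sysctl -p.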
Finally, have you tried logging in at the console of the VM (e.g. with virt-manager or a VNC viewer like vinagre or xvnc4viewer)? If so, what IP address does it have (if any)? Is the VM configured with a static IP or DHCP? If the latter, have you configured your DHCP server to give the appropriate IP address to the VM's MAC address?
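If you're unsure where the VM's graphical console is exposed, virsh can tell you (assuming the domain has a VNC display at all):

virsh vncdisplay pippin   # prints e.g. ":0", i.e. point a VNC viewer at host:5900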