What is the correct way to set up NAT networking between a KVM VM and the host?
KVM VM:
No firewall installed
$ sudo arp-scan -r 5 -t 1000 --interface=eth0 --localnet
10.0.2.2 52:55:0a:00:02:02 locally administered
10.0.2.3 52:55:0a:00:02:03 locally administered
$ ip r
default via 10.0.2.2 dev eth0 proto dhcp metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
ifconfig
eth0: inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
ether 52:54:00:12:34:56
lo: inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1
Host:
:~$ ip r
0.0.0.0/1 via 10.211.1.10 dev tun0
default via 192.168.1.1 dev wlan0 proto dhcp metric 600
10.21xxxxxxxx dev tun0 proto kernel scope link src 10.21xxxxx
xxxxxxxxxxxx dev wlan0
128.0.0.0/1 via 10.211.1.10 dev tun0
192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.172 metric 600
192.168.4.0/22 dev eth0 proto kernel scope link src 192.168.4.8 metric 100
:~$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.3 netmask 255.0.0.0 broadcast 10.255.255.255
inet6 fe80::76c8:79b4:88d4:7f5c prefixlen 64 scopeid 0x20<link>
ether ec:8e:b5:71:33:6e txqueuelen 1000 (Ethernet)
RX packets 1700 bytes 194730 (190.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2862 bytes 246108 (240.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 16 memory 0xe1000000-e1020000
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 13251 bytes 7933624 (7.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 13251 bytes 7933624 (7.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500
inet 10.211.1.69 netmask 255.255.255.255 destination 10.211.1.70
inet6 fe80::a920:941c:ffa8:5579 prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 100 (UNSPEC)
RX packets 4348 bytes 2242726 (2.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3823 bytes 404190 (394.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.172 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::651b:5014:7929:9ba3 prefixlen 64 scopeid 0x20<link>
ether d8:55:a3:d5:d1:30 txqueuelen 1000 (Ethernet)
RX packets 114455 bytes 117950099 (112.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 67169 bytes 14855011 (14.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
~$ sudo arp-scan -r 5 -t 1000 --localnet
(just hangs)
The host cannot ping 10.0.2.2. No firewall is enabled.
Tried:
$ sudo ip route add default via 10.0.2.0
$ sudo ip route add default via 10.0.2.2
$ sudo ip route add default via 10.0.2.0/24
Can NAT work without virsh?
Can NAT be fixed from the command line only?
Update:
$ sudo ip link add natbr0 type bridge
$ sudo ip link set dev natbr0 up
$ sudo ip link set dev eth0 up
$ sudo ip link set dev eth0 master natbr0
That works to bridge eth0 as a slave to the KVM VM: the VM can ping other computers on the network, but not the host. (@Tom Yan's answer combined with the Arch Linux Network_bridge wiki page produced the commands above, which can ping other network IPs.)
So I tried to change the working bridge setup to allow the host and the VM to talk.
Goal: host$ ping kvm
$ sudo ip link add natbr0 type bridge
$ sudo ip link set dev natbr0 up
$ sudo ip a add 10.0.2.1/24 dev natbr0
$ sudo kvm -m 3G -hdb /dev/sde -nic bridge,br=natbr0
kvm$ sudo ip link add natbr0 type bridge
kvm$ sudo ip a add 10.0.2.2
kvm$ sudo ip link set dev natbr0 up
The VM can ping itself:
$ ping 10.0.2.2
PING 10.0.2.2 (10.0.2.2) 56(84) bytes of data
64 bytes from 10.0.2.2: icmp_seq=1 ttl=64 time=0.027 ms
but kvm$ ping 10.0.2.1
Destination Host Unreachable
host$ ping 10.0.2.2
(just hangs)
I prefer the command line to test the resilience of the bare-bones process/system, as opposed to a lot of scripts that pose more opportunities for failure: a command either works or it doesn't, and errors are more easily traced, isolated and reproduced. Depending on the Linux flavor, certain scripts or parts of scripts (like those incorporated in the XML-based alternative solutions offered) may or may not work. If bridging with KVM can be reproduced on any Linux flavor by following the commands above, then it seems possible that KVM NAT can also be achieved using CLI commands. To clarify the point of this post: CLI steps to NAT a KVM guest are more standardized, so they are preferable.
Generally, @NikitaKipriyanov's answer was the correct road; it was the answer, but it required a tweak to the command:
$ sudo kvm -m 3G -hdb /dev/sde -net nic -net user,hostfwd=tcp::1810-:22
Using the tweaked command, the VM can communicate with the internet as before and also communicate with the host via ssh. Credit to @NikitaKipriyanov and @cnst for the tweak: https://stackoverflow.com/a/54120040
The user will need to ssh to the localhost address using port 1810:
$ ssh p@localhost -p 1810
The common idea of NAT is that you don't see the translated addresses. You don't have routes to them; they don't exist for you. You only see the addresses they are translated into.
The QEMU case is no different. Here, your host is "outside" and your VM is "inside", so the VM can never be accessed at the address it is assigned. The VM has the address 10.0.2.2/24, but when it reaches the Internet, its packets get translated to 192.168.1.172 by the QEMU process, so the host considers those packets as created by the QEMU process and treats them like any other packets, say, from a locally running web browser or anything like that.
How do you access a VM from the host? When we have NAT, to reach hosts hidden behind it, we install DNAT rules. Again, the case of QEMU is no different: you must set up some rules in it, and then you may communicate with the VM from the host (or from other hosts, if you want) by sending packets to selected ports of the host address.
According to the QEMU documentation, to set up DNAT rules in its user-mode NAT, you use the `hostfwd` clause. Let's introduce such a clause (e.g. `hostfwd=tcp::11111-:22`) into its command line. Then TCP port 11111 will be occupied by the `qemu-system-x86_64` process on my machine, and if you connect to localhost port 11111, the connection will be made to port 22 of the VM.

The general form is `hostfwd=hostip:hostport-guestip:guestport`, but if you omit `hostip`, it'll be localhost, and if you omit `guestip`, it'll be the first "non-gateway" address inside the guest network.

I noticed you mentioned `virsh`. Are you running `libvirt`? Then the question is a duplicate; see comments.

You can use a bridge without enslaving any of your physical Ethernet interfaces on the VM host to it.
Say we stick with the choice of subnet `10.0.2.0/24` (which is NOT necessary). Create the bridge and assign the host an address on it, then create the QEMU bridge ACL file (`/etc/qemu/bridge.conf`).
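The elided commands likely looked like the following. This is a sketch reconstructed from the question's own update and the file path mentioned later in the answer; the gateway address 10.0.2.1 on the bridge is an assumption consistent with the chosen subnet:

```shell
# On the host: create the bridge and give it an address in 10.0.2.0/24
sudo ip link add natbr0 type bridge
sudo ip link set dev natbr0 up
sudo ip addr add 10.0.2.1/24 dev natbr0

# Allow QEMU's bridge helper to attach tap interfaces to natbr0
sudo mkdir -p /etc/qemu
echo 'allow natbr0' | sudo tee /etc/qemu/bridge.conf
```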
Then start QEMU with e.g. `-nic bridge,br=natbr0` or `-netdev bridge,br=natbr0,id=nb0 -device virtio-net,netdev=nb0`, which will `tap` your VM to the bridge in a dynamic manner (i.e. the `tap` interface will be removed once the VM is shut down). You'll need to configure a static IP on the VM as well:
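Inside the guest, a static configuration might look like this. The guest address 10.0.2.15 and the interface name eth0 are assumptions; the gateway matches the bridge address assumed on the host:

```shell
# Inside the VM: static address on the bridged subnet
sudo ip link set dev eth0 up
sudo ip addr add 10.0.2.15/24 dev eth0
# Default route via the host's bridge address (only needed for "outside" access)
sudo ip route add default via 10.0.2.1
```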
Unless you also set up a DHCP server (with e.g. dnsmasq) on the host. Don't forget to configure the DNS server to use inside the VM as well.
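For reference, a minimal dnsmasq invocation serving DHCP (and DNS forwarding) on the bridge could look like this; the lease range is an assumption:

```shell
# Hypothetical example: DHCP + DNS bound to natbr0 only
sudo dnsmasq --interface=natbr0 --bind-interfaces \
     --dhcp-range=10.0.2.10,10.0.2.100,12h
```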
Note that VMs that make use of the same bridge can communicate with each other unless you block such communication by some means (e.g. ebtables).
The `default` route (and the DNS server to use) are only necessary if you want the VM to be able to reach the "outside". If you only need it to communicate with the VM host, you should skip the second command and can stop reading. (Well, read the P.S.)

It would probably be best to configure e.g. dnsmasq on the host as a DNS forwarder if you do not want to use a specific "public" DNS server in the VM, although using DNAT to forward DNS requests to e.g. `192.168.1.1` should work for basic needs.

Then you'll need to enable IP forwarding:
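Enabling IPv4 forwarding is typically done with sysctl (volatile until reboot):

```shell
# Allow the host to forward packets between interfaces
sudo sysctl -w net.ipv4.ip_forward=1
```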
If you want to prevent IP forwarding from/to a certain network interface (e.g. `tun0`) for security reasons, you'll need to set up a firewall. For example:

Since you have (VPN) tunnel routes that practically override the `default` route, traffic from the VM to the Internet will go into the tunnel as well (unless you added the example rules above). If you want the traffic to go e.g. via your router, you'll need policy routing. For example:

You can also prevent your VMs from being able to reach your LAN hosts:
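The elided examples might have looked roughly like the following. The exact rules are assumptions based on the interfaces and subnets appearing in this thread (natbr0, tun0, 10.0.2.0/24, router 192.168.1.1):

```shell
# Firewall: block forwarding between the VPN tunnel and the VM bridge
sudo iptables -A FORWARD -i tun0 -o natbr0 -j DROP
sudo iptables -A FORWARD -i natbr0 -o tun0 -j DROP

# Policy routing: send VM traffic via the LAN router instead of the VPN
sudo ip rule add from 10.0.2.0/24 lookup 100
sudo ip route add default via 192.168.1.1 dev wlan0 table 100

# Prevent VMs from reaching LAN hosts
sudo iptables -A FORWARD -i natbr0 -d 192.168.1.0/24 -j REJECT
```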
Make exceptions (note the `-I`) if you are going to redirect DNS requests to your router:

Finally, configure iptables to perform SNAT dynamically (as per the outbound interface) for your VM subnet:
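Sketches of the two rules referenced above; the router address 192.168.1.1 and the VM subnet 10.0.2.0/24 are taken from this thread, the rule details are assumptions:

```shell
# Exception inserted (-I) ahead of the LAN-blocking rule, so DNS to the router passes
sudo iptables -I FORWARD -i natbr0 -d 192.168.1.1 -p udp --dport 53 -j ACCEPT

# SNAT dynamically as per the outbound interface (MASQUERADE)
sudo iptables -t nat -A POSTROUTING -s 10.0.2.0/24 -j MASQUERADE
```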
Note that this is NOT intended to, and will not exactly, prevent certain traffic from the "outside" (your physical LAN hosts or the Internet; the VM host does not count) from reaching your VMs. It merely breaks the communication as a side effect, because the source address of replying traffic from the VMs is changed before it is forwarded out. For proper isolation, you will need (additional) appropriate rules in the `FORWARD` chain. Consider a "stateful" setup there if you have such a need.

Additionally, you can redirect DNS requests that the VMs send to the host on to your router:
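A sketch of such a redirect, assuming the host bridge address 10.0.2.1 and the router 192.168.1.1:

```shell
# Rewrite DNS queries addressed to the host so they go to the router instead
sudo iptables -t nat -A PREROUTING -s 10.0.2.0/24 -d 10.0.2.1 \
     -p udp --dport 53 -j DNAT --to-destination 192.168.1.1
```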
This will more or less allow you to use `10.0.2.1` as the DNS server in the VM.

P.S. All the manipulations above (except the creation of / write to `/etc/qemu/bridge.conf`) are volatile, i.e. they will be gone once you reboot (unless your distro does something silly). I'm not going to dive into how to make them persistent, since there are different ways/approaches and it can be distro-specific.