We have a dedicated server at OVH, assigned 2001:41d0:a:72xx::/64.
I have put the machines on a segment bridged to the WAN, as per IPv6 public routing of virtual machines from host.
The gateway is 2001:41d0:a:72ff:ff:ff:ff:ff, which is outside the network.
We're running a bunch of virtual Debian servers.
Some of our (older) servers are happy to route IPv6 to the gateway, but the new ones I'm trying to set up report "Destination unreachable: Address unreachable" when pinging the gateway.
The firewall is set up identically (rules for the /64, not per host), and /etc/network/interfaces is the same; the IPv6 addresses are set statically (different addresses, of course).
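For illustration, a static stanza of this kind in /etc/network/interfaces looks roughly like the sketch below (placeholder addresses in the same 72xx style as above; the 2000::/3 route matches the routing table further down):

# /etc/network/interfaces (sketch, not the actual file)
auto eth1
iface eth1 inet6 static
    address 2001:41d0:a:72xx::10
    netmask 64
    # OVH's gateway lies outside the /64, so it needs an on-link route first
    post-up ip -6 route add 2001:41d0:a:72ff:ff:ff:ff:ff dev eth1
    post-up ip -6 route add 2000::/3 via 2001:41d0:a:72ff:ff:ff:ff:ff dev eth1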
On both the working and the non-working machines, netstat -rn6 | grep eth1 shows:
2001:41d0:a:72xx::/64    ::                             U      256   2  40       eth1
2001:41d0:a:7200::/56    ::                             UAe    256   2  71       eth1
2001:41d0:a1:72xx::/64   ::                             UAe    256   0  0        eth1
2000::/3                 2001:41d0:a:72ff:ff:ff:ff:ff   UG     1024  2  63479    eth1
fe80::/64                ::                             U      256   0  0        eth1
::/0                     fe80::205:73ff:fea0:1          UGDAe  1024  1  2        eth1
::/0                     fe80::20c:29ff:fe22:60f8       UGDAe  1024  0  0        eth1
ff00::/8                 ::                             U      256   2108951     eth1
On the non-working machines, pinging the gateway or the world returns "Destination unreachable."
The machines can all reach each other on the local LAN.
I don't know if it is relevant, but on a working machine:
ping -c3 ff02::2%eth1
64 bytes from fe80::20c:29ff:fedb:a137%eth1: icmp_seq=1 ttl=64 time=0.240 ms
64 bytes from fe80::20c:29ff:fe22:60f8%eth1: icmp_seq=1 ttl=64 time=0.250 ms (DUP!)
64 bytes from fe80::2ff:ffff:feff:fffd%eth1: icmp_seq=1 ttl=64 time=3.57 ms (DUP!)
64 bytes from fe80::2ff:ffff:feff:fffe%eth1: icmp_seq=1 ttl=64 time=5.97 ms (DUP!)
On the non-working machine:
ping -c3 ff02::2%ens34
PING ff02::2%ens34(ff02::2%ens34) 56 data bytes
64 bytes from fe80::20c:29ff:fedb:a137%ens34: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from fe80::20c:29ff:fe22:60f8%ens34: icmp_seq=1 ttl=64 time=0.138 ms (DUP!)
The :fffd and :fffe addresses are missing.
All the IPv6 addresses have been assigned in the OVH control panel.
TL;DR: Something must be different between the old and new servers, but I can't find it.
UPDATE: A clone of a working machine does not work.
On the outside of the pfSense, set up as a bridge, the machine sends this:
12:33:23.087778 IP6 test1.example.org > fe80::2ff:ffff:feff:fffe: ICMP6, neighbor advertisement, tgt is test1.example.org, length 32
12:33:24.106302 IP6 test1.example.org > par10s28-in-x0e.1e100.net: ICMP6, echo request, seq 451, length 64
But nothing ever gets back. Pings from outside don't go through either.
As the machine is an exact clone of a working machine, except for the IP addresses, it must be an upstream problem at OVH.
UPDATE 2: Now OVH claims that to get data routed to an IPv6 address, the MAC needs to be associated with an IPv4 address. OMG. The working IPv6 addresses are not.
OVH does not do IPv6 properly; their setup only works in certain situations and is not applicable everywhere.
It only works without special hoop-jumping when the servers are exposed to the world and also have public IPv4 addresses.
They can't supply one public IPv6 address and a subnet routed to it, which is what you need if you want to run VMs behind your own firewall.
Until they get this working, it is better to look elsewhere if you are interested in IPv6.
OVH runs switch port security on their switches, so that only whitelisted MAC addresses can use any given port. This doesn't apply to vRack, where switch port security is disabled. But OVH won't let you route IPv6 subnets to vRack yet, nor can you fail over an IPv6 subnet to another server. This is a critical oversight; until both of these capabilities exist, OVH's IPv6 support has to be considered limited.
So this is how I've set up an OVH server running a few dozen virtual machines:
On the host server, br3 is a bridge containing eno3 and the virtual network interfaces on which I route IPv6. The host is configured roughly as in the sketch below.
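A minimal sketch of such a host configuration, with 2001:db8:0:7200::/64 standing in for the real prefix (the real file will differ in detail):

# /etc/network/interfaces on the host (sketch; placeholder addresses)
auto br3
iface br3 inet6 static
    bridge_ports eno3
    address 2001:db8:0:7200::1
    netmask 64
    # OVH's gateway sits outside the /64, so route to it explicitly
    post-up ip -6 route add 2001:db8:0:72ff:ff:ff:ff:ff dev br3
    post-up ip -6 route add default via 2001:db8:0:72ff:ff:ff:ff:ff dev br3
    # the host must forward IPv6 for the VMs behind it
    post-up sysctl -w net.ipv6.conf.all.forwarding=1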
I have static routes configured, as sketched below.
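As a sketch, with virbr1 standing in for one of those virtual interfaces and 2001:db8:0:7200:1::/80 for one of the /80 networks described below:

# sketch: send one /80 out of the /64 to a libvirt virtual interface
ip -6 route add 2001:db8:0:7200:1::/80 dev virbr1
# (or as another post-up line under the br3 stanza to make it persistent)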
I then run ndppd, which answers NDP neighbor solicitation queries for any address in my /64. This causes the MAC address of the host to be used for all IPv6 addresses in the subnet, which I then route to virtual interfaces in libvirt, split into /80 networks. Sketches of the ndppd configuration and of one example network follow.
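A minimal ndppd.conf along these lines, again with a placeholder prefix; the static rule makes ndppd answer every solicitation for the /64 itself:

# /etc/ndppd.conf (sketch)
proxy br3 {
    rule 2001:db8:0:7200::/64 {
        static
    }
}

And a sketch of one libvirt network carrying a /80 (names, addresses and the routed forward mode are illustrative assumptions; define it with virsh net-define):

<!-- sketch: one /80 network for a group of VMs -->
<network>
  <name>ipv6net1</name>
  <forward mode='route'/>
  <bridge name='virbr1'/>
  <ip family='ipv6' address='2001:db8:0:7200:1::1' prefix='80'/>
</network>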
All VMs in this particular network are assigned manual IPv6 addresses, but you could set up DHCPv6 if you wanted; that would look roughly like the sketch below.
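As a sketch, that is a <dhcp> range inside the network's IPv6 <ip> element (placeholders again; libvirt and dnsmasq put restrictions on DHCPv6 prefix lengths, so check the libvirt documentation before relying on an /80 here):

<!-- sketch: the same <ip> element with a DHCPv6 range added -->
<ip family='ipv6' address='2001:db8:0:7200:1::1' prefix='80'>
  <dhcp>
    <range start='2001:db8:0:7200:1::100' end='2001:db8:0:7200:1::1ff'/>
  </dhcp>
</ip>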
I then route IPv4 failover addresses to the vRack, which is bridged to a single bridge br4 on eno4 that all my VMs get a second virtual NIC from. Thus they have IPv6 on one interface and IPv4 on another. This is optional; you could just keep IPv4 failover addresses on your main interface (if you don't have a vRack, for instance). A sketch of that second bridge follows.
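A sketch of that second bridge, with a documentation block standing in for the vRack failover addresses:

# /etc/network/interfaces on the host (sketch; 192.0.2.0/24 is a placeholder vRack block)
auto br4
iface br4 inet static
    bridge_ports eno4
    address 192.0.2.1
    netmask 255.255.255.0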