I have three servers with IPs 192.168.1.1, 192.168.1.2, and 192.168.1.3. They can see and talk to each other. I would like to create a second subnet, 10.170.x.x, on top of the 192.168.1.x network.
I can assign an IP to each of the three servers (10.170.0.1, 10.170.0.2, and 10.170.0.3) with ip addr add 10.170.0.1 dev eth0. The problem I'm having is the routing: I can't ping any server over the 10.170.x.x network. I believe that I need to create some peer-to-peer bridges, but I have no clue how to get started. Any ideas?
As has been noted by others, when you use
ip address add
and don't provide a network mask or CIDR range, /32 is assumed, and so no routes are created for the subnet. To resolve the issue, add the CIDR range as well:
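For example (a sketch assuming a /16 to match the 10.170.x.x space; substitute each host's own address):

```shell
# Assign the address together with its prefix length; the kernel then
# installs a connected route for 10.170.0.0/16 automatically.
ip addr add 10.170.0.1/16 dev eth0

# Verify that the connected route now exists:
ip route show dev eth0
```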
To make the change persistent, add it to
/etc/network/interfaces
. This can be done the cheap and dirty way through a post-up command, but the proper way is to add a second stanza containing only that address. Note in particular that, despite what you will read in outdated Internet guides, you should not use
eth0:0
for the second IP address. This form has been deprecated for years, which means it is likely to be removed from Linux at any time.
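The second stanza in /etc/network/interfaces might look like this (a sketch; the interface name eth0 and the addresses are taken from the question, and the /16 prefix is an assumption):

```
# Primary address
auto eth0
iface eth0 inet static
    address 192.168.1.1/24

# Second stanza: additional address on the same interface
iface eth0 inet static
    address 10.170.0.1/16
```

Modern ifupdown accepts multiple iface stanzas for the same interface name and brings the addresses up together.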
ip addr add 10.170.0.1/16 dev eth0
so you're specifying a netmask. I bet the addresses are being added as /32 otherwise (though I have not confirmed that).

This is an interesting challenge to which there are a number of solutions. The wording of your question leaves some open questions, but from context I think I have an idea of what you're trying to do.
I assume these hosts are all on the same physical network segment - e.g. connected to a hub or switch and can ping each other on the 192.168.1.x network without going through a router. In this case, you really want a second network in parallel to the original - a slight nitpick, but relevant to understanding what's happening, I think.
I do think it's likely that Eric Renouf's answer above is hot on the trail - by default iproute2 is likely to add a 32-bit netmask - 255.255.255.255 - which is correct in a number of situations where routing is involved / necessary, such as at a large hosting provider where some machines have several IP addresses, handed out over time. You likely want a 24-bit netmask - 255.255.255.0 - or even a 16-bit, if you truly intend to use the entire 10.170.x.x space.
As an aside, the '.x.x' part of the addresses is an important detail here: typically we describe a network using its "network address", the bottom address in a particular subnetted range. In this case, your network addresses are 192.168.1.0/24 and 10.170.0.0/16. It's possible to have subnets smaller than class C-sized / 24-bit, where the network address isn't zero. I won't go into too much detail, but I strongly suggest investing some time reading about subnetting.
Now that we know to avoid the default netmask, and why, consider:
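For example, on the first machine (a sketch; /24 shown here, use /16 if you truly want the full 10.170.x.x range):

```shell
# Add the second address with an explicit 24-bit netmask.
# Use 10.170.0.1, .2, .3 on the respective hosts.
ip addr add 10.170.0.1/24 dev eth0

# Once done on each machine, the others should answer on the new subnet:
ping -c 3 10.170.0.2
```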
Repeat on each machine. If you want these configured at boot, also consider adding this line as a post-up command in /etc/network/interfaces.
Hope this helps!
I was able to get this working by adding a route with a source address:
For the 192.168.1.1 / 10.170.0.1 machine, it would be:
$ ip route add 10.170/16 dev eth0 src 10.170.0.1
More generally:
$ ip route add 10.170/16 dev eth0 src "ip address of host"
I think the right way to do it is as follows:
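The commands were lost from the original answer; a plausible reconstruction, given the description below of one interface participating in two networks, is simply assigning both addresses with their proper prefix lengths on the same interface:

```shell
# Both networks share the same physical segment and the same NIC,
# so give eth0 one address (with mask) in each network.
ip addr add 192.168.1.1/24 dev eth0
ip addr add 10.170.0.1/16 dev eth0
```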
This may or may not work if the servers are attached to an unmanaged switch, but with "smart" or managed switches this is not a problem.
I have this configuration working with 2 IPv6 networks from different providers, with a bunch of hosts each with a single Ethernet interface communicating on both networks, and it works great.
You just need to check your routing; I think that's the part you missed.
Probably what failed is that you've assigned the IP address to each host without indicating the network mask. You may try this by doing:
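The command was missing from the answer; presumably something like this, with an explicit mask (the /16 is an assumption based on the 10.170.x.x wording):

```shell
# Adding the address with its mask installs the connected route as well
ip addr add 10.170.0.1/16 dev eth0
```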
This will also add the mask, thus creating the routing entry needed for your hosts to know how to reach each other.
If you just want to add the routing entry:
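That would be (a sketch, assuming the 10.170.x.x address is already configured on eth0):

```shell
# Add only the connected route for the subnet, without touching the address
ip route add 10.170.0.0/16 dev eth0
```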
Are the servers virtual machines? If so, you may consider adding supplemental interfaces.
However, as mentioned by others, just add the IPs with the appropriate netmask, and then the hosts are able to communicate.
Proof from 2 Debian guests running on VMware:
That's it!
To debug, use tcpdump and check the ARP entries.
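For example, these read-only diagnostic commands (run while pinging from another host) show whether ARP resolution is happening on the new subnet:

```shell
# Watch ARP and ICMP traffic on the wire
tcpdump -ni eth0 arp or icmp

# Inspect the neighbour (ARP) table for the 10.170.x.x peers
ip neigh show dev eth0
```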