I attached a second ENI to my Ubuntu machine in an AWS VPC; the private IP of this new ENI is 192.168.12.24. When I try to SSH into this machine via the new ENI from another machine in the same VPC, I get a connection timeout. I am able to SSH into the first ENI of the same machine both from inside and from outside the VPC.
The route command shows the following:
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.0.0 * 255.255.240.0 U 0 0 0 eth0
192.168.0.0 * 255.255.240.0 U 0 0 0 eth1
default 192.168.0.1 0.0.0.0 UG 100 0 0 eth0
I am new to iptables and ip rules; any help would be really appreciated.
I had the same issue and figured out that the root cause is that Ubuntu does not automatically configure a newly attached network interface; it is not plug and play. You can confirm this with $ ifconfig: the secondary ENI never gets an IP address, even after a reboot. You need to configure the interface manually to make it work.
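For example, a quick check (assuming the second ENI shows up as eth1; newer Ubuntu AMIs may name it ens6 or similar):

ip link show          # eth1 appears in the kernel's device list
ip addr show eth1     # but shows no "inet" (IPv4) line until it is configured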
Step 1: create a new .cfg file for eth1:
/etc/network/interfaces.d/eth1.cfg
with the following contents:

# secondary eth1 interface
auto eth1
iface eth1 inet static
    address 10.0.0.X       # your secondary private IP
    netmask 255.255.255.X  # your subnet's netmask
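For the setup in the question (secondary private IP 192.168.12.24 on the 192.168.0.0/20 subnet shown in the route output, i.e. netmask 255.255.240.0), the file would look like:

# secondary eth1 interface
auto eth1
iface eth1 inet static
    address 192.168.12.24
    netmask 255.255.240.0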
After rebooting your instance, you should be able to ping the new private IP (where you couldn't before).
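To check from the other machine in the VPC (ubuntu is the default login user on Ubuntu AMIs; your-key.pem is a placeholder for your key file):

ping -c 3 192.168.12.24
ssh -i your-key.pem ubuntu@192.168.12.24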
Then, you just need to add a route through the subnet's gateway so that traffic for the new private IP actually flows over eth1 (sshd already listens on all local addresses; the problem is routing, not listening):
sudo route add -net 10.0.0.0 netmask 255.255.255.0 gw 10.0.0.1 dev eth1   (adjust the subnet, netmask, and gateway to your VPC)
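Equivalently, with the newer ip tooling (same placeholder addresses as above):

sudo ip route add 10.0.0.0/24 via 10.0.0.1 dev eth1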
SSH connections to the new private IP should now be accepted.
Adding a secondary network interface to a non-Amazon Linux EC2 instance causes traffic flow issues: replies sourced from the secondary IP leave through the primary interface, and the VPC drops them. This AWS Knowledge Center article answers your question:
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-ubuntu-secondary-network-interface/
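As I understand it, the fix the article walks through comes down to source-based policy routing: give eth1 its own routing table, plus a rule that sends traffic sourced from the secondary IP through it. A minimal sketch using the question's addresses (the table name/number "200 eth1rt" are arbitrary choices; 192.168.0.1 is the gateway from the route output above):

echo "200 eth1rt" | sudo tee -a /etc/iproute2/rt_tables
sudo ip route add default via 192.168.0.1 dev eth1 table eth1rt
sudo ip rule add from 192.168.12.24/32 table eth1rt

Note that these commands do not survive a reboot; to make them permanent you need to add them to your interface configuration.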
Hope it helps!