I am trying to create a very simple ECS cluster in the Sydney region.
It is a very straightforward setup: I specified the hard disk to be 60 GB, and I want two EC2 instances in the cluster.
The instances show up as expected in the EC2 console.
However, they are not showing up on the ECS cluster page:
1) Why does this happen?
2) Is there any logging I can examine to find out the underlying problem?
You indicated in your comment that the instances have no public IP addresses. I'm extrapolating from that comment that your instances likely have no route to the Internet as well.
In order to use ECS, your instances need to have a route to reach (at a minimum) the ECS service endpoints. A route to the Internet can be through an Internet Gateway (IGW), Network Address Translation (NAT), or through an HTTP Proxy. Without a route to reach the ECS service endpoints, the ECS agent will be unable to register itself into your cluster and you will be unable to use those instances with ECS.
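As a quick sanity check, you can test from the instance whether the regional ECS endpoint is reachable at all. This is a minimal sketch, assuming the Sydney region (ap-southeast-2) from the question:

```bash
# Run on the instance (via SSH or SSM Session Manager).
# Any HTTP response at all means the endpoint is reachable;
# a timeout suggests the instance has no route to the Internet.
curl -sv --max-time 5 https://ecs.ap-southeast-2.amazonaws.com/
```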
Fixed the issue by following these two steps:
1) Make sure 'auto-assign public IPv4 address' is enabled.
2) Create and attach an Internet Gateway to the VPC, then add a route to the gateway (a CLI sketch of both steps is below).
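For reference, a rough AWS CLI equivalent of those two steps might look like the following; all resource IDs here are placeholders, and you would substitute your own subnet, VPC, and route table:

```bash
# 1) Have new instances launched in the subnet get a public IPv4 address.
aws ec2 modify-subnet-attribute \
    --subnet-id subnet-0123456789abcdef0 \
    --map-public-ip-on-launch

# 2) Create an Internet Gateway, attach it to the VPC,
#    and route all outbound traffic through it.
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway \
    --internet-gateway-id igw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id igw-0123456789abcdef0
```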
Are you using an ECS-optimized AMI? I would do that, and then point the agent at your cluster in the user data when you spawn the instance. If you SSH onto the box, you should then be able to watch it register with the cluster in the ECS agent's Docker logs; a sketch of both is below.
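The original snippets are missing from this answer; the standard user data for joining a named cluster, and the command to follow the agent's logs, are roughly as follows (`my-cluster` is a placeholder):

```bash
#!/bin/bash
# User data: tell the ECS agent which cluster to register with.
echo ECS_CLUSTER=my-cluster >> /etc/ecs/ecs.config
```

Then, on the instance:

```bash
# Follow the ECS agent's container logs to watch it register.
docker logs -f ecs-agent
```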
The issue for me was that the ECS agent was not starting on the EC2 instance; I was seeing an error in the agent's logs at
/var/log/ecs/ecs-init.log
Following the instructions mentioned here, deleting the JSON state file located at
/var/lib/ecs/data/ecs_agent_data.json
and restarting the ECS agent using the command below is what ultimately worked for me.
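The exact command was not included in this answer; on the ECS-optimized AMIs the sequence is typically something like the following (the restart command varies by AMI generation):

```bash
# Delete the stale agent state file...
sudo rm /var/lib/ecs/data/ecs_agent_data.json

# ...then restart the ECS agent.
sudo systemctl restart ecs      # Amazon Linux 2
# or, on the older Amazon Linux 1 AMI:
# sudo stop ecs && sudo start ecs
```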
Another possible reason is using a non-ECS-optimized AMI; also, having no outbound rules in the security group can block the agent.
Choosing 'Auto-assign public IP' -> 'Enabled' while creating the cluster worked for me.
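If you launch the container instances yourself rather than through the cluster-creation wizard, the equivalent at launch time is roughly the following; the AMI, instance type, and subnet here are placeholders:

```bash
# Launch an instance with a public IP assigned at launch.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --subnet-id subnet-0123456789abcdef0 \
    --associate-public-ip-address
```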