I have a web server, say WS-1, and an NFS server, say NFS-1, set up on AWS. WS-1 sits behind an Elastic Load Balancer and is autoscaled. It also has an EBS volume mounted on /var/www which contains all the application code.
During autoscaling, if another instance WS-X is launched, will the EBS volume mounted on /var/www also be cloned and attached to it? If not, what are my options besides hosting the code on the root EBS volume?
Access inside NFS is defined on a per-IP basis, like 10.0.0.1/32(rw,...). During autoscaling more instances will be launched; how can I allow them to connect to the NFS server and mount the shared directory? I don't want to grant NFS access to the whole private IP subnet, while at the security group level I have given the NFS server access to 0.0.0.0/0. The NFS server uses fixed ports: 111, 2049, and 4000-4002.
On scaling up, the EBS volume and its data will not be "cloned" onto the new instance. To get that behavior you'd have to automate it at boot (for example, create a volume from a snapshot and attach it).
Another method, depending on how much data is on the EBS volume, is to pull the data down from S3 at launch.
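For example, a single line in the instance's user data could sync the code at launch. This is just a sketch: the bucket name and prefix are placeholders, and it assumes the instance has an IAM role granting read access to the bucket:

    aws s3 sync s3://my-app-bucket/var-www/ /var/www/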
With security groups, you can allow any server in the app_security_group to reach any server in the nfs_server_group by referencing the source group instead of an IP range. That way access stays correct dynamically as instances are launched and terminated, with no per-IP rules to maintain.
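For example, with the AWS CLI (group names follow the ones above; repeat for port 111 and 4000-4002, and for UDP if your NFS setup uses it):

    # Allow nfsd (TCP 2049) from any instance in app_security_group
    aws ec2 authorize-security-group-ingress \
        --group-name nfs_server_group \
        --protocol tcp --port 2049 \
        --source-group app_security_group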
Hope that makes sense.
Your instance will only be "cloned" if you have a recent AMI (Amazon Machine Image) of the instance. There will probably be changes to your filesystem since that image was taken, so it's a good idea to use EC2 user data to run a Bash/cloud-init script that updates the areas that change, e.g. the codebase, media, etc.
There are a few options for updating those areas, each with its own pros and cons: for example, syncing them from S3 or pulling them from a Git repository.
Here's an example of a bootstrapping script that you could set as the user data in your launch configuration so the bootstrapping tasks run dynamically at boot. Note that user data scripts are executed as the root user.
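A minimal sketch, assuming the AWS CLI and Git are baked into the AMI; the bucket, repo host, and paths below are placeholders:

    #!/bin/bash
    # EC2 user data: runs as root on first boot.

    # Pull the deploy key and host file from S3
    # (assumes an instance profile with read access to the bucket)
    aws s3 cp s3://my-bootstrap-bucket/deploy_key /root/.ssh/id_rsa
    chmod 600 /root/.ssh/id_rsa
    aws s3 cp s3://my-bootstrap-bucket/hosts /etc/hosts

    # Trust the Git host, then clone the bootstrapping repo and hand off to it
    ssh-keyscan example.com >> /root/.ssh/known_hosts
    git clone git@example.com:ops/bootstrapping-repo.git /opt/bootstrap
    /opt/bootstrap/bootstrap.sh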
This method gives you the flexibility to update your "bootstrapping-repo" as often as you need without having to build new AMIs regularly.
For background, I use S3 in my user data to grab the relevant SSH keys, host files, etc., and Git for the bootstrapping repository.
It's also a good idea to keep your AMIs as up to date as possible and referenced in your launch configuration, so that newly spun-up instances don't spend too long updating themselves. Whether you do this manually every so often or write a script to do it via the API or CLI is up to you.
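If you script it, baking a fresh AMI from a running instance is a single CLI call (the instance ID is a placeholder; --no-reboot avoids downtime but risks filesystem inconsistency):

    aws ec2 create-image \
        --instance-id i-0123456789abcdef0 \
        --name "ws-base-$(date +%Y%m%d)" \
        --no-reboot

You'd then point your launch configuration at the new AMI ID it returns.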
FYI: the output of the user data script, and of any scripts it calls at instance launch, is logged to:

    /var/log/cloud-init-output.log