I'm looking into using auto scaling groups for a tier of webservers that would be fronted by an ELB. One of the things I'm having a hard time with is how to give each new instance the proper DNS name. For example, I'd like webservers to have names like frontend-web-XXX.prod.example.com
so their names appear correctly in logs, and for ease of organization. I have two other tiers I'd ultimately like to make autoscaled, and I'd like them to have names like api-web-XXX.prod.example.com
as well. I have some experience with CloudFormation templates and have spun up individual instances with associated Route 53 records, but I don't see any indication of how this can be done within an auto-scaled group.
This is not something you can do with CloudFormation, as its involvement stops at defining the auto-scaling group - it doesn't get to see the instances started by the ASG. Auto-scaling groups don't give you any way to do this either.
Instead, you could ensure your instances run something on startup to register themselves in Route 53. This post talks about using Chef to do it, but you could do the same thing in a standalone script.
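A minimal sketch of that self-registration idea, in Python rather than Chef. It assumes boto3 is available on the instance, an instance profile grants `route53:ChangeResourceRecordSets`, and the hosted zone ID is passed via a `HOSTED_ZONE_ID` environment variable - all of those names are illustrative, not from the post above.

```python
#!/usr/bin/env python3
"""Boot-time self-registration sketch, e.g. run from EC2 user data."""
import os
import urllib.request

METADATA = "http://169.254.169.254/latest/meta-data/"


def fetch_metadata(path):
    """Read a value from the EC2 instance metadata service."""
    with urllib.request.urlopen(METADATA + path, timeout=2) as resp:
        return resp.read().decode()


def build_change_batch(name, ip, action="UPSERT", ttl=60):
    """Build the Route 53 change batch for a single A record."""
    return {
        "Changes": [{
            "Action": action,
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip}],
            },
        }]
    }


def register(zone_id, domain):
    """Derive a name from the instance ID and upsert it into Route 53."""
    import boto3  # deferred so the helpers above need no AWS SDK

    instance_id = fetch_metadata("instance-id")
    private_ip = fetch_metadata("local-ipv4")
    # e.g. i-0abc1234 -> frontend-web-0abc1234.prod.example.com
    name = "frontend-web-%s.%s" % (instance_id.removeprefix("i-"), domain)
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch=build_change_batch(name, private_ip),
    )
    return name


if __name__ == "__main__" and "HOSTED_ZONE_ID" in os.environ:
    register(os.environ["HOSTED_ZONE_ID"], "prod.example.com")
```

The naming scheme here (instance ID suffix rather than a sequential counter) sidesteps the need for any coordination between instances, at the cost of less tidy names.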
I found this article (https://underthehood.meltwater.com/blog/2020/02/07/dynamic-route53-records-for-aws-auto-scaling-groups-with-terraform/) which solves this problem with an ASG lifecycle hook: the hook notifies an SNS topic, which triggers a Lambda function that inserts records into Route 53. When the ASG scales in, another lifecycle hook fires and the same mechanism removes the DNS entry.
It's quite an involved solution, which I haven't personally tried, but it looks to have all the right components to do what's being asked here.
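To make the moving parts concrete, here is a hedged sketch of what such a Lambda function might look like (I haven't tried the article's actual code either). The lifecycle message fields (`LifecycleTransition`, `EC2InstanceId`, etc.) are the standard ones in ASG lifecycle notifications; the zone ID, domain, and naming scheme are illustrative placeholders.

```python
import json

# Illustrative values -- in practice these would be Lambda environment variables.
HOSTED_ZONE_ID = "ZEXAMPLE"   # hypothetical hosted zone ID
DOMAIN = "prod.example.com"


def action_for(transition):
    """Map an ASG lifecycle transition to a Route 53 change action."""
    return {
        "autoscaling:EC2_INSTANCE_LAUNCHING": "UPSERT",
        "autoscaling:EC2_INSTANCE_TERMINATING": "DELETE",
    }.get(transition)


def handler(event, context):
    """SNS-triggered: create or delete the instance's DNS record, then
    complete the lifecycle action so the ASG can proceed."""
    import boto3  # deferred so action_for() is testable without the AWS SDK

    msg = json.loads(event["Records"][0]["Sns"]["Message"])
    action = action_for(msg.get("LifecycleTransition"))
    if action is None:  # e.g. the initial test notification
        return

    reservations = boto3.client("ec2").describe_instances(
        InstanceIds=[msg["EC2InstanceId"]])["Reservations"]
    ip = reservations[0]["Instances"][0]["PrivateIpAddress"]
    name = "frontend-web-%s.%s" % (
        msg["EC2InstanceId"].removeprefix("i-"), DOMAIN)

    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": [{
            "Action": action,
            "ResourceRecordSet": {"Name": name, "Type": "A", "TTL": 60,
                                  "ResourceRecords": [{"Value": ip}]},
        }]},
    )
    # Tell the ASG we're done, so the launch/terminate can continue.
    boto3.client("autoscaling").complete_lifecycle_action(
        LifecycleHookName=msg["LifecycleHookName"],
        AutoScalingGroupName=msg["AutoScalingGroupName"],
        LifecycleActionToken=msg["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
    )
```

The key advantage over doing this in userdata is the terminate path: the Lambda gets a chance to delete the record before the instance disappears, so you don't accumulate stale DNS entries.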
A method I have used is to set the Route 53 record during userdata execution (https://gitlab.com/-/snippets/2213082). I've done this for a Windows AD management node (which is only needed infrequently, so it stays terminated except when needed). For this to work, you need an Instance Profile with the right permissions to allow the server to manipulate the DNS zone (which may or may not be to everyone's liking).
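As an example of keeping those permissions narrow, an instance-profile policy along these lines grants record changes in just one zone (the zone ID here is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/ZEXAMPLE"
    }
  ]
}
```

Scoping to a single hosted zone limits the blast radius if an instance is compromised, which may make the approach more palatable.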
Another option is to use fixed IPs on the instance(s) you create. I'm not sure how you'd ever be able to spin up more than one instance with this setup, though. You may be able to attach a second interface with either a fixed IP or an Elastic IP, which you somehow pick from a pool. That sounds like a job for userdata again - different permissions required though, which may be more acceptable to some.
The last option I can think of is a load balancer. The LB has the stable address and sends traffic to the instance(s) in the ASG. This is probably the simplest solution, although likely the most expensive - especially if you only plan on having one instance.