I have a strange idea: let multiple people/organizations host the same application, and let all their nodes be accessible via a single domain name. The goal is a really distributed social network where usability is not sacrificed (i.e. users don't have to remember different provider URLs and then, when one provider goes down, switch to another one).
To achieve that, I thought a DNS record with multiple IPs can be used.
So, how many IPs can a single DNS A record hold? This answer says it's around 30, but the use case there is different. For the above scenario I wouldn't care if a given ISP caches only 30, as long as another ISP caches another 30, and so on.
Disclaimer: No offense, but this is a really bad idea. I do not recommend that anyone do this in real life.
But if you give a bored IT guy a lab, funny things will happen!
For this experiment, I used a Microsoft DNS server running on Server 2012 R2. Because of the complications of hosting a DNS zone in Active Directory, I created a new primary zone named testing.com that is not AD-integrated.
Using this script:
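(The script itself isn't reproduced here. As a rough illustration only, not the original, a Python wrapper around Windows' dnscmd utility could generate the same records; the zone and record names below match the experiment:)

```python
import itertools
import subprocess

def record_commands():
    """Yield one dnscmd invocation per address from 1.1.1.1 to 1.1.255.255,
    each adding an A record for testing.testing.com in the testing.com zone."""
    for c, d in itertools.product(range(1, 256), range(1, 256)):
        yield ["dnscmd", "/RecordAdd", "testing.com", "testing", "A", f"1.1.{c}.{d}"]

commands = list(record_commands())   # 255 * 255 = 65025 commands
# Going all the way to 1.255.255.255 would be 255**3 = 16581375 records.
# On the DNS server itself you would actually run them:
# for cmd in commands:
#     subprocess.run(cmd, check=True)
```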
I proceeded to create, without error, 65025 host records for the name testing.testing.com, with literally every IPv4 address from 1.1.1.1 to 1.1.255.255. Then I wanted to make sure that I could break through 65536 (2^16) total A records without error, and I could, so I assume I probably could have gone all the way to 16581375 (1.1.1.1 to 1.255.255.255), but I didn't want to sit here and watch this script run all night.
So I think it's safe to say that there's no practical limit to the number of A records you can add to a zone for the same name with different IPs on your server.
But will it actually work from a client's perspective?
Here is what I get from my client as viewed by Wireshark:
As you can see, when I use nslookup or ping from my client, it automatically issues two queries: one UDP and one TCP. As you already know, the most a DNS reply can carry over UDP is 512 bytes, so once that limit is exceeded (around 20-30 IP addresses), one must use TCP instead. But even with TCP, I only get a small subset of the A records for testing.testing.com: 1000 records were returned per TCP query. The list of A records rotates by 1 with each successive query, exactly how you would expect round-robin DNS to work. It would take tens of thousands of queries to round-robin through all of these.
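The rotate-by-one behavior is easy to simulate. A toy version with 50 records and 10 returned per reply (the lab had 65025 and 1000) shows how many queries it takes before a client has seen every record at least once:

```python
from collections import deque

RECORDS, PER_REPLY = 50, 10              # toy stand-ins for 65025 and 1000
pool = deque(range(RECORDS))
seen, queries = set(), 0
while len(seen) < RECORDS:
    seen.update(list(pool)[:PER_REPLY])  # the client only sees the front of the list
    pool.rotate(-1)                      # the server rotates by one for the next query
    queries += 1
# queries is 41 here; with 65025 records and 1000 per reply it would be
# 1 + (65025 - 1000) = 64026 queries to see every address once.
```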
I don't see how this is going to help you make your massively scalable, resilient social media network, but there's your answer nevertheless.
Edit: In your follow-up comment, you ask why I think this is generally a bad idea.
Let's say I am an average internet user, and I would like to connect to your service. I type www.bozho.biz into my web browser. The DNS client on my computer gets back 1000 records. Well, bad luck, the first 30 records in the list are non-responsive because the list of A records isn't kept up to date, or maybe there's a large-scale outage affecting a chunk of the internet. Let's say my web browser has a time-out of 5 seconds per IP before it moves on and tries the next one. So now I am sitting here staring at a spinning hourglass for 2 and a half minutes waiting for your site to load. Ain't nobody got time for that. And I'm just assuming that my web browser or whatever application I use to access your service is even going to attempt more than the first 4 or 5 IP addresses. It probably won't.
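A sketch of that failure mode, with an injectable `connect` callable standing in for the real network (the names and numbers here are hypothetical, matching the scenario above):

```python
import socket

TIMEOUT = 5.0   # assumed seconds the client waits per unresponsive address

def first_responsive(addresses, connect):
    """Try each address in order until one connects; return (address, attempts)."""
    for attempts, addr in enumerate(addresses, start=1):
        try:
            connect(addr)
            return addr, attempts
        except OSError:
            continue
    raise OSError("no address responded")

# A real connector would be something like:
# connect = lambda a: socket.create_connection((a, 80), timeout=TIMEOUT)

# With the first 30 addresses dead, the user sits through 30 full timeouts:
worst_case_wait = 30 * TIMEOUT   # 150 seconds, i.e. 2.5 minutes
```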
If you used automatic scavenging and allow non-validated or anonymous updates to the DNS zone in the hopes of keeping the list of A records fresh... just imagine how insecure that would be! Even if you engineered some system where the clients needed a client TLS certificate that they got from you beforehand in order to update the zone, one compromised client anywhere on the planet is going to start a botnet and destroy your service. Traditional DNS is precariously insecure as it is, without crowd-sourcing it.
Humongous bandwidth usage and waste. If every DNS query requires 32 kilobytes or more of bandwidth, that's not going to scale well at all.
DNS round-robin is no substitute for proper load balancing. It provides no way to recover from one node going down or becoming unavailable in the middle of things. Are you going to instruct your users to run ipconfig /flushdns if the node they were connected to goes down? These sorts of issues have already been solved by things like GSLB and Anycast.
Etc.
To answer the question as it was stated ("how many IPs can a single DNS A record hold?"), the answer is very simple: a single A record holds exactly one address. There can, however, be multiple A records for the same name.
Each IPv4 address will take up 16 bytes in the reply. Each IPv6 address will take up 28 bytes in the reply.
It is strongly recommended that you ensure the reply will fit in 512 bytes. That would allow for about 25 IPv4 addresses and 14 IPv6 addresses (considering that you need some other information in the packet as well). The exact limit depends on the length of your domain name.
If you have both 25 IPv4 addresses and 14 IPv6 addresses, then you are counting on the clients requesting IPv4 and IPv6 addresses in separate queries. Should the client ask for both types of addresses in a single query (which is rare), then you would have to go lower.
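The byte arithmetic behind those estimates, assuming compressed names in the answer section and the hypothetical query name www.example.com (a longer name leaves less room, which is why the figures above are a bit more conservative):

```python
DNS_HEADER = 12
A_RECORD = 2 + 2 + 2 + 4 + 2 + 4      # name pointer, type, class, TTL, rdlength, IPv4
AAAA_RECORD = 2 + 2 + 2 + 4 + 2 + 16  # the same, but with 16 bytes of IPv6 rdata

def question_size(name):
    # each label costs len + 1, plus the root byte, plus QTYPE and QCLASS
    return sum(len(label) + 1 for label in name.split(".")) + 1 + 4

room = 512 - DNS_HEADER - question_size("www.example.com")
max_a, max_aaaa = room // A_RECORD, room // AAAA_RECORD   # 29 and 17 for this name
```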
Should the reply size exceed 512 bytes, it may still work over UDP if both client and server support EDNS. Without EDNS, the client would receive a truncated reply, and it would have to retry over TCP. This increases the communication from 1 to 4 roundtrips. Even worse, sometimes there are misconfigurations preventing DNS over TCP from working.
Even if you could squeeze more than 14 addresses into the reply without causing problems at the DNS layer, it is unlikely to be very useful. The timeout used by the client before giving up on one address and proceeding to the next is often significant.
Having to wait for that timeout even once can lead to poor user experience. If the client had to go through 14 addresses before getting a response, the user would have to wait for 13 timeouts.
What you are describing isn't an especially new idea. As other answers have already covered, you are limited in how many A records you can have in one reply, but that says nothing about how many A records there might be in total.
You could, for example, implement a DNS server which answers any query for an A record with a random IP. Queried enough times, this would result in 4294967296 unique A records: one for each IPv4 address.
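A minimal sketch of such a responder's reply builder, assuming a single uncompressed question and skipping EDNS and error handling entirely:

```python
import random
import struct

def random_a_response(query: bytes) -> bytes:
    """Answer any A query with one random IPv4 address (toy sketch:
    single uncompressed question, no EDNS, no error handling)."""
    txid = query[:2]
    # flags 0x8180: response, recursion desired + available; 1 question, 1 answer
    header = txid + struct.pack(">HHHHH", 0x8180, 1, 1, 0, 0)
    end = 12
    while query[end] != 0:            # walk the label chain to find the name's end
        end += query[end] + 1
    question = query[12:end + 5]      # name + QTYPE + QCLASS, copied verbatim
    ip = bytes(random.randrange(1, 255) for _ in range(4))
    # answer: pointer to the name at offset 12, type A, class IN, TTL 60, 4-byte rdata
    answer = struct.pack(">HHHIH", 0xC00C, 1, 1, 60, 4) + ip
    return header + question + answer
```

Wrapped in a socketserver.UDPServer bound to port 53, this would hand out a different address on almost every query; queried enough times, it produces an A record for nearly every IPv4 address.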
As I said, this isn't a new idea. In fact, it's in part how Akamai works (and probably a lot of other CDNs). The A record you get for any Akamai domain is determined by their black-magic DNS servers. I bet the answer you get depends on dynamic load balancing and geographical concerns.
For example, I picked a338.g.akamaitech.net. Looking it up on my computer right now, which uses a DHCP-assigned nameserver from Comcast, gives one set of addresses. Asking Google's public DNS gives a different set. If you try it yourself, I bet you will get yet another answer. How many edge servers does Akamai have serving any particular resource? More than two, I bet.
Others have mentioned it as a detail, but from a practical standpoint, the hard limit is the 512-byte cap on DNS messages carried over UDP. While it's possible to switch to TCP when truncation is detected, in practice many (perhaps most) clients will not do it (and arguably they shouldn't; it would give a bad user experience for most applications, and I would only expect zone transfers or other special-purpose lookups to use TCP). So you're looking at a limit of somewhere around 30 addresses for IPv4 (A records) and somewhat fewer for IPv6 (AAAA), since those records are larger. The length of the domain name cuts into this and will limit the number further.
The short answer: about 25 A records fit in a UDP packet. Beyond that, DNS will switch to TCP and it will not be as fast. You'll also have problems with clients that aren't using DNS resolvers capable of picking the "nearest" IP. Also, with wifi and mobile, the "nearest" is often not going to be the right server.
Longer answer:
Don't do that. A better way would be to set up individual CNAME records for each user that point to the appropriate server. Let's say you have two servers, server-f and server-r, used for IMAP. Configure each person's IMAP client with the server name USERNAME.imap.example.com, where "USERNAME" is replaced by their email username. Now you can move people between servers without having to reconfigure their email clients:

server-f.example.com.     IN A     10.10.10.10
server-r.example.com.     IN A     10.20.20.20
wilma.imap.example.com.   IN CNAME server-f.example.com.
fred.imap.example.com.    IN CNAME server-f.example.com.
betty.imap.example.com.   IN CNAME server-r.example.com.
barney.imap.example.com.  IN CNAME server-r.example.com.
However, if you do this, I HIGHLY HIGHLY RECOMMEND that you generate the DNS records automatically from a database of users. You want to make sure that as accounts are created and deleted the DNS records are created and deleted too. Otherwise you'll end up with a mess and a lot of confusion.
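That generation step can be sketched with a plain dict standing in for the account database (the names and addresses are the made-up ones from the example above):

```python
SERVERS = {"server-f": "10.10.10.10", "server-r": "10.20.20.20"}
USERS = {"wilma": "server-f", "fred": "server-f",
         "betty": "server-r", "barney": "server-r"}

def zone_lines(servers, users, domain="example.com"):
    """Emit A records for the servers and one CNAME per user account."""
    lines = [f"{host}.{domain}. IN A {ip}" for host, ip in sorted(servers.items())]
    lines += [f"{user}.imap.{domain}. IN CNAME {srv}.{domain}."
              for user, srv in sorted(users.items())]
    return lines
```

Run something like this from cron or a post-commit hook on the account database, then reload the zone, and DNS stays in sync with account creation and deletion.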
I've seen this done at companies with literally thousands of users and, since things were automated, it worked very well.
Why 512 bytes as a magic number for UDP payload?
If you go back to (near) the beginning and look at RFC 791, which is the protocol standard for IP version 4, you will find this text: "All hosts must be prepared to accept datagrams of up to 576 octets (whether they arrive whole or in fragments)."
So from those 576 bytes: 20 bytes go to the IP header and 8 bytes to the UDP header, leaving 548 bytes for the DNS message. The 12-byte DNS protocol header takes it down to 536. Then you need to leave some room for the QNAME and other fields, and 512 bytes is a nice round number.
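The arithmetic behind the 512-byte figure, spelled out (an IPv4 header is 20 bytes, a UDP header 8, a DNS header 12):

```python
MIN_REASSEMBLY = 576   # RFC 791: every host must accept datagrams this large
IP_HEADER, UDP_HEADER, DNS_HEADER = 20, 8, 12

udp_payload = MIN_REASSEMBLY - IP_HEADER - UDP_HEADER   # bytes left for the DNS message
after_header = udp_payload - DNS_HEADER                 # bytes left for the sections
# leave some room for the QNAME and you land on the round number 512
```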
On today's Internet, the common MTU is 1500 bytes, and pretty much every host that does IPv4 can support an IP datagram of that size. But these protocols (IP and UDP, at least) predate wide adoption of Ethernet, and 576 bytes was the datagram size you could depend on as a starting point.
The EDNS extension mechanism allows the resolver to tell the DNS server what size of response it is prepared to receive, so it's common to see queries with that option present advertising a buffer size that reflects a 1500-byte MTU. You really don't want to cause IP fragmentation, for a bunch of reasons, hence the rather conservative size of DNS responses transported over UDP.
Thus ends today's history lesson.
As others have pointed out, it's a terrible idea for real-world use.
In the real world there are nonconforming clients and resolvers that have trouble with responses that can't fit within a single UDP datagram, and there are firewalls which will enforce specific but not-protocol-compliant ideas about DNS message size limits.
And even if you could count on your huge response getting through in every case (which you emphatically cannot), there's another reason this is a Very Bad Idea. The larger your DNS response is, the more tempting it is as a payload for reflection attacks, because you provide a huge amplification factor. In this type of denial-of-service attack, which is common in DNS, a UDP query is sent to an open recursive resolver. The source address of a UDP query is easily spoofed, and the attacker sets it to the IP of their intended target. This achieves two effects desirable to the attacker: first, a relatively small sending effort on their part (a small spoofed query) results in a comparatively large torrent of unwanted traffic arriving at the target (the amplification factor); second, the actual source of the attack is hidden from the target, which only sees the addresses of the recursive resolvers being used as reflectors.
An interesting point of historical trivia on this subject: in the 90s, AOL expanded their DNS records such that an MX query would return more than 512 bytes. This violated the RFC limit, broke a lot of SMTP servers (qmail being a popular one at the time), and caused sysadmins a lot of headaches. The fix required either patching or adding static routes.
I don't know what the current situation is, but a few years ago, when I last touched qmail, the patches were still in place.
http://www.gossamer-threads.com/lists/qmail/users/30503