When some domain have primary NS, and several secondary NSes, will clients ask them randomly to reduce the load, or they will hit primary NS only, and proceed to secondary only when primary fails?
I will reuse the example from here https://serverfault.com/questions/130608/when-is-a-secondary-nameserver-hit/130625#130625
Basically it depends on the resolver implementation. Some resolvers hit the first server in the list; other resolvers will pick a server at random from those available. To get around this, most DNS servers randomize the order of the records in their replies.
If you ask for the nameservers of google.com and then repeat the same query, notice how the order of the nameservers changes between the two replies: that is the server rotating its answer to spread out the load.
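The rotation described above can be sketched as a simple round-robin over the NS record set. This is a minimal simulation with hypothetical server names, not a real DNS server:

```python
from collections import deque

# Hypothetical NS records for an example zone (illustrative only).
ns_records = deque(["ns1.example.com", "ns2.example.com",
                    "ns3.example.com", "ns4.example.com"])

def answer():
    """Return the NS record set, then rotate it so the next reply
    starts with a different server -- the round-robin trick many
    authoritative servers use to spread load across resolvers
    that always pick the first server in the reply."""
    reply = list(ns_records)
    ns_records.rotate(-1)
    return reply

first = answer()
second = answer()
# Same set of servers, different order in each reply.
assert sorted(first) == sorted(second)
assert first != second
assert second[0] == first[1]
```

A resolver that naively uses the first server in each reply will now distribute its queries across all four servers over time.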
IPv4 resolvers will typically try the servers in the order they appear in the packet, so the first one is contacted most often. The order is typically randomized by the DNS server to spread the load. IPv6 changes this: its default address selection rules (RFC 3484) prefer the address sharing the most topmost bits with the client's own, so that address is contacted first regardless of reply order. This makes randomization of the DNS replies meaningless.
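The longest-matching-prefix rule mentioned above can be illustrated as follows. This is a sketch of just that one selection rule (RFC 3484 defines several more), using documentation-prefix addresses as hypothetical examples:

```python
import ipaddress

def common_prefix_len(a, b):
    """Number of leading bits two IPv6 addresses share."""
    x = int(ipaddress.IPv6Address(a)) ^ int(ipaddress.IPv6Address(b))
    return 128 - x.bit_length()

def pick_server(source, candidates):
    """Longest-matching-prefix rule: prefer the candidate sharing
    the most topmost bits with our own address, regardless of the
    order the DNS reply listed the candidates in."""
    return max(candidates, key=lambda c: common_prefix_len(source, c))

me = "2001:db8:aaaa::1"
servers = ["2001:db8:bbbb::53", "2001:db8:aaaa::53", "2001:db8:cccc::53"]
# The server in "our" prefix wins no matter where it sits in the list.
assert pick_server(me, servers) == "2001:db8:aaaa::53"
```

Because the outcome depends only on the addresses, not on their order, shuffling the reply no longer shifts any load between the servers.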
As far as DNS recursive servers are concerned, there's no difference between "primary" and "secondary" name servers - technically they're both just "authoritative" servers.
The only things that make any difference to the effectiveness of the load balancing are:

- the order in which the NS records are returned by the servers themselves
- the selection algorithm used by each recursive server

Of those factors, the first is the least important: picking a server at random, or by measured round-trip time, is much more common than honoring the order in the reply.
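The two common selection strategies mentioned above can be compared in a small simulation. The server names and round-trip times here are purely illustrative assumptions:

```python
import random

# Hypothetical measured round-trip times (ms) to four authoritative
# servers for the same zone (illustrative numbers only).
rtts = {"ns1": 10.0, "ns2": 80.0, "ns3": 15.0, "ns4": 120.0}

def pick_random(servers):
    """Pick uniformly at random, ignoring reply order and RTT."""
    return random.choice(list(servers))

def pick_by_rtt(servers):
    """RTT-based selection reduced to its essence: prefer the
    server that has answered fastest so far."""
    return min(servers, key=servers.get)

random.seed(1)
counts = {s: 0 for s in rtts}
for _ in range(10_000):
    counts[pick_random(rtts)] += 1

# Random selection spreads queries roughly evenly across servers...
assert all(2000 < c < 3000 for c in counts.values())
# ...while RTT-based selection concentrates them on the fastest one.
assert pick_by_rtt(rtts) == "ns1"
```

Either way, the order the authoritative server put the NS records in has no influence on which server gets the query, which is why that factor matters least.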
They will hit the primary one first and then proceed to the secondary NS, so having multiple nameservers will only increase redundancy.
In order to increase performance you would need anycasted nameservers, though implementing this on your own will cost a lot of cash, and won't offer a substantial enough improvement to warrant the cost.
If you're talking about DNS servers, rather than DNS caches, then a resolver will ask a server at random. The master server is merely the source of the DNS records for the other servers that are authoritative for the domain. This distinction is probably only relevant if you're using AXFR as a method of replicating your DNS; when the servers consult a backend such as a database or LDAP directory with its own replication, it matters even less which server name goes in the SOA record. There is one exception: if you're using dynamic DNS updates, DHCP clients will contact the master server with updated information on their IPs.
Based on empirical data from our DNS servers, it seems that primary and secondary are hit with about the same number of requests, i.e. resolvers will use both, whether by random selection, round robin or some other scheme.
Adding more servers may definitely improve performance.
The only real way to improve DNS performance over the whole internet is to use an anycast address.
If you just add a bunch of addresses, you still have no control over which address a remote user will actually use, because the client's OS decides what to do with the list of DNS servers it gets. A clever client would try to figure out which one is fastest, but that's not something the DNS admin has control over.
Windows clients will use the primary DNS unless the primary cannot be contacted, then it will switch to the secondary. I don't think there's a way to change this behavior.