A modern server can handle 100kq/s or more, so it would take an enormous number of DNS queries to overload one. Even most small and medium ISPs run just a primary and a secondary server (sometimes a tertiary as well).
Many of the root servers are clustered, but that's primarily for redundancy. Root DNS servers commonly see 50kq/s (more on particularly busy days such as holidays), but the root servers are a poor model for a typical network.
Using an explicit, separate load balancer for DNS is rarely sensible.
DNS queries are automatically distributed among all of the available servers, and resolvers have built-in fault tolerance; a server that goes down won't affect your DNS service, since resolvers know to try the others.
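To make that concrete, here is a rough Python sketch (using the third-party dnspython library; the nameserver IPs are placeholders from the documentation range, not real servers) of the behaviour a stub resolver already gives you for free: it spreads queries across the listed nameservers and simply moves on when one of them stops answering.

```python
import random

import dns.exception
import dns.message
import dns.query
import dns.rdatatype

# Hypothetical nameserver addresses; substitute your zone's actual NS hosts.
NAMESERVERS = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]

def resolve(name: str, timeout: float = 2.0) -> dns.message.Message:
    query = dns.message.make_query(name, dns.rdatatype.A)
    # Randomising the order spreads query load across the servers...
    for server in random.sample(NAMESERVERS, len(NAMESERVERS)):
        try:
            # ...and a short per-server timeout means a dead server only
            # delays the answer; it doesn't make resolution fail outright.
            return dns.query.udp(query, server, timeout=timeout)
        except (dns.exception.Timeout, OSError):
            continue  # server down or unreachable: try the next one

    raise RuntimeError(f"no nameserver answered for {name!r}")

if __name__ == "__main__":
    print(resolve("example.com").answer)
```

Real resolvers are smarter still (they track per-server latency and prefer the fastest), which is exactly why a separate load balancer in front of your nameservers adds little.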
Yes, it is a good idea.
IF you have the traffic. The root servers for . and the servers one tier down (the TLDs) are each clusters, and they handle significant amounts of traffic.
Your servers almost certainly don't care, as you do not have any significant load.
Does it still make sense for higher uptime? Not really.