I have set up a DNS server on SLES 10 (currently BIND 9.6) on a multi-homed server. This server can be queried from all internal networks and delivers answers for all of them. We have two separate DNS "master" zones, each served by a number of authoritative Windows DNS servers.
My Linux server is a secondary DNS server for one of these zones (the private internal zone) and acts as a forwarder for the other zone (the public internal zone).
Until recently this setup worked without problems. Now, when querying the public internal zone (e.g. with the host command on a Linux client), I get the message
;; Truncated, retrying in TCP mode
A Wireshark dump revealed the cause: the first query goes out over UDP, the answer does not fit into a UDP packet (due to the long list of authoritative NS records), so the query is retried over TCP, which delivers the right answer.
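The truncation the client reports corresponds to the tc bit in the DNS header, which dig prints in its ";; flags:" line. As a rough illustration (using a captured header line rather than a live query, since the zone names here are private), a script can detect it like this:

```shell
# Hypothetical header line as dig would print it for a truncated UDP reply
header=';; flags: qr tc rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 31, ADDITIONAL: 0'

# "tc" among the flags means the UDP answer was cut off and the
# client is expected to retry the same query over TCP
case "$header" in
  *" tc "*) echo "truncated" ;;
  *)        echo "complete"  ;;
esac
```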
Now the question: can I configure my BIND to query the forwarders over TCP without trying UDP first?
Update: trying my hand at ASCII art...
+--------------+   +--------------+   +-----------------+
| W2K8R2 DNS   |   | SLES 10 DNS  |   | W2K8R2 DNS      |
| Zone private +---+ All internal +---+ Zone public     |
| internal 2x  |   | Zones        |   | internal 30+ x  |
+--------------+   +-+----------+-+   +-----------------+
                     |          |
                  +--+---+   +--+---+
                  |Client|   |Client|
                  +------+   +------+
First, I would not call that an error; it is just an informational message.
Second, DNS servers will always answer UDP queries (BIND at least; I cannot find any option to disable UDP), and clients will always (?) send a UDP query first, provided the request fits into a UDP packet (requests usually do). For example, there is no option in resolv.conf to change this, nor in the JVM.
If you have a specific use case, you can force TCP explicitly: in a shell script use 'dig +tcp' or 'host -T' for resolution, and in code you can use the 'sethostent/gethostbyname/endhostent' calls (see the man page; calling sethostent with a non-zero stayopen argument makes the resolver use TCP) to force TCP in other cases.
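Concretely, and assuming a reachable nameserver at 192.0.2.53 with a zone example.internal (both placeholders, not names from the question), forcing TCP from the command line looks like this:

```shell
# Ask for the NS set over TCP only; with +tcp, dig never tries UDP first
dig +tcp @192.0.2.53 example.internal NS

# Same idea with host: -T switches it to TCP
host -T example.internal 192.0.2.53
```

Both commands need a live server to answer, so they are shown as usage sketches rather than a self-contained test.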
If you really want to try to block UDP, the only option I can see is an iptables rule, but I am not sure that setup would work; I would expect DNS resolution to simply fail.
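For completeness, such a rule might look roughly like the following sketch (the forwarder address 192.0.2.53 is a placeholder, the command needs root, and, as said above, resolvers that only speak UDP would then simply fail):

```shell
# Drop outgoing UDP DNS toward the forwarder, leaving TCP port 53 untouched
iptables -A OUTPUT -p udp --dport 53 -d 192.0.2.53 -j DROP
```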
Your BIND server should be using EDNS (see RFC 6891) to allow UDP packets longer than 512 bytes.
This should permit your large NS set to be retrieved over UDP, without requiring the overhead of a TCP connection for other smaller queries.
Note, however, that EDNS with a large buffer size is already the default in BIND 9. If EDNS isn't being used, either something in the path is blocking it (e.g. a firewall dropping large UDP packets or the OPT record), or the servers receiving the EDNS option don't support it.
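For reference, the relevant knobs in named.conf are edns-udp-size (the receive buffer BIND advertises in its own queries) and max-udp-size (the largest UDP answer BIND itself will send). The values below are the documented defaults, shown only to make them explicit:

```
options {
    edns-udp-size 4096;  // advertised EDNS buffer for queries BIND sends
    max-udp-size  4096;  // largest UDP answer BIND will send
};
```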
Also, note that host doesn't support EDNS. It is perfectly possible that your forwarder -> server queries are already using EDNS and you just can't see it when you test with your local client. Try
dig +bufsize=4096 @server hostname A
instead of using host.