Is it generally considered good or bad practice to include all frequently used and important company hosts in /etc/hosts?
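To be concrete, I mean entries along these lines (the names and addresses here are made up for illustration):

    # /etc/hosts - hypothetical internal services
    10.1.2.10    intranet.example.com    intranet
    10.1.2.20    build.example.com       build
    10.1.2.30    ldap.example.com        ldap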
Now I can see the following pros and cons:
- Pros: improved speed, reduced DNS traffic, better reliability and security
- Cons: reduced manageability
In my view, the less you rely on hosts file entries, the better. It is far preferable to use automated systems like DNS. Your con is spot on - it reduces manageability. It also becomes error prone when things change. More than once I've known people to spend considerable time trying to debug what they thought was a DNS issue, only to discover that a hosts entry was causing it all.
As for your pros, I'll argue that the minute speed difference should not be noticeable unless you have other, more significant network issues. The amount of traffic generated by a DNS lookup is so trivial that if it factors in at all, you once again have a serious network problem. There is no real security benefit either. Quite the contrary: if the machine is lost or stolen, you may be handing out information that would not otherwise have been available.
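If you want to see just how small the difference is, a rough and unscientific check is simply to time the lookups yourself (hostname is made up):

    # resolved via nsswitch (hosts file first, then DNS, possibly cached)
    time getent hosts www.example.com

    # forces an actual DNS query, bypassing the hosts file
    time dig +short www.example.com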
John Gardeniers is right, the speed increase from using a hosts file is pretty darned small. The one case where it could matter is if the DNS server itself is seriously overloaded. Even the overhead from DNSSEC is pretty small, though it can kill DNS servers ;).
The one thing I know of that genuinely NEEDS to be in the hosts file is an address that has to be resolvable even during a DNS outage. The main case I can think of is the LDAP server if you're using pam-ldap for authentication. And even that isn't a great use case, since the LDAP client can be configured with an IP address instead of a host name.
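As a minimal sketch of that last point, assuming a pam_ldap/nss_ldap setup and a hypothetical LDAP server at 10.0.0.15 (the exact file name and directives vary by distribution):

    # /etc/ldap.conf - point the client straight at an IP, so no DNS is needed
    uri ldap://10.0.0.15/
    base dc=example,dc=com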
If you're doing something funky with the loopback subnet (127.0.0.0/8) and need to access those addresses by name for some reason, then a hosts file entry is justified.
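For example, something like this (the service names are invented):

    # /etc/hosts - extra loopback addresses reachable by name
    127.0.0.1    localhost
    127.0.0.2    devproxy.local
    127.0.0.3    testdb.local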
We consider hosts file usage bad practice except in very specific cases.
Security is arguable if you change networks. Some form of peer identification (for example SSH host-key fingerprint checks, GPG public/private keys, or certificates) is necessary to make sure you are talking to the right machine.
The problem I have seen is this: a naive user has a laptop that relies on /etc/hosts and takes it onto another network (a web café, a library, or anywhere a malicious party may be present). The laptop still expects to connect, manually or automatically via cron jobs, to its usual "secure server" or "data server", sends some credentials or exports some confidential information (logs, RRD data, logins/passwords...), and the receiving machine can capture all of that data. The chances are probably low, but it is plausible.
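One mitigation, sketched here under the assumption that SSH is the transport, is to refuse connections when the host key is unknown or has changed, so a spoofed "secure server" is rejected instead of silently trusted (hostname is hypothetical):

    # ~/.ssh/config - reject unknown or changed host keys instead of prompting
    Host secure-server.example.com
        StrictHostKeyChecking yes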
As far as speed improvement goes, better ways exist. Nearly any Linux or Solaris system (I'm not sure about other *nix flavours) will have nscd running, which, as long as the hosts cache has not been disabled, provides a memory-resident equivalent of the /etc/hosts file. If the environment needs additional DNS record types cached, a local install of BIND can be configured as a caching resolver; many Linux distributions ship a BIND package that does this with no user intervention.
As to the question: outside of an environment that is fully disconnected from the internet, I would recommend keeping a machine's /etc/hosts file as simple as possible. The added management involved in trying to keep any number of name/IP pairs accurate quickly proves not to scale.
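To check that the nscd hosts cache is actually in play, something along these lines should work on most distributions (the config path and defaults may vary):

    # show cache statistics, including hosts cache hits and misses
    nscd -g

    # /etc/nscd.conf - confirm the hosts cache is enabled
    enable-cache            hosts    yes
    positive-time-to-live   hosts    3600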