I've come across a situation where a client needs to blacklist a set of just under 1 million individual IP addresses (no subnets), and network performance is a concern. My hunch is that iptables rules would have less of a performance impact than routes, but that's nothing more than a hunch.
Does anyone have solid evidence or other justification for favoring either iptables or null routing as a solution for blacklisting long lists of IP addresses? In this case everything is automated, so ease of use isn't really a concern.
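For concreteness, here is a minimal sketch of the two approaches being compared; the blacklist file path is just a placeholder, and in reality the rules would be loaded by the automation:

    # Option 1: one iptables DROP rule per blacklisted address
    while read ip; do
        iptables -A INPUT -s "$ip" -j DROP
    done < /tmp/blacklist.txt

    # Option 2: one blackhole (null) route per blacklisted address
    while read ip; do
        ip route add blackhole "$ip"
    done < /tmp/blacklist.txt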
EDIT 26-Nov-11
After some testing and development, it appears that none of these options is workable. Both route lookups and iptables do a linear search through the ruleset, and simply take too long to process this many rules. On modern hardware, putting 1M entries in an iptables blacklist slows the server down to roughly two dozen packets per second. So iptables and null routes are out.
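(As an aside, loading that many rules one at a time is itself prohibitively slow; a bulk load via iptables-restore is the only realistic way to even run the test, sketched below against the same placeholder file, though matching stays linear regardless.)

    # Bulk-load the DROP rules in a single transaction; invoking the
    # iptables binary once per address would take hours for 1M entries.
    {
        echo '*filter'
        while read ip; do
            echo "-A INPUT -s $ip -j DROP"
        done < /tmp/blacklist.txt
        echo 'COMMIT'
    } | iptables-restore --noflush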
ipset, as recommended by Jimmy Hedman, would be great, except that it doesn't allow you to track more than 65536 addresses in a set, so I can't even try to use it unless someone has any ideas.
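For reference, the ipset approach would look something like the following (the set name and file path are placeholders, and the syntax assumes ipset 6.x with the hash:ip set type), if not for the per-set address cap:

    # Create the set and fill it from the blacklist file; the address
    # cap mentioned above makes this a non-starter for 1M entries
    ipset create blacklist hash:ip
    while read ip; do
        ipset add blacklist "$ip"
    done < /tmp/blacklist.txt

    # A single iptables rule then matches the entire set
    iptables -A INPUT -m set --match-set blacklist src -j DROP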
Apparently the only solution for blocking this many IPs is doing an indexed lookup in the application layer. Is that not so?
More Information:
The usage case in this instance is blocking a "known offenders" list of IP addresses from accessing static content on a web server. FWIW, blocking through Apache's Deny from is just as slow (if not slower), as it also does a linear scan.
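To illustrate, the Deny from variant amounts to a config fragment like this (the directory path is a placeholder), with one directive per address that Apache checks in order on every request:

    <Directory "/var/www/static">
        Order Allow,Deny
        Allow from all
        Deny from 192.0.2.1
        Deny from 192.0.2.2
        # ... roughly 1M more lines, each scanned linearly per request
    </Directory>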
FYI: The final working solution was to use Apache's mod_rewrite in conjunction with a Berkeley DB map to do lookups against the blacklist. The indexed nature of Berkeley DB files allowed the list to scale with O(log N) performance.
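In outline the setup looks like this; the file paths, map name, and "deny" flag value are placeholders, and the exact directives may differ by Apache version. First the flat list is compiled into an indexed DBM file with Apache's httxt2dbm utility:

    # Compile the flat list into an indexed DBM map; each input line
    # is a "key value" pair, e.g.: 192.0.2.1 deny
    httxt2dbm -i blacklist.txt -o /etc/apache2/blacklist.map

Then mod_rewrite consults the map on each request (RewriteMap has to be declared in server or virtual-host scope, not .htaccess) and returns 403 Forbidden on a hit:

    RewriteEngine On
    # Indexed lookup in the DBM map; unknown keys fall back to NOT_FOUND
    RewriteMap ipblacklist "dbm:/etc/apache2/blacklist.map"
    RewriteCond "${ipblacklist:%{REMOTE_ADDR}|NOT_FOUND}" "!=NOT_FOUND"
    RewriteRule "^" "-" [F]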