Is there a way to automatically synchronize all zones between BIND (9) servers so that I don't have to add zones to the slave when I add them to the master?
Look at BIND 9.7.2-P2, which has the "rndc addzone" and "rndc delzone" commands that let you remotely add and remove zones from a running server.
I have a paper with some examples, from a talk I gave at NANOG last month: ftp://ftp.isc.org/isc/pubs/pres/NANOG/50/DNSSEC-NANOG50.pdf
While this won't go back and clean up any mess that you have currently, it does make it really easy to synchronize machines that you are able to manage using "rndc" going forward.
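For example (the zone name, master address, and file names here are placeholders), after enabling new zones in named.conf:

# named.conf needs this in the options block before addzone will work:
#   options { allow-new-zones yes; };

# Add a slave zone to a running secondary, then remove it again:
rndc addzone example.com '{ type slave; masters { 192.0.2.1; }; file "db.example.com"; };'
rndc delzone example.com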
[yes, responding to a rather old post, but BIND 9.7.2-P2 is cool enough to warrant it]
Adding yet another update (years after the fact, but hoping that it helps folks that run across this in search results), I'd like to recommend the use of Catalog zones.
Catalog zones, introduced in BIND 9.11 (2016), allow automatic provisioning of zones (addition and deletion) through a special zone that is shared between the primary and the secondary servers.
For full information, see: https://kb.isc.org/docs/aa-01401
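A minimal sketch of what the secondary side can look like; the catalog zone name and the primary's address are placeholders:

options {
    catalog-zones {
        zone "catalog.example" default-masters { 192.0.2.1; };
    };
};

// The catalog zone itself is transferred like any ordinary zone.
zone "catalog.example" {
    type slave;
    file "catalog.example.db";
    masters { 192.0.2.1; };
};

On the primary, each member zone is listed in the catalog zone as a PTR record, and the secondaries add or drop the corresponding slave zones automatically.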
I don't know of any way to do this natively in BIND 9 if you're using the flat-file backend. There are various DB-backed systems which can help automate it. Or you can script it:
I populate a text file with a list of zones and the primary NS IP for each zone, and stick it on a website that I allow my slaves access to. The slaves fetch this file periodically, and if it has changed they parse it, generate a named.conf, and tell BIND to reload its config. It's "automatic" in the sense that I don't have to manually ssh to my secondaries and update configs, but it's still external to BIND 9.
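A rough sketch of that fetch-and-generate script, assuming (these are assumptions, not the original script) that the list is "zonename master-ip", one zone per line, and that named.conf includes the generated file:

#!/bin/bash
# Fetch the published zone list; if it changed, rebuild the slave-zone include and reconfig.
URL=http://master.example.com/zones.txt        # placeholder URL
LIST=/var/cache/bind/zones.txt
CONF=/etc/bind/named.conf.slavezones           # include this from named.conf

curl -fsS "$URL" -o "$LIST.new" || exit 1
cmp -s "$LIST.new" "$LIST" && exit 0           # nothing changed, nothing to do
mv "$LIST.new" "$LIST"

while read -r zone master; do
    [ -n "$zone" ] || continue
    printf 'zone "%s" { type slave; masters { %s; }; file "/var/cache/bind/db.%s"; };\n' \
        "$zone" "$master" "$zone"
done < "$LIST" > "$CONF"

rndc reconfig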
You could also use a higher-level configuration management system such as Puppet to manage your entire DNS infrastructure. That's a bit more complicated, though.
Maybe you're looking for a configuration management system like Puppet or CFEngine? There's extra infrastructure involved, but they can handle distributing a lot of configuration stuff, and could easily include this too.
BIND itself can't do it. More to the point, it would be undesirable to have it do so. There are many situations where only certain domains should be replicated to any given slave.
Using rsync on your entire /var/named tree works pretty well if you write your zones correctly and make sure named.conf lives in /var/named. It won't work with dynamic updates, though, and is somewhat against the grain of "how things should be done".
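The whole thing can be a couple of lines run from the master; the slave hostname (and keeping named.conf under /var/named) are assumptions:

# Mirror the tree to a slave, then have named pick up the new config and zones.
rsync -av --delete /var/named/ slave1.example.com:/var/named/
ssh slave1.example.com 'rndc reconfig && rndc reload'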
I've also experimented with stuffing all the domains to propagate into a special zone, and used a simple script on the slaves to rebuild the named.conf based on what they see in the master zone. Basically the same deal as the text file above, but feeding it from DNS to keep everything in-band. I should probably publish the script before I end up losing it =/
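A sketch of that in-band idea, assuming (hypothetically) the zone list is kept as TXT records in a zone called zonelist.example that the slave is allowed to transfer:

#!/bin/bash
# Pull the special zone from the master and rebuild the generated include from it.
MASTER=192.0.2.1                               # placeholder master IP
OUT=/etc/bind/named.conf.autozones             # include this from named.conf

dig @"$MASTER" zonelist.example AXFR +noall +answer | \
awk '$4 == "TXT" { gsub(/"/, "", $5); print $5 }' | \
while read -r zone; do
    printf 'zone "%s" { type slave; masters { %s; }; file "/var/cache/bind/db.%s"; };\n' \
        "$zone" "$MASTER" "$zone"
done > "$OUT"

rndc reconfig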
In the days of everybody and their mom having their own domains, it surprises me there isn't a good solution for this integrated with Bind by now =/
I second (or third) the above suggestions to check out Puppet or CFEngine. Also, you could look at checking your files into and out of CVS/SVN. If you're interested in a scripting solution, here's what I use:
#!/bin/bash
# Push BIND config files from this host (dnsm01) out to the other servers,
# keeping a date-stamped archive of what each server had before the push.
DATE=$(date +%Y-%m-%d)
archive='/root/dns'
cd "$archive" || exit 1

[ -n "$1" ] && DEBUG=$1
if [ "$DEBUG" == "-debug" ]; then
    echo "Debugging activated..."
else
    unset DEBUG
fi

for server in dnsm02 dnsm03 dnsm51 dnsm52; do
    for file in named.conf named.cfx.conf named.external.conf named.internal.conf named.logging.conf named.options.conf; do
        PATCHDIR="$archive/$server/$DATE/patch" && [ $DEBUG ] && echo "PATCHDIR = $PATCHDIR"
        SRVDIR="$archive/$server/$DATE" && [ $DEBUG ] && echo "SRVDIR = $SRVDIR"
        ## Fetch bind config files from $server, put them in date-stamped $archive/$server
        [ ! -d "$PATCHDIR" ] && mkdir -p "$PATCHDIR" && [ $DEBUG ] && echo "Created archive directory"
        scp -q user@$server:/etc/bind/$file "$SRVDIR/$file" && [ $DEBUG ] && echo "Copied remote $file from $server..."
        ## Diff the fetched file against the local template file and create a patch
        [ $DEBUG ] && echo "Creating patch file..."
        diff -u "$SRVDIR/$file" "$archive/$server/$file" > "$PATCHDIR/patch.$file"
        [ ! -s "$PATCHDIR/patch.$file" ] && rm -f "$PATCHDIR/patch.$file" && [ $DEBUG ] && echo "No differences, no patch created for $server $file"
        [ -s "$PATCHDIR/patch.$file" ] && patch "$SRVDIR/$file" "$PATCHDIR/patch.$file" && ssh user@$server "sudo scp user@dnsm01:$SRVDIR/$file /etc/bind/$file" && [ $DEBUG ] && echo "$file patched and uploaded"
    done
    [ $DEBUG ] && echo "Checking whether patch directory is empty..."
    [ $(ls -1A "$PATCHDIR" | wc -l) -eq 0 ] && rmdir "$PATCHDIR" && [ $DEBUG ] && echo "$PATCHDIR empty, removing..."
    ssh user@$server "sudo rndc reload"
done
ssh keys are pretty essential to this setup. I do not claim extraordinary scripting-fu powers, so feel free to criticize, but be gentle.
For the amount of zones I have, syncing manually ended up being easier than getting any other solution to work. If I had many more zones I'd look into the proposed solutions.
Create a script to rip all the zone file names from the master (ls -1 will do most of this).
Create a script on the slave that takes the list of zone files as input, creates a named.conf.local from that list (the formatting is pretty simple), and replaces the existing named.conf.local (you can use another name and include it from named.conf.local if you want to play it safe).
Create single-command passwordless sudo access for "rndc reload" on the slave.
Create a single-use ssh key that allows you to send the list of zones from the master, pipe it into the slave script, and then run "sudo rndc reload". You can now push the zones from the master to the slave.
(Optional) Create a cron job to push the zones daily, or whatever.
Good experience, working this out. I can post my scripts, if anyone wants them.
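A rough sketch of what that push can look like end to end; the paths, key name, and script name here are hypothetical, not the actual scripts:

# On the master: list the zones and push them to the slave over a dedicated key.
ls -1 /var/named/zones | sed 's/\.db$//' | \
    ssh -i ~/.ssh/zonepush_key user@slave1.example.com '/usr/local/bin/rebuild-zones.sh'

# On the slave, rebuild-zones.sh reads zone names on stdin, rewrites the
# generated include, then reloads (sudo limited to "rndc reload"):
while read -r zone; do
    printf 'zone "%s" { type slave; masters { 192.0.2.1; }; file "/var/cache/bind/db.%s"; };\n' \
        "$zone" "$zone"
done > /etc/bind/named.conf.autozones
sudo rndc reload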
This is some PHP. The master server runs a script to create a list of its zones; that list can then be uploaded to a DB, or the other DNS servers can pull it over http/s.
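The master-side half isn't shown here; a minimal sketch of it, assuming (these are assumptions) the zone files sit in /etc/bind/zones as db.<zone> and the list is published from the same web root, could be:

#!/bin/bash
# Hypothetical master-side helper: publish one zone name per line for the slaves to fetch.
ls -1 /etc/bind/zones/db.* | sed 's#.*/db\.##' > /var/www/html/zone/zones.list

Each slave can then run the PHP below against that URL to turn the list into a config: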
<?php
// Fetch the zone list published by the master (one zone name per line).
$zones = file('URL TO MASTER SERVER');
if ($zones !== false) {
    // Start the generated config with the standard Debian named.conf header.
    $header = '// This is the primary configuration file for the BIND DNS server named.
//
// Please read /usr/share/doc/bind9/README.Debian.gz for information on the
// structure of BIND configuration files in Debian, *BEFORE* you customize
// this configuration file.
//
// If you are just adding zones, please do that in /etc/bind/named.conf.local
include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";
';
    file_put_contents('/var/www/html/zone/zones.txt', $header);
    foreach ($zones as $zone) {
        // Strip the trailing newline and any other control characters.
        $zone = preg_replace('~[[:cntrl:]]~', '', $zone);
        if ($zone != "") {
            $config = 'zone "' . $zone . '" {
type slave;
masters { lemming; };
allow-transfer { none; };
file "/var/lib/bind/db.' . $zone . '";
};
';
            file_put_contents('/var/www/html/zone/zones.txt', $config, FILE_APPEND);
        }
    }
}
The "zone" dir will need to be writeable
Then create a bash script like this:
#!/bin/bash
# Regenerate the config from the master's zone list, install it, and restart BIND.
php /var/www/html/index.php
cp /var/www/html/zone/zones.txt /etc/bind/named.conf
service bind9 restart
logger "DNS Zones pulled from master and bind restarted"
Then create a cron job as root (crontab -e) that runs the script (saved here as /home/bob/dns_sync.sh).
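For illustration (the schedule here is an assumption; pick whatever interval suits you):

*/15 * * * * /home/bob/dns_sync.sh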