We have started using the following strategy for Windows backups, which I am looking to deploy via Puppet:
- iSCSI Initiator is enabled on the client server.
- iSCSI Virtual Disk + VHD are configured on the backup server, with VHD files spread out over numerous RAID containers.
- Also on the backup server, a new iSCSI target is configured pointing to this virtual disk, restricted to the DNS name or IP of the client server. A random username/password pair is configured.
- The iSCSI initiator on the client server is configured to connect to the new target and the virtual disk is added via disk management.
- Finally, Windows Backup on the client is configured to point at the locally attached VHD to perform the backup.
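For context, the manual steps above can be sketched in PowerShell, assuming Windows Server 2012+ with the iSCSI Target Server role installed (the names, paths and sizes are illustrative, not our real values):

```powershell
## On the backup server ##

# 1. Create the VHD-backed iSCSI virtual disk on one of the RAID containers.
New-IscsiVirtualDisk -Path 'D:\iSCSIVirtualDisks\client01.vhdx' -SizeBytes 500GB

# 2. Create a target restricted to the client's DNS name, and map the disk to it.
New-IscsiServerTarget -TargetName 'client01' -InitiatorIds 'DNSName:client01.example.com'
Add-IscsiVirtualDiskTargetMapping -TargetName 'client01' -Path 'D:\iSCSIVirtualDisks\client01.vhdx'

# 3. Require CHAP authentication with the randomly generated username/password.
$chap = Get-Credential   # prompt shown here; in practice the pair is pre-generated
Set-IscsiServerTarget -TargetName 'client01' -EnableChap $true -Chap $chap

## On the client server ##

# 4. Enable the iSCSI initiator service and connect (persistently) to the target.
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress 'backup-server.example.com'
Get-IscsiTarget |
  Connect-IscsiTarget -AuthenticationType ONEWAYCHAP `
    -ChapUsername $chap.UserName `
    -ChapSecret ($chap.GetNetworkCredential().Password) `
    -IsPersistent $true
```

Bringing the disk online and formatting it is still a separate step in Disk Management (or `Initialize-Disk`/`New-Partition`).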
I've only just begun using Puppet, and my main challenge so far is that I must configure two separate nodes in a particular order (e.g. the iSCSI initiator cannot connect to the target before the target exists).
Possible Solutions
Exported Resource Collection
I have been looking for a way to do this, and so far, the most suitable configuration pattern I can find is Exported Resource Collection. I have looked at a few examples using Nagios, and it seems that it would require me to define a new type, which is essentially Ruby code, in order to perform actions on the exported resources.
I would like to avoid doing something this advanced so soon as, although I have programming experience, my colleagues do not, and I'm trying to keep it as simple as I can.
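For what it's worth, the collected resource does not have to be a native (Ruby) type: a defined type written in the ordinary Puppet DSL can be exported and collected just as well, provided storeconfigs/PuppetDB is set up. A rough sketch, assuming a `windows_backup::server::target` defined type exists and the CHAP variables are set per host:

```puppet
# On each client node: export a description of the target that the
# backup server should create for this host.
@@windows_backup::server::target { $::fqdn:
  dns_name     => $::fqdn,
  username     => $chap_username,   # assumed to be generated/looked up per host
  password     => $chap_password,
  drive_letter => 'B',
  drive_size   => '500G',
}

# On the backup server: collect and realise every exported target.
Windows_backup::Server::Target <<| |>>
```

This keeps the "advanced" part down to writing one defined type, with no Ruby involved.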
Separate node entries
One idea I am toying with is, instead of taking on the added complexity of defining the entire backup (target server and initiator) from the client node, to simply define the two roles separately, one in each node.
Then, to simplify things, although configuration must happen in a particular order for everything to work, I would simply rely on Puppet retrying the configuration on each run until all prerequisites are met. (For example, on the first run Puppet might try to connect to the iSCSI target before it has been created, but by the second run the other node should have finished creating the target, so the attempt should succeed.)
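That retry-until-it-works approach maps naturally onto Puppet's model: if the resource that logs in to the target fails, the agent reports a failure and tries again on the next run. A minimal sketch using an `exec` and the built-in `iscsicli` tool (the target IQN below is hypothetical):

```puppet
# Sketch only: log the initiator in to the backup target.
# QLoginTarget fails while the target does not yet exist on the
# backup server, so this resource converges on a later agent run.
exec { 'connect-backup-target':
  command => 'iscsicli.exe QLoginTarget iqn.1991-05.com.microsoft:backup-client01-target',
  path    => 'C:\Windows\System32',
  unless  => 'cmd.exe /c "iscsicli.exe SessionList | findstr backup-client01-target"',
}
```

The `unless` guard keeps the login from being reattempted once a session to the target already exists.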
Something like:
node 'backup-server' {
  windows_backup::server::target { 'client01':
    dns_name     => 'client01.example.com',
    username     => '',
    password     => '',
    drive_letter => '',
    drive_size   => '',
  }

  windows_backup::server::target { 'client02':
    dns_name     => 'client02.example2.com',
    username     => '',
    password     => '',
    drive_letter => '',
    drive_size   => '',
  }
}
Then...
node 'client01' {
  windows_backup::client::backup { 'client01':
    username     => '',
    password     => '',
    drive_letter => '',
  }
}
But then, as you can see, you start getting into the territory of hard-coding too many values (for example, determining the size of the VHD would require us to log in to the server and work it out manually, when it could potentially be determined automatically). And that brings us back to the ideal of automatically sharing data/resources between the two nodes involved.
At some point, all the manual hard-coding saves too little configuration time to justify the extra layer of complexity involved in using Puppet to deploy this.
Has anyone deployed similar workflows in the past, and are there simpler ways of doing this?