I am a developer for a website which is heavily reliant on static content such as pictures, videos, etc. Our current setup is very simple: we basically have one server acting as our CDN, which in turn gets updated by the web servers through rsync. The simplicity of this setup has a couple of drawbacks, mainly the fact that we also have to rsync any changes between the web servers.
We are now looking for a less disjointed architecture. One idea is to use a NAS that could be mounted on each web server, which would mean we no longer had to rely on rsync; however, it would introduce a single point of failure.
Does anyone have any knowledge of successful, fairly high-volume (approx. 10 TB a month) CDN setups and architectures?
First off, you're not using a "Content Delivery Network" (CDN); you just have a single web server dedicated to serving static content. If you're searching with "CDN" as your search term, you'll get the wrong answers, because that's the wrong word.
You could look into using a real CDN, which caches your static data in many locations around the globe, closer to and faster for your end users. Because the CDN caches data closer to the end user, it has lower network latency and thus speeds up downloads. It also provides a scalability benefit by almost completely offloading the serving of static content from your own servers. MaxCDN and Amazon CloudFront are common low-cost choices.
If your needs are smaller, you can also just use a few web servers to serve your static content. That's pretty simple. In that case, your question is really about deployment, i.e. getting the static files onto the web servers. There are many ways of doing that; it depends on what software you're using and what your internal workflow looks like, so perhaps you should open another question with more precise information. A rough sketch of the simplest approach is below.
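As a minimal sketch (not tied to any particular deployment tool), the "push to a few static servers" approach could just be a script run after each deploy. The host names and content path here are hypothetical; substitute your own:

```python
import subprocess

# Hypothetical values: replace with your own static servers and content path.
STATIC_SERVERS = ["static1.example.com", "static2.example.com"]
CONTENT_DIR = "/var/www/static/"

# Push the canonical copy of the static content to every static server.
for host in STATIC_SERVERS:
    subprocess.run(
        ["rsync", "-az", "--delete", CONTENT_DIR, f"{host}:{CONTENT_DIR}"],
        check=True,  # stop if any host fails so a broken sync is noticed
    )
```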
If it's static content, why not instead use large caching reverse proxies, whether Apache's mod_proxy, nginx, or even lighttpd, which is light and fast?
Another idea is to use DRBD, which is essentially RAID 1 over the network: two servers mirror each other's disks. They would have to run in master/master mode, but since the content is mostly read-only, you may not need to run cluster software; otherwise you could use OCFS2 (Oracle's cluster filesystem). At least this way, if one server dies, its data is still mirrored on the other server's disk.
I think rsync is fine; you just need to make it scalable. For example: build a backend DNS structure with CNAMEs like 'clusterhost-1', 'clusterhost-2', etc. pointing to your content/web hosts. A master content server then does a DNS AXFR zone transfer to get the host list (grepping out anything that isn't a clusterhost), runs through them all, and rsyncs to each one. If you add a new server, add a new CNAME for it and it will automatically be rsynced on the next round; if you want to take one offline, remove its CNAME and it won't be updated. The same goes for reverse proxying: you can control load by making multiple CNAMEs, adding them to the proxy list, and changing the records to point to whichever backends you want. A rough sketch of the rsync round is below.
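To make that concrete, here is a minimal sketch of what the master content server could run, assuming dnspython is installed, AXFR is allowed from that host, and the DNS server, zone name, and content path (all hypothetical here) are adjusted to your setup:

```python
import subprocess

import dns.query
import dns.zone

# Hypothetical values: point these at your own internal DNS and content tree.
DNS_SERVER = "10.0.0.1"
ZONE = "internal.example.com"
CONTENT_DIR = "/var/www/static/"

# Pull the zone via AXFR and keep only the clusterhost-* CNAMEs.
zone = dns.zone.from_xfr(dns.query.xfr(DNS_SERVER, ZONE))
targets = [
    f"{name}.{ZONE}"
    for name, _ttl, _rdata in zone.iterate_rdatas("CNAME")
    if str(name).startswith("clusterhost-")
]

# Rsync the content tree to every discovered host; adding or removing a
# CNAME in the zone adds or removes that host from the next round.
for host in targets:
    subprocess.run(
        ["rsync", "-az", "--delete", CONTENT_DIR, f"{host}:{CONTENT_DIR}"],
        check=True,
    )
```

Run it from cron (or after each deploy) and host membership is controlled entirely from the DNS zone, as described above.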
Hope this gives some ideas.