I run a fairly heavily trafficked website, and due to some unfortunate incidents the machines in my cloud at Linode went down. I have only a single load balancer machine exposed to the outside world (one IP).
My site is also a good candidate for mirroring: it has over 6,000 static pages. My DNS is currently with CloudFlare.
What can I do to maintain a static mirror of my site and route traffic to it in case my site goes down?
Since I am running on Linode, I don't have anything like Route53 to detect downtime at one IP address and point traffic to another.
What strategies do people use to statically mirror a site and protect against downtime?
A couple different things jump to mind first and foremost:
First, you already have a static mirror of your site that is designed for just this use case: Cloudflare. As well as providing your DNS, I assume you have them set up as a CDN to absorb the brunt of the traffic that is coming towards you. Cloudflare has a feature called Always Online which is intended to do just what you are looking for: keep a static copy of the website up even if the "origin" - in this case, your load balancer and/or the servers behind it - goes down. Make sure that you have that set up properly first, before you worry about a more complicated solution. It is always good to get 80%+ of the problem out of the way with 2% of the work!
In fact, you may be able to simply rely on Cloudflare to take care of the problem for you completely. Do some reading into Cloudflare Always Online first, as it is going to be a lot simpler for you to implement than anything to follow, since Cloudflare is already set up in your infrastructure. If, after you have read up, it won't do enough for you, read on.
Now, there are a couple of different things you need to think about when working out how to keep your site available through different kinds of outages. The first thing is your goals. Are you trying to maintain your site's availability during an outage only, or would you prefer to also maintain a second site that you use to load balance between locations? What kinds of system outages are you trying to protect against? How much time and/or money are you willing to invest in minimizing downtime?
After you have set some of those goals, you can look at the different kinds of solutions that are out there. Generally speaking, all of the strategies for minimizing downtime involve keeping one or more "extra" locations synchronized with the content in the main location, preferably with a different hosting provider and on a different network, to protect against downtime that spreads across an entire company. Failover is generally done through manipulation of DNS records. Larger companies will sometimes use IP-level solutions like anycasting or route manipulation to accomplish the task - which comes with several benefits - but doing so is expensive and extremely difficult to get right.
There are lots of companies out there that can help you change your DNS records automatically when a single IP becomes unavailable, but you can do it fairly easily yourself by using the Cloudflare API (or the API of whatever your DNS provider is, if you change in the future). All that is required is a second system, in a separate location from wherever your website is hosted, that continually checks your site to make sure it is up. If it is not, it hits your DNS provider's API and changes the DNS records of your site to point to your backup location. This means that you will have a worst-case (on paper) downtime of the monitoring interval + the DNS TTL. In practice, DNS can be quite aggressively cached, and even short (<30 sec) TTLs can take up to a couple of hours to be completely flushed out by all clients around the world. Mobile devices, in particular, are known for being troublesome with this. There are lots of tutorials out there on how to use different monitoring systems to accomplish this task - a quick search for "cloudflare failover" turns up tutorials using Nagios and Monit, and I am sure there are lots more easily accessible.
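If you would rather roll your own, here is a minimal watchdog sketch in Python against Cloudflare's current v4 REST API. Everything in it - the health-check URL, hostnames, IPs, zone ID, record ID and token - is a placeholder you would substitute, and a real deployment would also want alerting and a fail-back procedure.

```python
#!/usr/bin/env python3
"""DNS-failover watchdog sketch (assumes Cloudflare's v4 API;
all IDs, tokens, hostnames and IPs below are placeholders)."""
import time
import requests

CHECK_URL = "https://www.example.com/"   # page the monitor polls
BACKUP_IP = "203.0.113.20"               # static mirror / backup location
ZONE_ID   = "your-zone-id"
RECORD_ID = "your-dns-record-id"
API_TOKEN = "your-cloudflare-api-token"

def site_is_up() -> bool:
    """Treat any 2xx/3xx response within 10 seconds as 'up'."""
    try:
        return requests.get(CHECK_URL, timeout=10).status_code < 400
    except requests.RequestException:
        return False

def point_record_at(ip: str) -> None:
    """Rewrite the A record so the site resolves to `ip`."""
    url = (f"https://api.cloudflare.com/client/v4/zones/"
           f"{ZONE_ID}/dns_records/{RECORD_ID}")
    requests.put(url,
                 headers={"Authorization": f"Bearer {API_TOKEN}"},
                 json={"type": "A", "name": "www.example.com",
                       "content": ip, "ttl": 120},
                 timeout=10).raise_for_status()

if __name__ == "__main__":
    failures = 0
    while True:
        failures = 0 if site_is_up() else failures + 1
        # Only fail over after several consecutive misses, to avoid flapping,
        # and only once (failing back is left as a manual decision here).
        if failures == 3:
            point_record_at(BACKUP_IP)
        time.sleep(60)   # monitoring interval; this adds to worst-case downtime
```

Run something like this from a small box that sits outside your hosting provider's network, so the monitor does not go down with the thing it is monitoring.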
Of course, any kind of failover requires a place to fail over to! There are a bunch of different requirements for doing so, depending on your particular application's specifications and its requirements for synchronization. Some sites which are all static content can simply be copied to both locations each time they are updated, either by hand, by an automated script which pushes or pulls from the master to the slaves (cron + rsync is your friend!), or by other methods like block replication (DRBD) or a shared file system (GlusterFS). Other sites, with dynamic content, will require both this kind of file-level synchronization and database replication in a master-slave setup. Note that databases can cause all sorts of problems if you are trying to accept writes at both locations, so do plenty of research into master/master replication for your particular database technology if you are planning to have both datacenters concurrently active. It is not uncommon to set up the slave as a read-only replica, even when it has been failed over to, in order to avoid having to sync data back from a promoted slave once the main datacenter is available again.
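For the simple static-content case, the cron + rsync approach can be as small as the sketch below. The paths, user and hostname are placeholders, it assumes SSH key authentication to the mirror is already set up, and the same job could just as well be a one-line crontab entry invoking rsync directly.

```python
#!/usr/bin/env python3
"""Push the static site from the master to the mirror (sketch only)."""
import subprocess
import sys

SOURCE = "/var/www/mysite/"                          # trailing slash: sync contents
DEST = "deploy@mirror.example.com:/var/www/mysite/"  # placeholder backup location

# -a preserves permissions and timestamps, -z compresses in transit,
# --delete removes files on the mirror that no longer exist on the master.
result = subprocess.run(["rsync", "-az", "--delete", SOURCE, DEST])
sys.exit(result.returncode)
```

Schedule it from cron at whatever interval matches how often your content actually changes.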
There are a lot of different things to think about when considering this kind of high-availability setup. If you tell us a bit more about the specifics of your application, I'm sure we can add more specific advice.
Instead of buying the one-size-fits-all, ready-to-use NodeBalancer feature from Linode, you might as well just buy another regular Linode and implement the load balancing and caching yourself.
You can use nginx and have it act as a proxy and load balancer in front of your real website.
Depending on whether you need your website to change every couple of hours/days or not, you can use several nginx features to cache the content from your upstream Linodes.
http://nginx.org/en/docs/http/ngx_http_proxy_module.html
One feature you may find very useful is proxy_cache. Another one is proxy_store.
The proxy_cache set of directives is very flexible, such that nginx can be configured to automatically store and expire all the pages, or to automatically serve stale pages only when the upstreams are unavailable (e.g. take a look at proxy_cache_use_stale); a minimal configuration sketch follows below. proxy_store, on the other hand, can potentially be combined with a manual clear-all rm -rf script, depending on your needs.
Of course, if you're already paying $20/mo for your load balancer at Linode (and provided you are not over budget), then you might as well cancel that and look into CloudFlare, Incapsula and other similar offerings, some paid versions of which can be configured to cache all kinds of content (including dynamically generated content; starting at $10/mo at Incapsula, for example).
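If you do go the nginx route, the proxy_cache variant mentioned above might look roughly like this. It is a sketch only: the cache path, zone name and size, hostname and upstream IPs are all placeholders for your own values.

```nginx
# Cache zone on disk; path and sizes are placeholders.
proxy_cache_path /var/cache/nginx/mysite keys_zone=mysite:10m
                 max_size=1g inactive=7d;

upstream backend {
    server 192.0.2.10;     # your real web servers / upstream Linodes
    server 192.0.2.11;
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://backend;
        proxy_cache mysite;
        proxy_cache_valid 200 301 302 1h;
        # Keep serving cached (stale) pages if the upstreams error out or time out:
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    }
}
```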
Best Speed, Best Reliability, Most Secure
If your site is static then you should consider hosting it purely on a CDN; then you don't need load balancers, dedicated servers or VPSes. Good CDNs scale as much as you need, and you are only charged for the amount of data sent or per X requests, depending on which company you go with. Regarding the non-www CNAME issue, I believe that CloudFlare has a workaround; with Rackspace and Amazon you'd need a VPS to redirect from non-www to www on the CDN.
A CDN would offer more performance than a vast number of dedicated servers or VPSes. It is also by far the most reliable and secure option.
If you have some PHP files, like contact forms, then you can host them on a small (64 MB-256 MB) VPS and call them via AJAX/JS.
Further, you mention mirroring: CDNs mirror files all over the world, which is one of the main reasons they are so fast, and they use fault-tolerant RAID arrays and other failover redundancies... CDNs can't really fail. But if you wanted a mirror, say on a VPS, then you just use the API and cron the backup.
DNS: just pick a provider that has anycast/active failover.
http://dyn.com/dns/dns-comparison/
Cloudflare can already do this for you, through a service called Always Online™ (scroll down about halfway).
If you want your own solution, use a proxy/load balancer. Something like HAProxy works very well for this; though, since these are static file requests, nginx will work very well too. If one backend goes down, the proxy simply stops sending requests to the down server. But note that a load balancer alone is not failover: you need additional web servers ready to serve the extra load when another one fails.
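For example, with nginx in front, a backup server can be declared so that it is only used when the primary is marked down. This is a sketch; the IPs and thresholds are placeholders for your own setup.

```nginx
upstream app {
    server 192.0.2.10 max_fails=3 fail_timeout=10s;  # primary web server
    server 192.0.2.20 backup;                        # only used when the primary is down
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
        # Retry the request on the next (backup) server if the primary fails mid-request:
        proxy_next_upstream error timeout http_502 http_503 http_504;
    }
}
```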
DNS failover, which you mentioned, is not recommended if you are looking to stay within one datacenter. See Why is DNS failover not recommended? That post also outlines some additional solutions for you.