I have a webserver. I have two static IPs from two different ISPs. I want to make sure my webserver is accessible when one ISP's network is down.
For example:

my host name: a.example.com
my ISP1 IP: x.134.x.100
my ISP2 IP: x.10.x.10
Current A records in my DNS zone for example.com:
a.example.com x.134.x.100
a.example.com x.10.x.10
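In zone-file terms, the setup above would look roughly like this (the TTL is illustrative, and the x. octets are kept masked as in the question):

```
; example.com zone -- round-robin A records, one per ISP
a.example.com.  3600  IN  A  x.134.x.100
a.example.com.  3600  IN  A  x.10.x.10
```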
My firewall routes requests to either IP to the same server, and this works fine.
My question is: "Is this the right solution for the problem? If not, what is the right solution?"
EDIT: I saw this link http://www.linuxquestions.org/questions/linux-general-1/multiple-a-records-in-dns-734680/
Now my question is: what is the easiest/cheapest way to provide high availability?
The problem with this solution is that DNS will continue to hand out both IPs even if one server/ISP is down, which means it won't really accomplish your goal. If you really need a true backup site, the best way to do this is to set up BGP.
It's a process, but you work with your ISPs to set up a BGP router at each site advertising your public network. When one router or ISP goes down, it takes only seconds for the backup router to begin advertising your network from the backup site. No DNS changes, no waiting for anything to time out or clear - it just works.
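To give a sense of what's involved, the per-site configuration in an open-source routing daemon like FRR looks roughly like this. The AS numbers, neighbor address, and prefix here are placeholders - your ISP assigns or coordinates the real values:

```
router bgp 64512
 ! peer with the ISP's router at this site (address is a placeholder)
 neighbor 203.0.113.1 remote-as 65001
 address-family ipv4 unicast
  ! advertise your public network (placeholder prefix)
  network 198.51.100.0/24
 exit-address-family
```

Note that this generally requires your own provider-independent address space and an AS number, which is where most of the cost and process lives.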
EDIT: Added a secondary solution.
If setting up BGP is simply out of reach due to budget or some other constraint, I would recommend setting up a DNS server at each site with a very short TTL (say 10 minutes, assuming you don't have a huge amount of traffic). On each DNS server, create a single A record with the IP of the server at that site: in site A, configure a host record with the site-A IP, and in site B, create one with the site-B IP. This way, when an ISP goes down, you will no longer be handing out IPs that are unreachable.
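As a sketch, the zone served from site A would contain only something like this (TTL and masked address as above; site B's copy would carry the x.10.x.10 record instead):

```
$TTL 600  ; 10 minutes, so a dead record ages out quickly
a.example.com.  IN  A  x.134.x.100
```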
Keep in mind that with this solution, some clients will have cached the "bad" IP and will have to wait for it to expire before they pick up the "good" IP from the other DNS server.
For some client types this will work: if you run the command "host www.google.com", you will see it return a rotating list of geographically significant IPs. Modern browsers (Google's main clients) understand that they can work through that pool of addresses in order without additional resolver lookups.
Of course, some clients don't do this; they use the first address and never try the rest.
This isn't a robust form of high availability, but it does explain what you are seeing when you look at certain sites' DNS implementations.
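The browser behavior described above - walking the resolved address list in order until one answers - can be sketched like this. The `connect` argument is a stand-in for a real TCP connection attempt (e.g. `socket.create_connection((addr, 80))`), and the addresses are documentation placeholders:

```python
def first_reachable(addresses, connect):
    """Try each resolved address in order and return the first one
    that accepts a connection -- the fallback modern browsers do."""
    for addr in addresses:
        try:
            connect(addr)   # in real life: socket.create_connection((addr, 80))
            return addr
        except OSError:
            continue        # this address is down; try the next one
    raise OSError("no address reachable")

# Simulated example: the first IP's ISP is down, the second works.
def fake_connect(addr):
    if addr == "192.0.2.1":          # pretend this site is unreachable
        raise OSError("connection timed out")

print(first_reachable(["192.0.2.1", "198.51.100.1"], fake_connect))
# -> 198.51.100.1
```

A client that only uses the first address is equivalent to calling `connect` once and giving up, which is exactly why round-robin A records alone aren't reliable HA.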