I am using tar to back up a Linux server to tape. I am using the -j option to compress the archive with bzip2; however, I can't see a way to adjust the bzip2 block size from tar. The default block size is 900,000 bytes, which gives the best compression but is the slowest. I am not that bothered about the compression ratio, so I am looking to make bzip2 run faster with a smaller block size.
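The workaround I'm considering (a sketch only, with /data and /dev/st0 as placeholders for the backup path and tape device) is to drop -j and pipe tar's output through bzip2 myself, so the block-size flag can be given directly:

    # Compress explicitly instead of using tar -j so bzip2's block size can be set.
    # bzip2 -1 uses 100 kB blocks (fastest), -9 uses 900 kB blocks (the default).
    tar -cf - /data | bzip2 -1 > /dev/st0

Restoring would then be the reverse pipe (bzip2 -dc < /dev/st0 | tar -xf -), but I'd like to know whether tar itself can pass options through to bzip2.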
I currently have an MS Access application that connects to a PostgreSQL database via ODBC. This successfully runs on a LAN with 20 users (each running their own version of Access). Now I am thinking through some disaster recovery scenarios, and it seems that a quick and easy method of protecting the data is to use log shipping to create a warm-standby.
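For reference, the setup I have in mind is the standard WAL-based warm standby; a rough sketch follows (hostnames and paths are placeholders, and the exact settings depend on the PostgreSQL version):

    # On the primary (postgresql.conf): ship each completed WAL segment to the standby.
    archive_mode = on
    archive_command = 'rsync -a %p standby:/var/lib/pgsql/wal_archive/%f'

    # On the standby (recovery.conf): replay segments as they arrive.
    restore_command = 'pg_standby /var/lib/pgsql/wal_archive %f %p'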
This led me to think about putting the warm standby at a remote location, but then I have the question:
Is Access usable when connecting to a remote database via ODBC? That is, the remote database might be in the same country with reasonable ping times, and I have a 1 Mbit SDSL line.
I will explain the problem first ...
I have an in-house webserver/web-app that is publicly accessible. Our Internet connection (Bonded ADSL MAX Premium) is therefore a single point of failure (which has been highlighted by some recent connectivity issues).
As a low-cost backup I was thinking of adding a second Internet connection (Standard ADSL) with a static IP of its own.
Now I was wondering if anybody has tried or would comment on the following idea ...
If I got an externally hosted server and ran a proxy such as HAProxy on it, I could have it proxy requests to our main IP (down our main connection), then fail over to the second connection if the main one went down.
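To make the idea concrete, here is a minimal sketch of the HAProxy config I have in mind on the hosted box (IPs and names are made up; only the health-check/backup behaviour matters):

    # haproxy.cfg on the externally hosted server (sketch only)
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend www
        bind *:80
        default_backend office

    backend office
        option httpchk GET /
        # Primary route: the web app reached via our main bonded ADSL IP.
        server main-line 203.0.113.10:80 check
        # Backup route: the same app via the second ADSL line's static IP;
        # traffic only goes here when the primary's health check fails.
        server adsl-backup 203.0.113.20:80 check backup

The check/backup combination is the part I care about: requests should only move to the second line when the health check on the main IP fails.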