i'm running a site on a shared host, while image hosting is done on amazon s3. i've opened a second shared host account for backups. here's my setup:
A -> site on shared host, pushes images to B (s3 api) and C (via sftp), and daily backups to B (s3 api) and C (via sftp)
B -> amazon s3: one bucket for the main image hosting and another bucket for backups
C -> second account on shared host, acts as a backup of the image host B and of the data on A. the host only allows one sftp account, which has full root access but is jailshelled.
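for context, the daily pushes from A are just cron jobs, roughly like this (a sketch: the paths, bucket name, and host are made up):

    # crontab on A (illustrative names only)
    # 03:00: sync the image directory to the s3 backup bucket
    0 3 * * * aws s3 sync /home/site/images s3://example-image-backup/
    # 04:00: push the nightly dump to C over sftp (the batch file holds the put commands)
    0 4 * * * sftp -b /home/site/backup.batch backupuser@c.example.com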
my problem with this setup is that A is a single point of total failure: anybody who gains access to A can, in the worst case, wipe A, B, and C.
Can somebody please recommend a better setup? thanks in advance.
if your processes are autonomous, there will always be a single point of failure; how else would you marshal resources and control the process? make a best effort to lock down this machine and you should be okay.
some approaches for locking down your single point of failure:
turn off services and close ports that aren't required (see the first sketch after this list)
uninstall software that is not needed
follow best practices for hardening ssh. these include but are not limited to: disallow password auth, disallow root login, change the default port, and whitelist the users who can log in (sshd_config sketch below)
use a tool like fail2ban to keep port knockers and script kiddies away (example jail config below)
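for the first two items, something like this works on a debian-ish box where you have root; on a pure shared host you'd have to ask the provider instead:

    # see what's actually listening before deciding what to close
    ss -tlnp
    # stop and disable a service you don't need (systemd example)
    sudo systemctl disable --now exim4
    # remove packages you'll never use
    sudo apt-get remove --purge telnetd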
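for the ssh hardening, the relevant /etc/ssh/sshd_config directives look like this (the port number and the 'deploy' user are placeholders, pick your own, and make sure your key works before you turn passwords off):

    # /etc/ssh/sshd_config
    Port 2222                    # non-default port just cuts scanner noise
    PermitRootLogin no           # no direct root logins
    PasswordAuthentication no    # key-based auth only
    AllowUsers deploy            # whitelist of users allowed to log in

    # reload sshd afterwards, e.g.: sudo systemctl reload sshd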
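and a minimal fail2ban jail for ssh; put overrides in jail.local rather than editing jail.conf, and treat the numbers as starting points:

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    port     = 2222      # match the Port you set in sshd_config
    maxretry = 5         # failed attempts allowed...
    findtime = 600       # ...within this many seconds
    bantime  = 3600      # ban length in seconds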