I have a fairly hefty Dell machine with dual quad-core CPUs and 32GB of memory.
On that machine I host a pretty busy forum, but the one machine does everything: web, cache, DB, Sphinx, etc. I also have a full-system backup done nightly with no downtime.
This all costs me a smidge under $800 p/m (Australian prices are higher), and I don't really trust the backups.
As an experiment I set up the following:
- ELB with auto-scaling rules (add a new instance after 2 minutes above 20% CPU, drop an instance after 5 minutes under 20%; max 5 instances)
- Two EC2 instances (m3.large) across availability zones.
- One m3.medium for NFS (will also host cache and Sphinx) - this just stores user uploads, which are served through CloudFront.
- One db.r3.large RDS instance with my my.cnf copied across as best I could (I'm not a DB expert; my conf follows me everywhere)
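For what it's worth, the scaling rules above can be wired up with the AWS CLI roughly like this. This is a hedged sketch, not my exact setup: the group and policy names (`web-asg`, `scale-out`, etc.) are placeholders, and you'd substitute your own launch configuration and policy ARNs.

```shell
# Auto Scaling group: start with 2 instances, cap at 5 (placeholder names throughout)
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-lc \
  --min-size 2 --max-size 5 \
  --availability-zones ap-southeast-2a ap-southeast-2b

# Scale-out policy: add one instance when triggered
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name scale-out \
  --scaling-adjustment 1 \
  --adjustment-type ChangeInCapacity

# Alarm: average CPU above 20% for 2 consecutive 60s periods fires scale-out
# (replace the ARN with the one returned by put-scaling-policy)
aws cloudwatch put-metric-alarm \
  --alarm-name web-cpu-high \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=web-asg \
  --statistic Average --period 60 --evaluation-periods 2 \
  --threshold 20 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:autoscaling:...:scalingPolicy:scale-out
```

A mirror-image policy (`--scaling-adjustment -1`, alarm with `LessThanThreshold` over five 60s periods) handles the scale-in side.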
In its current state it's approx $600 p/m, and in the worst case (5 web instances) it comes in at roughly $1k.
It seems very, very good.
Last time I tried EC2 with my site, it kept falling over at busy periods because the single instance couldn't handle the connection load, so this time I kept the instances smaller and put the ELB in front. Apache Bench (-n 1000 -c 100) from my machine seems to go well: 8+ requests per second, with 50% of requests under 11,799ms (100% under 17,763ms).
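For anyone wanting to reproduce the numbers above, this is roughly the invocation (hostname is a placeholder for my ELB's DNS name):

```shell
# 1000 total requests, 100 concurrent, against the load balancer
ab -n 1000 -c 100 http://my-elb-dns-name.example.com/
```

One caveat I'm aware of: running this from my home machine in Australia means the latency figures include the round trip to the AWS region, so the per-request times are probably pessimistic.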
What am I missing? This all seems too good to be true. I can have scheduled snapshots of my DB taken whenever I want, I can have as many machines as I want behind the ELB, and in its current (2-instance) state I reckon it would be fine for 90% of the day.
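The snapshot side is one-liner territory. A rough sketch with the AWS CLI (instance identifier is a placeholder):

```shell
# One-off manual snapshot, tagged with today's date
aws rds create-db-snapshot \
  --db-instance-identifier my-forum-db \
  --db-snapshot-identifier my-forum-db-$(date +%Y%m%d)

# Or rely on automated backups: keep 7 days, run during the quiet window
aws rds modify-db-instance \
  --db-instance-identifier my-forum-db \
  --backup-retention-period 7 \
  --preferred-backup-window 03:00-04:00
```

Automated backups also enable point-in-time restore within the retention window, which is part of why I trust this more than my current nightly dump.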
Please provide feedback on this, I'm not a systems guy, I just love AWS stuff and my site seems perfect for AWS based hosting.
Thanks.