I am running an Ubuntu 11.04 instance as my web server on AWS, and I am now running out of disk space on the / partition. df -ah says this:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 7.8G 97M 99% /
proc 0 0 0 - /proc
none 0 0 0 - /sys
fusectl 0 0 0 - /sys/fs/fuse/connections
none 0 0 0 - /sys/kernel/debug
none 0 0 0 - /sys/kernel/security
none 3.7G 112K 3.7G 1% /dev
none 0 0 0 - /dev/pts
none 3.7G 0 3.7G 0% /dev/shm
none 3.7G 80K 3.7G 1% /var/run
none 3.7G 0 3.7G 0% /var/lock
/dev/xvdb 414G 16G 377G 4% /mnt
I have already tried these things to free up some space on the / partition:
- Cleaned up all Apache log files.
- Removed all unnecessary files from the server.
- Cleaned up the home directory.
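Before resizing anything, it can help to confirm where the remaining space is actually going; a quick sketch (adjust the path as needed):

```shell
# Show the largest top-level directories on the root filesystem only;
# -x keeps du from descending into other mounts like /mnt or /dev.
sudo du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 10
```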
But I still don't have enough space. The instance type is m1.large with an 8GB EBS root volume, and I can see that I have plenty of free disk space on /dev/xvdb.
Is there a way to allocate some disk space to / from /dev/xvdb, or any other way? Please suggest possible solutions. Also, is it possible to use the same /dev/xvdb partition with another instance?
The answer is twofold.
Workaround: use /dev/xvdb (/mnt) for temporary data
This is the so-called ephemeral storage of your Amazon EC2 instance, and its characteristics are vastly different from those of the persistent Amazon EBS storage in use elsewhere. In particular, this ephemeral storage is lost on stop/start cycles and can generally go away at any time, so you definitely don't want to put anything of lasting value there. Only put temporary data there that you can afford to lose or rebuild easily, like a swap file or strictly temporary data used during computations. You might, for example, store huge indexes there, but you must be prepared to rebuild them after the storage has been cleared for whatever reason (instance reboot, hardware failure, ...).
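A swap file is a good fit for ephemeral storage, since losing it costs nothing; a minimal sketch (size and path are arbitrary examples):

```shell
# Create and enable a 2GB swap file on the ephemeral volume; everything
# under /mnt vanishes on stop/start, but swap is disposable anyway.
sudo dd if=/dev/zero of=/mnt/swapfile bs=1M count=2048
sudo chmod 600 /mnt/swapfile
sudo mkswap /mnt/swapfile     # write the swap signature
sudo swapon /mnt/swapfile     # activate it
```

Since /mnt comes back empty after a stop/start, you would need to recreate the swap file each time, e.g. from an init script.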
Solution: resize /dev/xvda1 (/) to gain desired storage
This is the so-called Root Device Storage of your Amazon EBS-backed EC2 instance, which uses Amazon EBS for flexibility and durability in particular; that is, data put there is reasonably safe and survives instance failures. You can increase flexibility and durability even further by taking regular snapshots of your EBS volume, which are stored on Amazon S3, featuring the well-known 99.999999999% durability.
This snapshot feature enables you to solve your problem in turn, insofar as you can replace your current 8GB EBS root volume (/dev/xvda1) with one more or less as large as you desire. The process is outlined in Eric Hammond's excellent article Resizing the Root Disk on a Running EBS Boot EC2 Instance:
If you properly prepare the steps he describes (I highly recommend testing them with a throwaway EC2 instance first to get acquainted with the procedure, or even automating it via a tailored script), you should indeed be able to finish the process with only a few minutes of downtime.
Most of the outlined steps can be performed via the AWS Management Console as well, which avoids dealing with the Amazon EC2 API Tools; this boils down to:
- Stop the instance (do not terminate it).
- Create a snapshot of the 8GB root EBS volume.
- Create a new, larger volume from that snapshot in the same Availability Zone.
- Detach the old root volume and attach the new one in its place as the root device.
- Start the instance, grow the filesystem, and verify the new size with df -ah.
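Once the larger volume is attached and the instance is running again, the remaining step on the instance itself is growing the filesystem; a sketch, assuming the root device is still /dev/xvda1:

```shell
# Grow the ext3/ext4 filesystem to fill the enlarged volume, then verify.
sudo resize2fs /dev/xvda1
df -ah /
```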
Good luck!
Alternative
Given the versatility and ease of use of these EBS volumes, an additional option would be to attach more EBS volumes to your instance and move clearly separable areas of concern onto them.
For example, we are using a couple of pretty heavyweight Java applications, each consuming 1-2GB of storage per version; to ease upgrading versions and generally be able to move these apps to different instances at my discretion, I've placed each of them on a dedicated EBS volume, which I mount on an instance and symlink to the desired location, usually
/var/lib/<app>/<version> and /usr/local/<app>/<version>.
With this method, we are currently running EC2 instances with the root device storage still at its default size of 8GB (just like yours), but sometimes with up to 8 EBS volumes of varying sizes (1-15GB) attached as well.
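A sketch of that layout, with hypothetical device and app names (adjust to your own):

```shell
# One dedicated EBS volume per app version, symlinked into its usual path.
sudo mkfs -t ext4 /dev/xvdf                      # only on first use!
sudo mkdir -p /ebs/myapp-1.2
sudo mount /dev/xvdf /ebs/myapp-1.2
sudo mkdir -p /usr/local/myapp
sudo ln -s /ebs/myapp-1.2 /usr/local/myapp/1.2   # app sees its usual path
```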
You need to be aware of potential network performance issues though, insofar as all these EBS volumes use the very same LAN for their I/O. Spreading I/O across volumes might even yield performance gains, or it might saturate your network in extreme cases - as usual, this depends on the use case and workload at hand.
Yep, a simple way is to add the volume to /etc/fstab and mount it at, say, /var/www/html/files2/.
Then mkdir /var/www/html/files2/website, move the site's files there, and ln -s /var/www/html/files2/website /var/www/html/website (note the ln -s argument order: target first, then the link name).
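The fstab part might look like this (device, mount point, and filesystem type are examples; check yours with df -T):

```shell
# Append a mount entry so /dev/xvdb comes up at boot, then mount it now.
echo '/dev/xvdb  /var/www/html/files2  ext3  defaults  0  2' | sudo tee -a /etc/fstab
sudo mkdir -p /var/www/html/files2
sudo mount -a
```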
Today I ran into the same problem. When you create a new EC2 instance, the EBS root volume is 8GB by default. You can modify the size of the attached EBS volume without creating a new instance, taking a snapshot, or detaching the EBS volume. Here are the three steps you can follow:
- Modify the volume's size (from the EC2 console or the CLI).
- Grow the partition to use the new space.
- Resize the filesystem.
For the details of each step, please follow this article; if you have any questions, feel free to ask.
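A sketch of the typical in-place resize with today's tooling (the volume ID, size, and device names are placeholders; growpart comes from the cloud-utils package):

```shell
# 1. Enlarge the EBS volume itself (no detach needed).
aws ec2 modify-volume --volume-id vol-0abc1234567890def --size 20
# 2. Grow partition 1 on the disk to fill the new space.
sudo growpart /dev/xvda 1
# 3. Grow the ext filesystem to fill the partition.
sudo resize2fs /dev/xvda1
```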
Thanks!
I suffered for hours today with this issue, so here is a simple guide to what I did to resolve it when the filesystem filled up on AWS.
This will free up space on the old filesystem, and you will still have access to all the files.
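One common way to achieve this (a sketch, not necessarily the exact steps above; the paths are hypothetical): after attaching and mounting a new volume, move a large directory onto it and symlink it back, so all existing paths keep working.

```shell
# Assume a new EBS volume is already mounted at /bigdisk (hypothetical).
sudo service apache2 stop                          # stop writers first
sudo mv /var/www/big-assets /bigdisk/big-assets    # relocate the data
sudo ln -s /bigdisk/big-assets /var/www/big-assets # old path still works
sudo service apache2 start
```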
Have fun!