I am using the AWS EC2 free tier. I deleted my EC2 instances, but my billing dashboard still shows current storage usage of 11 GB-Mo.
Here is my billing dashboard image, and here is my EC2 dashboard.
Can anyone help me figure out how to clean this up?
Thank you.
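For reference, here is how I could check whether something is still being billed — a sketch, assuming the AWS CLI is installed and configured for the right region. GB-Mo charges usually come from EBS volumes or snapshots that outlive the instance (a root volume survives termination if DeleteOnTermination was false). The volume ID below is a placeholder.

```shell
# List all EBS volumes in the region, including detached ("available") ones
aws ec2 describe-volumes \
    --query 'Volumes[].{ID:VolumeId,Size:Size,State:State}' \
    --output table

# List snapshots owned by this account (these are billed as well)
aws ec2 describe-snapshots --owner-ids self \
    --query 'Snapshots[].{ID:SnapshotId,Size:VolumeSize}' \
    --output table

# Delete a leftover volume once certain it is unneeded (placeholder ID)
# aws ec2 delete-volume --volume-id vol-0123456789abcdef0
```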
I have a business use case and workflow where local/instance/ephemeral storage for an EC2 instance is ideal. Unfortunately, I'm tied to the Windows platform for this particular task, and the EC2 Windows offering appears to have some deficiencies related to AMI creation.
In essence, I'm trying to figure out whether there's a way to attach local instance storage to a Windows EC2 instance using the command-line interface (the AWS website GUI doesn't support it) and then somehow create an AMI based on that. I've tried creating a snapshot and then creating a Windows AMI from the snapshot, but the docs say this is unsupported and produces an unbootable AMI.
In short, here's what I'm trying to do:
One other potential option, which isn't horrible but isn't ideal, is to create an AMI that already has two EBS volumes attached (system + apps, and data). Essentially, every time I start an instance from that AMI, it will create two new EBS volumes of predetermined size. I'm trying to avoid that scenario if possible.
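For what it's worth, here is a sketch of the command-line route, assuming the modern `aws` CLI: instance-store (ephemeral) volumes can only be mapped at launch time, not attached to a running instance, so the mapping goes into `run-instances`. The AMI ID, key name, and instance type below are placeholders.

```shell
# Map ephemeral0 to device xvdb at launch; this cannot be done
# after the instance is running, and the console launch wizard
# may not expose it for all instance types.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type m1.large \
    --key-name my-key \
    --block-device-mappings '[{"DeviceName":"xvdb","VirtualName":"ephemeral0"}]'
```

Note that because ephemeral mappings live in the launch request (or in the AMI's registered block device mapping), not in the instance's disk contents, an AMI created from such an instance would need the same mapping registered with it.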
I have an instance-store (not EBS) based EC2 instance, launched from an official Ubuntu 12.04 AMI (specifically ami-25e8d351: eu-west-1, 32-bit, instance root store).
It is up and running, and I've made some changes to it (installed software; tweaked config files).
Now I'd like to bundle the setup as an AMI (on Amazon S3), i.e., save the changes I've made. But I can't do this on the AWS Console:
Why is the "Bundle Instance (instance store AMI)" option greyed out?
And more importantly, is there any way to save this instance as an AMI?
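From what I can tell, the console's "Bundle Instance" option applies only to Windows instance-store instances; a Linux instance-store instance has to be bundled from inside the instance with the EC2 AMI tools. A sketch of that flow, where the certificate paths, account ID, bucket name, and image name are all placeholders:

```shell
# Run on the instance itself, using the EC2 AMI tools.
# Bundle the root volume into image parts under /mnt:
ec2-bundle-vol -k /tmp/pk.pem -c /tmp/cert.pem -u 123456789012 \
    -r i386 -d /mnt -p my-ubuntu-ami

# Upload the bundle to an S3 bucket:
ec2-upload-bundle -b my-ami-bucket -m /mnt/my-ubuntu-ami.manifest.xml \
    -a "$AWS_ACCESS_KEY" -s "$AWS_SECRET_KEY"

# Register the uploaded bundle as an AMI (credential flags omitted):
ec2-register my-ami-bucket/my-ubuntu-ami.manifest.xml -n "my-ubuntu-ami"
```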
Long-time lurker here. I just launched a new m1.large instance on EC2, and I need all the available instance storage on the machine (850 GB) for data processing.
I understand that generally not all of the instance storage comes mounted, and you have to stitch a couple of drives together (typically /dev/sdb and /dev/sdc; see "Amazon EC2 - how to setup?"). However, on this instance those drives aren't listed in /dev on my box...
ubuntu@ip-***:/dev$ ls
autofs fd hvc7 loop6 port ram13 ram9 tty0 tty18 tty27 tty36 tty45 tty54 tty63 ttyS13 ttyS22 ttyS31 vcs vcsa3
block full input loop7 ppp ram14 random tty1 tty19 tty28 tty37 tty46 tty55 tty7 ttyS14 ttyS23 ttyS4 vcs1 vcsa4
btrfs-control fuse kmsg loop-control psaux ram15 rfkill tty10 tty2 tty29 tty38 tty47 tty56 tty8 ttyS15 ttyS24 ttyS5 vcs2 vcsa5
char hvc0 log mapper ptmx ram2 shm tty11 tty20 tty3 tty39 tty48 tty57 tty9 ttyS16 ttyS25 ttyS6 vcs3 vcsa6
console hvc1 loop0 mem pts ram3 snapshot tty12 tty21 tty30 tty4 tty49 tty58 ttyprintk ttyS17 ttyS26 ttyS7 vcs4 vga_arbiter
core hvc2 loop1 net ram0 ram4 snd tty13 tty22 tty31 tty40 tty5 tty59 ttyS0 ttyS18 ttyS27 ttyS8 vcs5 xvda1
cpu hvc3 loop2 network_latency ram1 ram5 stderr tty14 tty23 tty32 tty41 tty50 tty6 ttyS1 ttyS19 ttyS28 ttyS9 vcs6 xvdb
cpu_dma_latency hvc4 loop3 network_throughput ram10 ram6 stdin tty15 tty24 tty33 tty42 tty51 tty60 ttyS10 ttyS2 ttyS29 uinput vcsa zero
disk hvc5 loop4 null ram11 ram7 stdout tty16 tty25 tty34 tty43 tty52 tty61 ttyS11 ttyS20 ttyS3 urandom vcsa1
ecryptfs hvc6 loop5 oldmem ram12 ram8 tty tty17 tty26 tty35 tty44 tty53 tty62 ttyS12 ttyS21 ttyS30 usbmon0 vcsa2
As you can see, there is no /dev/sdb or /dev/sdc. The EBS-backed drive is /dev/xvda1, and the currently mounted ephemeral/instance drive is /dev/xvdb, but there's no second ephemeral drive to mount that I can see.
df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 857M 6.8G 12% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
none 0 0 0 - /sys/fs/fuse/connections
none 0 0 0 - /sys/kernel/debug
none 0 0 0 - /sys/kernel/security
udev 3.7G 8.0K 3.7G 1% /dev
devpts 0 0 0 - /dev/pts
tmpfs 1.5G 156K 1.5G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 3.7G 0 3.7G 0% /run/shm
/dev/xvdb 414G 199M 393G 1% /mnt
$ mount
/dev/xvda1 on / type ext4 (rw)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/xvdb on /mnt type ext3 (rw,_netdev)
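One way to confirm which ephemeral devices were actually mapped at launch (a sketch; the output depends on the launch-time block device mapping, and this only works from inside the instance):

```shell
# Query the instance metadata service for the block device mapping;
# each listed ephemeralN entry corresponds to one attached instance-store disk.
curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/
```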
Does anyone know why there isn't another drive I can mount and then stitch into a full 850 GB disk? Or what it's labeled, for that matter?
This is my first time putting an m1.large together, and the different drive names are throwing me for a loop, making me think I've forgotten something.
Is this a change in 12.04 that I missed?
Thanks for any help!
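In case it helps frame the question, here is a sketch of what I gather is required — assuming (as I've read) that only ephemeral0 is mapped by default on many AMIs, so ephemeral1 must be requested at launch. The AMI ID is a placeholder; device names follow what Ubuntu 12.04 shows (sdb/sdc are renamed to xvdb/xvdc by the kernel).

```shell
# Launch with BOTH ephemeral disks mapped; this cannot be added later.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type m1.large \
    --block-device-mappings \
    '[{"DeviceName":"/dev/sdb","VirtualName":"ephemeral0"},
      {"DeviceName":"/dev/sdc","VirtualName":"ephemeral1"}]'

# Then, on the instance, stitch the two ~420 GB disks into one RAID 0 array:
sudo umount /mnt                       # xvdb is auto-mounted there
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdb /dev/xvdc
sudo mkfs.ext4 /dev/md0                # new filesystem on the striped array
sudo mount /dev/md0 /mnt
```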
What's the difference between the two? It seems that with an instance store the root drive is ephemeral (lost on termination), whereas with EBS it isn't. But if you're not terminating, does it matter? Could someone compare EBS-backed with instance-store-backed instances for the case where the instance is never terminated (say, with termination protection enabled)? What are the practical differences?