Long-time lurker here. I just launched a new m1.large instance on EC2, and I need all of the available instance storage on the machine (850 GB) for data processing.
I understand that generally not all of the instance storage comes mounted, and that you have to stitch a couple of drives together (usually /dev/sdb and /dev/sdc), as described in "Amazon ec2 - how to setup?". However, on this instance those drives aren't listed in /dev on my box...
ubuntu@ip-***:/dev$ ls
autofs fd hvc7 loop6 port ram13 ram9 tty0 tty18 tty27 tty36 tty45 tty54 tty63 ttyS13 ttyS22 ttyS31 vcs vcsa3
block full input loop7 ppp ram14 random tty1 tty19 tty28 tty37 tty46 tty55 tty7 ttyS14 ttyS23 ttyS4 vcs1 vcsa4
btrfs-control fuse kmsg loop-control psaux ram15 rfkill tty10 tty2 tty29 tty38 tty47 tty56 tty8 ttyS15 ttyS24 ttyS5 vcs2 vcsa5
char hvc0 log mapper ptmx ram2 shm tty11 tty20 tty3 tty39 tty48 tty57 tty9 ttyS16 ttyS25 ttyS6 vcs3 vcsa6
console hvc1 loop0 mem pts ram3 snapshot tty12 tty21 tty30 tty4 tty49 tty58 ttyprintk ttyS17 ttyS26 ttyS7 vcs4 vga_arbiter
core hvc2 loop1 net ram0 ram4 snd tty13 tty22 tty31 tty40 tty5 tty59 ttyS0 ttyS18 ttyS27 ttyS8 vcs5 xvda1
cpu hvc3 loop2 network_latency ram1 ram5 stderr tty14 tty23 tty32 tty41 tty50 tty6 ttyS1 ttyS19 ttyS28 ttyS9 vcs6 xvdb
cpu_dma_latency hvc4 loop3 network_throughput ram10 ram6 stdin tty15 tty24 tty33 tty42 tty51 tty60 ttyS10 ttyS2 ttyS29 uinput vcsa zero
disk hvc5 loop4 null ram11 ram7 stdout tty16 tty25 tty34 tty43 tty52 tty61 ttyS11 ttyS20 ttyS3 urandom vcsa1
ecryptfs hvc6 loop5 oldmem ram12 ram8 tty tty17 tty26 tty35 tty44 tty53 tty62 ttyS12 ttyS21 ttyS30 usbmon0 vcsa2
As you can see, there is no /dev/sdb or /dev/sdc. The EBS-backed root drive is /dev/xvda1 and the currently mounted ephemeral/instance drive is /dev/xvdb, but there's no second ephemeral drive to mount that I can see.
df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 857M 6.8G 12% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
none 0 0 0 - /sys/fs/fuse/connections
none 0 0 0 - /sys/kernel/debug
none 0 0 0 - /sys/kernel/security
udev 3.7G 8.0K 3.7G 1% /dev
devpts 0 0 0 - /dev/pts
tmpfs 1.5G 156K 1.5G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 3.7G 0 3.7G 0% /run/shm
/dev/xvdb 414G 199M 393G 1% /mnt
$ mount
/dev/xvda1 on / type ext4 (rw)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/xvdb on /mnt type ext3 (rw,_netdev)
Does anyone know why there isn't another drive I can mount and then stitch together into the full 850 GB? Or what it would be labeled, for that matter?
This is also my first time setting up an m1.large, and the different drive names are throwing me for a loop and making me think I've forgotten something.
Is this a change in 12.04 that I missed?
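For reference, the stitching I had in mind once a second ephemeral disk shows up is roughly this (just a sketch; I'm assuming the second disk would appear as /dev/xvdc and that it's fine to rebuild /mnt):
$ sudo umount /mnt
$ # create a single RAID0 array across the two ephemeral disks
$ sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdb /dev/xvdc
$ # put a filesystem on the array and mount it where the ephemeral storage was
$ sudo mkfs.ext4 /dev/md0
$ sudo mount /dev/md0 /mnt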
Thanks for any help!
This can't be done from the GUI; you have to attach the ephemeral drives via the command-line tools when you launch the instance.
The key is the -b option, which tells ec2-run-instances how to set up the block-device mappings. If you had several EBS volumes to attach, you'd declare them there as well. For the instance-local (ephemeral) storage, you need to explicitly declare the mappings when you create the instance. Once you have both drives, you can do with them as you will.
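For example, for an m1.large you could map both ephemeral disks at launch time with something like this (a sketch; the AMI ID, key pair, and security group are placeholders you'd replace with your own):
$ ec2-run-instances ami-xxxxxxxx -t m1.large -k my-keypair -g my-security-group \
    -b "/dev/sdb=ephemeral0" -b "/dev/sdc=ephemeral1"
On the Ubuntu Xen kernels those devices typically show up inside the instance as /dev/xvdb and /dev/xvdc, and from there you can RAID or LVM them into one large volume.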