By default, when booting an m1.xlarge EC2 instance from an OpenSolaris image, you're given 1.6 TB of drive space across four ephemeral devices. This is auto-configured as follows:
~# zpool status
        NAME        STATE     READ WRITE CKSUM
        mnt         ONLINE       0     0     0
          c7d1p0    ONLINE       0     0     0
          c7d2p0    ONLINE       0     0     0
          c7d3p0    ONLINE       0     0     0
          c7d4p0    ONLINE       0     0     0
What I'd like is to change this so that, on boot, the disk structure looks something like the following:
        NAME        STATE     READ WRITE CKSUM
        mnt         ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c7d1p0  ONLINE       0     0     0
            c7d2p0  ONLINE       0     0     0
        logs        ONLINE       0     0     0
          c7d3p0    ONLINE       0     0     0
        cache
          c7d4p0    ONLINE       0     0     0
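For reference, I create that layout with something like the following (a sketch: it assumes the default mnt pool can simply be destroyed and rebuilt from the same four devices):

~# zpool destroy mnt    # tear down the auto-configured four-disk stripe
~# zpool create mnt raidz1 c7d1p0 c7d2p0 log c7d3p0 cache c7d4p0    # raidz1 data + slog + L2ARC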
... onto which I will, on boot, load data from an S3 store.
If I create the above structure and then re-image the machine, subsequent boots from this new AMI fail, either silently (terminating before booting successfully) or by becoming unresponsive (once booted, I can't reach the instance via SSH or by any other means). Console output is empty in both cases, except occasionally when there's a complaint about the devices.
Is what I'm trying to achieve possible? I'm assuming I'm simply missing the correct --block-device-mapping argument when using ec2-bundle-image, but there's very little information on Google relating to this subject.
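For the record, I'd expect the invocation to look roughly like this (angle-bracketed values are placeholders, and the ephemeral0..3-to-sdb..sde mapping is my assumption based on the documented m1.xlarge device layout):

~# ec2-bundle-image -i <image-file> -c <cert.pem> -k <pk.pem> -u <account-id> -r x86_64 \
     --block-device-mapping ami=sda,root=/dev/sda1,ephemeral0=sdb,ephemeral1=sdc,ephemeral2=sdd,ephemeral3=sde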