I have a Linux server running on a VMware virtual machine, with 4 virtual hard drives. After the box had run for a month, I added 2 of the 4 hard drives in the vSphere client because I needed more space. I did this a few weeks ago, then was pulled onto another project before creating the file systems and setting up mounts. Now I do not know which drive is which within Linux. I have /dev/sda, /dev/sda1, /dev/sda2, and /dev/sdb.
How do I determine which drives have existing data and which are new? Alternatively, how do I remove the drives and start over? (I know how to remove the drives in the vSphere client, but not how to remove the references to them in Linux.)
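If you do decide to start over, the kernel can be told to forget a stale disk through sysfs after the virtual disk has been removed in vSphere. A minimal sketch; sdX is a placeholder name you must replace with the real device, and the write requires root:

```shell
#!/bin/sh
# Placeholder device name; replace X with the real letter (e.g. sdb).
DISK=sdX

# Writing 1 to this sysfs node makes the kernel drop its reference to the
# device. Remove the virtual disk in vSphere first, or a rescan re-adds it.
if [ -w "/sys/block/$DISK/device/delete" ]; then
    echo 1 > "/sys/block/$DISK/device/delete"
else
    echo "no writable delete node for $DISK (wrong name, or not root?)"
fi
```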
Here are the results of dmesg | grep sd:
[ 1.361162] sd 2:0:0:0: [sda] 16777216 512-byte logical blocks: (8.58 GB/8.00 GiB)
[ 1.361205] sd 2:0:0:0: [sda] Write Protect is off
[ 1.361210] sd 2:0:0:0: [sda] Mode Sense: 61 00 00 00
[ 1.361253] sd 2:0:0:0: [sda] Cache data unavailable
[ 1.361257] sd 2:0:0:0: [sda] Assuming drive cache: write through
[ 1.363223] sd 2:0:0:0: Attached scsi generic sg1 type 0
[ 1.363398] sda: sda1 sda2
[ 1.363788] sd 2:0:0:0: [sda] Attached SCSI disk
[ 1.364425] sd 2:0:1:0: [sdb] 1572864000 512-byte logical blocks: (805 GB/750 GiB)
[ 1.364466] sd 2:0:1:0: [sdb] Write Protect is off
[ 1.364471] sd 2:0:1:0: [sdb] Mode Sense: 61 00 00 00
[ 1.364512] sd 2:0:1:0: [sdb] Cache data unavailable
[ 1.364515] sd 2:0:1:0: [sdb] Assuming drive cache: write through
[ 1.370673] sd 2:0:1:0: Attached scsi generic sg2 type 0
[ 1.405886] sdb: unknown partition table
[ 1.406228] sd 2:0:1:0: [sdb] Attached SCSI disk
[ 4.493214] Installing knfsd (copyright (C) 1996 [email protected]).
[ 4.493849] SELinux: initialized (dev nfsd, type nfsd), uses genfs_contexts
[ 5.933636] EXT4-fs (sdb): mounted filesystem with ordered data mode. Opts: (null)
[ 5.933649] SELinux: initialized (dev sdb, type ext4), uses xattr
[ 6.099670] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
[ 6.108488] SELinux: initialized (dev sda1, type ext4), uses xattr
Output from fdisk -l:
Disk /dev/sda: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000dfc09
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 16777215 7875584 8e Linux LVM
Disk /dev/sdb: 750 GiB, 805306368000 bytes, 1572864000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/fedora_dataserv-swap: 820 MiB, 859832320 bytes, 1679360 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/fedora_dataserv-root: 6.7 GiB, 7201619968 bytes, 14065664 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
From the information you provided, you have two VM disks:
/dev/sda: 8 GB with two partitions, /dev/sda1 and /dev/sda2
/dev/sdb: 750 GB with no partition table, which should be the one you newly added.
Your fdisk -l output shows that you have created an LVM volume group called fedora_dataserv, and judging by the disk space in use, only the /dev/sda disk is part of it.
You can refer to the answer I posted before, changing the value deb-web138 to fedora_dataserv, in order to increase the space you can use.
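For reference, the usual LVM growth sequence looks like this. This is only a sketch under the assumptions visible in the question (new empty disk /dev/sdb, volume group fedora_dataserv, ext4 root LV); it defaults to printing the commands rather than running them:

```shell
#!/bin/sh
# Dry-run wrapper: prints each command unless DRY_RUN=0 is set on the real box.
run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run pvcreate /dev/sdb                        # turn the empty disk into a PV
run vgextend fedora_dataserv /dev/sdb        # add the PV to the volume group
run lvextend -l +100%FREE /dev/mapper/fedora_dataserv-root  # grow the root LV
run resize2fs /dev/mapper/fedora_dataserv-root              # grow ext4 to match
```

Set DRY_RUN=0 only after double-checking the disk and VG names on your system.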
If you simply check the mounted filesystems (for example with df or mount), you will see which folder is mounted on which disk.
Any of these commands will list the attached disks:
lsscsi
dmesg | grep sd
cat /proc/scsi/scsi
fdisk -l
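To single out the disk that has no partitions programmatically, you can parse lsblk output. A sketch using canned input that mirrors the question's layout (on the real box you would pipe lsblk -rno NAME,TYPE instead of the sample text):

```shell
#!/bin/sh
# Canned `lsblk -rno NAME,TYPE`-style output matching the question's disks.
sample='sda disk
sda1 part
sda2 part
sdb disk'

# A disk with no "part" lines under it is unpartitioned -- likely the new one.
# substr(...,1,3) assumes short sdX-style names, as in this question.
echo "$sample" | awk '
    $2 == "disk" { disks[$1] = 1 }                   # remember every disk
    $2 == "part" { delete disks[substr($1, 1, 3)] }  # its parent has partitions
    END { for (d in disks) print d }'
# prints: sdb
```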
sda is the drive connected to the first logical port in your VM's configuration, and sdb is the drive connected to the second. sda1 and sda2 are two partitions on the first drive, while sdb appears to have no partitions (i.e. it is one you added). You can use gparted or, if the disk is set up with LVM, the LVM tools to see how your partitions are laid out.
blkid
will list the drives. You should be able to identify them based on their sizes, partitions, UUIDs, filesystem types, and so on.
lsblk
is also quite useful for a tree overview of the devices, but by default it doesn't show the filesystem type.
Thanks to everyone who answered. Everyone who did so helped me track down the issue, and taught me much!
For some reason, Linux was not recognizing the 2 new drives. (I did not know that until I learned it from the others' answers.)
The final solution was to get Linux to detect the new drives; I then ran fdisk -l to see whether it recognized them, which it did. fdisk -l now shows /dev/sdc and /dev/sdd.
Thanks again to everyone for the help!
/dev/mapper is where mounted LUNs and LVM volumes appear, usually with friendly names.
If your system uses LVM, see man lvm. If you're using mounted LUNs, look into dm-multipath.
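One detail about those friendly names: device-mapper joins the VG and LV names with a hyphen and doubles any literal hyphen inside either name, so fedora_dataserv-root means VG fedora_dataserv, LV root. A small sketch that decodes a mapper name using that rule:

```shell
#!/bin/sh
# Decode /dev/mapper/<vg>-<lv>: literal hyphens are doubled by device-mapper,
# so the lone single hyphen left after hiding "--" is the VG/LV separator.
decode() {
    echo "${1#/dev/mapper/}" | awk '
        { gsub(/--/, SUBSEP)   # hide doubled (literal) hyphens
          sub(/-/, " ")        # the remaining hyphen splits VG from LV
          gsub(SUBSEP, "-")    # restore the literal hyphens
          print }'
}

decode /dev/mapper/fedora_dataserv-root   # prints: fedora_dataserv root
```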