This is Ubuntu server 11.10.
/dev/sdb is not mounted (see outputs below) and is not used by any process that I can see. It's not used for swap either. This is a second IDE drive in the server, connected to the secondary IDE channel and set up in hardware RAID as array 2.
I cannot mount the drive, as I get a complaint that it might already be in use. I ran fdisk, deleted all the previous partitions, and created a single primary one.
root@sargent:/home/harel# fdisk -l /dev/sdb
Disk /dev/sdb: 122.9 GB, 122942324736 bytes
226 heads, 63 sectors/track, 16864 cylinders, total 240121728 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00083711
Device Boot Start End Blocks Id System
/dev/sdb1 2048 240121727 120059840 83 Linux
root@sargent:/home/harel# mkfs -t ext4 /dev/sdb
mke2fs 1.41.14 (22-Dec-2010)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
/dev/sdb is apparently in use by the system; will not make a filesystem here!
root@sargent:/home/harel# cat /proc/swaps
Filename Type Size Used Priority
/dev/sda5 partition 2619388 0 -1
root@sargent:/home/harel# mount
/dev/sda1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
Check your partitioning once again, but without specifying a particular device, so that every disk is listed.
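For example (plain fdisk -l is an assumption from the wording; /proc/mdstat also lists active software arrays):

fdisk -l            # every disk and partition, including any /dev/md* devices
cat /proc/mdstat    # active md (software RAID) arrays, if any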
Then, if you find something like /dev/md0 in the output, it means that you have a software RAID array, and the disk that you're trying to format contains metadata of that array. In this case, stop the array and then clear the superblock of the disk (the array must be stopped before its superblock can be cleared), as sketched below.
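A minimal sketch, assuming the array turned up as /dev/md0 and that mdadm is installed:

mdadm --stop /dev/md0              # stop the array so the kernel releases its member disks
mdadm --zero-superblock /dev/sdb   # then wipe the RAID metadata from the disk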
Now you can work with your drive.
/dev/sdb is in use because there are partitions on it that the OS is aware of.

If you want to create a filesystem on the whole disk (a bad idea: it is rarely done, so it will confuse administrators, and it will make any kind of splitting or resizing difficult), first remove the existing partition with fdisk.

If you want to create a filesystem on the sole partition /dev/sdb1 (this is what you should do, since there is no benefit to using the disk directly), then say what you mean: mkfs /dev/sdb1.
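For example, keeping the ext4 type from the question (the mount point /mnt is only an illustration):

mkfs -t ext4 /dev/sdb1    # the filesystem goes on the partition, not the raw disk
mount /dev/sdb1 /mnt      # then mount it wherever you like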
You get this error message because /dev/sdb has a partition (i.e. /dev/sdb1) and the mkfs call would also overwrite all or part of your partition table. In the worst case, your filesystem wouldn't be usable afterwards; or you 'just' lose a partition table you might still need. Since the partition device files and the actual on-disk partition table should tell the same story, they are arguably 'in use' by the kernel.

Thus, the simple rule is: if you want to create your filesystem on the whole-disk device, make sure all partitions are deleted first.
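For example, one hedged possibility (wipefs ships with util-linux; deleting the partition with fdisk's d command works just as well):

wipefs -a /dev/sdb    # erase the partition-table signature and any other signatures
partprobe /dev/sdb    # ask the kernel to re-read the now-empty table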
Usually, the partitioning tool takes care of notifying the kernel to update its partition device files. But sometimes (e.g. for loopback devices) it may be necessary to remove them explicitly (after the partition table is removed), e.g. via partx -dv mydev and/or kpartx -dv mydev.
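A hypothetical end-to-end example with a disk image on a loop device (disk.img and loop0 are made-up names):

losetup /dev/loop0 disk.img    # attach the image
kpartx -av /dev/loop0          # maps its partitions as /dev/mapper/loop0p1, ...
kpartx -dv /dev/loop0          # later, remove the partition mappings again
losetup -d /dev/loop0          # and detach the image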
Note that a previous kpartx -av mydev may create the partition devices as /dev/mapper/mydev* instead of /dev/mydev*. When they are present, mkfs complains in the same way.

This is an old post, but I encountered the same problem when I tried to create a filesystem on one of my hard drives. The system complained and said the drive was apparently in use by the system.
The commands suggested in the other answers did not yield anything useful. Here's what I did to find the cause of the problem: I went to the /etc directory and searched for any files that mentioned the drive sdc. The first search did not find anything, so I expanded it to include the next layer below, and that did find the offender.
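A hedged reconstruction of that search (the exact commands were not preserved; grep is an assumption, and sdc is the drive from this report):

grep -l sdc /etc/* 2>/dev/null     # top level of /etc only: nothing
grep -rl sdc /etc/ 2>/dev/null     # recurse into subdirectories: the offending files show up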
I knew those files were being used by targetcli (an iSCSI backend), so once I detached that storage and re-ran the mkfs command, it completed without any issue.
Hope this helps.
In my case, sudo umount /dev/sdb worked.

sudo dd if=/dev/urandom of=/dev/sdb bs=1M status=progress

This command deletes the metadata and digital signatures (even those left behind by Intel VROC, Virtual RAID on CPU), but it takes more time depending on the size of the hard disk.
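If only the metadata and signatures need to go rather than every byte, a quicker hedged alternative is to overwrite just the start and end of the disk, where such metadata typically lives:

dd if=/dev/zero of=/dev/sdb bs=1M count=16    # first 16 MiB
dd if=/dev/zero of=/dev/sdb bs=1M count=16 seek=$(( $(blockdev --getsz /dev/sdb) / 2048 - 16 ))    # last 16 MiB (blockdev reports 512-byte sectors)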
Try completely wiping the hard drive first with:
sudo dc3dd wipe=/dev/sdb
Make sure to choose the correct drive.