I am doing a dd on two identical drives with this command:
dd if=/dev/sda of=/dev/sdb bs=4096
Both hard drives are the exact same model number, and both have 1TB of storage space. /dev/sda uses a blocksize of 4096. /dev/sda is a local drive and /dev/sdb is a remote caddy. I might be able to use the following protocols:
- USB2.0 HighSpeed (Currently the plan)
- Gigabit Over-The-Network clone (Really do not want to even try this)
- USB3.0 (If I find my other drive caddy)
- eSATA (If I find/buy a cable)
- SATA (If I find/buy a cable, gotta love laptop CD drives)
Is there a way to run this drive copy that takes less than 96 hours? I am open to using tools other than dd.
I need to clone the following partitions (including UUIDs):
- Fat32 EFI Partition (*)
- NTFS Windows Partition (*)
- HFS+ OSX Partition
- EXT4 Ubuntu Partition (*)
- Swap Partition (*)
* Supported by Clonezilla
I have tried Clonezilla (and it was MUCH faster), but it does not support HFS+ smart copying, which I need. Maybe the newest version supports this?
When I made my first clone, I did all of the partitions except HFS+ and it went very quickly. (No more than 3 hours total)
In my experience, I don't think there is anything faster on the command line than dd. Adjusting the bs parameter can increase the speed. For example, I have 2 HDDs that I know have a read/write speed greater than 100 MB/s, so I use a correspondingly large block size.
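A sketch of that command (bs=100M is an assumed value, sized to roughly match the drives' throughput):

dd if=/dev/sda of=/dev/sdb bs=100M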
There is also pv (needs to be installed first), which checks for the fastest speed on both drives and then proceeds with the cloning. It has to be run as root, of course.
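One way to invoke it (a sketch; pv reads the source device while the shell redirects its output to the destination, and pv reports progress on the terminal):

pv /dev/sda > /dev/sdb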
With pv I got 156 MB/s. The nice thing about pv, apart from the speed, is that it shows the progress, current speed, time elapsed, and ETA. In regards to HFS+ I would not know; I am just trying to help on the "speed" part. With pv or a very optimized bs parameter, you can do a 4 TB drive in less than 7 hours (6 hours 50 minutes at a current speed of 150 MB/s).

I did a couple of tests with the connection types you were using and others I had available, on an Asus Z87 Pro and an Intel DZ68DP. Keep in mind that the theoretical speeds quoted for many transfer interfaces (raw speeds) are just that: theory. Real tests revealed they reach between 40% and 80% of that raw speed, and the numbers can change depending on the device used, connection type, motherboard, connecting cable, filesystem type, and more. With that in mind, I only tested write speed to the device (read is typically higher).
To copy a partition wholesale, use cat instead of dd.
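For instance, with the device names from the question (cat streams the whole device and chooses its own buffer size; use /dev/sda1 > /dev/sdb1 for a single partition):

cat /dev/sda > /dev/sdb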
I ran benchmarks a while ago, copying a large file rather than a partition, between two disks (on the same disk, relative timings are different). The conclusion from this benchmark is that the choice of block size for dd matters (but not that much), and cat automatically finds the best way to make a fast copy: dd can only slow you down. With a small block size, dd wastes time making lots of tiny reads and writes. With a large block size, one disk remains idle while the other is reading or writing. The optimal rate is achieved when one disk reads while the other disk writes.

To copy a partition, it may be faster to copy the files with cp -a. This depends on how many files there are and how much of the filesystem is free space. Copying files has an overhead that's roughly proportional to the number of files, but on the other hand copying free space wastes time.

The maximum data rate for USB2 is a little under 50 MB/s, which works out to 6–7 hours to transfer 1TB. This assumes a hard disk that's fast enough to saturate the USB bus; I think the faster 7200 rpm drives can do it, but 5900 rpm drives might not be that fast (maybe they are for linear writes?).
If either disk is in use in parallel, this can slow down the copy considerably as the disk heads will need to move around.
The problem is your connection type and block size. For the fastest results, your block size should be half the lowest write speed you typically receive. This gives you a safe margin while still allowing a large number; of course, you need to have enough RAM to hold the data too.

USB 2.0 full speed is 12 megabits per second (Mbps); USB 2.0 High Speed is 480 Mbps. That is of course the raw speed: with 8 bits to a byte plus framing overhead, the usable speed in MB/s is roughly the raw figure with the decimal point shifted one place. So, for example, 480 Mbps raw becomes about 48 MB/s usable. Keep in mind that this is the mathematical best; in the real world it will be a bit lower. For USB 2.0 High Speed connections you should expect somewhere around 30-35 MB/s maximum write speed, provided the actual storage device can match or surpass the connection speed.
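For example, applying that rule of thumb to a USB 2.0 link that sustains about 32 MB/s (an illustrative figure) gives a block size of 16M:

dd if=/dev/sda of=/dev/sdb bs=16M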
I agree that the raw speed of a well-tuned dd (or pv, or cat) command is tough to beat, but if there is any problem with the copy (bad sector, power failure, user error, etc.) then you have to start over.

I'd like to suggest ddrescue - a FOSS tool that has all the speed of dd, but works around disk errors and can resume at a later point if there is a failure.
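A minimal sketch, using the question's device names (the third argument is a mapfile that records progress, which is what lets an interrupted copy resume):

ddrescue /dev/sda /dev/sdb rescue.map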
I'm moving Windows 7 from an HDD to an SSD and found this and some other answers... Something I learned which might help others: in my case, the source drive is bigger, otherwise I would have worked at the /dev/sda -> /dev/sdb device level.
Win7 and its 3 partitions... I used a Xubuntu 14.04 live CD on a USB stick, popped out the Windows computer's DVD drive, put the SSD in its place, installed partclone, and tried this:
partclone.ntfs -b -N -s /dev/sda3 -o /dev/sdb3
Partclone balked because the NTFS needed chkdsk run in Windows, so a quick fix got partclone happy:
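One plausible fix (an assumption on my part, assuming the ntfs-3g tools are installed) is ntfsfix, which repairs common NTFS inconsistencies and schedules a check for the next Windows boot:

ntfsfix /dev/sda3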
All commands were run as root. Partclone's ncurses UI (the -N option) said the transfer was 7 GB/min and ended up at 5 GB/min, which equates to 83 MB/s. The great part is that partclone doesn't copy unused space, which made the clone remarkably fast.
Additional potential gotchas:
If the drive you are transferring to was previously used, it might have remnants of a GPT. Windows 7 factory installs usually have msdos/MBR partition tables, so you'll need to remove the GPT fragments from the destination drive. This Unix & Linux Q&A helped me with that: you have to use gdisk on the device, press x then z, and answer yes to zap the GPT data, making sure you KEEP the MBR.

And don't forget: if you don't do a device-level dd, you'll need to copy the MBR using
dd if=/dev/sdb of=/dev/sda bs=446 count=1
where sdb is the source (old) drive and sda is the destination (new) drive. (source)
I recently created an image of a 100 GB partition (HDD) and wrote it to a new SSD.
Here's a tip that can drastically speed up the process :)
Split the file into smaller parts (the bigger the file is, the slower it works).
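A sketch of one way to do this (the source partition and the 1 GB chunk size are assumptions; split's -d and -a 3 flags produce the numbered maindisk.img000-style names used below):

dd if=/dev/sda1 bs=4M | split -d -a 3 -b 1G - maindisk.img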
During the process, you can check the speed in a separate terminal.
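One way to do that (an assumption; GNU dd prints its current I/O statistics to stderr when it receives SIGUSR1):

kill -USR1 $(pgrep -x dd)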
Then, when you have a directory full of result files (maindisk.img000, maindisk.img001, and so on...), use cat
to 'burn' the image onto the new partition of the SSD (the partition must be the same size as the old one).
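For example (a sketch; /dev/sdb1 stands in for the target partition, and the shell glob expands the parts in order):

cat maindisk.img* > /dev/sdb1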
For me it worked a loooot faster than the usual way (without splitting). The average speed of creating the image was ~13 MB/s; when I used the 'normal' way, it started at ~15 MB/s and then decreased to ~1 MB/s.
For anyone finding this thread: it is much easier and faster to just use a tool designed for data recovery, like ddrescue. It tries to rescue the good parts first in case of read errors, and you can interrupt the rescue at any time and resume it later at the same point.
Run it twice:
First round: copy every block without read errors and log the errors to rescue.log.
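A sketch of this pass (device names assumed; -f is required to overwrite a block device, and -n skips the slow scraping of bad areas):

ddrescue -f -n /dev/sda /dev/sdb rescue.log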
Second round: copy only the bad blocks and try 3 times to read from the source before giving up.
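Again as a sketch (-d uses direct disc access for the source, -r3 retries each bad sector up to 3 times; the same mapfile tells ddrescue which blocks remain):

ddrescue -d -f -r3 /dev/sda /dev/sdb rescue.log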
Now you can mount the new drive and check the file system for corruption.
More info:
https://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html
I managed to get 430 MB/s when I was making an image from a SATA SSD to an NVMe SSD.
I'd recommend putting the input/read file or disk on SATA to boost read speeds. USB 2.0 High Speed is fine on the write side: I got average speeds of 33816 kB/s with ddrescue reading from SATA and writing over USB 2.0, compared to 2014 kB/s when the setup was reading over USB 2.0 and writing to SATA.
Use a different block size. It's the amount of data that dd reads at a time. If it reads too little, a greater share of the time is spent on program logic; if it reads too much, time is spent moving the large buffers around. To measure the speed at different block sizes, use a bash script along the lines of the sketch below, setting:
- $dev to the device
- cbtotal to be at least 5x your expected read speed
The result may be biased towards a larger size due to the disk reading ahead - that's why it's important to set cbtotal large enough.
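A minimal sketch of such a script (the specific block sizes tested and the use of iflag=direct are assumptions):

#!/bin/bash
# Time reads from $dev at several block sizes, cbtotal bytes per test.
dev=/dev/sda                          # the device to test
cbtotal=$((5 * 1024 * 1024 * 1024))   # 5 GiB; at least 5x the expected read speed

for cbblock in 4096 65536 1048576 16777216 134217728; do
    count=$((cbtotal / cbblock))
    echo "bs=$cbblock:"
    # iflag=direct bypasses the page cache so repeat runs aren't served from RAM;
    # dd prints the achieved rate on the last line of its stderr output.
    dd if="$dev" of=/dev/null bs=$cbblock count=$count iflag=direct 2>&1 | tail -n 1
done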