I have an NTFS partition with a folder that holds over 5 million directories. Each directory contains more directories and then files.
I am moving to an SSD and need to copy all of this data over to the new drive, but I am running into issues.
Directories like this will crash Windows Explorer, so I didn't even try it.
My first attempt was robocopy, which has worked well for me in the past. But when I tried the copy it just never started, even after leaving it for days. I think it was trying to index everything before starting the copy.
How do you copy directories of this size?
I have never seen robocopy fail like this, but if it is failing because of size/pre-indexing, then why not write your own script to do the copying? I would write it in Perl and just do the copy on a per-file basis, recursing through the tree. You could add checks to see if the file already exists with the same time stamp, etc., as in the sketch below.
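A minimal sketch of that idea, in Python rather than Perl purely for illustration; the source and destination roots are placeholders:

    import os, shutil

    SRC = r"D:\bigtree"   # placeholder: root of the huge tree
    DST = r"E:\bigtree"   # placeholder: destination on the SSD

    for dirpath, dirnames, filenames in os.walk(SRC):
        rel = os.path.relpath(dirpath, SRC)
        target_dir = os.path.normpath(os.path.join(DST, rel))
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src_file = os.path.join(dirpath, name)
            dst_file = os.path.join(target_dir, name)
            # skip files already copied with the same size and timestamp
            if os.path.exists(dst_file):
                s, d = os.stat(src_file), os.stat(dst_file)
                if s.st_size == d.st_size and int(s.st_mtime) == int(d.st_mtime):
                    continue
            shutil.copy2(src_file, dst_file)  # copy2 preserves timestamps

Because os.walk only reads one directory at a time, nothing is enumerated up front, and re-running the script simply skips files that already match.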
You could also check out RichCopy, which is multi-threaded.
As some other answers have said, I've never seen RoboCopy fail like this. AFAIK it doesn't do any indexing up front; it just starts with the first directory and gets on with the job.
Do you have the latest version of RoboCopy? There are some old copies floating around, dating back to the NT days, that can have problems with larger copies.
FWIW, here's the version header from my installation for comparison:
Not sure what else is on your partition, but you could boot a Linux live CD (like the GParted live CD) and just copy the entire partition en masse to the new drive. As a bonus, if you want to change the partition size while you're at it, GParted will do that for you as well. Then delete whatever you didn't want to copy over. :)
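If you go that route, the GParted GUI can do the copy and resize itself; a rough command-line equivalent from the live environment is ntfsclone. The device names below are placeholders, so check them with lsblk or fdisk -l before running anything:

    # placeholders: /dev/sda1 = old NTFS partition, /dev/sdb1 = new partition on the SSD
    ntfsclone --overwrite /dev/sdb1 /dev/sda1
    # grow the filesystem afterwards if the new partition is bigger
    ntfsresize /dev/sdb1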
It is possible you are running into path-length problems with your directory structure: NTFS allows longer paths than either Explorer or CMD can access, which takes a bit of creativity to handle. Generally such paths are created by programs that write files without going through either interface, or through a share mapped a directory or three down, which keeps the effective path short enough to work.
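One common way around that (a hedged example; the drive letter and path are placeholders) is to substitute a drive letter for the long prefix with subst:

    rem X: and the path are placeholders - point this at the top folder of the big tree
    subst X: D:\path\to\bigtree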
Run that at the top of your directory structure to chop off the top of said structure and hopefully make your paths short enough to be visible to everything, then use whatever copy tool you need in place of xcopy.
You probably need to use something multi-threaded like RichCopy; it lets you specify how many threads to use and how deep to dig per thread. I haven't tried it on 5 million directories, but I've used it on thousands and it worked far better than robocopy. Another copy utility in my bag o' tricks is XXCopy. It's not as fast, but it has some pretty cool directory-cloning options, and since it's command-line I don't think it will even try to enumerate the structure first; it should just pump it through.
Can you back it up using Windows NTBackup or a similar utility? Typically when I'm moving large amounts of data, I back up from one device and then restore to the other.
Maybe try good old xcopy.
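For example (the source and destination paths are placeholders; /E includes empty subdirectories, /C keeps going after errors, /H grabs hidden and system files, /K keeps attributes, /Y suppresses overwrite prompts):

    xcopy D:\bigtree E:\bigtree /E /C /H /K /Y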
I've never used it and can't vouch for it, but people say good things about TeraCopy.
Not a real answer, more a suggested line of attack: the *nix way to do this would be

    tar cf - . | (cd /path/to/destination && tar xvf -)

essentially asking tar to create an archive and pipe it straight into a second tar that extracts it somewhere else. Maybe this can be done on Windows in a similar fashion as well.

I use FastCopy (http://www.ipmsg.org/tools/fastcopy.html.en). It's been the industry-standard open-source copier for 10+ years now. I wouldn't use anything else.