I'm trying to back up 66 gigabytes to a NAS by making a tarball, but "tar -cSf ..." quits with a "memory exhausted" error after gigabyte 62. My Linux machine has a gigabyte of RAM and a gigabyte of swap space.
(edit) I tried it again, starting from about where tar gave up, and it quickly failed again, so it looks like it may be having trouble dealing with a particular special file.
This data is surprisingly resistant to being backed up. rsync is about 4 times slower than tar because the NAS isn't very fast, and it quits partway through with 'connection reset by peer'; 'cp' doesn't work well on the CIFS share because it can't create the special files. Is there a better way?
I don't know why it's running out of memory, but I can suggest trying tar's multi-volume mode, something like the sketch below, which will create one file for every 30 GB. If you ever reach the 120 GB mark, you'd need to add a fourth file (-f piece4.tar).
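A rough sketch of that invocation (the piece names, the 30 GB volume size and /path/to/data are placeholders; -L with a size suffix needs a reasonably recent GNU tar, otherwise give the length in 1024-byte units):

    # Multi-volume archive: tar switches to the next -f file every 30 GB.
    tar -c -M -L 30G \
        -f piece1.tar -f piece2.tar -f piece3.tar \
        /path/to/data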
If this still fails you could try smaller pieces and write a script to generate the command line (because a command line with 80 -f arguments would be a pain to write by hand :-) ).
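Such a script could be as simple as the sketch below, assuming 1 GB pieces and the same placeholder names as above:

    #!/bin/sh
    # Sketch: build one -f argument per 1 GB volume (80 volumes here), then run tar once.
    # piece*.tar and /path/to/data are placeholders.
    args=""
    for i in $(seq 1 80); do
        args="$args -f piece$i.tar"
    done
    # $args is deliberately left unquoted so it splits into separate -f options.
    tar -c -M -L 1G $args /path/to/data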
Try passing the --hard-dereference option. This will disable hard link tracking, which requires memory proportional to the number of inodes being backed up. It might also be interesting to try stracing the tar process while it's attempting to back up the problem file.
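For example (the archive path and source directory are placeholders; attach strace from a second terminal while tar is running):

    # Archive without hard-link tracking, so tar doesn't keep an inode table in memory:
    tar -c --hard-dereference -f /mnt/nas/backup.tar /data

    # From another terminal, watch which file the running tar is working on:
    strace -p "$(pidof tar)" -e trace=file -s 200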
The -S option does some checking for sparse files (those where not all file extents are actually physically allocated on disk), and that could be what's running out of memory. Try running it without the -S (compress it instead if you really want to save space) and see if this fixes the problem.
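For instance, a plain compressed archive with no sparse-file handling (paths are again placeholders):

    # No -S, gzip compression instead:
    tar -czf /mnt/nas/backup.tar.gz /data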