it simply takes too long (one exec of rm per file).
This one is much more efficient: it passes as many filenames to each rm invocation as will fit, then runs rm again with the next batch of filenames... rm may end up being called only 2 or 3 times.
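The slow and fast forms being contrasted look roughly like this (the path and the -type f test are only illustrative):

find . -type f -exec rm {} \;   # slow: one rm process per file
find . -type f -exec rm {} +    # fast: as many filenames per rm as will fit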
In the event you cannot remove the directory, you can always use find.
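For example, something along these lines (the -maxdepth 1 and -type f restrictions are one plausible way to match the description; adjust as needed):

find . -maxdepth 1 -type f -exec rm -f {} +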
That will delete all files in the current directory, and only the current directory (not subdirectories).
Both of these will get round the problem; there is an analysis of the respective performance of each technique over here. The problem stems from bash expanding "*" into every single item in the directory, whereas both solutions work through each file in turn instead; two commands in that style are sketched below.
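For instance (illustrative only; the point is that each file is handled individually rather than passed to rm as one enormous argument list):

for f in ./*; do rm -f "$f"; done              # shell loop: one rm per file
find . -maxdepth 1 -type f -exec rm -f {} \;   # find: one rm per file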
I was able to do this by backing up one level:
cd ..
And running:
rm -rf directory_name
And then re-creating the directory.
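Put together, with directory_name standing in for the directory that was too full to clear in place:

cd .. && rm -rf directory_name && mkdir directory_name   # recreate it afterwards; restore ownership/permissions if needed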
All these find invocations are very nice but I seldom remember exactly the nomenclature needed when I'm in a hurry: instead I use ls. As someone mentions, ls . would work but I prefer ls -1 as in:
ls -1 | xargs -n 100 rm -rf
The -n xxx figure is pretty safe to play around with: exceeding the maximum will either be auto-corrected (if the size limit is exceeded; see -s) or, if the per-command argument limit is exceeded, the failure will usually be rather obvious.
It should be noted that grep is handy to insert in the middle of this chain when you only want to delete a subset of the files in a large directory and, for whatever reason, don't want to use find.
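For example, to delete only files whose names match a pattern (the sess_ prefix here is purely illustrative):

ls -1 | grep '^sess_' | xargs -n 100 rm -rf   # only names beginning with sess_ are removed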
This answer assumes you are using the GNU core utilities for your ls, xargs, etc.
You can use the -exec ... + option to find, which will try to run rm as few times as possible; that might be faster.

Here's a version for deleting a large number of files when the system needs to remain responsive. It works by issuing the work in small batches (100 files by default) and waiting a bit for other jobs to finish; a sketch of the idea follows.
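The script itself isn't reproduced here; a minimal sketch of that batching approach (the batch size, pause length, and progress line are illustrative choices, not the original script) could look like this:

#!/bin/bash
# Sketch only: delete plain files from a directory in small batches,
# pausing between batches so other processes stay responsive.
dir=${1:-.}      # directory to clean out (non-recursive)
batch=100        # files handed to each rm invocation
total=$(find "$dir" -maxdepth 1 -type f | wc -l)
removed=0
while :; do
    # grab the next batch of filenames (assumes no newlines in names)
    mapfile -t files < <(find "$dir" -maxdepth 1 -type f | head -n "$batch")
    [ "${#files[@]}" -eq 0 ] && break
    rm -f -- "${files[@]}"
    removed=$((removed + ${#files[@]}))
    echo "removed $removed/$total ($((removed * 100 / total))%)"
    sleep 0.5    # brief pause so other jobs get a turn
done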
Worked brilliantly for deleting over half a million files from a single directory on ext3. It prints the percentage done as a little bonus.
Solves "Argument list too long" or "cannot allocate memory" errors.
This did the trick on 220,000+ files in a session folder...
Advantage: instantly starts removing files
Screenshot of the files being removed (all files removed in ~15 min).
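The original command isn't quoted here; a pipeline matching the flag notes below would be, for example (a reconstruction, not a verbatim quote):

ls -f | xargs rm -v -f   # reconstruction; ls -f also lists "." and "..", which rm simply refuses and skips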
-f (after ls) keeps ls from pre-sorting the listing
-v (after rm) displays each file as it is removed
-f (after rm) forces removal of write-protected files without prompting
Tip: Rename the folder first (e.g. session to session_old) to keep additional autogenerated files from being added while you are deleting. You can recreate the original directory manually if it isn't recreated automatically, as it wasn't in my case.
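Following the session example above, the rename-then-recreate step would be:

mv session session_old && mkdir session   # new autogenerated files now land in the fresh directory
# ...then clear out session_old at leisure with any of the methods above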