I am using the following command:
\cp -uf /home/ftpuser1/public_html/ftparea/*.jpg /home/ftpuser2/public_html/ftparea/
And I am getting the error:
-bash: /bin/cp: Argument list too long
I have also tried:
ls /home/ftpuser1/public_html/ftparea/*.jpg | xargs -I {} cp -uf {} /home/ftpuser2/public_html/ftparea/
Still got -bash: /bin/ls: Argument list too long
Any ideas?
*.jpg expands to an argument list longer than the system can handle. Try this instead:
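A sketch of the kind of command meant here, using the paths from the question (-maxdepth 1 assumes you only want files in the top-level directory):

find /home/ftpuser1/public_html/ftparea/ -maxdepth 1 -name '*.jpg' -exec cp -uf {} /home/ftpuser2/public_html/ftparea/ \;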
There is a maximum limit to how long an argument list can be for system commands -- this limit is distro-specific, based on the value of MAX_ARG_PAGES when the kernel is compiled, and cannot be changed without recompiling the kernel. Due to the way globbing is handled by the shell, this will affect most system commands when you use the same argument ("*.jpg"). Since the glob is processed by the shell first and then sent to the command, the command:
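(taking the command from the question as the example)

cp -uf /home/ftpuser1/public_html/ftparea/*.jpg /home/ftpuser2/public_html/ftparea/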
is essentially the same to the shell as if you wrote:
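(the expansion shown schematically; 1.jpg ... n.jpg stand in for your actual file names)

cp -uf /home/ftpuser1/public_html/ftparea/1.jpg /home/ftpuser1/public_html/ftparea/2.jpg ... /home/ftpuser1/public_html/ftparea/n.jpg /home/ftpuser2/public_html/ftparea/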
If you're dealing with a lot of jpegs, this can become unmanageable very quickly. Depending on your naming convention and the number of files you actually have to process, you can run the cp command on a different subset of the directory at a time:
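For example, splitting by the first letter of the file name (the ranges here are only an illustration -- adjust them to however your files are actually named):

cp -uf /home/ftpuser1/public_html/ftparea/[a-m]*.jpg /home/ftpuser2/public_html/ftparea/
cp -uf /home/ftpuser1/public_html/ftparea/[n-z]*.jpg /home/ftpuser2/public_html/ftparea/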
This could work, but exactly how effective it would be is based on how well you can break your file list up into convenient globbable blocks.
Globbable. I like that word.
Some commands, such as find and xargs, can handle large file lists without making painfully sized argument lists.
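For instance, something along these lines (paths taken from the question; adjust as needed):

find /home/ftpuser1/public_html/ftparea/ -name '*.jpg' -exec cp -uf {} /home/ftpuser2/public_html/ftparea/ \;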
The -exec argument will run the remainder of the command line once for each file found by find, replacing the {} with each filename found. Since the cp command is only run on one file at a time, the argument list limit is not an issue.
This may be slow due to having to process each file individually. Using xargs could provide a more efficient solution:
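A sketch of the xargs variant (the -t target-directory option assumes GNU cp; -print0/-0 just keeps unusual file names safe):

find /home/ftpuser1/public_html/ftparea/ -name '*.jpg' -print0 | xargs -0 cp -uf -t /home/ftpuser2/public_html/ftparea/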
xargs can take the full file list provided by find, and break it down into argument lists of manageable sizes and run cp on each of those sublists.
Of course, there's also the possibility of just recompiling your kernel, setting a larger value for MAX_ARG_PAGES. But recompiling a kernel is more work than I'm willing to explain in this answer.

That happens because your wildcard expression (*.jpg) exceeds the command line argument length limit when expanded (probably because you have lots of .jpg files under /home/ftpuser/public_html/ftparea). There are several ways of circumventing that limitation, like using find or xargs. Have a look at this article for more details on how to do that.

As GoldPseudo commented, there is a limit to how many arguments you can pass to a process you're spawning. See his answer for a good description of that parameter.
You can avoid the problem by either not passing the process too many arguments or by reducing the number of arguments you're passing.
A for loop in the shell, a find -exec, and an ls | grep | while-read loop all do the same thing in this situation: one program reads the directory (the shell itself, find, or ls), and a different program that actually does the work takes one argument per execution, iterating through the whole list of files (see the sketches just below).
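Sketches of those three equivalent forms, using rm on the question's directory as the per-file command (the same shape works for cp):

# one rm per file, driven by the shell's own glob
for file in /home/ftpuser1/public_html/ftparea/*.jpg ; do rm "$file" ; done

# one rm per file, driven by find
find /home/ftpuser1/public_html/ftparea -name '*.jpg' -exec rm {} \;

# one rm per file, driven by ls piped through grep into a while-read loop
ls /home/ftpuser1/public_html/ftparea | grep '\.jpg$' | while read file ; do rm "/home/ftpuser1/public_html/ftparea/$file" ; done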
Now, this will be slow because the rm needs to be forked and execed for each file that matches the *.jpg pattern.
This is where xargs comes into play. xargs takes standard input and, for every N lines (for FreeBSD the default is 5000), spawns one program with N arguments. xargs is an optimization of the above loops because you only need to fork and exec roughly 1/N as many programs to iterate over the whole set of files.
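A sketch of the batched form (still using rm to match the examples above; -print0/-0 is only there to keep odd file names safe):

find /home/ftpuser1/public_html/ftparea -name '*.jpg' -print0 | xargs -0 rm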
There is a maximum number of arguments that can be specified to a program; bash expands *.jpg to a lot of arguments to cp. You can solve it by using find, xargs, rsync, etc.
Have a look here about xargs and find
https://stackoverflow.com/questions/143171/how-can-i-use-xargs-to-copy-files-that-have-spaces-and-quotes-in-their-names
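For the rsync route, a sketch with the question's paths (the --include/--exclude pair is meant to limit the copy to top-level .jpg files; check the filter behaviour on your rsync version):

rsync -av --include='*.jpg' --exclude='*' /home/ftpuser1/public_html/ftparea/ /home/ftpuser2/public_html/ftparea/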
The '*' glob is expanding to too many filenames. Use find /home/ftpuser/public_html -name '*.jpg' instead.
It sounds like you have too many *.jpg files in that directory to put them all on the command line at once. You could try feeding them to cp through xargs, as sketched at the end of this answer. You may need to check man xargs for your implementation to see whether the -I switch is correct for your system. Actually, are you really intending to copy those files to the same location where they already are?
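A sketch of that command with the source and destination from the question filled in (the -I {} form runs one cp per file):

find /home/ftpuser1/public_html/ftparea/ -name '*.jpg' | xargs -I {} cp -uf {} /home/ftpuser2/public_html/ftparea/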
Using the + option to find -exec will greatly speed up the operation.

The + option requires {} to be the last argument, so using the -t /your/destination (or --target-directory=/your/destination) option to cp makes it work.

From man find: the '-exec command {} +' variant builds the command line by appending each selected file name at the end, so the total number of invocations of the command is much less than the number of matched files.

Edit: rearranged arguments to cp
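A sketch of the resulting command with the question's paths (both -t and --target-directory assume GNU cp):

find /home/ftpuser1/public_html/ftparea/ -name '*.jpg' -exec cp -uf -t /home/ftpuser2/public_html/ftparea/ {} +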
Go to the parent of the source folder and execute the following:
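Filling in the question's paths, this would be something like (a recursive copy of the whole ftparea directory, which is what the note about subfolders below refers to):

cd /home/ftpuser1/public_html/
cp -R ftparea/ /home/ftpuser2/public_html/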
In this way, if the folder 'ftparea' has subfolders, this might be an unwanted side effect if you want only the '*.jpg' files from it, but if there aren't any subfolders, this approach will definitely be much faster than using find and xargs.