I've got a bizarre-seeming shell issue, with a command in $PATH that the shell (ksh, running on Linux) appears to cowardly refuse to invoke. Without fully qualifying the command, I get:
# mycommand
/bin/ksh: mycommand: not found [No such file or directory]
but the file can be found by which:
# which mycommand
/home/me/mydir/admbin/mycommand
I also explicitly see that directory in $PATH:
# echo $PATH | tr : '\n' | grep adm
/home/me/mydir/admbin
The exe at that location seems normal:
# file /home/me/mydir/admbin/mycommand
/home/me/mydir/admbin/mycommand: setuid setgid ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.6.4, dynamically linked (uses shared libs), not stripped
# ls -l mycommand
-r-sr-s--- 1 me mygroup 97892 2012-04-11 18:01 mycommand
and if I run it explicitly using a fully qualified path:
# /home/me/mydir/admbin/mycommand
I see the expected output. Something is definitely confusing the shell here, but I'm at a loss as to what it could be.
EDIT: I found what looked like a similar question: Binary won't execute when run with a path. Eg >./program won't work but >program works fine
I also tested for more than one such command in my $PATH, but found only one:
# for i in `echo $PATH | tr : '\n'` ; do test -e $i/mycommand && echo $i/mycommand ; done
/home/me/mydir/admbin/mycommand
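One more diagnostic worth capturing next time (my suggestion, not part of the original session): the POSIX command -V, or ksh's native whence -v, reports exactly what the shell would run for a name, including aliases, functions, builtins, and tracked (hashed) paths:

```shell
# Ask the shell what it would actually execute for a name.
# "command -V" is POSIX; "whence -v" is the ksh-native spelling.
command -V ls
# The exact wording varies by shell, e.g. "ls is a tracked alias
# for /bin/ls" in ksh, or "ls is /bin/ls" elsewhere.
```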
EDIT2:
As of this morning, the problem has vanished, and I'm now able to execute the executable.
That could be taken as validating the suggestion to log out and back in, but I'd done that last night without success. That logout/login should also have been equivalent to running the suggested hash -r command (which, fwiw, also appears to be a ksh builtin, not just a bash builtin).
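For reference, the stale-hash failure mode that hash -r addresses can be reproduced deliberately. This is a contrived sketch using throwaway directories, not the original setup; some shells (e.g. newer bash) quietly re-search $PATH when a hashed path disappears, so the failing step is hedged:

```shell
# Reproduce a stale command-hash entry, then clear it with hash -r.
mkdir -p /tmp/hashdemo/a /tmp/hashdemo/b
printf '#!/bin/sh\necho hello\n' > /tmp/hashdemo/a/mycmd
chmod +x /tmp/hashdemo/a/mycmd
PATH=/tmp/hashdemo/a:/tmp/hashdemo/b:$PATH

mycmd                              # shell finds and caches .../a/mycmd
mv /tmp/hashdemo/a/mycmd /tmp/hashdemo/b/mycmd
mycmd || echo "stale hash entry"   # some shells now fail: cached path is gone
hash -r                            # flush the cached lookups
mycmd                              # found again via a fresh $PATH search
```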
In response to some of the answers:
This is an executable not a script (see the ELF reference in the file command output).
I don't think that an strace would have helped: it ends up forcing the command to execute fully qualified. I suppose I could have attached strace to the current shell, but since I can no longer repro, there's no point in trying that.
there were no semicolons in the $PATH. Since I can no longer repro, I won't clutter up this question with the full $PATH.
trying another shell (i.e. bash) would have been something I'd also have tried, as was suggested. With the problem gone, I now won't know if that would have helped.
It was also suggested that I check the directory permissions. Doing so for each of the directories leading up to this one, I see:
# ls -ld $HOME $HOME/mydir $HOME/mydir/admbin
drwxr-xr-x 10 me root 4096 2012-04-12 12:20 /home/me
drwxrwsr-t 22 me mygroup 4096 2012-04-12 12:04 /home/me/mydir
drwxr-sr-x 2 me mygroup 4096 2012-04-12 12:04 /home/me/mydir/admbin
The $HOME directory ownership is messed up (the group shouldn't be root). That could cause other issues, but I don't see how it would have caused this one.
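For the record, a quick way to audit this (my own sketch, not from the original session) is to walk each directory component of the path and check the search (x) bit, since a missing x anywhere along the way makes the file unreachable even when the file itself is fine:

```shell
# Check search permission on every directory leading to a target file.
target=/home/me/mydir/admbin/mycommand   # path from the question
d=$(dirname "$target")
while :; do
  [ -x "$d" ] || echo "no search (x) permission on: $d"
  [ "$d" = "/" ] && break
  d=$(dirname "$d")
done
```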
You probably need to update your shell's cache of items in your $PATH using hash -r.

Also, in such cases, check what happens when the program is called by passing its executable as an argument to the dynamic linker (it might refuse to do so while the file is setuid/setgid on some systems).
ldd(1) output of both cases might also be revealing. "No such file or directory" on an executable file really means that the dynamic linker specified in the executable cannot be found (imagine the executable having an ELFin form of #!/lib/ld-linux.what.so.ever inside).

This behaviour dumbfounded the people who were there to witness the end of the libc5 era, and now occasionally dumbfounds people in the era of mixed i386/amd64 systems, with their different means of supporting two library sets in the wild.
Relative RPATH in executable vs $PWD?
PS: the other question is related to Mac OS X, which probably uses dyld and not the libc-provided linker. A very different kind of animal.
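The "ELFin #!" this answer alludes to can be inspected directly. A sketch using standard binutils/glibc tools, run against the question's binary; the linker path shown is the usual x86-64 one and may differ per system:

```shell
bin=/home/me/mydir/admbin/mycommand   # the problem binary from the question

# Show the ELF interpreter (PT_INTERP) -- the binary's "shebang":
readelf -l "$bin" | grep -i interpreter

# List shared-library dependencies; "not found" entries are suspects:
ldd "$bin"

# Invoke the binary through the dynamic linker explicitly
# (may be refused for setuid/setgid binaries on some systems):
/lib64/ld-linux-x86-64.so.2 "$bin"
```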
Alright, I don't have an answer, but I did prove out a few things and think I may add to this later.

So, by all accounts, I'm still just as confused. Just for grins, and on the off chance this is a shell-related bug, can you try it with a different shell?
I'm guessing that your script doesn't have a valid interpreter after the #!. For instance, on some older SCO systems, scripts with #!/bin/bash don't work because bash REALLY lives in /usr/bin/bash. Dumb, but hey, SCO is almost dead for a reason, no?
Check your shebang line and make sure it points to a real binary/script.
Edit: It doesn't say whether it's a script or a binary, but assuming your ls -l output is correct, you probably don't have a ~96 KB script... so this is probably a binary, meaning my answer is totally incorrect.
Have you tried logging out and back in? I know that if I use a binary that's in /usr/bin and then install a /usr/local/bin version from source, the system still tries to execute the original one until I log out and back in.
No answer, just a bunch of thoughts.

My guesses:

You had an alias named mycommand.

You had a function named mycommand.

Next time you have this problem, try running command -V mycommand to see what kind of command the shell believes mycommand is.

I had exactly the same problem, and failed to find an answer because the original poster's problem resolved itself. But that resolution didn't apply to me, and I finally managed to track the problem down. So I'm adding the following as an answer to the original post.
The symptoms I faced were the following. There is a script (myscript.pl) in the /my/home directory. Now trying to run it:
I verified the file's permissions (the execute flag is set) and $PATH (though there should not be an issue there).
So then I try (after verifying the executable flag is set on the script):
Hmmm... So perhaps the script is not calling the right scripting language (perl, in this case). The top of the script has the correct magic:
and indeed /usr/bin/perl exists and works. So the following call works correctly.
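The key mechanic here (shown with a tiny throwaway script rather than the original myscript.pl) is that an interpreter only needs read access to a script, while direct execution requires the kernel to exec() the file. A noexec mount blocks the latter much as a missing exec bit does, which is why perl myscript.pl can succeed when ./myscript.pl fails:

```shell
# Direct execution needs exec permission (or an exec-mounted filesystem);
# handing the file to its interpreter only needs read permission.
printf '#!/bin/sh\necho hello\n' > /tmp/noexec_demo.sh
chmod -x /tmp/noexec_demo.sh
/tmp/noexec_demo.sh 2>/dev/null || echo "direct execution refused"
sh /tmp/noexec_demo.sh    # works: sh merely reads the file
```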
Tab auto-complete would not show the file (neither in tcsh nor in bash).

This really threw me off. Then I remembered that a few months ago my hard disk had crashed, and the young system administrator in my lab re-installed the system. I thought he might have screwed up the permissions on the partitions. And indeed, in /etc/fstab, the exec permission was missing from the entry for that partition!

I fixed this by changing /etc/fstab and remounting. This solved the problem completely. My guess is that a similar thing might have happened with the original poster's problem, except that there the permission issue was intermittent (e.g., there was a temporary change that a reboot, if one took place, might have resolved).
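The fix this answer describes would look roughly like the following; the device, mountpoint, and filesystem type are placeholders, not the poster's actual /etc/fstab:

```shell
# Hypothetical /etc/fstab entry before (note the noexec option):
#   /dev/sda3  /my/home  ext4  defaults,noexec  0  2
# and after:
#   /dev/sda3  /my/home  ext4  defaults,exec    0  2

# Apply the new option without rebooting (needs root):
mount -o remount,exec /my/home
```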
Not a full answer, but I wanted to report my experience, as I had the exact same problem as the questioner and thought it might help future users hitting the same maddening behaviour. I could which the file, see it in my path, and even execute it by giving the full path, but could not execute it otherwise. However, in my case it only happened within a sub-shell (i.e., when it was executed from a script; in this case there were a few nested sub-shells). I could run it from the command line just fine.
Just before the command in the nested script, I printed out the command, e.g. echo $(which mycommand)
mycommand: /home/me/bin/mycommand
Then I would try to execute it from the parent script:
/home/me/bin/some_parent_script[72]: mycommand: not found [No such file or directory]
Just like the questioner, I was unable to diagnose the source of the problem. My PATH looked right, the command was which-able, and hash did not reveal any prior entries for mycommand. The next day when I logged in, everything magically worked again. I will note, though, that there was a known system issue just before I saw this problem, in which a mount was remounted. Perhaps that's a clue?
If I didn't have log file after log file demonstrating that this happened, I wouldn't believe it to have been possible! Thanks to the questioner, I no longer feel crazy!
P.S. I don't think user226160 had the EXACT same problem as the reporter, but it sounds related, and lends credence to the mount theory.