Is there a command in Solaris to read a file and, when it gets to the end, keep streaming the way tail does? I need to read the file from the start, and it is a binary file.
Information on Solaris and Linux would be appreciated.
In Linux you can use tail -f -n +0 /path/filename to do this. While -n normally specifies how many lines from the end of the file you want printed, when passed +<n> it starts output at the nth line from the beginning of the file.
From tail --help:
-n, --lines=K output the last K lines, instead of the last 10;
or use -n +K to output lines starting with the Kth
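As a quick illustration of the +<n> form, here is a small sketch assuming GNU tail and a hypothetical file numbers.txt holding the lines 1 through 10:
seq 1 10 > numbers.txt
tail -n 3 numbers.txt       # last 3 lines: 8 9 10
tail -n +3 numbers.txt      # from line 3 onward: 3 4 5 6 7 8 9 10
tail -n +0 -f numbers.txt   # whole file from the start, then keep following (Ctrl-C to stop)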
tail -9999f will do something close to what you want. Add more 9s if your file is bigger.
Problems:
Binary files may not have newline characters. tail -f will wait for a newline before printing anything out.
The version of tail on Solaris (you didn't mention which Solaris but it probably doesn't matter) probably doesn't support that option. It may support tail -n 9999 -f. You may have to acquire the GNU version of tail.
Because the file is constantly growing, there is a race condition between finding out how big it is and starting the tail process. You could miss the start of the file if you don't ask it to get enough lines.
tail won't know when you have really finished writing to the file, so your gzip process will never finish either. I'm not sure what will happen when you Ctrl-C to end the tail process, but it's likely that gzip will clean up after itself and remove the file it was working on (see the sketch below).
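For reference, a sketch of the tail-based approach these problems describe, assuming GNU tail, placeholder filenames, and (as in the pipeline suggested next) that the growing binary file is gzip-compressed data you want to decompress as it arrives:
tail -n 9999 -f /path/filename | gunzip > new_file.txt
# problem 1: tail prints nothing until a newline happens to appear in the binary data
# problem 2: plain Solaris tail may reject this spelling; the old-style "tail -9999f" may be needed
# problem 3: if the file already holds more than 9999 lines, the start is silently missed
# problem 4: tail never exits, so gunzip never sees end-of-file; Ctrl-C is the only way out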
My suggestion would be to start your original program up and pipe the output to gunzip like this:
./my_program | gunzip > new_file.txt
That way, gunzip will wait if my_program is going slow, but will still finish when my_program exits, since that closes the pipe and marks the true end of the data.
You may need to rewrite your program to write to STDOUT rather than directly to a file.
Edit:
After a look at the man page, three of the issues above can be resolved. Using the -c <bytes> option instead of -n <lines> mitigates problem 1. Using -n +0 or -c +0 mitigates problem 3. Using --pid=<PID> will make tail terminate when the original program (running as <PID>) terminates, which mitigates problem 4.
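Putting those options together, a minimal sketch assuming GNU tail (Solaris tail does not support --pid), that my_program is already running as <PID> and writing the growing gzipped file /path/filename, and that new_file.txt is where you want the decompressed output:
tail -c +1 -f --pid=<PID> /path/filename | gunzip > new_file.txt
# -c +1 starts at the first byte; with GNU tail, the -c +0 mentioned above behaves the same way
# -f keeps following as my_program appends more data
# --pid=<PID> makes tail exit once my_program exits, so gunzip sees end-of-file and finishes
One way to fill in <PID> is pgrep my_program, though whether that matches exactly one process depends on your setup.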