I am trying to remove a string from many text files on one of our servers. The string is identical across all these files and I can run:
grep -r -l 'string'
to get the file list, but I am stuck on how to get the files edited and written out to their original locations again. Sounds like a job for sed, but I'm not sure how to handle the output.
find . -type f -print0 | xargs -0 -n 1 sed -i '/string/d'

will do the trick, handling spaces in filenames and arbitrarily nested froufrou, since apparently people aren't capable of expanding * on their own.

Here's my script for this sort of thing, which I call remove_line:
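Roughly, it's a small wrapper around perl -i; a minimal sketch (the .bak backup suffix and the $ENV trick for getting the pattern safely into perl are just my way of doing it):

#!/bin/sh
# remove_line: delete every line matching a Perl regex, in place.
# usage: remove_line PATTERN FILE...
pattern=$1
shift
# Pass the pattern through the environment so the shell never has to
# interpolate it into the perl code; -i.bak keeps a backup of each file.
PATTERN=$pattern perl -n -i.bak -e 'print unless /$ENV{PATTERN}/' "$@"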
So you do

remove_line 'string'

with the files in your list appended. Advantages to doing this over using sed are that you don't have to worry about the platform-dependent behavior of sed -i, and you can use a Perl regex for the matching pattern.

Ugh. I'm not a shell wizard at all, but I'd look at a pipe to xargs and then sed to remove the line with the string in question.
Little bit of Google perusal makes me think that this might make Bob your stepuncle - close enough to get there anyway.
Ummmmmm, this is a perl one-liner, thanks to the lovely -i flag for in-place filtering of input files!!
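The basic shape is something like this (a sketch; -i.bak writes a backup of each file before editing, and 'string' here stands in for your actual pattern):

perl -n -i.bak -e 'print unless /string/' file1 file2 ...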
In context...
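Glued onto the grep from the question, it might look like this (a sketch; the bare $( ) will choke on filenames with spaces, so use the find/xargs -0 pipeline above if that matters):

perl -n -i.bak -e 'print unless /string/' $(grep -r -l 'string' .)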
And here is the trimmed quick-reference on the perl incantation used...
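From perlrun, trimmed down to the flags in play (see perldoc perlrun for the fine print):

-e commandline
        may be used to enter one line of program
-i[extension]
        files processed by <> are edited in place; with an
        extension given, the original is kept as a backup
-n
        wraps the program in a while (<>) { ... } loop, so it
        iterates over the input files somewhat like sed -n or awk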
and for extra credit, you can use
touch -r file.bak file
to copy the old timestamp to the new file. The inodes will differ, though, and strange things may happen if you have hard links in the mix... check the docs if you're that motivated to cover your tracks... Hmmmmm, what was your application again?

Don't forget about the -v option in grep, which reverses the sense of the match. From the grep man page:
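-v, --invert-match
        Invert the sense of matching, to select non-matching lines.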
You may then be able to pass that into the find command, similar to this:

find . -type f -exec grep -v 'string' {} \;

And that's getting close to what you want... but of course you'll need to write the result back to the original file...
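If it helps, here's one way to do that last write-back step (a sketch: filter each file into a temp copy, then move the copy over the original):

find . -type f -exec sh -c '
  for f; do
    # keep only the lines that do NOT contain the string
    grep -v "string" "$f" > "$f.tmp"
    # then replace the original with the filtered copy
    mv "$f.tmp" "$f"
  done
' sh {} +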