I often use `grep` to find files containing a certain string, like this:
```sh
grep -R 'MyClassName'
```
The good thing is that it returns the files, shows their contents, and marks the found string in red. The bad thing is that I also have huge files where the entire text is written on one single line, and grep's output is overwhelming when it finds a match in those files. Is there a way to limit the output to, for instance, 5 words to the left and to the right of the match? Or maybe to 30 characters on each side?
`grep` itself only has options for context based on lines. An alternative is suggested by this SU post: `fold` the text and then grep it.
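For example, a minimal sketch of that approach (the 80-column width and the file name are illustrative, not from the post):

```sh
# wrap the single long line into 80-column lines, then grep as usual
fold -w 80 -s huge-file.txt | grep 'MyClassName'
```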
The `-s` option will make `fold` push words to the next line instead of breaking in the middle of them.

Or use some other way to split the input into lines based on the structure of your input. (The SU post, for example, dealt with JSON, so using `jq` etc. to pretty-print and then `grep`, or just using `jq` to do the filtering by itself, would be better than either of the two alternatives given above.)

This GNU awk method might be faster:
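A sketch of the method described below, assuming GNU awk (`gawk`); the pattern and `n=30` are illustrative:

```sh
gawk -v RS='MyClassName' -v n=30 '
  # every record after the first begins right after a match,
  # so print the tail of the previous record, the matched text, and the head of this one
  FNR > 1 { print substr(p, length(p) - n + 1) prt substr($0, 0, n) }
  { p = $0; prt = RT }  # save this record and its matched text for the next record
' huge-file.txt
```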
Here:

- the pattern is used as the record separator (`-v RS=...`), and `n` is the number of characters of context (`-v n=...`)
- every record after the first (`FNR > 1`) is one where awk found a match for the pattern
- for each such record, we print the `n` trailing characters from the previous record (`p`) and the `n` leading characters from the current record (`substr($0, 0, n)`), along with the matched text for the previous record (which is `prt`)
- we save `p` and `prt` after printing, so the values we set are used by the next record
- `RT` is a GNUism, which is why this is GNU-awk-specific

For recursive search, maybe:
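One way to wire this into a recursive search, as a sketch (using `find` to feed files to `gawk`; the `FILENAME ": "` prefix is illustrative):

```sh
find . -type f -exec gawk -v RS='MyClassName' -v n=30 '
  # prefix each hit with the file it came from
  FNR > 1 { print FILENAME ": " substr(p, length(p) - n + 1) prt substr($0, 0, n) }
  { p = $0; prt = RT }
' {} +
```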
Using `--only-matching` in combination with some other options (see below) might get very close to what you are seeking, without the processing overhead of the regex approach mentioned in the other answer.
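As a hedged guess at the kind of invocation meant here (the exact flag combination is an assumption, not from the answer): print just the match itself with `-o`, plus file names and line numbers, so the huge line itself never reaches the terminal.

```sh
# -R recurse, -o print only the matched text, -n line numbers, -F fixed-string match
grep -RonF 'MyClassName' .
```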