OK, this is a very practical use case from my point of view.
Let's say I have some simple shell one-liner which logs its output into a file. It could be anything, for example tcpdump. Is there any generic and trivial way to make sure that the output file won't exceed a given size?
The reasoning behind this is to protect against filling all the available space on the mount point by mistake. If I forget about the script, or it yields GBs of data per hour, this simple debugging task can lead to a potential system crash.
Now, I am aware of the options built into some of the tools (like the combination of -W/-C in tcpdump). What I need is a very generic failsafe.
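To be clear, this is the sort of built-in I mean, as a rough sketch (the interface, path, and sizes here are just placeholders):

# -C rotates the savefile every ~100 million bytes; -W keeps at most 10 files,
# so the capture stays around 1 GB total
% tcpdump -i eth0 -w /var/tmp/capture.pcap -C 100 -W 10

That works for tcpdump specifically, but I want something that works no matter what is producing the output.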
Long story short - when I run a script like:
% this -is --my=very|awsome|script >> /var/tmp/output.log
how do I make sure that output.log will never get bigger than 1 GB?
The script can crash, be killed, or whatever.
The solution I am looking for should be easy and simple, using only tools available in popular distros like Ubuntu/Debian/Fedora; in general, something widely available. A complicated, multi-line program is not an option here, regardless of the language/technology.
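To illustrate the kind of failsafe I have in mind, here are two sketches built around the example command above (I am not claiming either is the right answer):

# cap how much a single run can append; GNU head understands size suffixes like 1G
# (note: this limits this run's output, not the size the file may already have)
% this -is --my=very|awsome|script | head -c 1G >> /var/tmp/output.log

# or cap the file size the shell and its children may write; in bash, ulimit -f
# counts 1024-byte blocks, so 1048576 blocks = 1 GiB (the writer gets SIGXFSZ
# when it tries to grow the file past the limit)
% ( ulimit -f 1048576; this -is --my=very|awsome|script >> /var/tmp/output.log )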