
OK, this is a very practical use case from my point of view.

Let's say I have some simple shell one-liner that logs its output to a file. It can be practically anything, for example tcpdump. Is there any generic and trivial way to make sure that the output file won't exceed a given size?

The reasoning behind this is to protect against filling all the available space on the mount point by mistake. If I forget about the script, or it yields GBs of data per hour, then this simple debugging task can lead to a potential system crash.

Now, I am aware of the options built into some of the tools (like the combination of -W/-C in tcpdump). What I need is a very generic failsafe.

Long story short - when I run a script like:

% this -is --my=very|awsome|script >> /var/tmp/output.log

how can I make sure that output.log will never get bigger than 1 GB?

The script can crash, be killed, or whatever.

The solution I am looking for should be easy and simple, using only tools available in popular distros like Ubuntu/Debian/Fedora; in general, something widely available. A complicated, multiline program is not an option here, regardless of the language/technology.

5 Answers


You can use head for this:

command | head -c 1G > /var/tmp/output.log

It accepts K, M, G, and the like as suffixes (bytes are the default); append B to get the base-10 versions.
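A quick way to see the cap in action at a smaller scale (the file path is illustrative; suffix behaviour shown is GNU head's):

```shell
# yes produces endless output; head stops after exactly 1M bytes
# (1M = 1024*1024) and the pipeline ends.
yes 'log line' | head -c 1M > /tmp/output.log
wc -c < /tmp/output.log    # 1048576
```

Once head has written its quota it exits, and the producer is terminated by SIGPIPE on its next write, so nothing keeps running in the background.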

Eduardo Ivanec
  • head against wall. It couldn't be simpler. Thanks! – mdrozdziel Jun 07 '11 at 18:31
  • 3
    It seems tail -c 1G also works, useful to look at the tail of an event. Understandably it flushes after the command end. – dashesy Oct 07 '16 at 00:05
  • 1
    Unit for suffixes do not work on Mac OS. – anumi Jan 22 '19 at 15:15
  • 3
    Sticking my neck out here, but this does not seem to accomplish what the OP requested (yes - i realize he accepted it, but read the question). He is using an append redirect (>>), and asks, How to make sure that output.log will never get bigger than 1GB. This answer does not do that - this answer limits the size of the command output to 1G. Tested on macos using repetitive env | head -c 1000 >> /var/tmp/output.log which yielded: -rw-r--r-- 1 seamus wheel 7304 Dec 25 03:33 output.log – Seamus Dec 25 '21 at 10:18
  • Hi, I wanted to ask: will this keep the latest changes within the 1 GB, or will it stop adding changes after 1 GB? – Muneeb Ahmad Khurram Feb 12 '23 at 11:28

curtail limits the size of a program's output and preserves the last X KB of output with the following command:

run_program | curtail -s 1G myprogram.log

https://github.com/Comcast/Infinite-File-Curtailer

NOTE: I'm the maintainer of the above repo. Just sharing the solution...
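If you also want to watch the stream live while it is being capped, plain tee can sit in front of curtail (nothing curtail-specific here; run_program is a placeholder for your command):

```shell
# tee duplicates the stream: one copy goes to the terminal,
# the other into curtail, which keeps the log capped at 1G.
run_program | tee /dev/tty | curtail -s 1G myprogram.log
```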

  • Hi Dave, I have tried your solution and it works very well, but is there a way to also see the output of the program on the console? – Muneeb Ahmad Khurram Feb 12 '23 at 11:23

As Seamus said in Eduardo Ivanec's answer, this question seems to ask about how to continuously update a file, like doing a backup from an external .

  1. Write a script to limit your file size. Note: this method allows the output file to grow larger than specified during the sleep interval.

    #!/bin/sh

    MY_FILE="myJournal.log"

    while true; do
        echo "$(tail -c 400 "${MY_FILE}")" > "${MY_FILE}"
        sleep 1
    done

  2. Run the script, and keep it running:

    ./cutter.sh &

  3. Run your script/application appending to the file, e.g.:

    journalctl -f >> myJournal.log


Set the maximum file size for a user that will only be used to run these scripts.

The file /etc/security/limits.conf applies its default values to every user unless explicit values are set for a specific user; the user-specific values override the defaults. The file may have a slightly different name depending on your OS.

If your log user is named log_maker, then add this line to the file:

log_maker hard fsize 1000000

The number after fsize is the maximum file size in KB.
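For a one-off run, the same kind of limit can be applied without creating a user, via the shell's ulimit built-in (a sketch; the tcpdump command and log path are illustrative, and in bash ulimit -f counts 1024-byte blocks):

```shell
# Run in a subshell so the limit applies only to this command.
# 1048576 blocks x 1024 bytes = 1 GiB.
(
  ulimit -f 1048576
  tcpdump -i eth0 >> /var/tmp/output.log
)
# When output.log reaches the limit, the writing process receives
# SIGXFSZ and is terminated, so the file cannot grow past 1 GiB.
```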


You can limit any file with

tail -c 1G myLog.log > myLog.tmp
mv myLog.tmp myLog.log

Writing the tail output directly to the same file produces an empty file, so you need a temporary file.
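At a smaller scale (400 bytes instead of 1G; the file names are illustrative) the round trip looks like:

```shell
# Build an oversized log, then keep only its last 400 bytes.
seq 1 1000 > myLog.log
tail -c 400 myLog.log > myLog.tmp
mv myLog.tmp myLog.log
wc -c < myLog.log    # 400
```

Run periodically (e.g. from cron or a loop), this keeps the log bounded while the producer keeps appending.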

Séverin
  • tail -c 1G myLog.log > myLog.tmp mv myLog.tmp > myLog.log will kill the log; I think you wanted to use >> instead of >, don't you? – djdomi Aug 07 '19 at 10:23
  • 1
    The purpose of this command is to rewrite the log with only the last 1G. The >> should be for the previous command : doSomething >> myLog.log; tail -c 1G myLog.log > myLog.tmp; mv myLog.tmp > myLog.log; – Séverin Aug 07 '19 at 12:50
  • IMHO, your answer comes closest to answering the OP's question. Oddly, the selected answer does not seem to answer the question (see my comment above). But I believe your answer contains a flaw: there is no redirect > in mv; it's simply mv myLog.tmp myLog.log, not mv myLog.tmp > myLog.log – Seamus Dec 26 '21 at 03:40
  • You're right. Just corrected this typo. Thanks – Séverin Dec 27 '21 at 23:18