19

EDIT

Please see not only the accepted answer but also the other one(s).

Question

Why does redirecting both STDOUT and STDERR to the same file not work, although it looks the same as 1>[FILENAME] 2>&1?

Here is an example:

perl -e 'print "1\n" ; warn "2\n";' 1>a.txt 2>a.txt
cat a.txt
# outputs '1' only.

Well, why? I thought this would work because STDOUT is redirected to a.txt and so is STDERR. What happened to STDERR?

akai
  • 309
  • 4
    You accepted an answer that explains the details in a misleading way (many would say incorrect). Append redirection works for a different reason than the one in the answer. I'd recommend accepting the other one. – Peter Cordes Dec 13 '19 at 12:06

2 Answers

62

With 1>a.txt 2>&1, file descriptor #1 is duplicated to #2. They both reference the same "open file", and they both share the current position and r/w mode. (There's actually no difference at all between using 2>&1 and 2<&1.)
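
To see the same thing at the system-call level, here is a rough Perl sketch of approximately what a shell does for 1>a.txt 2>&1 (an illustration only, not the shell's actual code; the file name and data are just examples):

use strict;
use warnings;
use Fcntl qw(O_WRONLY O_CREAT O_TRUNC);
use POSIX qw(dup2);

# open a.txt once, as the shell does for 1>a.txt
sysopen(my $out, 'a.txt', O_WRONLY | O_CREAT | O_TRUNC) or die "open: $!";

# 2>&1: make fd #2 another name for the same open file description
dup2(fileno($out), 2) or die "dup2: $!";
open(my $err, '>&=', 2) or die "fdopen: $!";

syswrite($out, "abcdefg\n");   # the shared offset advances to 8
syswrite($err, "123\n");       # continues at offset 8, nothing is overwritten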

With 1>a.txt 2>a.txt, both file descriptors are opened independently and have separate cursor positions. (The file gets truncated twice too.) If you write "Hello" to fd #1, its position is advanced to byte 5, but fd #2 remains at byte 0. Printing to fd #2 will just overwrite the data starting from 0.

This is easy to see if the second write is shorter:

$ perl -e 'STDOUT->print("abcdefg\n"); STDOUT->flush; STDERR->print("123");' >a.txt 2>a.txt

$ cat a.txt 
123defg

Note that Perl has internal buffering, so in this example an explicit flush() is necessary to ensure that fd #1 data is written before fd #2 data. Otherwise, the streams would be flushed in unpredictable order on exit.
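
An alternative is to turn on autoflushing instead of calling flush() explicitly; this variant should behave the same way and again produce 123defg:

$ perl -e 'STDOUT->autoflush(1); STDOUT->print("abcdefg\n"); STDERR->print("123");' >a.txt 2>a.txt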

For comparison, if the file descriptors are shared, the writes just follow each other:

$ perl -e 'STDOUT->print("abcdefg\n"); STDOUT->flush; STDERR->print("123");' >a.txt 2>&1

$ cat a.txt 
abcdefg
123
u1686_grawity
  • 452,512
  • I am surprised that the redirections are not opened in O_APPEND mode, but here we go (I checked with strace on bash, they really are not). – Jonas Schäfer Dec 12 '19 at 17:54
  • 11
    @JonasSchäfer, that's what >> is for. – Peter Dec 12 '19 at 18:27
  • 2
    @JonasSchäfer: Apparently O_APPEND did not yet exist when sh was originally written, and the behavior of > is now more or less set in stone in any shell that wants to remain sh-compatible. (It doesn't help that using it would make it impossible to seek() back, which is yet another change in behavior.) – u1686_grawity Dec 12 '19 at 20:43
  • 1
    For dup() you could say that it creates another descriptor ID for the same in-kernel file description. – Peter Cordes Dec 13 '19 at 12:07
  • 1
    Does "There's actually no difference at all between using 2>&1 and 2<&1" apply to just this question, or in general? Maybe I'm thinking of the & has to be after the specifically named file, and the order within the & parm does not matter? – Mark Stewart Dec 13 '19 at 19:18
  • 3
    @MarkStewart I didn't understand your second question. Regarding the first one, I believe that it's in general. The only difference between >& and <& seems to be the default for the left file descriptor. If it's specified as in the examples given, then there should be no difference. They'd both result in a call dup2(1, 2). – JoL Dec 13 '19 at 20:04
  • @JoL Thanks. I think I was saying you shouldn't have 2>&1 before > test.txt sometimes... – Mark Stewart Dec 13 '19 at 20:23
  • 3
    @MarkStewart Ah, certainly. – JoL Dec 13 '19 at 20:55
  • @Peter Of course. facepalm – Jonas Schäfer Dec 15 '19 at 18:53
-2

Both your redirections truncate the file, so the second (in chronological order of execution) will overwrite the first. Try

rm a.txt ; touch a.txt ; perl -e 'print "1\n" ; warn "2\n";' 1>>a.txt 2>>a.txt

Or just use the same file descriptor

perl -e 'print "1\n" ; warn "2\n";' 1>a.txt 2>&1
Eugen Rieck
  • 20,271
  • 22
    This isn't right. It's true that the file is truncated twice, but both times it's done before perl even starts. The truncating behavior of > is irrelevant here. The reason why one overwrites the other is because they have independent cursor positions. The reason why this isn't an issue in append mode is because each write automatically seeks to the end before writing. On the order of execution, we see that the first overwrote the second. Perl buffers stdout when writing to a file, but stderr remains unbuffered. So, "1" is actually written when perl is about to die, after warn outputs. – JoL Dec 12 '19 at 20:35
  • 1
    As I wrote yesterday in my now gone comment, the point about truncation is, that the file pointer stays at zero afterwards. So later output overwrites earlier output - since &1 is slower than &2 (and with slower I mean "slower to finally write out") you consistently see 1, not randomly 1 or 2. – Eugen Rieck Dec 13 '19 at 08:22
  • 3
    Append redirection works for a different reason: because opening with O_APPEND makes writes always append to the current end of the file, respecting other writes (by other processes, or by this process on other file descriptors that weren't "dup"ed and that don't share the same file position). >> vs. > is different in both O_APPEND and O_TRUNC, so yes the non-truncating redirect works, but not exactly because it doesn't truncate. – Peter Cordes Dec 13 '19 at 12:04
  • "slower" isn't a great description of the buffering that stdout gets. You could just say that stdout is full-buffered because it's a file, so doesn't get flushed until the end. "slower" has connotations of performance and race conditions, but the behaviour here is deterministic (except for large writes, larger than the stdio buffer size, so some or all of the first print would get actually written with syscalls to the file before the stderr writes) – Peter Cordes Dec 13 '19 at 12:11
  • @EugenRieck You can disable truncation/clobbering, and the result is going to be exactly the same. All files are at zero when they're first opened. That you say "stays at zero afterwards" leads me to think you think there's 1 file pointer that goes back or "stays" at zero after the first write, somehow because of the 2 truncations that happened way before any writing. However, the point is that there is 2 file pointers that start at zero. print uses one, and warn uses the other. That's going to cause an overwrite whether you use truncation or not. I invite you to read grawity's answer. – JoL Dec 13 '19 at 16:13
  • Here's to compare the same scenario as using > but without truncation: ruby -e 'f1 = File.open("a.txt", File::WRONLY | File::CREAT); f2 = File.open("a.txt", File::WRONLY | File::CREAT); exec("perl", "-e" "print \"1\\n\" ; warn \"2\\n\";", out: f1, err: f2)' and here's the same scenario as using >> but adding truncation to it: ruby -e 'f1 = File.open("a.txt", File::WRONLY | File::CREAT | File::TRUNC | File::APPEND); f2 = File.open("a.txt", File::WRONLY | File::CREAT | File::TRUNC | File::APPEND); exec("perl", "-e" "print \"1\\n\" ; warn \"2\\n\";", out: f1, err: f2)'. – JoL Dec 13 '19 at 16:44
  • @PeterCordes Please read my comment again, then you might be able to understand. – Eugen Rieck Dec 13 '19 at 16:47
  • @JoL Please read my comment again, then you might be able to understand – Eugen Rieck Dec 13 '19 at 16:47
  • 3
    I'm done trying to explain this to you. It astounds me that I'm even providing you with code so you can see what happens and you reply with that. – JoL Dec 13 '19 at 16:53
  • 5
    You wrote in a comment: the point about truncation is, that the file pointer stays at zero afterwards. Yes, the implicit seek-to-end on every write as part of append redirects is the key here. I downvoted your answer because that key point isn't made in your answer. Instead you make some odd claim about the order of truncation mattering. But 2>a.txt 1>a.txt would behave the same way so it's not actually chronological ordering of truncation that matters, it's what happens on the 2nd write. Remember that both truncations happen before the first write, before the program runs. – Peter Cordes Dec 13 '19 at 16:58
  • 1
    I'm sure I do fully understand what POSIX system calls and file descriptor semantics are involved in this, from having used strace on a shell running commands like this. You might understand too, and your first comment looks correct, but you haven't explained it clearly for future readers of your answer. Insulting people instead of fixing your answer to include the explanation from your comment isn't helping anyone. You get upvotes for answers that actually help beginners understand new stuff, not just for claiming to know the right answer yourself. – Peter Cordes Dec 13 '19 at 17:04
  • @grawity it is deterministic: STDERR is unbuffered. The important part is, that we don't need to care about buffering - from the POV of the OS, one is faster and the other slower. The redirect doesn't care, why - it just sees one as faster than the other. – Eugen Rieck Dec 13 '19 at 17:42