You should always remember that both Input and Output (and ErrorOut) are Text files, so they adhere to the conventions of that type.
Thanks for the clarification. The example code and this statement spurred me to look more under the hood. I have some observations and more questions.
read() on stdin is geared to read up to a LineBreak character. That's expected behavior for a console application reading user input (duh), but it's very, very bad for reading streams from piped standard input. The way read() interprets and drops control, tab, and line-feed characters is totally incompatible with piped input unless your only objective is human-readable text. But I'm looking at reading untyped binary input, where every byte is crucial.
Because read() only reads up to LineBreak, casting the result to an array leaves lots of null bytes after the slot where the LineBreak character would have been, compared to reading the same binary data without break interpretation. That makes it useless for my particular case: I need EVERY byte (all of them, no exceptions) to end up in the array, because losing a single byte in a stream breaks a hash algorithm.
LineBreak is set in system files of the compiler somewhere, is it not? There's also a command to change the way a read or write operation interprets which characters are LineBreak, correct?
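To partly answer my own question (this is an assumption from skimming the FPC RTL docs, so please correct me if I'm wrong): the System unit does expose SetTextLineEnding, but as far as I can tell it only controls what WriteLn appends on output; Read/ReadLn's recognition of #10/#13 on input appears to be hard-coded. A minimal sketch of what it does:

```pascal
program lineendingdemo;
{ Sketch, assuming FPC's System.SetTextLineEnding. It changes the
  terminator WriteLn emits; it does NOT seem to stop Read/ReadLn
  from treating #10/#13 as end-of-line on input. }
begin
  SetTextLineEnding(Output, #10);   { force bare LF on output }
  WriteLn('this line ends with LF only');
end.
```

So if I read that right, the LineBreak setting is a write-side convention, and it doesn't solve the read-side problem below.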
The simplest solution I see at this point is to instruct the text stream input to stop interpreting LineBreak characters, so that every byte in the buffer gets cast into the char array. When read() sees the LineBreak chars (#10, #13), it would just read them like any other chars, filling up the entire array buffer.
Ideally I'd turn off interpretation of LineBreak completely in the program. How would I do this?
Better: how would I hack some way to use blockread on the stdin stream?
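Here's the approach I'd try (a sketch, not verified on every platform; Assign with an empty file name attaching to standard input is a TP/FPC convention, and I haven't tested how Windows pipes behave): skip the Text type entirely, use an untyped File, Reset it with record size 1, and BlockRead in a loop. Untyped files do no line-break interpretation, so every byte should arrive untouched.

```pascal
program rawstdin;
var
  f: file;                        { untyped file: no LineBreak handling }
  buf: array[0..4095] of byte;
  n: longint;
  total: int64;
begin
  Assign(f, '');                  { empty name = standard input }
  Reset(f, 1);                    { record size 1, so counts are bytes }
  total := 0;
  repeat
    BlockRead(f, buf, SizeOf(buf), n);  { n = bytes actually read }
    { feed buf[0..n-1] to the hash here }
    Inc(total, n);
  until n = 0;
  Close(f);
  WriteLn(ErrOutput, 'read ', total, ' bytes');
end.
```

If that works, it sidesteps the Text conventions completely. Can anyone confirm whether Reset(f, 1) on an empty-name file is safe on all targets, or whether there's a cleaner way (a stream wrapped around the stdin handle, maybe)?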