Thinking about it, using CR alone in protocols actually makes far more sense: it would free up LF for use inside records, which would make many use cases much simpler.
Just think about text protocols like HTTP: how much easier something like cookies would be to parse if you had CR as the terminating character, with each record separated by LF.
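Roughly what parsing that could look like in Python. To be clear, this is a hypothetical wire format (the CR-terminated fields and LF-separated records proposed above), not real HTTP, and the cookie-style names are made up:

    def parse_records(data: bytes) -> list[list[bytes]]:
        # LF separates records; CR terminates each field within a record.
        records = []
        for record in data.split(b"\n"):
            if not record:
                continue
            # Every field ends in CR, so the trailing empty split is dropped.
            fields = [f for f in record.split(b"\r") if f]
            records.append(fields)
        return records

    # Two records of cookie-style pairs in the hypothetical format:
    wire = b"session=abc\rtheme=dark\r\nuser=jdoe\rlang=en\r\n"
    print(parse_records(wire))
    # [[b'session=abc', b'theme=dark'], [b'user=jdoe', b'lang=en']]

No quoting or escaping needed as long as CR and LF can't appear inside a value.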
ASCII already has designated bytes for unit, group, and record separators. That aside, a big drawback of unprintable bytes like these is that they're harder for humans to read in dumps or to type on a keyboard than a newline (provided newline has a strict definition: CRLF, LF, etc.).
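For reference, those are FS (0x1C), GS (0x1D), RS (0x1E), and US (0x1F). A minimal Python sketch of using two of them, with an invented payload:

    FS, GS, RS, US = b"\x1c", b"\x1d", b"\x1e", b"\x1f"

    # Hypothetical payload: records separated by RS, fields by US.
    payload = b"alice\x1f42\x1ebob\x1f17"
    records = [rec.split(US) for rec in payload.split(RS)]
    print(records)  # [[b'alice', b'42'], [b'bob', b'17']]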
There is in fact a reason those ASCII 'characters' should stay unprintable: the 0x00-0x1F range (except tab, CR, and LF) is explicitly excluded as invalid in a whole bunch of standards, e.g. XML.
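You can see this with any conforming XML parser; for instance, Python's built-in one rejects a record-separator byte outright (the exact error wording is the parser's, shown here as an example):

    import xml.etree.ElementTree as ET

    # XML 1.0 excludes most of 0x00-0x1F (tab, LF, CR are the exceptions),
    # so an RS byte inside a document is simply not well-formed.
    try:
        ET.fromstring("<row>alice\x1e42</row>")
    except ET.ParseError as e:
        print("rejected:", e)  # e.g. "not well-formed (invalid token): ..."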