Sorry, I wasn't very clear. TFA is talking about using LLMs to write things from scratch, not just to clean up grammar, for example. In that context, I was talking about bits of semantic information, not bits of English text. You might have 300 bits of semantic information in your mind, which you then have to expand into, say, 600 bits of English text to give to the LLM. If you're using the LLM purely to turn bullet points into prose, it'll add more bits of English, but not more bits of (useful) semantic information.
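To make the bit-counting concrete, here's a minimal sketch (the example strings are invented, not from this thread) that uses zlib-compressed size as a crude upper bound on the Shannon information in a piece of text. The prose expansion costs more bits of English than the bullets, even though it encodes the same facts:

```python
import zlib

# Hypothetical terse notes (the "semantic payload") and an LLM-style
# prose expansion of the same three facts. Both strings are made up
# for illustration.
bullets = (
    "- LLMs add words, not ideas\n"
    "- the prompt carries the semantics\n"
    "- readers want the ideas, not the filler\n"
)
prose = (
    "Large language models are very good at adding words to a draft, "
    "but they do not contribute new ideas of their own. The prompt you "
    "write is what actually carries the semantics. In the end, readers "
    "want the ideas themselves, not the filler around them.\n"
)

def compressed_bits(text: str) -> int:
    # Compressed size is a rough upper bound on Shannon information:
    # redundancy compresses away, so what remains approximates the
    # "bits of English text" in the message.
    return 8 * len(zlib.compress(text.encode("utf-8"), 9))

print("bullets:", compressed_bits(bullets), "bits")
print("prose:  ", compressed_bits(prose), "bits")
```

The prose compresses to noticeably more bits than the bullets, yet none of the extra bits are new semantic information: they pay for grammar, connectives, and style.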
I do think a list of bullet points and an article give the reader different impressions. The same information packaged in different ways creates a different impression, and I think that impression is part of the message people want to send. Reading a prompt of bullet points plus the desired impression will leave the reader with a different impression than the LLM's output would.
That is very true! Subjectively, I would much prefer to read either bullet points or the impression you want to convey. I don't care what impression the LLM wants to convey.
I prompted Claude with "(information theory) difference between semantic information and english text information in the context of using LLMs for writing": https://claude.ai/share/5925245a-0893-46ba-bca9-30627d4facbc
If you're familiar with LLMs and information theory, the LLM isn't giving you any semantic information you don't already have. If you aren't, you can learn about both from Google and/or your own LLM, using that prompt for keywords. In either case, the LLM's response isn't very helpful, because it's not my ideas you're reading: it's information pulled from the internet (directly or indirectly), and it's not the semantic information I wanted to convey.
This comment is more useful than the LLM's, because every word is chosen to convey the ideas in my mind as clearly as possible in the context of this article and conversation. It's also half the length.