Nothing like coding, just relatively basic stuff. Idk, it's hard to explain, but I use AI so frequently for work that I have a sense of what it's capable of.
I should clarify that by small I mean the 3-8B range. I haven't tested the 14-30B ones; my experience is only with the smaller ones.
In my experience, small models are not good for coding (except very basic tasks), and they're not good for general knowledge. So the only purpose I could see for them would be tasks where they're given the information, e.g. summarization or RAG.
But in my summarization experiments, they consistently misunderstood the information given to them, making basic errors and failing to grasp the text.
So having eliminated programming, general knowledge, summarization, and (by extension) RAG -- because if a model can't understand the information, then by definition it can't do RAG either -- I have eliminated all the use cases I had in mind!
That would leave very basic tasks like classification or keyword extraction, but there I think they'd sit in an awkward middle ground: disappointing relative to big LLMs for many tasks, and cumbersome relative to small specialized models, which run fast and cheap and can be fine-tuned.
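To be concrete about what I mean by a small specialized model: for classification, something like a TF-IDF + logistic regression pipeline trains in seconds on a CPU and serves predictions for essentially free. A minimal sketch (scikit-learn assumed installed; the labels and example texts are made up for illustration):

```python
# Tiny text classifier as a stand-in for a "small specialized model".
# scikit-learn is assumed available; the data below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "refund my order please",
    "my package never arrived",
    "how do I reset my password",
    "login page keeps erroring out",
]
labels = ["billing", "shipping", "account", "account"]

# Vectorize the text and fit a linear classifier in one pipeline.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Predict a label for an unseen query.
print(clf.predict(["I forgot my password"])[0])
```

Nothing fancy, but for a fixed, well-defined label set this kind of thing is hard to beat on cost and latency, which is exactly why a 3-8B general model feels like the wrong tool there.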