
This needs to be shouted from the rooftops. If you could do it yourself, then LLMs can be a great help: speeding things up, offering suggestions and alternatives, etc.

But if you’re asking for something you don’t know how to do you might end up with junk and not even know it.



But if that junk doesn't work (which it likely won't for any worthwhile problem), then you have to get it working. And to get it working you almost always have to figure out how the junk code works. And that process, I've found, is where the real magic happens. You learn by fixing, pruning, optimizing.

I think there's a whole meta level to the actual dynamic of human<>LLM interaction that is not being sufficiently talked about. There are, potentially, many secondary benefits that can come from using these models simply due to the ways you have to react to their outputs (if a person decides to rise to that occasion).


If the junk doesn't work right from the beginning, yes. The problem is that sometimes the junk looks like it works at first, and only later do you find out that it doesn't, and you end up having to make urgent fixes on a Friday night.

> And that process, I've found, is where the real magic happens

It might be a good way to learn if there's someone supervising the process, someone who _knows_ that the code is incorrect and tells you to figure out what's wrong and how to fix it.

If you are shipping this stuff yourself, this sounds like a way of deploying giant foot-guns into production.

I still think it's better to learn by trying to understand the code from the beginning (in the same way that a person should try to understand code they read from tutorials and stackoverflow), rather than delaying the learning until something doesn't work. This is like trying to make yourself do reinforcement learning on the outputs of an LLM, which sounds really inefficient to me.


I see what you’re saying. Maybe in a novice or learning-type situation, having the LLM generate code that you need to check for errors could be educational. We all learn from debugging, after all. On the flip side, however, I suspect that for most people course questions might be better for that. For those already good at the craft, I agree there might be some unexplored secondary effects.

What I find (being in the latter category) is that most LLM code output falls on a spectrum from “small snippets that work but wouldn’t have taken me long to type out anyway” to “large chunks that save me writing time but that I have to thoroughly check/test/tweak”. In other words, the more typing it saves, the more time I have to spend on it afterwards. Novices probably spend more time on the former end of that spectrum and experienced devs on the latter. I suspect the average productivity increase across the spectrum is fairly level, which means the benefits don’t really scale with user ability.

I think this tracks with the main thing people need to understand about LLMs: they are a tool. Like any tool, simply having access to it doesn’t automatically make you good at the thing it helps with. It might help you learn and it might help you do the thing better, but it will not do your job for you.


There are real dangers in code that appears to run but contains sneaky problems. I once asked ChatGPT to take a data set and run a separate t-test on each group. The code it generated first took the average of each group and then ran the test on that one value. The results were wrong, but the code ran and handed off results to my downstream analysis.
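To make that failure mode concrete: below is a hypothetical reconstruction in Python/scipy (the original prompt, dataset, and generated code aren't shown, so the group/batch structure and numbers here are invented). The buggy version aggregates before testing, raises no error, and prints plausible-looking statistics:

    # Hypothetical sketch of the bug class (invented toy data, assumed structure):
    # the intended test compares the raw observations of two groups, while the
    # buggy version first collapses each measurement batch to its mean and then
    # tests the handful of means. Both run; only one tests what was asked for.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Toy data: two groups, five measurement batches of 20 observations each.
    group_a = [rng.normal(10.0, 2.0, size=20) for _ in range(5)]
    group_b = [rng.normal(10.5, 2.0, size=20) for _ in range(5)]

    # Correct: pool the raw observations per group (n = 100 each) and compare.
    t_ok, p_ok = stats.ttest_ind(np.concatenate(group_a), np.concatenate(group_b))

    # Buggy: average each batch first, then test the five means per group.
    # No error is raised and the output looks like a normal t-test result,
    # but almost all of the data has been thrown away before testing.
    t_bad, p_bad = stats.ttest_ind([b.mean() for b in group_a],
                                   [b.mean() for b in group_b])

    print(f"raw observations: t={t_ok:.2f}, p={p_ok:.4f}")
    print(f"batch means:      t={t_bad:.2f}, p={p_bad:.4f}")

The only visible difference is in the numbers themselves, which is exactly why this kind of bug sails straight through an "it ran and gave me a p-value" check.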



