
Just tried it (gemma3:12b) using ollama and also through open-webui

It's surprisingly fast and pretty good. I was really impressed that I can feed it images through open-webui.

However, it keeps failing, both in the terminal and through open-webui. The error is:

"Error: an error was encountered while running the model: unexpected EOF"

It seems to be an ollama issue. According to tickets on GitHub it's supposed to be related to CUDA, but I'm running it on an M3 Mac.

Up until now I never had this issue with ollama; I wonder if it's related to having updated to 0.6.0.



Does Ollama use llama.cpp? If so, you have to update that. You nearly always have to update the backend when a new model like this comes out.

I assure you it works fine with CUDA.
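Concretely, the usual recovery steps look something like this. This is just a sketch: it assumes a Homebrew install on macOS (the Linux line uses the official install script instead), and re-pulling is worth trying because a truncated model download can also surface as "unexpected EOF":

```shell
# Check which version is installed; new models generally need
# a recent ollama release that bundles a matching llama.cpp build
ollama --version

# Update the binary (Homebrew on macOS; assumption about how it was installed)
brew upgrade ollama

# On Linux, re-running the official installer updates in place:
# curl -fsSL https://ollama.com/install.sh | sh

# Re-pull the model in case the original download was truncated,
# then try running it again
ollama pull gemma3:12b
ollama run gemma3:12b "hello"
```

If the error persists after updating and re-pulling, the ollama server log (`~/.ollama/logs` on macOS) usually shows where the EOF actually comes from.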



