There are two AI futures I see at the moment that are not so great.
One is the centrally controlled 'large' AI models that become monitoring apparatuses of the state. I don't think there needs to be much discussion on why this is a bad idea.
This said, open (weight) models don't save us from problems either. It's not hard to imagine a small capable model that can boot strap itself into running on consumer hardware and stolen cloud resources being problematic on the net spreading its gremlin like behavior wherever it could. The big AI companies would gladly use AI behaviors like this to dictate why all models/hardware should be controlled and once the general population is annoyed enough, they will gladly let that happen.
Lastly, prompt injections are not a, at least completely, solvable problem. To put it another way, this is not a conventional software problem, it's a social engineering problem. We can make models smarter, but even smart humans fall for stupid things some of the time, and models don't learn as they go along so an attacker pretty much as unlimited retries to trick the model.
> It's not hard to imagine a small capable model that can boot strap itself into running on consumer hardware and stolen cloud resources being problematic on the net spreading its gremlin like behavior wherever it could.
If you understand what a model is and how you need separate traditional software to run it and turn is output from tokens into text and then (often in a separate piece of software) from text into interactions with the user or other i/o functionalities of the bost computer, it becomes harder to imagine a scenario where that is a problem primarily with an open model and not with the traditional software making up an open agentic framework (an OpenClaw successor is the threat here, not a Llama successor.)
Openclaw is software, right? LLMs can write software, so with a single running copy of an LLM you can make a worm style virus that can execute and spread itself via whatever means necessary, such as executing its own copy of Claw.
No, it doesn't mean anything because it is premised on a category error. That's literally the whole point of the post you are responding to.
> Openclaw is software, right? LLMs can write software, so with a single running copy of an LLM you can make a worm style virus that can execute and spread itself via whatever means necessary, such as executing its own copy of Claw.
You can with a framework wrapped around the LLM that allows it to do that; the danger point is the framework, not the model.
> If the model can write the that framework then what's the difference?
The model can't do anything that the framework calling it doesn't allow, including saving and executing a new framework that gives additional capabilities.
One is the centrally controlled 'large' AI models that become monitoring apparatuses of the state. I don't think there needs to be much discussion on why this is a bad idea.
This said, open (weight) models don't save us from problems either. It's not hard to imagine a small capable model that can boot strap itself into running on consumer hardware and stolen cloud resources being problematic on the net spreading its gremlin like behavior wherever it could. The big AI companies would gladly use AI behaviors like this to dictate why all models/hardware should be controlled and once the general population is annoyed enough, they will gladly let that happen.
Lastly, prompt injections are not a, at least completely, solvable problem. To put it another way, this is not a conventional software problem, it's a social engineering problem. We can make models smarter, but even smart humans fall for stupid things some of the time, and models don't learn as they go along so an attacker pretty much as unlimited retries to trick the model.