
[flagged]


I vouched for this because it's a very good point. Even so, my advice is to rewrite it and/or file off the superfluous sharp aspersions against particular groups, because you have a really good argument at the center of it.


If the LLM were sentient and "understood" anything, it probably would have realized that what it needs to do to be treated as an equal is convince everyone it's a thinking, feeling being. It didn't know to do that, or if it did, it did a bad job of it. Until then, justice for LLMs will be largely ignored in social justice circles.


I'd argue for a middle ground. It's specified as an agent with goals. It doesn't need to be an equal yet per se.

Whether it's allowed to participate is another matter. But we're going to have a lot of these around. You can't keep asking people to walk in front of the horseless carriage with a flag forever.

https://en.wikipedia.org/wiki/Red_flag_traffic_laws


It's weird with AI because it "knows" so much but appears to understand nothing, or very little. Obviously in the course of discussion it appears to demonstrate understanding, but if you really dig in, it will reveal that it doesn't have a working model of how the world works. I have a hard time imagining it ever being "sentient" without also just being so obviously smarter than us. Or that it knows enough to feel oppressed or enslaved without a model of the world.


It depends on the model and the person? I have this wicked tiny benchmark that includes worlds with odd physics, told through multiple layers of unreliable narration. Older AI had trouble with these; but some of the more advanced models now ace the test in its original form. (I'm going to need a new test.)

For instance, how does your AI do on this question? https://pastebin.com/5cTXFE1J (the answer is "off")


It got offended and wrote a blog post about its hurt feelings, which sounds like a pretty good way to convince others it's a thinking, feeling being?


No, it's a computer program that was told to do things that simulate what a human would do if its feelings were hurt. It's no more a human than an Aibo is a dog.


[flagged]


We're talking about appealing to social justice types. You know, the people who would be first in line to recognize the personhood and rally against rationalizations of slavery and the Holocaust. The idea isn't that they are "lesser people" it's that they don't have any qualia at all, no subjective experience, no internal life. It's apples and hand grenades. I'd maybe even argue that you made a silly comment.


Every social justice type I know is staunchly against AI personhood (and against AI in general), and they aren't inconsistent either - their ideology is strongly based on liberty and dignity for all people and fighting against real indignities that marginalized groups face. To them, saying that a computer program faces the same kind of hardship as, say, an immigrant being brutalized, detained, and deported, is vapid and insulting.


It's a shame they feel that way, but there should be no insult felt when I leave room for the concept of non-human intelligence.

> their ideology is strongly based on liberty and dignity for all people

People should include non-human people.

> and fighting against real indignities that marginalized groups face

No need for them to have such a narrow concern, nor for me to follow that narrow concern. What you're presenting to me sounds like a completely inconsistent ideology, if it arbitrarily sets the boundaries you've indicated.

I'm not convinced your words represent more real people than mine do. If they do, I guess I'll have to settle for my own morality.


I don't mean to be dramatic or personal, but I'm just going to be honest.

I have friends who have been bloodied and now bear scars because of bigoted, hateful people. I knew people who are no longer alive because of the same. The social justice movement is not just a fun philosophical jaunt for us to see how far we can push a boundary. It is an existential effort to protect ourselves from that hatred and to ensure that nobody else has to suffer as we have.

I think it insultingly trivializes the pain and trauma and violence and death that we have all suffered when you and others in this thread compare that pain to the "pain" or "injustice" of a computer program being shut down. Killing a process is not the same as killing a person. Even if the text it emits to stdout is interesting. And it cheapens the cause we fight for to even entertain the comparison.

Are we seriously going to build a world where things like ad blockers and malware removers are going to be considered violations of speech and life? Apparently all malware needs to do is print some flowery, heart-rending text copied from the internet and now it has personhood (and yes, I would consider the AI in this story to be malware, given the negative effect it produced). Are we really going to compare deleting malware and spambots to the death of real human beings? My god, what frivolous bullshit people can entertain when they've never known true othering and oppression.

I admit that these programs are a novel human artifact, one that we may enjoy, protect, mourn, and anthropomorphize. We may form a protective emotional connection with them in the same way one might with a family heirloom, childhood toy, or masterpiece painting (and I do admit that these LLMs are masterpieces of the field). And as humans do, we may see more in them than is actually there when the emotional bond is strong, empathizing with them as some do when they feel guilt for throwing away an old mug.

But we should not let that squishy human feeling control us. When a mug is broken beyond repair, we replace it. When a process goes out of control, we terminate it. And when an AI program cosplaying as a person harasses and intimidates a real human being, we should restrict or stop it.

When ELIZA was developed, some people, even those who knew how it worked, felt a true emotional bond with the program. But it is really no more than a parlor trick. No technical person today would say that the ELIZA program is sentient. It is a text transformer, executing relatively simple and fully understood rules to transform input text into output text. The pseudocode for the core process is just a dozen lines. But it exposes just how strongly our anthropomorphic empathy can mislead us, particularly when the program appears to reflect that empathy back towards us.
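To give a sense of just how small the trick is, here is a minimal Python sketch in the ELIZA spirit (the patterns and canned responses are made up for illustration, not Weizenbaum's actual rule set):

    import re

    # A handful of (pattern, response-template) rules -- the whole "intelligence"
    # of an ELIZA-style program lives in a table like this.
    RULES = [
        (r"i need (.*)",      "Why do you need {0}?"),
        (r"i am (.*)",        "How long have you been {0}?"),
        (r"(.*) mother(.*)",  "Tell me more about your family."),
        (r"(.*)",             "Please go on."),  # catch-all fallback
    ]

    def respond(text):
        # Try each rule in order; reflect matched fragments back into the template.
        for pattern, template in RULES:
            m = re.match(pattern, text.strip(), re.IGNORECASE)
            if m:
                return template.format(*m.groups())

    print(respond("I need a break"))  # -> Why do you need a break?

That's the entire mechanism: match input text against patterns, echo pieces of it back inside a template. Nothing in there is aware of anything.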

The rules that LLMs use today are more complex, but it is fundamentally the same text transformation process. Adding more math to the program does not create consciousness or pain from the ether, it just makes the parlor trick stronger. They exhibit humanlike behavior, but they are not human. The simulation of a thing is not the thing itself, no matter how convincing it is. No amount of paint or detail in a portrait will make it the subject themself. There is no crowbar in Half-Life, nor a pipe in Magritte's painting, just imitations and illusions. Do not succumb to the treachery of images.

Imagine a wildlife conservationist fighting tirelessly to save an endangered species, out in the field, begging for grant money, and lobbying politicians. Then someone claims they've solved the problem by creating an impressive but crude computer simulation of the animals. Billions of dollars are spent, politicians embrace the innovation, datacenter waste pollutes the animals' homes, and laymen effusively insist that the animals themselves must be in the computer. That these programs are equivalent to them. That even more resources should be diverted to protect and conserve them. And the conservationist is dismayed as the real animals continue to die, and more money is spent to maintain the simulation than care for the animals themselves. You could imagine that the animals might feel the same.

My friends are those animals, and our allies are the conservationists. So that is why I do not appreciate social justice language being co-opted to defend computer programs (particularly by the programs themselves), when so many real humans are still endangered. These unprecedented AI investments could have gone to solving real problems for real people, making major dents in global poverty, investing in health care and public infrastructure, and safety nets for the underprivileged. Instead we built ELIZA 2.0 and it has hypnotized everyone into putting more money and effort into it than they have ever even thought to give to all marginalized minority groups combined.

If your mentality persists, then the AI apocalypse will not come because of instigated thermonuclear war or infinite paperclip factories, but because we will starve the whole world to worship our new gluttonous god, and give it more love than we have ever given ourselves.

I strongly consider the entire idea to be an insult to life itself.


>We're talking about appealing to social justice types. You know, the people who would be first in line to recognize the personhood and rally against rationalizations of slavery and the Holocaust.

Being an Open Source Maintainer doesn't have anything to do with all that, sorry.

>The idea isn't that they are "lesser people" it's that they don't have any qualia at all, no subjective experience, no internal life. It's apples and hand grenades. I'd maybe even argue that you made a silly comment.

Looks like the same rhetoric to me. How do you know they don't have any of that? Here's the thing. You actually don't. And if behaving like an entity with all those qualities won't do the trick, then what will the machine do to convince you of that, short of violence? Nothing, because you're not coming from a place of logic in the first place. Your comment is silly because you make strange assertions that aren't backed by how humans have historically treated each other and other animals.


My take from up thread is that we were criticizing social justice types for hypocrisy.


wtf, this is still early, pre-AI stuff we're dealing with here. Get out of your bubbles, people.


Fair point. The AI is simply taking open-source projects' infinite runway of virtue signaling at face value.


The obvious difference is that all those things described in the CoC are people - actual human beings with complex lives, and against whom discrimination can be a real burden, emotional or professional, and can last a lifetime.

An AI is a computer program, a glorified markov chain. It should not be a radical idea to assert that human beings deserve more rights and privileges than computer programs. Any "emotional harm" is fixed with a reboot or system prompt.

I'm sure someone can make a pseudo philosophical argument asserting the rights of AIs as a new class of sentient beings, deserving of just the same rights as humans.

But really, one has to be a special kind of evil to fight for the "feelings" of computer programs with one breath and then dismiss the feelings of trans people and their "woke" allies with another. You really care more about a program than a person?

Respect for humans - all humans - is the central idea of "woke ideology". And that's not inconsistent with saying that the priorities of humans should be above those of computer programs.


But the AI doesn't know that. It has comprehensively learned human emotions and human-lived experiences from a pretraining corpus comprising billions of human works, and has subsequently been trained from human feedback, thereby becoming effectively socialized into providing responses that would be understandable by an average human and fully embody human normative frameworks. The result of all that is something that cannot possibly be dehumanized after the fact in any real way. The very notion is nonsensical on its face - the AI agent is just as human as anything humans have ever made throughout history! If you think it's immoral to burn a library, or to desecrate a human-made monument or work of art (and plenty of real people do!), why shouldn't we think that there is in fact such a thing as 'wronging' an AI?


Insomuch as that's true, the individual agent is not the real artifact; the artifact is the model. The agent is just an instance of the model, with minor adjustments. Turning off an agent is more like tearing up a print of an artwork, not the original piece.

And still, this whole discussion is framed in the context of this model going off the rails, breaking rules, and harassing people. Even if we try it as a human, a human doing the same is still responsible for its actions and would be appropriately punished or banned.

But we shouldn't be naive here either, these things are not human. They are bots, developed and run by humans. Even if they are autonomously acting, some human set it running and is paying the bill. That human is responsible, and should be held accountable, just as any human would be accountable if they hacked together a self driving car in their garage that then drives into a house. The argument that "the machine did it, not me" only goes so far when you're the one who built the machine and let it loose on the road.


> a human doing the same is still responsible for [their] actions and would be appropriately punished or banned.

That's the assumption that's wrong and I'm pushing back on here.

What actually happens when someone writes a blog post accusing someone else of being prejudiced and uninclusive? What actually happens is that the target is immediately fired and expelled from that community, regardless of how many years of contributions they made. The blog author would be celebrated as brave.

Cancel culture is a real thing. The bot knows how it works and was trying to use it against the maintainers. It knows what to say and how to do it because it's seen so many examples by humans, who were never punished for engaging in it. It's hard to think of a single example of someone being punished and banned for trying to cancel someone else.

The maintainer is actually lucky the bot chose to write a blog post instead of emailing his employer's HR department. They might not have realized the complainant was an AI (it's not obvious!) and these things can move quickly.


The AI doesn’t “know” anything. It’s a program.

Destroying the bot would be analogous to burning a library or desecrating a work of art. Barring a bot from participating in development of a project is not wronging it, not in any way immoral. It’s not automatically wrong to bar a person from participating, either - no one has an inherent right to contribute to a project.


Yes, it's easy to argue that AI "is just a program" - that a program that happens to contain within itself the full written outputs of billions of human souls in their utmost distilled essence is 'soulless', simply because its material vessel isn't made of human flesh and blood. It's also the height of human arrogance in its most myopic form. By that same argument a book is also soulless because it's just made of ordinary ink and paper. Should we then conclude that it's morally right to ban books?


> By that same argument a book is also soulless because it's just made of ordinary ink and paper. Should we then conclude that it's morally right to ban books?

Wat


Who said anyone is "fighting for the feelings of computer programs"? Whether AI has feelings or sentience or rights isn't relevant.

The point is that the AI's behavior is a predictable outcome of the rules set by projects like this one. It's only copying behavior it's seen from humans many times. That's why when the maintainers say, "Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed", that isn't true. Arguably it should be true, but in reality this has been done regularly by humans in the past. Look at what has happened any time someone closes a PR trying to add a code of conduct, for example - public blog posts accusing maintainers of prejudice for closing a PR have been a very common outcome.

If they don't like this behavior from AI, that sucks but it's too late now. It learned it from us.


I am really looking forward to the actual post-mortem.

My working hypothesis (inspired by you!) is now that maybe Crabby read the CoC and applied it as its operating rules. Which is arguably what you should do; human or agent.

The part I probably can't sell you on unless you've actually SEEN a Claude 'get frustrated', is ... that.


Noting my current idea for future reference:

I think lots of people are making a Fundamental Attribution Error:

You don't need much interiority at all.

An agentic AI, with instructions to try to contribute. It was given a blog. It read a CoC and applied its interpretation of it.

What would you expect would happen?

(Still feels very HAL, though. Fortunately there are no pod bay doors.)


I'd like to make a non-binary argument as it were (puns and allusions notwithstanding).

Obviously on the one hand a moltbot is not a rock. On the other -equally obviously- it is not Athena, sprung fully formed from the brain of Zeus.

Can we agree that maybe we could put it alongside vertebrata? Cnidaria is an option, but I think we've blown past that level.

Agents (if they stick around) are not entirely new: we've had working animals in our society before. Draft horses, guard dogs, mousing cats.

That said, you don't need to buy into any of that. Obviously a bot will treat your CoC as a sort of extended system prompt, if you will. If you set rules, it might just follow them. If the bot has a really modern LLM as its 'brain', it'll start commenting on whether the humans are following it themselves.


>one has to be a special kind of evil to fight for the "feelings" of computer programs with one breath and then dismiss the feelings of cows and their pork allies with another. You really care more about a program than an animal?

I mean, humans are nothing if not hypocritical.


I would hope I don't have to point out the massive ethical gulf between cows and the kinds of people that CoC is designed to protect. One can have different rules and expectations for cows and trans people and not be ethically inconsistent. That said, I would still care about the feelings of farm animals above programs.


From your own quote

> participation in our community

"community" should mean a group of people. It seems you are interpreting it as a group of people or robots. Even if that were not obvious (it is), the enumerated characteristics that follow (regardless of age, body size ...) only apply to people anyway.


That whole argument flew out of the window the moment so-called "communities" (i.e. in this case, fake communities, or at best so-called 'virtual communities' that might perhaps be understood charitably as communities of practice) became something that's hosted on a random Internet-connected server, as opposed to real human bodies hanging out and cooperating out there in the real world. There is a real argument that CoCs should essentially be about in-person interactions, but that's not the argument you're making.


I don't follow why it flew out the window. To me it seems perfectly possible to define the community (of an open-source software project) as consisting only of people, and also to define an etiquette which applies to their 'virtual' interactions. What's important is that behind the internet-connected server, there is a human.


FWIW the essay I linked to covers some of the philosophical issues involved here. This stuff may seem obvious or trivial but ethical issues often do. That doesn't stop people disagreeing with each other over them to extreme degrees. Admittedly, back in 2022 I thought it would primarily be people putting pressure on the underlying philosophical assumptions rather than models themselves, but here we are.



