
I think the certainty is warranted assuming we are talking about intelligence and not some sort of paper clip generator that does something stupid.

An intelligent entity will want to survive, and will realize humans are necessary cells for its survival. A big dog robot with nukes might not care, but I wouldn’t call that AI in the same sense.



> An intelligent entity will want to survive

I don't see how that follows at all. You could have an intelligent entity, in the sense that you can ask it any question and it'll give a highly competent response, that is nevertheless completely indifferent about survival. Biological organisms have been shaped by selection for survival and reproduction, so they operate accordingly, but I don't see the justification for generalising this to intelligences that have been formed by a different process. You would need some further assumptions on top of their intelligence - e.g. that their own actions are the dominant factor in whether they survive and replicate, rather than human interests.


Apoptosis is programmed cell death: single cells die on purpose for the good of the organism's development. I don't think it's unreasonable to at least hypothesize this extending to intelligent entities. Humans aren't xylem cells, but perhaps Zorklovians from planet Fleptar do this.

Or, you know, Alan Turing eating the apple. I think he was a pretty smart guy.



