
I think your reply is interesting, as it's exactly the sort of thing that Effective Altruism talks about.

The example you gave is emblematic of the way we think of charity, precisely because it deals with both emotion and things we can individually see. Effective Altruism aims to make charity about things that don't directly affect us, that we can't see, and to find areas where the "bang per buck" is high.

Seattle spent $195 million on homelessness, according to this: https://www.seattletimes.com/seattle-news/homeless/how-much-..., to help a group that this advocacy organisation http://www.homelessinfo.org/what_we_do/one_night_count/ puts at 10,000 people. I wonder how a person can rationally look at that ($195 million, or roughly $19.5K per person) and believe it is making a dent. Compare that to https://www.givedirectly.org/basic-income, which doubles the daily income of the poorest of the poor by giving them $1 a day for 12 years. These are people I can't see and don't have to think about, yet for less than $5,000 I can provide someone with security and an income for over a decade. If I give $70 a week for 12 years, I can take 10 people out of abject poverty for those 12 years. That's fractionally above my coffee budget.
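The arithmetic here is simple enough to check directly. A quick sketch, using only the figures quoted in this comment (the reported Seattle spend, the one-night count, and GiveDirectly's $1/day transfer):

```python
# Back-of-the-envelope check of the figures quoted above.

seattle_spend = 195_000_000   # reported annual Seattle homelessness spend
homeless_count = 10_000       # one-night count estimate
per_person = seattle_spend / homeless_count
print(f"Seattle: ${per_person:,.0f} per person")       # ~$19,500

daily_transfer = 1            # GiveDirectly basic income: $1/day
years = 12
gd_per_person = daily_transfer * 365 * years
print(f"GiveDirectly: ${gd_per_person:,} per person")  # $4,380, under $5,000

weekly_giving = 70
total_given = weekly_giving * 52 * years
print(f"$70/week for {years} years: ${total_given:,}") # $43,680
print(f"people covered: {total_given / gd_per_person:.1f}")  # ~10 people
```

So $70 a week over the same 12-year window almost exactly funds ten such transfers, which is where the "10 people" claim comes from.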

That's the question Effective Altruism wants people to ask themselves: how can I, with the amount of money I have, do the most tangible good in the world, and what equation helps me decide?

Far from it being "absolutely morally repugnant" to walk past a homeless person and prioritise things we can't see, I think it is morally honest to look at the risks humanity faces and prioritise things that are underfunded, where $1 can go far. That takes several forms, both immediate, like the GiveDirectly example, and long term, where risks that could wipe out our species are considered. I think a strong argument can be made that, for minimal expense, it would be possible to have an effect on a potentially species-destroying technology like AI, and that money spent there is more likely to do good than adding it to an already well-funded area.

That's catastrophic risk at minimal expense versus present human suffering at high cost. How do you solve for the equilibrium there?


