
The problem is that some groups do commit crimes at higher rates than others. If we engineer the model to produce outputs with no disparities when disparities do exist in the real world, then our model is going to be biased.

For instance, men commit more crimes than women. If we are building an AI that predicts the risk of committing a crime (say, estimating rates of recidivism) and we forcibly make it report equal rates for men and women, then we will be creating a discriminatory system, because it will either underreport the risk for men or overreport the risk for women in order to achieve parity. Engineering parity of outcome in the model when the real-world outcomes have disparities necessarily results in bias.
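
To make that concrete, here's a toy sketch in Python (the 30% and 10% base rates and the parity_error helper are made up for illustration, not real statistics): a model constrained to report one shared rate for both groups has to misstate the risk for at least one of them, whatever rate it picks.

    # Hypothetical base rates, for illustration only: suppose the true
    # recidivism rate is 30% for group A and 10% for group B. A calibrated
    # model reports those rates; a parity-constrained model must report a
    # single shared rate r for both groups.
    true_rate = {"A": 0.30, "B": 0.10}

    def parity_error(shared_rate: float) -> dict:
        """How far a single shared prediction sits from each group's true rate."""
        return {g: shared_rate - r for g, r in true_rate.items()}

    # Whatever shared rate the model picks, at least one group is misreported:
    for r in (0.10, 0.20, 0.30):
        errors = parity_error(r)
        print(f"shared rate {r:.2f}: " + ", ".join(
            f"group {g} {'over' if e > 0 else 'under'}stated by {abs(e):.2f}"
            for g, e in errors.items() if e != 0))

The printout shows the trade-off directly: every choice of shared rate either understates the higher-rate group, overstates the lower-rate group, or both. Parity of reported rates and calibration to the true rates can't both hold when the underlying rates differ.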


