Artificial intelligence (AI) techniques are used to model human activities and to predict behavior. Such systems have repeatedly exhibited racial, gender and other kinds of bias, which are typically treated as technical problems. Here we argue that: 1) to eliminate such biases, we need a system capable of understanding the structure of human activities; and 2) to build such a system, we must solve foundational problems of AI, such as the common sense problem. Moreover, when informational platforms use these models to mediate interactions with their users, as is now commonplace, an illusion of progress arises: what is in fact an ever greater influence over our own behavior is mistaken for ever greater predictive accuracy. For these reasons, we argue that the bias problem is deeply connected to non-technical issues that must be discussed in public spaces.

DOI

https://doi.org/10.26512/rfmc.v8i3.34363
