2024
Volume 116, Issue 4
  • ISSN: 0002-5275
  • E-ISSN: 2352-1244

Abstract

Algorithmic bias can lead to harmful forms of algorithmic discrimination. In this article, I argue that technology does not exist in a vacuum and is always part of power relations. I therefore criticize technological fixes that reduce social problems to technical solutions. Dominant solutions such as ‘debiasing’, while important, avoid questions about deep-rooted injustices: they ‘accept’ existing social (power) structures and work within their frames. Justice requires attending to the structural dimensions of inequality. I draw attention to Langdon Winner’s call to ask whether a technology is ‘just’, rather than approaching the issue of algorithmic discrimination from a solutionist angle of optimization and functionality. I propose that we draw inspiration from the work of philosophers who approach justice from a structural or systemic perspective. This yields a philosophical approach that stretches the concept of ‘discrimination’ and exposes the relationships between inequalities. Moreover, it questions the structures and boundaries in which the technology is embedded. Finally, I criticize the current hype around AI, which distracts us from the fact that we have long had existential problems with AI and that these problems are deeply intertwined with (the history of) our social power relations.

DOI: 10.5117/ANTW2024.4.004.LANZ
References

  1. Amnesty International (2024) Etnisch profileren is overheidsbreed probleem. Nederlandse overheid moet burgers beschermen tegen discriminerende controles. https://www.amnesty.nl/content/uploads/2024/03/Amnesty-2024-Rapport-Etnisch-profileren-is-overheidsbreed-probleem-2.pdf.
  2. Balayn, A. & Gürses, S. (2021) Beyond Debiasing: Regulating AI and its Inequalities, European Digital Rights Report 2021. https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf.
  3. Barocas, S., Hardt, M. & Narayanan, A. (2023) Fairness and Machine Learning: Limitations and Opportunities. Cambridge (Mass.): MIT Press.
  4. Beeghly, E. (2015) What is a Stereotype? What is Stereotyping?, Hypatia, 30(4), pp. 675–691.
  5. Beeghly, E. (2021) What’s Wrong with Stereotypes? The Falsity Hypothesis, Social Theory and Practice, 47(1), pp. 33–61.
  6. Biddle, S. (2022) The Internet’s New Favorite AI Proposes Torturing Iranians and Surveilling Mosques, The Intercept, 8 December, https://theintercept.com/2022/12/08/openai-chatgpt-ai-bias-ethics/.
  7. Blodgett, S.L. et al. (2020) Language (Technology) is Power: A Critical Survey of ‘Bias’ in NLP, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5454–5476.
  8. Bostrom, N. (2013) Existential Risk Prevention as Global Priority, Global Policy, 4(1), pp. 15–31.
  9. Browne, S. (2015) Dark Matters: On the Surveillance of Blackness. Durham (NC): Duke University Press.
  10. Buolamwini, J. (2023) Unmasking AI: My Mission to Protect What Is Human in a World of Machines. New York: Penguin Random House.
  11. Crawford, K. (2016) Artificial Intelligence’s White Guy Problem, The New York Times, 25 June, https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html.
  12. Crawford, K. (2021) The Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press.
  13. Criado-Perez, C. (2019) Invisible Women: Exposing Data Bias in a World Designed for Men. New York: Vintage.
  14. Christman, J. (2009) The Politics of Persons: Individual Autonomy and Socio-historical Selves. New York: Cambridge University Press.
  15. Dickerson, K. (2015) The World’s Lust for New Technology Is Creating a ‘Hell on Earth’ in Inner Mongolia, Business Insider, 13 May, https://www.businessinsider.com/the-worlds-tech-waste-lake-in-mongolia-2015-5?international=true&r=US&IR=T.
  16. D’Ignazio, C. & Klein, L. (2020) Data Feminism. Cambridge (Mass.): MIT Press.
  17. Fraser, N. (1998) Sex, Lies and the Public Sphere: Some Reflections on the Confirmation of Clarence Thomas, Critical Inquiry, 18(1), pp. 595–612.
  18. Future of Life Institute (2023) Pause Giant AI Experiments: An Open Letter, https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
  19. Garamvolgyi, F. (2022) Why US Women Are Deleting Their Period Tracking Apps, The Guardian, 28 June, https://www.theguardian.com/world/2022/jun/28/why-us-woman-are-deleting-their-period-tracking-apps.
  20. Haslanger, S. (2022) Failures of Methodological Individualism: The Materiality of Social Systems, Journal of Social Philosophy, 53(4), pp. 512–534.
  21. Haslanger, S. (2023) Systemic and Structural Injustice: Is There a Difference?, Philosophy, 98(1), pp. 1–27.
  22. Hedden, B. (2021) On Statistical Criteria of Algorithmic Fairness, Philosophy and Public Affairs, 49(2), pp. 209–231.
  23. Hellman, D. (2020) Measuring Algorithmic Fairness, Virginia Law Review, 106(4), pp. 811–866.
  24. Heilbron, B. & Kootstra, A. (2023) Advocaten: fraudecontrole DUO treft vrijwel uitsluitend studenten met migratieachtergrond, Investico Onderzoeksjournalisten, 21 June, https://www.platform-investico.nl/artikel/advocaten-fraudecontrole-duo-treft-vrijwel-uitsluitend-studenten-met-migratieachtergrond/.
  25. Hern, A. (2018) Google’s Solution to Accidental Algorithmic Racism: Ban Gorillas, The Guardian, 12 January, https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people.
  26. Hill, K. (2022) Deleting Your Period Tracker Won’t Protect You, The New York Times, 30 June, https://www.nytimes.com/2022/06/30/technology/period-tracker-privacy-abortion.html.
  27. Hill, K. (2023) Eight Months Pregnant and Arrested After False Facial Recognition Match, The New York Times, 6 August, https://www.nytimes.com/2023/08/06/business/facial-recognition-false-arrest.html.
  28. IBM (2024) What is Artificial Intelligence (AI)?, 16 August, https://www.ibm.com/topics/artificial-intelligence.
  29. Johnston, S. (2018) Alvin Weinberg and the Promotion of the Technological Fix, Technology and Culture, 59(2), pp. 520–561.
  30. Kumar, V., Singhay Bhotia, T., Kumar, V. & Chakraborty, T. (2020) Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased Proximities in Word Embeddings, Transactions of the Association for Computational Linguistics, 8, pp. 486–503.
  31. Mackenzie, C. (2008) Introduction, in: C. Mackenzie & K. Atkins (eds.), Practical Identity and Narrative Agency. London: Routledge.
  32. Mackenzie, C., Rogers, W. & Dodds, S. (2013) Introduction: What Is Vulnerability, and Why Does It Matter for Moral Theory?, in: C. Mackenzie, W. Rogers & S. Dodds (eds.), Vulnerability: New Essays in Ethics and Feminist Philosophy. Oxford: Oxford University Press.
  33. Meaker, M. (2023) This Student Is Taking On ‘Biased’ Exam Software, Wired, 5 April, https://www.wired.com/story/student-exam-software-bias-proctorio/.
  34. Metz, C. & Schmidt, G. (2023) Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’, The New York Times, 29 March, https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html.
  35. Morozov, E. (2013) To Save Everything, Click Here: The Folly of Technological Solutionism. New York: Penguin Books.
  36. NOS (2022) Studente dient klacht in over ‘discriminerende’ antispieksoftware, 15 July, https://nos.nl/artikel/2436872-studente-dient-klacht-in-over-discriminerende-antispieksoftware.
  37. O’Neil, C. (2014) Weapons of Math Destruction. New York: Penguin Books.
  38. Perrigo, B. (2023) OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic, Time Magazine, 18 January, https://time.com/6247678/openai-chatgpt-kenya-workers/.
  39. Raad voor Strafrechtstoepassing en Jeugdbescherming (RSJ) (2021) Advies ‘Risicotaxatie in de strafrechtstoepassing. Praktische, ethische en rechtspositionele aspecten belicht’, https://www.rijksoverheid.nl/documenten/rapporten/2021/11/26/tk-bijlage-5-rsj-advies-risicotaxatie-in-de-strafrechtstoepassing-18-november-2021.
  40. Sharon, T. & Gellert, R. (2023) Regulating Big Tech Expansionism? Sphere Transgressions and the Limits of Europe’s Digital Regulatory Strategy, Information, Communication & Society, pp. 1–18. https://doi.org/10.1080/1369118X.2023.2246526
  41. Siffels, L.E. & Sharon, T. (2024) Where Technology Leads, the Problems Follow: Technosolutionism and the Dutch Contact Tracing App, Philosophy & Technology, 37, art. 125. https://doi.org/10.1007/s13347-024-00807-y
  42. Tacheva, J. & Ramasubramanian, S. (2023) AI Empire: Unraveling the Interlocking Systems of Oppression in Generative AI’s Global Order, Big Data & Society, 10(2).
  43. Winner, L. (1980) Do Artifacts Have Politics?, Daedalus, 109(1), pp. 121–136.