Domains of Artificial Intelligence Utilization in Fundamental Human Rights in Light of Examining Its Legitimacy

Authors

    Abbas Salahshour Ph.D. Student, Department of Theology, Najafabad Branch, Islamic Azad University, Najafabad, Iran
    Ahmad Reza Tavakoli * Assistant Professor, Department of Theology, Najafabad Branch, Islamic Azad University, Najafabad, Iran Tavakolimady@gmail.com
    Mohammadali Heidari Assistant Professor, Department of Theology, Najafabad Branch, Islamic Azad University, Najafabad, Iran

Keywords:

Artificial Intelligence, Data Protection, Fundamental Rights, Legitimacy

Abstract

Today, the manifestations of human rights in artificial intelligence, and the question of its legitimacy, present a critical challenge: how to address the associated gaps and risks and their influence on the principles of human rights. Key concerns in this domain include algorithmic transparency, cybersecurity vulnerabilities, injustice, bias and discrimination, privacy, data protection, liability for harm, and the absence of accountability, all of which are examined here using a library-based research method. The study takes the concept of "vulnerability" as a foundation for understanding the intersection of artificial intelligence and fundamental human rights, which is frequently at the heart of these concerns, and employs it to explore how the application of such technology can be legitimized. The findings indicate that, despite progress in the legal and regulatory governance of artificial intelligence, and the acknowledgment that this domain requires continuous evaluation and an agile approach, the profound impacts of AI technologies (particularly on vulnerable individuals and groups and their human rights) make it crucial to evaluate their legitimacy through the lens of ethics before introducing them into spheres that affect individual rights.


References

Chalmers, A. W. (2019). Choosing lobbying sides: the general data protection regulation of the European Union. Journal of Public Policy, 39. https://doi.org/10.1017/S0143814X18000223

Cheng, L., Varshney, K. R., & Liu, H. (2021). Socially Responsible AI Algorithms: Issues, Purposes, and Challenges. Journal of Artificial Intelligence Research, 71.

Coeckelbergh, M. (2020). AI Ethics. MIT Press. https://doi.org/10.7551/mitpress/12549.001.0001

Commissioner for Human Rights. (2019). Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights. European Office of the Commissioner for Human Rights.

Ghaly, M. (2022). On the Classical Beginnings of Self-Propelled Machines: Aristotle's Politics. Online article.

Le Moli, G. (2022). AI vs Human Dignity: When Human Underperformance is Legally Required. Groupe d'études géopolitiques, (4).

Hallström, J. (2022). Embodying the past, designing the future: technological determinism reconsidered in technology education. International Journal of Technology and Design Education, 32.

Janssen, H. L. (2020). An approach for a fundamental rights impact assessment to automated decision-making. International Data Privacy Law, 10(1). https://doi.org/10.1093/idpl/ipz028

Kaminski, M. E. (2019). Binary Governance: Lessons from the GDPR's Approach to Algorithmic Accountability. Southern California Law Review, 92. https://doi.org/10.2139/ssrn.3351404

Edwards, L., & Veale, M. (2017). Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For. Duke Law & Technology Review, 16. https://doi.org/10.31228/osf.io/97upg

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys, 54(6). https://doi.org/10.1145/3457607

Mostafavi Ardebili, M. M., Taghizadeh Ansari, M., & Rahmati Far, S. (2022). Functions and Requirements of Artificial Intelligence from the Perspective of Fair Trial. Biannual Journal of Law and New Technologies, 3(6).

Rai, A., Constantinides, P., & Sarker, S. (2019). Next-generation digital platforms: toward human-AI hybrids. MIS Quarterly, 43(1).

Richardson, R., Schultz, J., & Crawford, K. (2019). Dirty data, bad predictions: how civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review Online, 94.

Salari, A. (2018). Equality and Non-Discrimination in the Human Rights System. Legal Research Journal, 17(35).

Smith, M. (1994). Technological Determinism in American Culture. In Does Technology Drive History? The Dilemma Of Technological Determinism.

Laulhé Shaelou, S., & Razmetaeva, Y. (2024). Challenges to Fundamental Human Rights in the age of Artificial Intelligence Systems: shaping the digital legal order while upholding Rule of Law principles and European values. ERA Forum, 24. https://doi.org/10.1007/s12027-023-00777-2

Stone, P., Brooks, R., Brynjolfsson, E., & Calo, R. (2016). Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence. Report of the 2015 Study Panel, September 2016.

Veale, M., Binns, R., & Edwards, L. (2018). Algorithms that remember: model inversion attacks and data protection law. Philosophical Transactions of the Royal Society A, 376(2133). https://doi.org/10.1098/rsta.2018.0083

Wachter, S., Mittelstadt, B., & Russell, C. (2021). Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI. Computer Law & Security Review, 41. https://doi.org/10.1016/j.clsr.2021.105567

Published

2025-05-23

Submitted

2025-02-01

Revised

2025-05-04

Accepted

2025-05-18

Issue

Section

Articles

How to Cite

Salahshour, A., Tavakoli, A. R., & Heidari, M. (1404). Domains of Artificial Intelligence Utilization in Fundamental Human Rights in Light of Examining Its Legitimacy. The Encyclopedia of Comparative Jurisprudence and Law. https://jecjl.com/index.php/jecjl/article/view/76
