ARTIFICIAL INTELLIGENCE ACTS UNDER THE NATURAL LAW THEORY AND POSITIVISM PERSPECTIVE: A CRITICAL REVIEW

Authors

  • Qazi Naved Ahmad
  • Mudasar Latif Memon
  • Sardar Ali Shah

Keywords:

artificial intelligence regulation, natural law theory, legal positivism, EU AI Act, fundamental rights, algorithmic accountability, AI ethics, risk-based regulation

Abstract

The rapid spread of artificial intelligence (AI) technologies across societies worldwide has created an urgent need for comprehensive regulatory mechanisms to govern their development, use, and influence. As jurisdictions around the globe struggle to build effective AI governance structures, fundamental questions arise about the philosophical basis of such regulatory strategies. This article critically reviews AI regulation through the twin lenses of natural law theory and legal positivism, examining how these two opposing schools of jurisprudence inform contemporary approaches to AI governance. Natural law theory grounds legal justification in universal moral principles accessible to human reason, emphasizing inherent human dignity, rights, and moral duties. Legal positivism, by contrast, holds that law is a human creation, distinct from morality, deriving its authority from institutional sources rather than metaphysics. We critically examine how these theoretical frameworks are expressed in existing AI regulatory efforts, with particular attention to the European Union AI Act and its risk-based approach to classification and compliance. By analyzing fundamental rights protection, algorithmic transparency, accountability principles, and ethical concerns, we show how natural law perspectives highlight the role of universal moral imperatives in AI governance, whereas positivist perspectives focus on formal legal frameworks and procedural compliance. The review also discusses algorithmic bias and discrimination in both theoretical frameworks, analyzes the difficulties of attributing liability and responsibility, and examines the tension between the demands of innovation and the protection of rights.
We identify major weaknesses in current structures, including jurisdictional fragmentation, enforcement difficulties, and the need for international harmonization. Finally, we propose a balanced model of AI governance that incorporates insights from both schools of jurisprudence while addressing the practical realities of implementing AI regulation on the ground. This analysis contributes to ongoing debates about the most suitable philosophical underpinnings for AI regulation and offers guidance to policymakers, legal experts, and technology developers navigating this difficult field.


Published

29-11-2025

How to Cite

ARTIFICIAL INTELLIGENCE ACTS UNDER THE NATURAL LAW THEORY AND POSITIVISM PERSPECTIVE: A CRITICAL REVIEW. (2025). Journal of Media Horizons, 6(6), 788-848. https://jmhorizons.com/index.php/journal/article/view/1062