A study led by Charles Darwin University finds that artificial intelligence is threatening human dignity on a global scale. The research indicates AI is rapidly reshaping legal and ethical landscapes while undermining democratic values and deepening systemic biases across Western societies.
Dr. Maria Randazzo, the study’s lead author and an academic from CDU’s School of Law, reports that current regulation fails to prioritize fundamental human rights and freedoms. The research identifies privacy, anti‑discrimination, user autonomy, and intellectual property rights as areas where protections are insufficient. This failure is attributed mainly to the untraceable nature of many algorithmic models used in artificial intelligence systems.
Dr. Randazzo terms this lack of transparency the “black box problem.” She explains that decisions made through deep‑learning or machine‑learning processes are impossible for humans to trace. This opacity makes it difficult for users to determine if and why an AI model has violated their rights and dignity, which in turn obstructs their ability to seek justice where necessary. “This is a very significant issue that is only going to get worse without adequate regulation,” Dr. Randazzo stated.
The study contends that AI is not intelligent in a human sense. “It is a triumph in engineering, not in cognitive behavior,” Dr. Randazzo said. “It has no clue what it’s doing or why—there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.” This distinction underscores the mechanical nature of AI operations, which function without the contextual understanding inherent in human cognition.
Currently, the world’s three dominant digital powers are taking different regulatory paths. The United States follows a market‑centric model, China employs a state‑centric one, and the European Union has adopted a human‑centric approach. Dr. Randazzo identified the EU’s model as the preferred path for protecting human dignity. However, she warned that without a global commitment to the same goal, even that advanced approach falls short of providing comprehensive protection.
A central warning from the research is the need to anchor AI development to human values. “Globally, if we don’t anchor AI development to what makes us human—our capacity to choose, to feel, to reason with care, empathy and compassion—we risk creating systems that devalue and flatten humanity into data points, rather than improve the human condition,” she said. Dr. Randazzo further emphasized the ethical imperative, stating, “Humankind must not be treated as a means to an end.”
The paper, titled “Human dignity in the age of Artificial Intelligence: an overview of legal issues and regulatory regimes,” was published in the Australian Journal of Human Rights. This research is the first installment in a planned trilogy of works by Dr. Randazzo on the topic.