
The dangers of AI (Artificial Intelligence) according to the Catholic Church

Written by Alain Pilote on Saturday, 01 March 2025. Posted in Church teachings

As mentioned in the previous article, artificial intelligence (AI) is a rapidly growing technology that raises numerous questions, even threatening the very nature and survival of the human person. Pope Francis has made several statements on the subject of AI and its potential dangers, notably in his address to the G7 leaders in June 2024 (see page 11). More recently, on January 28, 2025, at the request of the Holy Father, the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education issued a note entitled Antiqua et Nova, addressing "the relationship between artificial intelligence and human intelligence," explaining in greater detail the limitations and dangers of AI. Here are its key points.

A. Pilote

With wisdom both ancient and new (cf. Mt. 13:52), we are called to reflect on the current challenges and opportunities posed by scientific and technological advancements, particularly by the recent development of Artificial Intelligence (AI).

AI can... generate new "artifacts" with a level of speed and skill that often rivals or surpasses what humans can do, such as producing text or images indistinguishable from human compositions. This raises critical concerns about AI's potential role in the growing crisis of truth in the public forum...

AI marks a new and significant phase in humanity's engagement with technology, placing it at the heart of what Pope Francis has described as an "epochal change." Its impact is felt globally and in a wide range of areas, including interpersonal relationships, education, work, art, healthcare, law, warfare, and international relations.

The difference between AI and human intelligence

The Vatican document then explains the fundamental difference between artificial intelligence and human intelligence: "AI's advanced features give it sophisticated abilities to perform tasks, but not the ability to think."

Unlike a machine, a human being also has a body, feelings, relationships with other people and, ultimately, a soul — and therefore the knowledge of what is right or wrong, a moral dimension that a machine does not take into account.

In this context, human intelligence becomes more clearly understood as a faculty that forms an integral part of how the whole person engages with reality. Authentic engagement requires embracing the full scope of one's being: spiritual, cognitive, embodied, and relational... A proper understanding of human intelligence, therefore, cannot be reduced to the mere acquisition of facts or the ability to perform specific tasks. Instead, it involves the person's openness to the ultimate questions of life and reflects an orientation toward the True and the Good. (Editor's note: That is to say, God, whose existence cannot be measured by a computer)...

In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.

Consequently, although AI can simulate aspects of human reasoning and perform specific tasks with incredible speed and efficiency, its computational abilities represent only a fraction of the broader capacities of the human mind. For instance, AI cannot currently replicate moral discernment or the ability to establish authentic relationships.

Moreover, human intelligence is situated within a personally lived history of intellectual and moral formation that fundamentally shapes the individual's perspective, encompassing the physical, emotional, social, moral, and spiritual dimensions of life. Since AI cannot offer this fullness of understanding, approaches that rely solely on this technology or treat it as the primary means of interpreting the world can lead to "a loss of appreciation for the whole, for the relationships between things, and for the broader horizon."

Since AI lacks the richness of corporeality, relationality, and the openness of the human heart to truth and goodness, its capacities—though seemingly limitless—are incomparable with the human ability to grasp reality. So much can be learned from an illness, an embrace of reconciliation, and even a simple sunset; indeed, many experiences we have as humans open new horizons and offer the possibility of attaining new wisdom. No device, working solely with data, can measure up to these and countless other experiences present in our lives.

Considering all these points, as Pope Francis observes, "the very use of the word 'intelligence'" in connection with AI "can prove misleading" and risks overlooking what is most precious in the human person. In light of this, AI should not be seen as an artificial form of human intelligence but as a product of it.

The role of ethics in the use of AI

Seen as a fruit of the potential inscribed within human intelligence, scientific inquiry and the development of technical skills are part of the "collaboration of man and woman with God in perfecting the visible creation." At the same time, all scientific and technological achievements are, ultimately, gifts from God. Therefore, human beings must always use their abilities in view of the higher purpose for which God has granted them.

Nevertheless, not all technological advancements in themselves represent genuine human progress. The Church is particularly opposed to those applications that threaten the sanctity of life or the dignity of the human person. Like any human endeavor, technological development must be directed to serve the human person and contribute to the pursuit of "greater justice, more extensive fraternity, and a more humane order of social relations," which are "more valuable than advances in the technical field" (Gaudium et spes, n. 35).

Like any product of human creativity, AI can be directed toward positive or negative ends. When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation. Yet, as in all areas where humans are called to make decisions, the shadow of evil also looms here. Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used.

In addition to determining who is responsible, it is essential to identify the objectives given to AI systems. Although these systems may use unsupervised autonomous learning mechanisms and sometimes follow paths that humans cannot reconstruct, they ultimately pursue goals that humans have assigned to them and are governed by processes established by their designers and programmers. Yet, this presents a challenge because, as AI models become increasingly capable of independent learning, the ability to maintain control over them to ensure that such applications serve human purposes may effectively diminish. This raises the critical question of how to ensure that AI systems are ordered for the good of people and not against them.

The dangers of AI

Because "true wisdom demands an encounter with reality,"the rise of AI introduces another challenge. Since AI can effectively imitate the products of human intelligence, the ability to know when one is interacting with a human or a machine can no longer be taken for granted. Generative AI can produce text, speech, images, and other advanced outputs that are usually associated with human beings. Yet, it must be understood for what it is : a tool, not a person. This distinction is often obscured by the language used by practitioners, which tends to anthropomorphize (to treat an object as if it is human in appearance) AI and thus blurs the line between human and machine.

In this context, it is important to clarify that, despite the use of anthropomorphic language, no AI application can genuinely experience empathy (the ability to understand or feel what another person is experiencing). Emotions cannot be reduced to facial expressions or phrases generated in response to prompts; they reflect the way a person, as a whole, relates to the world and to his or her own life, with the body playing a central role. True empathy requires the ability to listen, recognize another's irreducible uniqueness, welcome their otherness, and grasp the meaning behind even their silences.

Unlike the realm of analytical judgment in which AI excels, true empathy belongs to the relational sphere. It involves intuiting and apprehending the lived experiences of another while maintaining the distinction between self and other. While AI can simulate empathetic responses, it cannot replicate the eminently personal and relational nature of true empathy.

In light of the above, it is clear why misrepresenting AI as a person should always be avoided; doing so for fraudulent purposes is a grave ethical violation that could erode social trust. Similarly, using AI to deceive in other contexts—such as in education or in human relationships, including the sphere of sexuality—is also to be considered immoral and requires careful oversight to prevent harm, maintain transparency, and ensure the dignity of all people.

In an increasingly isolated world, some people have turned to AI in search of deep human relationships, simple companionship, or even emotional bonds. However, while human beings are meant to experience authentic relationships, AI can only simulate them...

If we replace relationships with God and with others with interactions with technology, we risk replacing authentic relationality with a lifeless image (cf. Ps. 106:20; Rom. 1:22-23). Instead of retreating into artificial worlds, we are called to engage in a committed and intentional way with reality, especially by identifying with the poor and suffering, consoling those in sorrow, and forging bonds of communion with all.

AI and the world of work

Another area where AI is already having a profound impact is the world of work. As in many other fields, AI is driving fundamental transformations across many professions, with a range of effects...

AI is currently eliminating the need for some jobs that were once performed by humans. If AI is used to replace human workers rather than complement them, there is a "substantial risk of disproportionate benefit for the few at the price of the impoverishment of many." Additionally, as AI becomes more powerful, there is an associated risk that human labor may lose its value in the economic realm... Seen in this light, AI should assist, not replace, human judgment.

Misinformation, deepfakes

AI could be used as an aid to human dignity if it helps people understand complex concepts or directs them to sound resources that support their search for the truth. However, AI also presents a serious risk of generating manipulated content and false information, which can easily mislead people due to its resemblance to the truth...

Yet, the consequences of such aberrations and false information can be quite grave. For this reason, all those involved in producing and using AI systems should be committed to the truthfulness and accuracy of the information processed by such systems and disseminated to the public.

While AI has a latent potential to generate false information, an even more troubling problem lies in the deliberate misuse of AI for manipulation. This can occur when individuals or organizations intentionally generate and spread false content with the aim of deceiving or causing harm, such as "deepfake" images, videos, and audio—a false depiction of a person, edited or generated by an AI algorithm. The danger of deepfakes is particularly evident when they are used to target or harm others. While the images or videos themselves may be artificial, the damage they cause is real, leaving "deep scars in the hearts of those who suffer it" and "real wounds in their human dignity."

On a broader scale, by distorting "our relationship with others and with reality," AI-generated fake media can gradually undermine the foundations of society. This issue requires careful regulation, as misinformation—especially through AI-controlled or influenced media—can spread unintentionally, fueling political polarization and social unrest.

When society becomes indifferent to the truth, various groups construct their own versions of "facts," weakening the "reciprocal ties and mutual dependencies" that underpin the fabric of social life. As deepfakes cause people to question everything and AI-generated false content erodes trust in what they see and hear, polarization and conflict will only grow.

Such widespread deception is no trivial matter; it strikes at the core of humanity, dismantling the foundational trust on which societies are built. (Editor's note: Let us remember that the words "social credit" also mean trust—the trust that we can live together in society and not fear our neighbor.)

AI, privacy and surveillance

While there can be legitimate and proper ways to use AI in keeping with human dignity and the common good, using it for surveillance aimed at exploiting, restricting others' freedom, or benefitting a few at the expense of the many is unjustifiable. The risk of surveillance overreach must be monitored by appropriate regulators to ensure transparency and public accountability. Those responsible for surveillance should never exceed their authority, which must always favor the dignity and freedom of every person as the essential basis of a just and humane society.

Furthermore, "fundamental respect for human dignity demands that we refuse to allow the uniqueness of the person to be identified with a set of data." This especially applies when AI is used to evaluate individuals or groups based on their behavior, characteristics, or history—a practice known as "social scoring". (Editor's Note: This is reminiscent, for example, of the infamous Chinese « social credit » system, which precisely assigns a score or points to citizens based on whether or not they comply with the regulations of the communist government.)

AI and warfare

While AI's analytical abilities could help nations seek peace and ensure security, the "weaponization of Artificial Intelligence" can also be highly problematic... The ease with which autonomous weapons make war more viable militates against the principle of war as a last resort in legitimate self-defense, potentially increasing the instruments of war well beyond the scope of human oversight and precipitating a destabilizing arms race, with catastrophic consequences for human rights.

In particular, Lethal Autonomous Weapon Systems, which are capable of identifying and striking targets without direct human intervention, are a "cause for grave ethical concern" because they lack the "unique human capacity for moral judgment and ethical decision-making." For this reason, Pope Francis has urgently called for a reconsideration of the development of these weapons and a prohibition on their use, starting with "an effective and concrete commitment to introduce ever greater and proper human control. No machine should ever choose to take the life of a human being."

Since it is a small step from machines that can kill autonomously with precision to those capable of large-scale destruction, some AI researchers have expressed concerns that such technology poses an "existential risk" by having the potential to act in ways that could threaten the survival of entire regions or even of humanity itself... Like any tool, AI is an extension of human power, and while its future capabilities are unpredictable, humanity's past actions provide clear warnings. The atrocities committed throughout history are enough to raise deep concerns about the potential abuses of AI.

 AI and our relationship with God

Within some circles of scientists and futurists, there is optimism about the potential of artificial general intelligence (AGI), a hypothetical form of AI that would match or surpass human intelligence and bring about unimaginable advancements. Some even speculate that AGI could achieve superhuman capabilities. At the same time, as society drifts away from a connection with the transcendent, some are tempted to turn to AI in search of meaning or fulfillment—longings that can only be truly satisfied in communion with God.

However, the presumption of substituting God for an artifact of human making is idolatry, a practice Scripture explicitly warns against (e.g., Ex. 20:4; 32:1-5; 34:17). Moreover, AI may prove even more seductive than traditional idols for, unlike idols that "have mouths but do not speak; eyes, but do not see; ears, but do not hear" (Ps. 115:5-6), AI can "speak," or at least gives the illusion of doing so (cf. Rev. 13:15). (Editor's note: This verse from the Book of Revelation refers to the "Mark of the Beast," without which one will neither be able to buy nor sell.)

AI cannot possess many of the capabilities specific to human life, and it is also fallible. By turning to AI as a perceived "Other" greater than itself, with which to share existence and responsibilities, humanity risks creating a substitute for God... While AI has the potential to serve humanity and contribute to the common good, it remains a creation of human hands, bearing "the imprint of human art and ingenuity" (Acts 17:29). It must never be ascribed undue worth.

Concluding reflections

The "essential and fundamental question" remains "whether in the context of this progress man, as man, is becoming truly better, that is to say, more mature spiritually, more aware of the dignity of his humanity, more responsible, more open to others, especially the neediest and the weakest, and readier to give and to aid all." (John Paul II, Encyclical letter Redemptor Hominis, n. 15.)

As a result, it is crucial to know how to evaluate individual applications of AI in particular contexts to determine whether its use promotes human dignity, the vocation of the human person, and the common good. As with many technologies, the effects of the various uses of AI may not always be predictable from their inception.

As these applications and their social impacts become clearer, appropriate responses should be made at all levels of society, following the principle of subsidiarity. Individual users, families, civil society, corporations, institutions, governments, and international organizations should work at their proper levels to ensure that AI is used for the good of all.

AI should be used only as a tool to complement human intelligence rather than replace its richness. Cultivating those aspects of human life that transcend computation is crucial for preserving "an authentic humanity" that "seems to dwell in the midst of our technological culture, almost unnoticed, like a mist seeping gently beneath a closed door."

True wisdom

The vast expanse of the world's knowledge is now accessible in ways that would have filled past generations with awe. However, to ensure that advancements in knowledge do not become humanly or spiritually barren, one must go beyond the mere accumulation of data and strive to achieve true wisdom.

This wisdom is the gift that humanity needs most to address the profound questions and ethical challenges posed by AI: "Only by adopting a spiritual way of viewing reality, only by recovering a wisdom of the heart, can we confront and interpret the newness of our time." Such "wisdom of the heart" is "the virtue that enables us to integrate the whole and its parts, our decisions and their consequences." It "cannot be sought from machines," but it "lets itself be found by those who seek it and be seen by those who love it; it anticipates those who desire it, and it goes in search of those who are worthy of it (cf. Wis 6:12-16)."

In a world marked by AI, we need the grace of the Holy Spirit, who "enables us to look at things with God's eyes, to see connections, situations, events and to uncover their real meaning."

Since a "person's perfection is measured not by the information or knowledge they possess, but by the depth of their charity," how we incorporate AI "to include the least of our brothers and sisters, the vulnerable, and those most in need, will be the true measure of our humanity." The "wisdom of the heart" can illuminate and guide the human-centered use of this technology to help promote the common good, care for our "common home," advance the search for the truth, foster integral human development, favor human solidarity and fraternity, and lead humanity to its ultimate goal: happiness and full communion with God.

The Supreme Pontiff, Francis, at the Audience granted on 14 January 2025 to the undersigned Prefects and Secretaries of the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education, approved this Note and ordered its publication.

Given in Rome, at the offices of the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education, on 28 January 2025, the Liturgical Memorial of Saint Thomas Aquinas, Doctor of the Church.

About the Author

Alain Pilote


Alain Pilote has been the editor of the English edition of MICHAEL for several years. Twice a year, we organize a week of study on the social doctrine of the Church and its application, and Mr. Pilote is the instructor during these sessions.

 
