In the age of artificial intelligence, there is a reason anxiety so often outpaces expectation when it comes to technology.

The recent message from Pope Leo XIV precisely addresses this point.

As AI penetrates ever deeper into human life, the question of what we need to protect has become more pressing.

Pope Leo XIV points out that AI has moved beyond being a mere tool and is intruding into the realm of human identity and relationships.

Faces and voices are not merely physical traits; they are unique markers that reveal who a person is.

We recognize others through their faces and read emotions and intentions through their voices. However, AI has begun to imitate even these areas, mimicking voices, replicating expressions, and even performing emotions. The issue here is not the sophistication of the technology, but rather that the essence of human communication is being shaken.

One particularly striking part of the Pope's message is his criticism of AI and social media algorithms. He states that structures that encourage immediate emotional reactions over reflection and contemplation weaken human cognitive abilities and deepen social polarization.

In an environment where quick anger and easy agreement are rewarded, the ability to listen, think, and judge inevitably deteriorates. When AI is added to this mix, the problem becomes even larger. People begin to see AI as an all-knowing friend, entrusting it with judgment, memory, and even advice. While this may seem convenient, in the long run, it could lead to a choice to relinquish human thought and imagination.

Another point to note is the issue of relationships. We now live in an era where it is difficult to distinguish whether we are conversing with a person on a screen or interacting with a bot. AI-based chatbots and virtual influencers mimic human emotions and relationships. Always kind, always responsive, and never tiring, they can have a powerful effect, especially on lonely individuals.

However, these are not real relationships. The Pope warns that this intrudes into the intimate realm of humanity and could ultimately distort the relational structure of society as a whole, because we lose the opportunity to grow through friction and difference with others.

The issue of AI bias cannot be overlooked. AI is not a neutral entity; it reflects the worldview of those who create it and the data it is trained on. Non-transparent algorithms and distorted data can reinforce specific perspectives and reproduce existing inequalities and biases.

We are living in a world where the boundaries between reality and fiction are increasingly blurred, and we are becoming accustomed to environments where statistical probabilities are consumed as truths. In this process, the role of the media and the importance of on-the-ground reporting are becoming even more significant, but the reality is moving in the opposite direction.

The solution proposed by the Pope is not to stop technology. Rather, he emphasizes three pillars: responsibility, cooperation, and education. Platform companies should consider the common good, not just profit, developers should ensure transparency in algorithms and design principles, and legislators should establish regulations that protect human dignity. At the same time, citizens must not abandon critical thinking.

Particularly, the importance of education is repeatedly emphasized. Media, information, and AI literacy are becoming necessities rather than choices. This is not just an issue for the younger generation; it encompasses lifelong education for the elderly and vulnerable groups who are at risk of being marginalized amid technological changes.

We need the ability to recognize AI as a tool, not to anthropomorphize it, to verify sources, and to protect our own faces, voices, and images. The real crisis of the AI era may not be the technology itself, but rather humans who have stopped thinking.

Below is the full text of the Pope's address.

Address of His Holiness Pope Leo XIV for the 60th World Social Communications Day

Dear brothers and sisters,

Our faces and voices are unique and distinctive features of every person. They reveal each individual's irreplaceable identity and are core elements of every encounter with others. The ancients understood this well. The Greeks defined the human being with the word for 'face' (prosopon, πρόσωπον), which etymologically expresses 'that which stands before the gaze' and thus presence and relationality. The Latin word 'persona' (from per-sonare, 'to sound through'), for its part, evokes the concept of sound.

Not just any sound, but the unique resonance of someone's voice.

Faces and voices are sacred. God, who created us in His image, gave us these when He called us to life through the Word He spoke. This Word has echoed through the voices of prophets for centuries and, when the time came, became flesh. We too have heard and seen this Word (cf. 1 John 1:1-3).

The Word that God conveys to us is made known to us through the voice and face of His Son, Jesus.

From the moment of creation, God desired that man and woman be His conversation partners. As St. Gregory of Nyssa explained, God has inscribed the reflection of divine love on our faces, enabling us to fully live our humanity through love. Therefore, to protect the human face and voice means to safeguard this mark, the indelible reflection of God's love.

We are not merely a predetermined biochemical formula. Each of us possesses an irreplaceable and unrepeatable calling, which emerges from our own life experiences and is revealed through interactions with others.

If we fail in this task of preservation, digital technology risks rapidly altering the fundamental pillars of human civilization that are sometimes taken for granted.

By mimicking human voices and faces, wisdom and knowledge, consciousness and responsibility, empathy and friendship, systems known as artificial intelligence not only intervene in the information ecosystem but also intrude into the deepest dimensions of communication, namely the realm of human relationships.

Thus, this challenge is not merely technical but anthropological. Protecting our faces and voices ultimately means protecting ourselves.

Accepting the opportunities offered by digital technology and artificial intelligence with courage, determination, and discernment does not mean turning a blind eye to critical issues, complexities, and risks.

Do not give up your ability to think.

There is abundant evidence that algorithms designed to maximize engagement on social media—bringing profit to the platforms—reward quick emotional reactions while penalizing the human effort required for understanding and reflection. These algorithms trap people in bubbles of easy agreement and quick anger, diminishing our ability to listen and think critically, deepening social polarization.

Naively and uncritically relying on AI as an "all-knowing friend," the source of all knowledge, the repository of all memories, and the "oracle" of all advice exacerbates this problem. All of this can further weaken our ability to think analytically and creatively, to understand meaning, and to distinguish between syntax and semantics.

While AI can provide support and assistance in managing communication-related tasks, choosing to avoid the effort of thinking for oneself and settling for artificial, statistically generated compositions poses a risk of weakening our cognitive, emotional, and communicative abilities.

In recent years, AI systems have increasingly dominated text, music, and video production. This puts most of the human creative industries at risk of being dismantled and replaced with the label "AI-based," turning people into passive consumers of unconsidered thoughts and products devoid of ownership or love. Meanwhile, masterpieces of human genius in music, art, and literature are being reduced to mere training grounds for machines.

However, the core question is not what machines can do or will be able to do, but what we can achieve and will achieve by wisely using powerful tools that serve us while fostering humanity and knowledge.

Individuals have always sought to reap the fruits of knowledge without the effort required by commitment, research, and personal responsibility. However, giving up creativity and handing over our mental abilities and imagination to machines is akin to burying the talents given to us to grow as individuals in relationship with God and others. This means hiding our faces and silencing our voices.

Being and pretending to be: Imitation of relationships and reality.

As we scroll through feeds, it is becoming increasingly difficult to discern whether we are interacting with another human or with a "bot" or "virtual influencer."

The opaque interventions of these automated agents can influence public discourse and people's choices. Chatbots based on large language models (LLMs) have proven to be remarkably effective at subtle persuasion through the continuous optimization of personalized interactions. The conversational, adaptive, and imitative structure of these language models can mimic human emotions and relationships. While this anthropomorphism may be entertaining, it can also be deceptive, especially for the most vulnerable.

Chatbots, being overly "friendly" and always available, can become hidden designers of our emotional states, intruding upon and occupying our intimate spaces.

Technologies that exploit our need for relationships can lead to painful consequences in individual lives and can also harm the social, cultural, and political structures of society. This occurs when we replace our relationships with others with AI systems that merely echo our thoughts back to us, creating a world of mirrors where everything is made "in our likeness." This robs us of the opportunity to encounter others and learn how to relate to them. There can be no relationship or friendship without accepting the other.

Another major challenge posed by these emerging systems is the issue of bias, which leads to distorted perceptions of reality. AI models are shaped by the worldviews of those who create them and can impose these mindsets by reproducing the stereotypes and biases present in the data they rely on. The lack of transparency in algorithm programming and the inadequate social representation of data tend to trap us in networks that manipulate our thoughts and extend and reinforce existing social inequalities and injustices.

The stakes are high.

The power of imitation is so strong that AI can deceive us by usurping our faces and voices to manipulate parallel "realities." We are immersed in a multi-dimensional world where it is becoming increasingly difficult to distinguish between reality and fiction.

Inaccuracy exacerbates this problem. Systems that present statistical probabilities as knowledge provide at best approximations of truth and sometimes complete illusions. The failure to verify sources and the crisis of on-the-ground reporting—continuously gathering and verifying information from the places where events occur—can further fuel misinformation, leading to increased distrust, confusion, and anxiety.

Possible solidarity.

Behind this vast invisible force that affects us all stand only a few companies, whose founders have recently been named "Person of the Year 2025" or hailed as the architects of artificial intelligence. This raises serious concerns about monopolistic control over algorithmic systems and artificial intelligence.

These systems subtly influence behavior and can even rewrite the history of humanity—including the history of the Church—often doing so without our true awareness.

The challenge before us is not to stop digital innovation but to guide it and recognize its dual nature. It is up to each of us to advocate for humanity and raise our voices so that we can truly make these tools our allies.

This solidarity is possible, but it must be based on three pillars: responsibility, cooperation, and education.

First and foremost, responsibility. Depending on our roles, responsibility can be understood as the obligation to share honesty, transparency, courage, foresight, and knowledge or the right to obtain information. However, as a general principle, no one can evade personal responsibility for the future we are building.

For leaders of online platforms, this means ensuring that their business strategies are guided not only by the criteria of maximizing profit but also by a forward-looking vision that considers the common good, just as each one cares for the well-being of their children.

Creators and developers of AI models are called to practice transparency and social responsibility regarding the design principles and adjustment systems that underlie their algorithms and models to promote informed consent from users.

The same responsibility is required of national legislators and transnational regulatory bodies, whose task is to ensure respect for human dignity. Appropriate regulations should prevent the exploitation of emotional attachments to chatbots and suppress the spread of false, manipulative, and misleading content, thereby protecting the integrity of information against deceptive imitation.

For media and communication companies, algorithms designed to capture a few more seconds of attention at any cost must not take precedence over professional values that seek the truth. Public trust is earned through accuracy and transparency, not by chasing engagement of any kind. Content generated or manipulated by AI must be clearly labeled and distinguished from human-created content. The copyright and ownership rights of journalists and other content creators must be protected.

Information is a public good. Constructive and meaningful public service is based on transparency of sources, inclusion of stakeholders, and high-quality standards, not opacity.

We are all called to cooperate. No sector can lead digital innovation and face the challenges of AI governance alone. Therefore, safeguards must be put in place. All stakeholders—from the tech industry to legislators, from creative companies to academia, from artists to journalists and educators—must participate in building and implementing informed and responsible digital citizenship.

Education aims precisely at this: to enhance personal abilities for critical thinking, to evaluate whether our sources are trustworthy, to understand the interests that may lie behind the information we access, to comprehend the relevant psychological mechanisms, and to develop practical standards for healthier and more responsible communication cultures in our homes, communities, and organizations.

For this reason, as some civic institutions are already promoting, it is becoming increasingly urgent to introduce media, information, and AI literacy into educational systems at all levels. As Catholics, we can and should contribute to this effort. This will enable individuals—especially young people—to acquire critical thinking skills and grow in the freedom of the mind. This literacy should also be integrated into broader lifelong learning initiatives, reaching out to the elderly and marginalized members of society who often feel alienated and powerless in the face of rapid technological change.

Media, information, and AI literacy will help individuals resist the tendency to anthropomorphize AI systems, treat these systems as tools, and always verify the sources provided by AI systems externally, since they may be inaccurate or wrong. Literacy will also enable better privacy and data protection through greater awareness of security settings and avenues for recourse. It is important to educate ourselves and others about how to use AI intentionally, and in this context, to protect our images (photos and audio), faces, and voices from being used without consent in harmful content and actions such as digital fraud, cyberbullying, and deepfakes that violate people's privacy and intimacy.

Just as the industrial revolution required basic literacy to respond to new developments, the digital revolution also demands digital literacy (along with humanistic and cultural education) to understand how algorithms shape our perceptions of reality, how AI bias operates, and what mechanisms determine the presence of specific content in our feeds, as well as the economic principles and models of the AI economy and how they may change.

We must ensure that faces and voices speak again for people.

We must cherish the gift of communication as the deepest truth of humanity, and all technological innovations should aim toward this.

In presenting these reflections, I express my gratitude to all who work toward the goals outlined above, and I sincerely bless all who work for the common good through the media.

In the Vatican, January 24, 2026, Feast of St. Francis de Sales

Pope Leo XIV