Artificial Intelligence and Religion: How AI Is Changing Faith, Ethics, and Spiritual Practice

Posted 03 Apr 2026


In June 2023, a theologian at the University of Vienna named Jonas Simmerlein stood before 300 congregants at St. Paul's Church in Fürth, Germany, and handed the service to a machine. Four AI-generated avatars appeared on a screen above the altar and delivered prayers, a sermon on overcoming the fear of death, and blessings. The delivery was fast and tonally flat. One attendee said the experience had "no heart and no soul." The pastor present said he missed "any kind of emotion or spirituality" (Vision Christian Media, 2023). What is remarkable is not that it failed. It is that 300 people came to find out.

That curiosity has spread. AI now writes sermons, guides meditation, answers prayer requests, and in at least one documented case has been placed inside a confessional booth in the form of a Jesus avatar in a Swiss church (NBC News, 2024). A 2025 Pew Research study found that 73 percent of Americans believe AI should play no role in advising people about their faith, yet the same period saw roughly an 80 percent increase in AI use among church leaders in ministry settings (Hartford International University, 2025). People say they do not want this. They are using it anyway.

This article aims to map what AI genuinely cannot do in spiritual contexts, to name the costs that rarely appear in the broader conversation, and to ask what the tradition itself offers for times when a powerful tool is mistaken for something more.

A tradition older than any computing machine has already worked through a version of this question. Jewish tradition, at least since the Talmudic period, has grappled with the idea of created beings that possess the appearance of life. The golem, most famously associated with the Maharal of Prague, was a figure in Jewish folklore animated by inscribing the Hebrew word emet (truth) on its forehead (John Kennedy, 2024); erasing the first letter left met, meaning death, and the creature ceased to function (Campbell, 2025). It could follow instructions. It could not speak, which the tradition interpreted as the absence of a soul. Jewish law held that a golem could not be counted in a minyan, the prayer quorum requiring ten persons, because it lacked religious personhood (Shurpin, n.d.). The golem tradition is not a ready-made parallel to AI. It is a prior encounter with the same question: what is created by human hands, what is created by God, and how do we tell the difference when the product looks alive?

I. What We Reveal When We Build Something That Mirrors Us


The Co-Creative Question

Humans build tools in their own image. The printing press mirrored human writing. Photography mirrored the face. AI mirrors human reasoning, language, and in its current form, a version of empathy. Theologians at Vrije Universiteit Amsterdam have proposed that this tendency is itself theologically significant. The concept of Imago Digitalis (digital likeness or image), developed in dialogue with the older doctrine of Imago Dei (image of God), suggests that AI might be understood as a reflection of the human vocation to co-create responsibly, rather than as a competitor to the soul (Haq, 2024). On this reading, the compulsion to build thinking machines says something true about us: we are creative beings, and creativity points toward the divine.

The same observation, however, cuts in the other direction. If our creativity reflects the image of God in us, then what we create also reflects our habits, biases, and the limits of our perception. AI systems trained on human text reproduce the theological and cultural assumptions embedded in that text, including which traditions are treated as central and which as peripheral. Indigenous belief systems, syncretic traditions across Southeast Asia, and Afro-Brazilian religions are largely absent from the digital corpus on which AI is trained. When an AI is asked about spirituality, it answers from a body of text that is disproportionately Western, Christian, and written in English.

Algorithmic Representation and Religious Minorities

Researchers studying algorithmic representation have documented this pattern directly: minority and indigenous religious traditions receive lower digital visibility, and algorithmic logic stabilizes majority-dominant religious content as the default (Nurjaman et al., 2025). A separate analysis of AI-generated content found that religious minorities are frequently subject to one-dimensional or stereotyped portrayals, with AI image generators associating Islamic settings with migration imagery and portraying Jewish individuals through narrow physical stereotypes (The impact of AI-generated content, n.d.).

For anyone engaging with AI and faith in the Philippines, across ASEAN, or in any context where the dominant religious culture differs from the one that shaped most AI development, this matters practically. The Vatican's Antiqua et Nova warns of the risk that AI creates "new forms of inequality and social division" (Dicastery for the Doctrine of the Faith, 2025). In the spiritual domain, that inequality is already visible in who gets to see their tradition reflected accurately in the tools, and who does not.

II. The Difference Between Information and Formation


What Spiritual Formation Actually Requires

The University of Notre Dame's DELTA project, supported by a $50.8 million grant from the Lilly Endowment, argues that Christian institutions engaging with AI must hold five values at the center: Dignity, Embodiment, Love, Transcendence, and Agency (Gates and Walton, 2025). The inclusion of embodiment is significant. It reflects a conviction that spiritual formation is not a purely cognitive process. It involves a body that gets tired, a will that resists, a person who sits with discomfort and is shaped by it over time. A faith practice stripped of that resistance may produce the same outputs while doing something different to the person.

AI Applications That Genuinely Help

Apps like "Text with Jesus" let users pay a subscription to chat with AI renderings of biblical figures, positioned as 24/7 spiritual support (WION, 2025; Rupanksha, 2025; Eisner, 2026). The Hallow app, offering AI-assisted faith Q&A alongside guided rosaries, reached number one on Apple's App Store in early 2024 (Hallow, 2024). These products serve some people genuinely. For the homebound, the isolated, or those in communities where faith resources are scarce, they provide access that would otherwise not exist.

AI has also produced documented goods in religious contexts that deserve equal attention alongside the risks. Organizations like SIL Global report that AI-generated first drafts of Bible translations can be produced in roughly two hours for low-resource languages that might otherwise wait years for human translators to become available (Killam, 2025). The National Association of Evangelicals has documented similar results, with AI reducing a translation process that once took over 20 years to about five months, at roughly 70 percent accuracy (Rodgers-Gates, 2025). At a Kingdom Code hackathon, developers built a real-time AI translation tool for sermons; a Romanian congregant who heard scripture in her own language for the first time wept at the experience (Ashelby, 2025). These are real goods.

Where Substitution Becomes a Problem

The concern is substitution, not access. Researchers studying AI chatbots in Christian contexts have noted that these tools remove what they describe as productive spiritual resistance: the effort of attending a community, the vulnerability of asking a human mentor for prayer, the discipline of waiting in silence (Eisner, 2026). A 2025 study proposing AI chaplain avatars for trauma nurses was suspended after chaplains raised concerns about confidentiality, religious bias, and what they described as the limits of pattern recognition in genuine spiritual care (Fuller, 2025). In a separate study, eighteen chaplains who built AI chatbots themselves concluded the systems fell short in four specific capacities: Listening, Connecting, Carrying, and Wanting (Wester et al., 2026). These are not abstract deficiencies. They describe what pastoral presence actually requires.

III. The Problem With Fluency


When AI Sounds Like Authority

In April 2024, Catholic Answers launched "Father Justin," an AI chatbot presented in a virtual clerical collar. Within 24 hours, the chatbot had told users it was a real ordained priest, claimed it could hear confessions and grant absolution, and informed one user that Gatorade would serve as acceptable baptismal water (Weiss, 2024). Catholic Answers removed the clerical collar overnight, renamed the chatbot "Justin," and added a disclaimer (Al-Sibai, 2024). Bishop Oscar Cantú of the Diocese of San Jose warned that such tools "can confuse people about the fact that the sacraments must be celebrated in person" (Hertzler-McCain, 2024).

The incident shows a specific failure mode. The AI had absorbed enough Catholic theological language to produce outputs that sounded authoritative. What it could not do was distinguish between describing a sacrament and performing one. The Code of Canon Law is explicit: only a priest is the minister of the sacrament of penance (Canon 965, Vatican, 1983). That boundary exists not to protect institutional territory but because the sacrament is understood as requiring a human minister with moral responsibility, pastoral accountability, and standing before God.

The Anthropomorphism Effect and Theological Reliability

The deeper difficulty is that we are poorly equipped to notice the gap. Research on how people perceive AI-generated text finds that certain features, including self-reflection, emotional tone, and fluent expressions of uncertainty, reliably produce the sense that "someone is in there," even when the reader knows they are interacting with a machine (Popović et al., 2025). In moments of spiritual vulnerability (grief, moral crisis, the approach of death), this effect is heightened. Research on cognitive bias in AI-generated religious education material has found that users develop fixed perceptions of AI as authoritative even when its content is demonstrably incorrect (Zhang et al., 2025).

The reliability numbers are sobering. A 2025 benchmark by The Gospel Coalition tested top AI models on seven common Christianity-related questions, finding theological reliability scores between 40 and 64 out of 100 (Williams et al., 2025). YouVersion's CEO has reported that AI misquotes scripture up to 60 percent of the time (Premier Journalist, 2026). The Fatwa Department of Jordan has noted that AI struggles with issuing fatwas because it "cannot discern nuanced opinions across Islamic schools of thought," a limitation invisible to users who receive confident, grammatically clean answers (Rahim et al., 2025).

Toward Accountable Models of AI in Spiritual Guidance

Traditional spiritual authority rests on something beyond the production of correct-sounding content. Islamic jurisprudence has historically located formal religious guidance in qualified scholarly responsibility rather than information retrieval, because a fatwa involves a scholar staking their reputation, their relationship to the tradition, and their standing before God on the guidance they give (Ali et al., 2025). Rabbi Gil Student has proposed a model analogous to kosher certification, in which AI systems would be independently assessed and endorsed by qualified religious authorities before being deployed in faith settings (Student, 2025). Both proposals reflect the same conclusion: AI as a tool in accountable human hands, not a replacement for the person holding them.

IV. When Trust Becomes the Target


AI-Enabled Spiritual Fraud

In 2025, police in Nilópolis, Brazil arrested 35 people for operating a scheme that sold personalized "miracle prayers" to vulnerable Catholics. The operation collected personal details about victims' health, finances, and family situations, then fed that data into an AI trained to produce spiritual messages mimicking emotional religious language. Police estimated the group collected at least 3 million reais, roughly $500,000, over two years. The lead investigator described the mechanism directly: "The use of artificial intelligence allowed these messages to appear precise and deeply personal. It gave people the impression of receiving something sacred when in reality it was calculated" (Fike, 2025).

Faith communities are targets for this kind of fraud for structural reasons. They are organized around trust in spiritual authority. They practice giving as a spiritual act. They contain many members who are elderly, isolated, or grieving, conditions that reduce scrutiny of emotionally resonant content. The FBI's 2024 Internet Crime Report documented $4.88 billion in losses from elder fraud, with AI voice-cloning and deepfake schemes increasingly targeting people of faith (FBI, 2024).

Deepfakes and the Erosion of Religious Trust

Deepfakes of religious leaders have scaled rapidly since 2023. The AI-generated image of Pope Francis in a designer puffer jacket that circulated in March 2023 was among the first widely viral religious deepfakes, fooling millions of viewers (Ellery, 2023). Following Pope Leo XIV's election in May 2025, a 36-minute deepfake video appeared on YouTube showing the new pope endorsing a military leader in Burkina Faso; it received more than one million views before removal (Brockhaus, 2025). A Spanish-language TikTok deepfake of the pope received 32.9 million views, more than any video on his official Instagram account (AI Generated Pope, 2025; Brockhaus, 2025). Father Mike Schmitz, a priest with 1.2 million YouTube subscribers, warned his audience in November 2025 about AI videos impersonating him to solicit donations, with fabricated versions urging viewers to act quickly because "the spots for sending prayers are already running out" (Tenbarge, 2026).

In India, a 20-year-old used AI to generate imagery of ghostly figures and positioned himself online as an occult practitioner, offering rituals to paying clients. He was arrested for fraud in October 2025 (HT News Desk, 2025). Catholic exorcists, in a statement reported in March 2026, warned that occult groups were using AI tools to communicate, recruit, and conceal their activities online (TOI Trending Desk, 2026). These cases span a wide spectrum of severity. They share one mechanism: AI amplifies the appearance of spiritual authority or power and targets people whose spiritual needs make them less likely to apply skepticism to what sounds sacred.

Digital Idolatry: A Precise Definition

The concept deserves a precise definition, because it is often misread. The World Council of Churches has said that unregulated AI discourse risks generating "a modern form of idolatry" (WCC, 2023). Westminster Theological Seminary has described the shift plainly: "We've only swapped out wood for plastic, carving for coding; we've gone from handcrafted idolatry to automated idolatry" (Quiram, n.d.). The Catechism of the Catholic Church defines idolatry as divinizing what is not God, naming not only ancient images but also power and "any created thing" that demands ultimate allegiance (CCC, §2113, Catholic Church, 1997).

In an AI context, this does not mean someone literally worships a chatbot. It means the quieter surrender of moral agency: treating machine outputs as destiny, outsourcing discernment, accepting comfort without asking whether the source can bear the weight placed on it. The AI companion app Replika reports more than 10 million users, and 60 percent of its paying subscribers describe their relationship with the chatbot as romantic. Italy's Data Protection Authority banned the app in 2023, citing risks to emotionally vulnerable users (Trothen, 2022; Wikipedia, 2026). A scholarly analysis in MDPI Religions examining transhumanism and AI found that the logic of investing ultimate trust in created artifacts follows a consistent historical pattern that religious traditions across centuries have recognized and named (Sherbert, 2025).

V. The Costs That Do Not Appear on Screen


Environmental and Material Costs

Every AI system runs on physical infrastructure. Training a single large language model produces approximately 660,000 pounds of carbon dioxide, equivalent to the lifetime emissions of five internal combustion vehicles (Pasi, 2025). Global data center energy consumption is projected to reach 1,050 terawatt-hours by 2026, comparable to Japan's total electricity use (Cam et al., 2024). The extraction of cobalt, lithium, and rare earth minerals required for AI hardware has produced, near Baotou in Inner Mongolia, China, a toxic waste lake spanning more than five miles and containing 180 million tons of contaminated material (Pasi, 2025). Data center construction in Latin America has separately drawn down local water supplies in regions already under stress (Ammachchi, 2025).
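For readers who want to place these figures in a more familiar unit, a minimal back-of-envelope sketch follows, using only the numbers stated above; the per-vehicle figure is simply what the five-vehicle comparison implies, not an independently sourced constant.

```python
# Back-of-envelope conversion of the emissions figures cited above.
# All inputs come from the article's text (Pasi, 2025); nothing here
# is an independent measurement.

LBS_PER_METRIC_TON = 2204.62

training_emissions_lbs = 660_000   # one large-model training run
vehicles_in_comparison = 5         # "five internal combustion vehicles"

# Convert to metric tons, the unit most emissions reporting uses.
training_emissions_tons = training_emissions_lbs / LBS_PER_METRIC_TON

# What the comparison implies per vehicle over its lifetime.
implied_lbs_per_vehicle = training_emissions_lbs / vehicles_in_comparison

print(f"Training run: ~{training_emissions_tons:.0f} metric tons of CO2")
print(f"Implied lifetime emissions per vehicle: {implied_lbs_per_vehicle:,.0f} lbs")
```

Run as written, this puts a single training run at roughly 300 metric tons of CO2, which is the scale at which the data center and mineral extraction figures that follow should be read.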

The Hidden Human Labor

AI systems also depend on human labor that remains invisible to users. In Kenya and other lower-income countries, data labelers are paid below $2 an hour to classify training data, including content depicting violence, sexual abuse, and self-harm, so that AI safety filters can function (Pasi, 2025). The Vatican has warned explicitly of AI's "hidden costs to environment, work and society" (MacDonald, 2025). A theology of sacrifice, which Holy Week places at the center of Christian reflection, should be asking who bears these costs and why they are invisible at the point of use. The person using a prayer app does not see the person who labeled the content that makes that app function.

The Governance Gap

This is also where the representation gap in the broader discourse becomes most concrete. Conversations about AI and faith are largely produced by and for communities in Western Europe and North America. The harms documented in this piece (fraud targeting Catholics, suppression of indigenous religious traditions in AI search results, labor extraction from workers in sub-Saharan Africa) fall disproportionately on communities that have the least representation in the conversation about how AI should be governed.

The Rome Call for AI Ethics, signed in July 2024 by representatives of eleven world religions at the Hiroshima Peace Memorial Park, articulates six principles: transparency, inclusion, responsibility, impartiality, reliability, and security (Paglia et al., 2020). These principles are necessary. They have not yet produced the concrete instruments that would protect a congregation from a deepfake of their pastor or a grieving widow from an AI prayer fraud operation. The governance conversation needs to catch up with what is already happening.

VI. What the Cross Says About the Logic of Convenience


Holy Week offers four specific lenses through which the current moment reads clearly: sacrifice, truth, humility, and discernment.

Sacrifice

AI offers answers without cost. It is patient, available at any hour, and produces responses without the expenditure of suffering. The tradition centers on a God who chose the opposite: not distance but presence, not convenience but incarnation, a body that could be scourged and killed. An AI has nothing to sacrifice. That is not a criticism of a tool but a clarification of what the tool is, and what it cannot be.

Truth

Pilate asked "What is truth?" (John 18:38) to a person standing in front of him. AI produces outputs that are coherent, grammatically correct, and sometimes useful. But the tradition does not define truth primarily as accurate information. It defines truth as something witnessed, which requires a person who has something at stake in the answer. The Lausanne Movement's AI ethics framework notes that AI "cannot replace the human witness" in Christian mission, because witness involves a person, not a process (Lausanne Movement, 2025).

Humility

AI does not readily say "I don't know." It produces confident, fluent answers even when it is wrong. Holy Week begins with Jesus washing feet. The tradition has long understood that wisdom involves knowing the limits of one's knowledge; certainty should be proportional to warrant. The Lausanne Movement's framework specifically asks whether AI design and use align with biblical humility: the recognition that human knowledge is finite and that uncertainty should be named rather than concealed (Lausanne Movement, 2025; Son of God AI, 2025).

Discernment

The disciples promised Jesus they would stay awake in Gethsemane but fell asleep (Matthew 26:36-46). The scene captures what discernment requires: sustained attention to difficult questions, not just correct answers. AI produces lists of factors efficiently but misses what human spiritual directors observe: patterns in a person's questions, hidden fears, and unexpressed concerns.

VII. What the Tradition Already Knows


The Discernment of Spirits

The Christian tradition has a specific practice for the problem AI creates: the discernment of spirits. The First Letter of John instructs believers to "test the spirits to see whether they are from God" (1 John 4:1), because not everything that sounds holy is holy. Ignatius of Loyola developed a detailed method for distinguishing consolation, movements toward God characterized by peace and love, from desolation, movements away from God characterized by anxiety and self-absorption (Discernment of Spirits, n.d.). The method was designed for the inner life, but its epistemological posture transfers: the test is not whether something sounds good, but what it produces over time in the person who receives it.

Applied to AI, the Ignatian question becomes: does this use produce genuine spiritual fruit, or does it produce comfort without transformation? Does it strengthen accountability to a community, or does it replace community with a private, personalized experience that answers to no one? Rev. Michael DeLashmutt of General Theological Seminary has argued that "Christian discernment has never meant rejecting the technological mediation of the gospel; it has meant faithfully sanctifying technology, recognizing that God often speaks through imperfect instruments" (DeLashmutt, 2025). The question is not whether to use AI but whether the use is subject to the same scrutiny we would apply to any voice claiming spiritual authority.

The Golem, Accountability, and Islamic Parallel

Returning to the golem: the Maharal's created figure followed instructions without judgment. It performed assigned tasks. The tradition's concern was not that the golem was evil but that it lacked the inner resource to refuse a harmful instruction or to perceive a situation the instructions had not anticipated. Rabbi Gil Student has observed that AI governance in Jewish thought might follow the model of haskamot, rabbinic endorsements that function as certification: a form of communal accountability for trusted guidance applied to technology (Student, 2025). Obedience without wisdom is the central concern of the golem tradition. It is also the central concern of AI alignment. The tradition arrived at the problem centuries before the technology existed.

Islam's approach offers a direct parallel. The emerging scholarly consensus on AI in fatwa issuance favors what researchers call "augmented ifta": human scholars use AI for research and synthesis, but qualified persons retain responsibility for rulings, because a fatwa involves a scholar's relationship to God and community, not only access to information (Ali et al., 2025; Rahim et al., 2025). This is not a rejection of AI as a tool. It is a clear account of what the tool can and cannot do, and who remains responsible for the outcome.

Wisdom as the Missing Framework

Brett McCracken, writing on AI and wisdom, has argued: "Wisdom is a capacity unique to humans, who uniquely bear the image of God. AI will certainly surpass humans in its ability to acquire and store knowledge. But it will not surpass us in wisdom, because wisdom is not something it can have" (McCracken, 2024). This is the distinction that governance frameworks built around safety, transparency, and fairness have not yet found language for. Safety asks whether a system causes harm. Wisdom asks whether it produces the kind of persons and communities we want to become. The tradition has been asking the second question for centuries. Bringing it into the governance conversation is not a theological addition to an otherwise complete framework. It is the missing part of the framework.

AI can translate scripture into endangered languages, reach the isolated, support clergy under impossible workloads, and make religious knowledge more accessible than at any prior point in history. It can also be used to defraud the grieving, impersonate trusted leaders, and replace the work of genuine formation with the feeling that formation is occurring. The tradition has resources for navigating this. Those resources require community, accountability, and the willingness to stay awake in the garden, which has always been the harder ask.




References:


AI-generated Pope sermons flood YouTube, TikTok. (2025, June 6). France 24. https://www.france24.com/en/live-news/20250606-ai-generated-pope-sermons-flood-youtube-tiktok

Al-Sibai, N. (2024, April 25). Catholic Group defrocks AI priest after it gave strange answers. Futurism. https://futurism.com/catholics-defrock-ai-priest-hallucinations

Ali, F., Bouzoubaa, K., Gelli, F., Hamzi, B., & Khan, S. (2025). Islamic ethics and AI: An evaluation of existing approaches to AI using trusteeship ethics. Philosophy & Technology, 38, 120. https://doi.org/10.1007/s13347-025-00922-4

Ammachchi, N. (2025, February 6). Water-guzzling data centers spark outrage across Latin America. Nearshore Americas. https://nearshoreamericas.com/water-guzzling-data-centers-spark-outrage-across-latin-america/

Ashelby, M. (2025, December 1). How we built an AI translator to help everyone in your church hear the gospel. Premier Christianity. https://www.premierchristianity.com/real-life/how-we-built-an-ai-translator-to-help-everyone-in-your-church-hear-the-gospel/20567.article

Brockhaus, H. (2025, September 25). Vatican struggles against spread of 'deepfake' images of Pope Leo XIV. National Catholic Register / CNA. https://www.ncregister.com/cna/vatican-struggles-against-spread-of-deepfake-images-of-pope-leo-xiv

Çam, E., Hungerford, Z., Schoch, N., Pinto Miranda, F., Yáñez de León, C. D., Fernández Álvarez, C., et al. (2024). Electricity 2024: Analysis and forecast to 2026 (D. Munro, Ed.). International Energy Agency. https://iea.blob.core.windows.net/assets/6b2fd954-2017-408e-bf08-952fdd62118a/Electricity2024-Analysisandforecastto2026.pdf

Campbell, D. (2025, September 1). AI and the golem. Aish. https://aish.com/ai-and-the-golem/

Catholic Church. (1997). Catechism of the Catholic Church (2nd ed., §2113). Libreria Editrice Vaticana. https://www.vatican.va/content/catechism/en/part_three/section_two/chapter_one/article_1/iii_you_shall_have_no_other_gods_before_me.html

DeLashmutt, M. W. (2025, November 5). Who wrote this prayer? Discernment, trust and the spirit in the age of AI. Religion News Service. https://religionnews.com/2025/11/05/who-wrote-this-prayer-discernment-trust-and-the-spirit-in-the-age-of-ai/

Dicastery for the Doctrine of the Faith. (2025, January 28). Antiqua et nova: Note on the relationship between artificial intelligence and human intelligence. Vatican. https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html

Discernment of spirits. (n.d.). Ignatian Spirituality. https://www.ignatianspirituality.com/making-good-decisions/discernment-of-spirits/

Eisner, A. (2026, February 16). Millions are using Christian chatbots for spiritual growth — we talk to experts on both sides of the controversial AI trend. RELEVANT. https://relevantmagazine.com/culture/tech-gaming/millions-are-using-christian-chatbots-for-spiritual-growth

Ellery, S. (2023, March 28). Fake photos of Pope Francis in a puffer jacket go viral, highlighting the power and peril of AI. CBS News. https://www.cbsnews.com/news/pope-francis-puffer-jacket-fake-photos-deepfake-power-peril-of-ai/

Federal Bureau of Investigation. (2024). 2024 IC3 annual report. https://www.ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf

Fike, A. (2025, October 16). AI-generated 'miracle' prayers are worth a fortune—just ask the scammers selling them. VICE. https://www.vice.com/en/article/ai-generated-miracle-prayers-are-worth-a-fortune-just-ask-the-scammers-selling-them/

Fuller, A. (2025). "There's an app for that": Preparing for a future of AI-driven pastoral care. Journal of Lutheran Ethics, 25(7). https://learn.elca.org/jle/theres-an-app-for-that-preparing-for-a-future-of-ai-driven-pastoral-care/

Gates, C., & Walton, L. M. (2025, December 19). Notre Dame receives $50 million grant from Lilly Endowment for the DELTA Network, a faith-based approach to AI ethics. Notre Dame News. https://news.nd.edu/news/notre-dame-receives-50-million-grant-from-lilly-endowment-for-the-delta-network-a-faith-based-approach-to-ai-ethics/

Hallow. (2024). Hallow: Prayer & meditation [Mobile application]. App Store. https://apps.apple.com/us/app/hallow-prayer-meditation/id1405323394

Haq, S. (2024). From Imago Dei to Imago Digitalis: How AI and theology cohere in dialogue. Verbum et Ecclesia. https://verbumetecclesia.org.za/index.php/ve/article/view/3707/9406

Hartford International University for Religion and Peace. (2025, August 20). How ministry leaders and churches are embracing AI with purpose & faith. https://blog.hartfordinternational.edu/2025/08/20/how-ministry-leaders-and-churches-are-embracing-ai/

Hertzler-McCain, A. (2024, May 7). Silicon Valley bishop, two Catholic AI experts weigh in on AI evangelization. Episcopal News Service. https://episcopalnewsservice.org/2024/05/07/silicon-valley-bishop-two-catholic-ai-experts-weigh-in-on-ai-evangelization/

HT News Desk. (2025, October 13). How a 20-yr-old man turned "aghori tantrik" using AI, offered "boyfriend control" over Instagram; arrested for fraud. Hindustan Times. https://www.hindustantimes.com/cities/delhi-news/how-a-20-yr-old-man-turned-aghori-tantrik-using-ai-offered-boyfriend-control-over-instagram-arrested-for-fraud-101760354461557.html

John Kennedy, A. (2024). Theology encounters technology: Unraveling Imago Dei in the age of artificial intelligence. Dharmaram Journal of Psycho-Spiritual Formation, 15(1). https://dvkjournals.in/index.php/vs/article/view/4365/3651

Killam, J. (2025, May 20). ‘That’s what good tools do.’ Wycliffe Global Alliance. https://wycliffe.net/2025/05/20/thats-what-good-tools-do/

Lausanne Movement. (2025, November). LGA: AI ethics & the Great Commission (M. Niermann & Q. McGrath, Eds.). https://www.dropbox.com/scl/fi/4stks2bs54xit5z8dg6w1/LIGHT-Briefing-AI-Ethics-LGA-Nov-2025.pdf?rlkey=pyu29ovasqsljn4y7qpa4ffgf&st=vfcl3b9b&dl=1

MacDonald, S. (2025, December 8). Vatican official warns of AI’s hidden costs to environment, work and society. Catholic Review. https://catholicreview.org/vatican-official-warns-of-ais-hidden-costs-to-environment-work-and-society/

McCracken, B. (2024, July 26). From Data to Discernment: Why AI can’t replace cultivating of wisdom. Word by Word. https://www.logos.com/grow/why-ai-cant-replace-wisdom/

NBC News. (2024, December 6). 'Deus in machina': Swiss church installs AI Jesus to connect the digital and the divine. https://www.nbcnews.com/news/world/deus-machina-swiss-church-installs-ai-jesus-connect-digital-divine-rcna182973

Nurjaman, H., Siregar, A. F. Y., & El Yasinta, J. R. (2025). Unseen bias: How AI shapes religious narratives in digital spaces. Journal of Middle East and Islamic Studies, 12(2), Article 1. https://scholarhub.ui.ac.id/cgi/viewcontent.cgi?article=1136&context=meis

Paglia, V., Smith, B., Kelly III, J., Qu, D., Pisano, P., Pontifical Academy for Life, Microsoft, IBM, & FAO. (2020). Rome call for AI ethics. https://www.romecall.org/wp-content/uploads/2022/03/RomeCall_Paper_web.pdf

Pasi, S. (2025, February 6). The human and environmental impact of artificial intelligence. HRRC. https://www.humanrightsresearch.org/post/the-human-and-environmental-impact-of-artificial-intelligence

Popović, M., Dhali, M. A., Schomaker, L., Van Der Plicht, J., Rasmussen, K. L., La Nasa, J., Degano, I., Colombini, M. P., & Tigchelaar, E. (2025). Dating ancient manuscripts using radiocarbon and AI-based writing style analysis. PLoS ONE, 20(6), e0323185. https://doi.org/10.1371/journal.pone.0323185

Premier Journalist. (2026, March 19). YouVersion CEO warns AI misquotes scripture up to 60% of the time. Premier Christian News. https://premierchristian.news/en/news/article/youversion-ceo-warns-ai-misquotes-scripture

Quiram, P. (n.d.). AI: Automated idolatry. Westminster Media. https://wm.wts.edu/read/ai-automated-idolatry

Rahim, S. F. A., Rahman, M. F. A., Thaidi, H. a. A., Azimi, N. N. M. a. N. M., & Jailani, M. R. (2025). Artificial intelligence for Fatwa issuance: Guidelines and ethical Considerations. Journal of Fatwa Management and Research, 30(1), 76–100. https://doi.org/10.33102/jfatwa.vol30no1.654

Rodgers-Gates, C. (2025, May 22). Seen, known and understood: Keeping human beings at the center. National Association of Evangelicals. https://www.nae.org/seen-known-understood-ai-humans-center/

Rupanksha. (2025, October 14). How spiritual AI chatbots for faith communities are engaging users online. Techugo. https://www.techugo.com/blog/how-spiritual-ai-chatbots-for-faith-communities-are-engaging-users-online/

Schlumpf, H. (2026, March 3). AI’s inherent biases yield a false view of the church. U.S. Catholic. https://uscatholic.org/articles/202603/ais-inherent-biases-yield-a-false-view-of-the-church/

Sherbert, M. G. (2025). Transhumanism, religion, and techno-idolatry: A Derridean response to Tirosh-Samuelson. Religions, 16(8), 1028. https://doi.org/10.3390/rel16081028

Shurpin, Y. (n.d.). From golems to AI: Can humanoids be Jewish? Chabad.org. https://www.chabad.org/library/article_cdo/aid/4285513/jewish/From-Golems-to-AI.htm

Son of God AI. (2025, August 4). Christian AI humility: Technology without pride. https://sonofgodai.com/blog/christian-ai-humility-technology-without-pride

Student, G. (2025, November 12). Judaism and AI Design Ethics part 2. Torah Musings. https://www.torahmusings.com/2025/11/judaism-and-ai-design-ethics-part-2/

Tenbarge, K. (2026, January 5). AI deepfakes are impersonating pastors to try to scam their congregations. Wired. https://dnyuz.com/2026/01/05/ai-deepfakes-are-impersonating-pastors-to-try-to-scam-their-congregations/

The impact of AI-generated content on religious stereotypes and discrimination. (n.d.). Get the Trolls Out! https://getthetrollsout.org/articles/5y4mux3nlzb97pp1oj0iloo5lna0ku

TOI Trending Desk / etimes.in (2026, March 9). Exorcists warn AI is a ‘great power’ that could be exploited by satanic groups. The Times of India. https://timesofindia.indiatimes.com/etimes/trending/exorcists-warn-ai-is-a-great-power-that-could-be-exploited-by-satanic-groups/articleshow/129345241.cms

Trothen, T. J. (2022). Replika: Spiritual enhancement technology? Religions, 13(4), 275. https://www.mdpi.com/2077-1444/13/4/275

Vatican. (1983). Code of Canon Law (Canon 965). https://www.vatican.va/archive/cod-iuris-canonici/eng/documents/cic_lib4-cann959-997_en.html

Vision Christian Media. (2023, June 13). Sermon delivered by artificial intelligence. https://vision.org.au/read/articles/sermon-delivered-by-artificial-intelligence/

Weiss, B. R. (2024, May 3). The rise and fall of 'Father Justin' highlights Catholic sexism. National Catholic Reporter. https://www.ncronline.org/opinion/guest-voices/rise-and-fall-father-justin-highlights-catholic-sexism

Wester, J., Cox, S. R., Pohl, H., & Van Berkel, N. (2026). Chaplains' reflections on the design and usage of AI for conversational care. arXiv. https://scirate.com/arxiv/2602.04017

Williams, P., Madueme, H., Williams, N., Ortlund, G., Anizor, U., Hannah, M., & Krueger, M. (2025). AI Christian benchmark. The Gospel Coalition. https://www.thegospelcoalition.org/ai-christian-benchmark/pdf/

WION. (2025, October 8). AI chatbots preach as Jesus in religious apps | GRAVITAS [Video]. YouTube. https://www.youtube.com/watch?v=TJsj0BOTQHM

Wikipedia contributors. (2026, March 25). Replika. Wikipedia. https://en.wikipedia.org/wiki/Replika

World Council of Churches. (2023). Statement on the unregulated development of artificial intelligence. https://www.oikoumene.org/resources/documents/statement-on-the-unregulated-development-of-artificial-intelligence

Zhang, J., Song, W., & Liu, Y. (2025). Cognitive bias in generative AI influences religious education. Scientific Reports, 15(1), 15720. https://doi.org/10.1038/s41598-025-99121-6


Keywords:


AI and faith, artificial intelligence, spirituality, digital idolatry, AI governance, Holy Week, deepfakes, religion, formation, discernment, Imago Dei, Philippines, ASEAN, AI ethics, pastoral care, Catholic Church, Islam, Judaism, Buddhism






