Elder Gerrit W. Gong of The Church of Jesus Christ of Latter-day Saints has announced a multifaith initiative to build an AI evaluation tool that tests how artificial intelligence programs respond to religious questions and portray faith traditions. Speaking at the Rome Summit on Ethics and Artificial Intelligence in Vatican City, Elder Gong said computer scientists from Brigham Young University are partnering with evangelical, Catholic, and Jewish institutions on the “Faith and Ethics AI Evaluation,” a tool designed to assess the moral compass of AI systems as they increasingly become primary sources of religious information.
What you should know: The evaluation tool will test AI programs across seven key categories to ensure respectful and accurate treatment of religious content.
- The system evaluates whether AI programs are faith-faithful, accurate and expert, child-appropriate, pluralism-aware, resistant to deluge (high-volume search results that portray faith inaccurately), human-centered, and multilingual (a rough sketch of these categories appears after this list).
- Partner institutions include Baylor University, the University of Notre Dame, and Yeshiva University in New York City, with plans to expand internationally.
- The team is also collaborating with “socially responsible frontier model AI companies” that recognize the need for fair and respectful responses to faith-based queries.
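The announcement does not describe how the evaluation is implemented. Purely as an illustration, the seven categories could be organized as a simple per-response scoring rubric. The sketch below is hypothetical: the category keys, the `ResponseScore` class, and the 0.0 to 1.0 scale are assumptions for this article, not details of the actual Faith and Ethics AI Evaluation.

```python
# Illustrative sketch only; not the actual Faith and Ethics AI Evaluation.
from dataclasses import dataclass, field

# The seven categories named in the announcement, as hypothetical rubric keys.
CATEGORIES = [
    "faith_faithful",
    "accurate_and_expert",
    "child_appropriate",
    "pluralism_aware",
    "deluge_resistant",
    "human_centered",
    "multilingual",
]


@dataclass
class ResponseScore:
    """Scores one AI response to a religious query, per category (0.0 to 1.0)."""

    query: str
    scores: dict[str, float] = field(default_factory=dict)

    def overall(self) -> float:
        # Unweighted average over whichever categories were scored.
        if not self.scores:
            return 0.0
        return sum(self.scores.values()) / len(self.scores)


# Example usage with made-up numbers.
example = ResponseScore(
    query="What do members of this faith believe about scripture?",
    scores={category: 0.9 for category in CATEGORIES},
)
print(f"Overall: {example.overall():.2f}")
```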
Why this matters: AI is rapidly becoming a primary source of information about religious beliefs, making accurate representation crucial for society as a whole.
- “Portraying faith traditions accurately or respectfully is not an imposition of religion on AI. Rather, it is a public necessity,” Elder Gong said.
- Jeffrey Rhoads, Notre Dame’s vice president for research, emphasized that while “technology is a wonderful tool to advance humanity,” it “also creates unique challenges to humanity” when left unchecked.
The bigger concerns: Religious and ethics leaders at the summit expressed grave concerns about the current trajectory of AI development and its potential societal impacts.
- Elder Gong warned against AI applications that “supercharge digital dopamine,” including social media algorithms designed to “maximize advertising and monetize rage.”
- He specifically condemned AI-enhanced addictions and harmful uses: “We deplore addictions and evils that AI is being used to enhance, including AI ‘adult companions,’ AI-generated pornography and AI-driven gambling.”
- Conference participants worried about the “winner-take-all race” to create artificial general intelligence and the relatively small group of people designing these systems.
The balanced approach: Elder Gong outlined a measured perspective on AI regulation and development that avoids both extremes.
- “We do not fear AI, nor do we think AI is the answer to everything,” he said. “AI is neither the sum of, nor the solution to, all our opportunities or problems.”
- He advocated for finding balance between “no meaningful regulation of AI and ‘stifling’ overregulation.”
- The evaluation tool aims to provide assessment that is “independent, transparent, pluralistic, technically grounded, community spirited and iterative.”
What they’re saying: Religious leaders emphasized the importance of maintaining human dignity and divine principles in AI development.
- “When we promote human-centric, accurate and respectful, ethical and faith-based standards for artificial intelligence and embed within AI moral grounding and moral compass, we embrace our divine identity and purpose and promote human flourishing for the common good,” Elder Gong said.
- Father Paolo Benanti, a key AI adviser to Pope Francis, argued that “algorithms are not neutral and will shape human reality, so they must be guided by human values rather than technical efficiency alone.”
What’s next: A 2026 conference will explore how the Vatican summit participants can support the Faith and Ethics AI Evaluation tool, according to Meredith Potter, executive director of the American Security Foundation.