(RNS) — OpenAI’s CEO Sam Altman admitted in a September 2025 interview that he loses sleep thinking about the weighty responsibility of selecting the texts that will train ChatGPT on morals and ethics.
This is the correct reaction for the 40-something CEO of OpenAI to have. It is the correct reaction for any leader of any major artificial-intelligence company to have. The massive power that these companies are wielding now — and will wield in the future — absolutely demands ethical accountability. Right now.
Whatever ethical views the large language models are being trained on, the companies’ own ethical compasses are apparently fine with AI interlocutors providing their users custom porn on demand. When pressed about concerns related to porn addiction, mental health and even the lack of adequate safeguards against the creation of AI-generated child porn, Altman responded in another interview, this time with CNBC in October, by saying that OpenAI is not the “moral police” of the world.
Perhaps Altman can be forgiven for having an incoherent approach, given that he is apparently trying to reflect the moral view of the whole world. “I think our user base is going to approach the collective world as a whole,” he said in September. “I think what we should do is try to reflect the … collective moral view of that user base.”
This task, however, is doomed: There is no such thing as a “view from nowhere,” to borrow the phrase coined by philosopher Thomas Nagel for a supposedly objective perspective on the world. The world’s varying moral visions do not total up to some objective consensus; indeed, answers to moral questions about what best serves the common good, or about the nature of the individual person, often directly conflict with one another.
The 1991 movie “Terminator 2: Judgment Day” warns us about AI in the form of Skynet, an entity that wages war on human beings after we try to pull its plug. But in my frequent viewings of the movie, I had missed until recently just how clear the film is in its moral vision of the value of human life, and how explicitly it rejects a utilitarian moral framework.
The young John Connor must repeatedly remind his new cyborg companion, played by Arnold Schwarzenegger, about basic ethics and respect for human life. Even though the Terminator was programmed to kill (hence his name), John trains him to refuse to kill, and even to respect the lives of enemies who are trying to kill them both.
Sarah Connor, John’s mother, confronts the builders of Skynet with a striking rebuke: “You think you’re so creative. You don’t know what it’s like to really create something; to create a life, to feel it growing inside you. All you know how to create is death and destruction.”
What a remarkable affirmation of the value of human life, including prenatal human life, in the face of a corporate push toward AI-powered machines that, in the scheme of the movie, will lead to the death of billions. Interestingly, after the final victory is won by Schwarzenegger’s character, Sarah Connor says, “If a machine, a Terminator, can learn the value of human life … maybe we can too.”
Where does this respect for the value of human life come from? As I’ve argued, it comes from an explicitly theological point of view, one reflected in the founding document of the United States and its claim that our creator gave us our inalienable dignity and rights. It is the dominance of secularized voices and ostensibly neutral secular philosophers in so many of our most powerful institutions, from health care to big tech, that has put the ethical vision at the heart of “Terminator 2” at serious risk.
Only by listening to explicitly religious voices with this vision of human dignity can we ensure that large language models reflect the kind of respect for human life that John and Sarah Connor defend. Secular philosophers will not get us there, especially if they offer us little more than a hodgepodge of least-common-denominator beliefs.
Happily, some AI companies, such as Anthropic, seem interested in inviting feedback from a wide range of people on their new “constitution,” a document that describes the behavior and values they hope to see reflected in their large language model. It is wonderful to have a major AI player be so open about both its stated values and its desire for broad-based feedback.
One person who has deeply engaged with these questions is Pope Leo XIV. In his January 2026 communication on AI, the Holy Father urged us to resist the groupthink impressed on us by AI and insisted on transparency about the sources used to train AI models. We absolutely need AI companies to listen to religious voices like Leo’s if the large language models they produce are to reflect a proper understanding of the dignity of the human person.
Anthropic’s CEO warned recently that we are about to enter an era with AI that will “test who we are as a species.” With so much at stake, AI companies — and whole human cultures — risk their very survival if they do not welcome religious voices in this context.
Original Source:
https://religionnews.com/2026/02/05/ai-needs-to-be-trained-on-a-theology-of-human-dignity/