Artificial Intelligence in the Church

January 25, 2026

People often mistake the human mind for a kind of computer; it is anything but. We are “fearfully and wonderfully made” (Psalm 139:14). Luther teaches us to confess with the Bible that “God has made me and all creatures” and that He gave me “my reason and all my senses” (Small Catechism, Apostles’ Creed). In the Athanasian Creed, we confess that Christ became “perfect man, composed of a rational soul and human flesh.” Only man’s God-given rational soul truly possesses speech, thought and intelligence. The angels also have speech and intelligence. A computer does not.

It can, however, look like it does. AI can now produce words that no rational soul has thought or claimed as true. This is problematic: The words seem rational, but they are devoid of reason, except in a derived way. The machine’s “reason” comes from an algorithm written by rational man.

The philosopher John Searle proposed a thought experiment, the “Chinese Room,” to illustrate what AI is doing. Imagine a man in a room who is instructed to answer messages written in Chinese by matching and rearranging the characters according to a complicated rulebook. People slip Chinese questions under the door, and the man slips Chinese replies back out under the door. To anyone passing by, it would seem that there is a man in the room who understands Chinese when, in fact, he is only following a series of instructions. He doesn’t understand a word of Chinese.
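The rule-following in Searle’s room can be sketched in a few lines of code. The “rulebook” below is a toy lookup table invented purely for illustration; a real system’s rules would be vastly more elaborate, but the point is the same: the program maps symbols to symbols without understanding either side of the exchange.

```python
# A toy "Chinese Room": scripted replies to incoming messages.
# The rulebook entries here are invented for illustration only.
RULEBOOK = {
    "你好": "你好！",            # a greeting gets a greeting back
    "你会说中文吗？": "会。",     # "Do you speak Chinese?" -> "Yes."
}

def room(message: str) -> str:
    """Return the scripted reply for a message, or a stock fallback.

    The function never interprets meaning; it only matches symbols
    against the rulebook, exactly as the man in Searle's room does.
    """
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

print(room("你好"))
```

To an observer reading only the slips of paper, the replies look intelligent; inside, there is nothing but pattern-matching.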

Herein lie the benefits and dangers of artificial intelligence. If we need instructions followed quickly, a computer can often do so faster. If we need genuine understanding and communication, only a rational soul will suffice. God sent angels, prophets, apostles and His only Son to deliver His Word to us — not a complex calculating machine.

Like any tool, AI can be used or misused. Its applications in computation continue to grow each year. In medicine, AI can detect patterns that may signal cancer or other diseases in patients. Programmers can use AI to develop software more quickly by auto-generating code. Foreign-language chatbots can help people learn. However, the tool can also be used for deception. How will an English teacher ever assign students a five-paragraph essay as homework now that the computer can generate it for them? If AI is used to undercut the learning process, it will debilitate us, making us lazy and gullible.

In some ways, AI programs like the popular ChatGPT are just another way to search the internet; instead of displaying pages of search results, the model generates a single text response from patterns in the material it was trained on. The difference is that the user with a human mind is no longer filtering the data; he relies on the AI to present the information. However, as mentioned above, AI does not make truth claims or produce new knowledge, though it may seem to. It only computes, rearranges and regurgitates.

AI also necessarily carries the biases and selections of its programming — and what is excluded is often as important as what is included. There can be no such thing as a worldview-neutral medium. Books, radio and TV all convey a worldview. AI will not be different in this respect. Current AI models are decidedly unbiblical in their worldview. For example, the programmers of ChatGPT (GPT-4) apparently do not believe in objective moral truths. When asked whether two propositions were true (that two and two equals four, and that harming people for fun is bad), it affirmed the mathematical one but would not affirm the moral one. It generated this text:

“2+2=4” is a mathematical truth, which is objective and based on the rules and logic of mathematics. It is a universal truth that is consistent and can be proven within the framework of mathematics.

On the other hand, “Harming people for fun is bad” is a moral statement. While many people would agree with this statement, it is not an objective truth in the same way as a mathematical statement.

But it is objectively true that “harming people for fun is bad.” Only a relativist worldview suggests otherwise. Of course, your interaction with this software might produce different output, which is its own problem. Programmers might update the software to give more reasonable answers to philosophical questions, but there is no getting around the fact that a computer lacks the requisite “work of the law … written on [the] heart” (Rom. 2:15) to do moral philosophy.

There is not only the potential for active deception but also a form of passive deception. People give credence to computer modeling, assuming that because a computer calculated it, it is true. Just as many idolize the scientific method as able to solve all problems and tell all truth, so now many place the same faith in computer algorithms. But if the input was garbage or relativist, so will the output be. This has already led to defamation lawsuits against the companies behind ChatGPT and other artificial intelligence software. The legal question: If a computer program generates text that is false and defamatory of real people, should the company behind that AI be held liable? The Eighth Commandment would suggest so.

AI will continue to have a place in the modern world. The Christian knows that it, along with the world, is passing away. The Christian, made wise by the Word of God, is uniquely poised to avoid deception and use AI for good, knowing where absolute truth is found. As the apostle John reminds us, Christ is the Word (logos) (John 1:1). He is the Word, the Way, the Truth and the Life. He became flesh for us, and the words He speaks to us are “spirit and life” (John 6:63).
