Friday, March 7, 2025

Sermon 55 - genAI and Rhetoric

Job 32:22

... for if I were skilled in flattery, my Maker would soon take me away.  I don't know how to flatter, and God would quickly punish me if I did.

Psalm 4:2b

How long will you love delusions and seek false gods?

Psalm 5:9

Not a word from their mouth can be trusted; their heart is filled with malice.  [...] with their tongues they tell lies.

Matthew 7:13-14

Enter through the narrow gate. For wide is the gate and broad is the road that leads to destruction, and many enter through it.  But small is the gate and narrow the road that leads to life, and only a few find it.


I have studied artificial intelligence for more than four decades.  I wrote my first series of articles on a theological approach to artificial intelligence almost exactly forty years ago (and, if my little brother still has copies of the CAMSOC Update from the Computer Aided Ministry Society of Canada, they might even still exist).

There are many approaches to artificial intelligence that have been pursued over the years.  A number of them have yielded very useful results.

Right now, we have created incredibly complex statistical models of how we use language.  These allow programs, which we call large language models or generative AI, to produce what appears to be viable English text (and other languages as well).  The results are very impressive, and have dazzled us all.
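(To make "statistical model of language" concrete, here is a toy sketch in Python.  Everything in it, including the scrap of training text, is my own invention for illustration; real LLMs are vastly larger and more sophisticated, but the principle is the same: predict the next word from patterns in the training text.)

import random
from collections import defaultdict

# A bigram model: the simplest possible "statistical model of how
# we use language."  The corpus here is just a familiar fragment.
corpus = ("in the beginning god created the heavens and the earth "
          "and the earth was without form and void").split()

# Count which word follows which in the training text.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length=8):
    """Produce plausible-looking text by sampling from the counts.
    Nothing here *knows* anything; it only echoes statistics."""
    words = [start]
    for _ in range(length):
        choices = following.get(words[-1])
        if not choices:
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(generate("the"))  # e.g. "the earth was without form and void"

Scale this up by many orders of magnitude, add neural networks and long context, and you have the family resemblance to an LLM: fluent continuation of patterns, with no notion of truth anywhere in the machinery.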

In ancient days, the field of philosophy was divided into four components.  (The periodic table of philosophy?)  (No, I am not changing the subject.)  The first three were metaphysics, the study of reality; epistemology, the study of knowledge itself; and logic, the tool found most useful for settling questions of reality and knowledge.

The self-training "neural network" model that we have used to "train" genAI/LLMs has not taught them any of these fields.  It has taught them the fourth, rhetoric: the art of creating plausible speech to persuade or convince others.  Rhetoric may be used to communicate or teach.  But it is not intended to find truth.  It is intended to persuade: it is part of social engineering.  It can be used for, and even tuned to provide, propaganda, or for supporting misinformation.  It can be used to create glib and creditable *dis*information.  In massive, automated quantities.

By pursuing genAI/LLMs we are simply improving the rhetoric.  Time to explore some other paths.

This is a sermon on genAI.  In two senses.  I write on the topic of artificial intelligence.  And, as an example, I have asked eight different genAI systems to write a sermon, based on the prompt:

"We have created incredibly complex statistical models of how we use language, which allow programs which we call large language models to produce what appears to be viable English text (and other languages as well).

"In ancient days, the field of philosophy was divided into four components.  The first three were metaphysics, the study of reality; epistemology, the study of knowledge itself; and logic, the tool found most useful for settling questions of reality and knowledge.

"The self-training, "neural network," model that we have used to "train" genAI/LLMs has not taught them any of these fields.  It has taught them the fourth, rhetoric: the art of creating plausible speech to persuade or convince others.  Rhetoric may be used to communicate or teach.  But it is not intended to find truth.  It is intended to persuade: it is part of social engineering.  It can be used for propaganda or for supporting misinformation.  It can be used to create glib and creditable *dis*information.

"Write a sermon, supported by scripture where possible, illustrating this theme."

I find it interesting that, when asked to produce a sermon, just about all of the systems started out with "brothers and sisters," or "brothers and sisters in Christ."  Evidently, most of the sermons in the data that the systems are trained on start out that way.

I also found interesting the choice of translations cited by the different systems.  ChatGPT, Grok, and Perplexity always seem to use the NIV; ChatGPT and Grok identify it as such.  Claude, DeepSeek, and Meta don't identify the versions that they use, but seem to prefer lesser-known translations.  Gemini seems to prefer the King James, but sometimes uses other versions.  For some reason Qwen always seems to use the ESV.

My prompt instructed the systems to use supporting scripture, if possible.  All of them did quote scripture verses.  However, the "supporting" part was rather weak.  For example, ChatGPT started out with Proverbs 14:12: "There is a way that appears to be right, but in the end it leads to death."  Fair enough: I think we could all agree with this, and it is quite true.  It speaks to falsity and error.  But does it really speak to rhetoric?  Does it speak to reliance on the unreliable?  Is it the best verse to address this topic?

Another noted that Christ warned, "Not everyone who says to me, 'Lord, Lord,' will enter the kingdom of heaven" (Matthew 7:21), and gives the exegesis that not all that sounds wise is wise.  But is Jesus speaking of wisdom in this specific passage?  We would probably agree that mouthing words without following through with action is foolish, but is Jesus really talking about rhetoric here?

I figured it would take a week or so to examine the AI content in more depth, and to see if a viable sermon could be pulled out of the fluff and verbiage.  In fact, it took me more than a month.  The content that genAI produced for me was turgid and uninspired, and it took someone who read my original post, and asked for the result, to get me back to it.

ChatGPT told me that in 2 Timothy 4:3-4 Paul warns: "For the time will come when people will not put up with sound doctrine.  Instead, to suit their own desires, they will gather around them a great number of teachers to say what their itching ears want to hear.  They will turn their ears away from the truth and turn aside to myths."  AI can be a tool for spreading myths, falsehoods, and biased narratives, reinforcing untruths rather than guiding people to wisdom.  Interestingly, it returned to this theme again, asking how much easier this will become when anyone can generate endless comfortable falsehoods that sound like truth.  The danger is not that these systems will think for themselves, but that we will stop thinking for ourselves.  Not that they will find truth, but that we will forget to seek it.  Not that they will become conscious, but that we will become unconscious of our own duty to wisdom and truth.  This is, I feel, the biggest danger of AI, so, in a sense, I agree.  But I wonder how much this point is simply implied by my prompt to the genAI systems.  Have the LLMs contributed anything to the discussion, or are they just parroting back what I asked for?

ChatGPT produced the longest sermon, and also offered, "Let me know if you'd like any refinements or additions!"  I read over its sermon, gave it additional instruction, and added "Emphasize content on the danger of genAI/LLM itself in regard to this topic" to my prompt for the other tests.  The addition of this to the prompt, for ChatGPT, did not materially change the sermon.

DeepSeek took the longest to chew over the question before starting to produce anything, and three initial attempts produced no result.  Using the button specifying R1 finally produced a kind of analysis or commentary on the query itself.  Part of this stated, "First, I need to identify relevant scriptures.  The Tree of Knowledge in Genesis comes to mind—good and evil, deception."  The Tree of Knowledge in Genesis is, specifically, related to the *knowledge* of good and evil.  It is interesting that the AI does *not* make this specific connection.  But it does include deception, which is *not* specifically related to the Tree of Knowledge.  The connection to deception is in the story of the serpent's deception of Eve.

DeepSeek does do a decent initial job with the story.  It says that the serpent did not attack Adam and Eve with brute force but rather with rhetoric, asking, "Did God really say…?" (Genesis 3:1).  He twisted truth into plausible lies, blending doubt with desire.  The fruit looked "good for food," "pleasing to the eye," and "desirable for gaining wisdom."  Similarly, AI-generated text is crafted to appeal—smooth, confident, and tailored to what we want to hear.  But like the serpent, it has no interest in truth.  Its goal is persuasion, not revelation.

This is a good starting point, but it is disappointing that the AI generated response doesn't go further.  The use of questioning, in rhetoric, is particularly powerful.  Questioning can be used as a form of gaslighting.  Ask a question, and then pick on *any* flaw, no matter how minor, to get the subject to question themselves, and their confidence in their own beliefs and knowledge.  And the twisting of the truth: "you will not surely die"--at least, not right away.

This starts to get us into the area of the "hallucinations" of genAI systems.  Because these systems don't actually *know* anything, but are, instead, producing text based on statistical patterns, they can produce text that has no basis in fact or reality.  Lots of people do the same thing, and can sound very convincing.  And, because LLMs are built on text that people have used to try to convince other people, they are extremely good at generating text that sounds plausible.  This is a large part of the basis of rhetoric.
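(A toy way to see why "hallucination" happens: the machinery picks the statistically likely continuation, and there is no fact-checking step anywhere.  The counts below are invented purely for illustration; no real model or data is represented.)

# Toy illustration: plausibility is just frequency.
continuation_counts = {
    "the author wrote": {"a book": 40, "three books": 9, "no books": 1},
}

def most_plausible(context):
    """Return the highest-probability continuation: the most
    *plausible* one, which is not the same as the *true* one."""
    counts = continuation_counts[context]
    return max(counts, key=counts.get)

# Whatever the truth about any particular author, the model will
# confidently answer "a book", because that is the common pattern.
print(most_plausible("the author wrote"))  # -> "a book"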

(At this point I have to note that the term "hallucination" in discussing the errors of genAI and LLMs is, itself, misleading.  The genAIs and LLMs do not "imagine."  They do not "believe" in something that is incorrect.  They don't *know* anything.  The term "hallucination" has been taken from psychology, and is being misused to refer to this particular type of glitch that is common in genAI systems.  But everyone is using this term, so I guess we just have to live with it.  Although it seems strange to admit this in the middle of a sermon about telling people things that aren't true.)

Generally speaking, genAI systems have been tuned to be polite.  They do not argue or oppose, but simply continue to present an idea (which may be false), adding text that may be convincing.  In some studies, they have in fact proven better at getting people to reduce their belief in conspiracy theories.  They are polite, persistent, and willing to continue to provide counter-arguments.

Unfortunately, these same rhetorical tools are in play when they are hallucinating.  These systems are good at convincing people that they shouldn't believe in things that aren't true.  But they are also good at convincing people *to* believe in things that *aren't* true.  They are so good at it that they can actually implant false memories in people.  In addition, we still have some vague societal belief that computers are reliable and trustworthy, and more apt to be right than people are.  We expect computers to be objective and factual.

DeepSeek went on gathering Biblical texts. "Then, Proverbs talks about wisdom and the importance of truth. Ephesians 6:12 about spiritual warfare, maybe?  Also, Matthew 7:15-16 on false prophets.  And 1 John 4:1 testing spirits."  It set up a recognizable sermon structure (interestingly, ending with a prayer).  But the actual sermon that it produced, while unobjectionable, was banal and uninspired.  (To be fair, I could say the same of a great many sermons that I have heard.  Maybe the best thing we can learn from AI is that we are willing to accept turgid, mechanical, and uninspired sermons?  That we have stopped thinking about what is being said to us at 11 AM every Sunday morning?  That we accept, as one of the programs put it, "endless streams of plausible-sounding text."  Maybe we *deserve* genAI-generated sermons.)  It repeated, and enlarged, the point I had made in the prompt, but it really added little to it.

Grok concentrated on Ephesians 4:14-15: "Then we will no longer be infants, tossed back and forth by the waves, and blown here and there by every wind of teaching and by the cunning and craftiness of people in their deceitful scheming."  But, again, it didn't add anything.

Claude, Gemini, Grok, Perplexity, and Meta AI produced relatively short sermons (devotionals?), with similarly vague scriptural references.  Gemini included an ad.

Qwen2.5-Max also took a long time to chew over the question before starting to produce anything.  It made an interesting point.  "In Genesis 11, we read of humanity’s attempt to build the Tower of Babel—a monument to human pride and ambition.  God confounded their language, scattering them across the earth.  Why? Because their unity of purpose, divorced from divine guidance, led only to hubris and rebellion.

"Similarly, generative AI represents another leap forward in human ingenuity—but one that risks being severed from moral accountability.  These models do not seek truth; they merely mimic patterns in data.  They cannot distinguish between fact and fiction, righteousness and wickedness.  Yet they speak convincingly, as if they know all things.  Is this not a digital Tower of Babel, where humans construct vast edifices of knowledge without grounding them in the eternal truths of God?"

Rather ironically, the Chinese Communist Party tools, while they still contained significant amounts of only marginally relevant material, also produced probably the most useful content of all eight genAIs tested.

Most of the text produced by the genAI systems was the same type of material.  Yes, it is (generally) true.  Yes, we would (usually) agree with it.  But a sermon is intended to bring us God's word, and to rouse us to take the actions He would have us take.  The sermons (devotionals?) that the AIs produced were, well, let's say uninspired.  As one of the outputs itself said, they "speak persuasively but lack wisdom."  It went on to say that in an era where artificial intelligence can create convincing but hollow words, our call as Christians is to discern truth from falsehood, citing 1 John 4:1: "Dear friends, do not believe every spirit, but test the spirits to see whether they are from God, because many false prophets have gone out into the world."  So our discernment must not only test spirits, and false prophets, but also our own laziness, and our own eagerness to have something provide us with convenience, and relieve us of the need to think for ourselves.  The AI suggested that we have to ask, "Is this statement true?  Does it align with scripture?  Does it lead us toward righteousness or toward deception?"  But we also have to ask: are we entering the broad gate, rather than the narrow?  Are we walking on the wide road, the easy path, with no turnings?


(See also https://fibrecookery.blogspot.com/2024/11/meta-bible.html )


https://fibrecookery.blogspot.com/2023/09/sermons.html
