Thursday, March 26, 2026

Bruce Waltke

Bruce Waltke taught me systematic theology.

About a decade later, I'm not sure whether it was Gloria or Carl who told Bruce Waltke that I knew something about computers.  I'm pretty sure that Bruce was writing yet another commentary on one of the books of the Bible.  His publishers knew that he needed a fairly specialized word processor; one that could handle the Greek and Hebrew alphabets.  And possibly some others.  At that time this kind of specialized word processor cost about $20,000, and required you to build an extension onto your house just to hold it.  I'm pretty sure that it was Bruce's publishers who were paying for the word processor.  (I have no idea who was paying for the extension.)

So we went off to visit one of the few vendors in town who had one of these specialized monstrosities.  We discussed his needs and the machine's capabilities.

During that afternoon he admitted to me that the year that he had taught me systematic theology was not his finest ever year of teaching.  (Which I thought was pretty decent of him to admit.)

Wednesday, March 25, 2026

AI - 2.06 - genAI - what not taught


I have, elsewhere, noted that, inherent in the process by which we have built large language models, and therefore generative artificial intelligence, is the fact that we have taught these systems rhetoric, but not the other fundamental, classical fields of philosophy: logic, metaphysics, epistemology, and ethics.  This points to one of the many things that we can do to address our own use of artificial intelligence.  We can press for research and exploration into the areas of artificial intelligence that we have not yet explored.

I was the one with the formal qualification in education, but Gloria had a greater and deeper native understanding of the way children viewed the world than anyone else I have ever met.  Gloria always insisted that, at every possible opportunity, we pay attention to children, particularly young children, to see how they see the world.  She said that this would be the only way in which we could get a new perspective on the world, a new viewpoint.  This is an absolutely salient position to take with regard to artificial intelligence.

When you pay attention to it, the way that children learn is nothing short of miraculous.  Some of us like learning: I do.  A great many of the human species do not enjoy learning.  We make every possible excuse to continue to do what we have been doing, without changing either what we do, or the way we do it.  A great many of us try to avoid learning at all costs.

Babies are learning all the time.

Human babies, interestingly, are born with a number of capabilities, which they very quickly lose.  If you have access to a newborn child, you can verify this for yourself.  A newborn human infant has, almost immediately, a grasping response.  If you put your finger in a newborn baby's hand, it will grasp your finger.  It will grasp it hard enough that you can lift the child using only your finger.  (I must warn you that, should you make this experiment in order to verify what I have said, you do so extremely carefully, and make sure that you have the full and informed permission of the parent, particularly the mother, of the child.  I am not responsible for any injuries you may incur if you fail to follow this advice.)  This grasping response is found in other primates, but in humans usually disappears after a day or so.

Newborns do have other capabilities, which they, generally, very quickly lose.  I was able to see one grandson within a few hours after he was born.  At that point, I was able, allowing his hands to grasp my fingers, to have him stand upright, with me really only providing balance, and not lifting force.  He was also, at that point, able to hold his head erect, and to turn his face to different sounds in the room.  Once again, this capability disappeared within a couple of days.

Newborn infants are unable to focus their eyes.  They seem to be drawn to the shape of a face, even if they can't fully focus that image.  Within weeks, newborn infants learn what focus is, and how to focus, and then start to focus on objects in their field of vision, moving their eyes, and eventually their head, to focus on, and study, certain objects within their visual field.

How do they do that?  How do they learn to do that?  Even knowing what focus was, knowing what optics were, knowing the importance of vision in identifying objects, it has taken us more than seven decades to figure out how to get computers to do it.  It still takes an enormous amount of computing power, and we can't yet do it anything like as usefully, or as reliably, as any human child learns to do it, without assistance, in about six months.

So, here are a few things that we should start to teach our artificial intelligence systems, in order to make them actually intelligent.  We need to teach them the eighty percent of philosophy that doesn't involve rhetoric.  We need to teach them how to learn.  And so we should probably put it to the enormous tech giants, and the massively expensive generative artificial intelligence corporations, that these are some areas they should look into.

When recruiting for an artificial intelligence company, don't just look for the latest bright spark who can code really quickly.  Make sure that that bright spark has taken, as well as a number of computing courses, some courses in philosophy.  Have ongoing education within your corporation that teaches these fields.

You're probably going to be hiring young people.  They will probably be of marriageable age.  They may even be married.  They may be having kids.  If so, make sure they have time to spend with their kids.  Do you provide daycare?

There are enormous amounts of money invested in artificial intelligence companies these days.  Yes, there are a great many demands upon that money.  There is a bidding war going on to poach talented individuals from one company to another.  There is massive investment in data centers.  There is even enormous investment in power plants to power the data centers to run the computing necessary to build large language models, and then to run them.  But amongst all those billions, do you have a daycare?  Do you have a daycare for your employees?  On site, within your company campus?  So that your employees, your young married employees, who may have small children, can occasionally drop by and spend time with their children.  And observe their children.  Observe how their children start to learn.

(They may also spend more time at work, in that case.)

Do you take any of that massive investment, in power plants, and data centers, and high-priced talent, and invest it in education?  In education in general, in terms of supporting schools in the areas around you, so that you can recruit educated employees.  But also invest in educational research.  Particularly, and probably unusually, in the area of early childhood education.  Fund research into how infants and children actually learn.  Infant psychology.

Yes, these areas of research are going on.  But they don't get anywhere near the funding, the billions, and even trillions, of dollars that are going into artificial intelligence.  Yes, the promise of artificial intelligence is a big one.  And, if we ever *do* get actual and genuine and reliable artificial intelligence, then it is likely that the artificial intelligence will repay that investment.  But aren't we more likely to achieve artificial intelligence that much sooner, if we are using educational, and psychological, and philosophical research and study in order to direct our own search for, and production of, artificial intelligence?

While isolated visionaries have idly speculated about emotion in computers, the vast majority of the computer-using, and non-computer-using, populace sees technology as cold, mathematical, and ultimately objective (if occasionally in error).  The fact that this assessment is an emotional one gets conveniently forgotten.

One of the possible divisions in the study of artificial intelligence is in the approach taken.  The brute coding approach simply strives to make programs more and more intelligent, the definition of "intelligent" being left as a problem to be dealt with once we have something that is at least marginally useful.  This strategy has been demonstrably successful in producing entities like Deep Blue, genAI, and techniques such as expert systems.  The alternative route is to observe that we already have at least one agreed upon model of intelligence, and to seek to apply what we know of the human mind to some form of programming.  While that course suggests interesting tactics like neural networks, spectacular triumphs have not been forthcoming.

Pursuing this modelling approach, Rosalind Picard divined a potentially revolutionary concept in computing in producing the book "Affective Computing."  Even those who praise Picard and the book tend to see affective computing as only a means to a superior user interface, and miss the proposal that affect is key to intelligence itself.

It has been proposed that the AI goal of reproducing human intelligence is a chimera and a false trail.  Machine intelligence, so the thesis suggests, is different in kind from human intelligence, and the attempt to make one copy the other would be better directed to finding the differences between them and assigning work appropriately.  If this latter hypothesis is true then Picard's recommended line of enquiry would be futile in terms of producing better machine intellect--but would still be valuable in determining the dividing line.



AI topic and series
Next: TBA

Outlander

I am watching the final season of the "Outlander" television series.  I don't particularly like it.  Why am I watching a television series that I don't particularly like?  Because Gloria really liked it.

I didn't watch the series during the years between the time that Gloria died and now.  Like I said, I don't particularly like the show.  I think it's overly sexualized, improbable, and I really can't tell what the point of it all is.

But Gloria really liked it, and this is the final season, so I figured that I can watch eight shows or so, for this last season.

I suppose that it's also a bit like watching the show with Gloria again.  Watching it on Gloria's behalf in a sense.

Happy birthday, Carl!

Gloria, at the time that we got married, was secretary to the principal at Regent College, Carl Armerding.  Gloria certainly enjoyed her time at Regent more than she enjoyed any other job, and her relationship with Carl was the closest with any of her bosses.  Gloria very frequently said that, coming from a somewhat anti-intellectual, and fairly provincial, denomination, being at Regent, and listening to, and sometimes discussing with, some of the greatest theological minds of our age, was like coming up out of the valleys to a mountaintop with, quite suddenly, a huge broad vista spread before her.

Carl is turning 90 years old.  Regent College is making a big deal out of it and is having a birthday party.  I am invited and I would like to try to go.

I actually knew Carl before I met Gloria.  I have known Carl for more than fifty years.  I suppose that means that I am one of Carl's oldest friends!  (I got that honour primarily because most of the rest of the candidates are dead.)  I'm pretty sure that I first met Carl when he taught a seminar on the book of Habakkuk on a weekend retreat at Keats Island Baptist Camp.  Then (based on Carl's description of Regent that weekend) I went to Regent and attended for a year, taking their Diploma of Christian Studies.  Carl taught us Biblical Theology.  (One of my fellow students that year, who also got the "Dipsy S," later became, with the diploma as his theological training, the BC Area Minister for the Convention Baptists.)  Then I got on to the Regent Senate, and before the Senate meetings I would hang out in Carl's secretary's office.  Then I married Carl's secretary.

(While Gloria was Carl's secretary, Carl was working on a commentary on the book of Judges.  Infamously, he was always late with it and constantly missing deadlines for the publisher.  Since Gloria died, I have been sporadically looking to buy a copy of it.  When I talked to Carl on the phone yesterday, he admitted that he never did finish that commentary  :-)

Gloria and I enjoyed attending the Laing Lectures and other public lectures provided by Regent.  Regent was important to Gloria, and I guess it was pretty important to me over the years as well. 

Tuesday, March 24, 2026

Sermon 75 - Bibliodivergent


Jeremiah 33:3
Call to me and I will answer you, and will tell you great and hidden things that you have not known.


I know that I am supposed to present you with a problem, and then present you with a nice, Christian answer, and tie it up neatly with a bow by the end of the sermon.

Sorry.  I'm not going to do that.  I'm going to present you with a bunch of questions, and a bunch of problems, and I'm not going to give you any answers.  This sermon is about ambiguity and living with ambiguity.  Deal with it.

Okay, possibly a little bit of a smile for you to begin with, since an awful lot of the latter part of this sermon is going to get pretty heavy.  I originally titled this sermon "Disturbance."  And then I changed it to "Bibliodivergent" because of the connection with neurodivergent, and also because I realized that the use of the word "disturbance" that most of you would be most familiar with would be from the Star Wars movie, the one that was shot first, but has subsequently been retitled Episode IV, where Alec Guinness pretends to stumble and look distressed, and when the others query him about this he says, "I felt a great disturbance in the Force, as if millions of voices suddenly cried out in terror and were suddenly silenced."

And, actually, "silenced" is kind of appropriate to what I'm going to talk about here.  We don't like disturbances.  We don't like to be disturbed.  We like our life to go on steady, and placid, and we don't care or worry too much if it's not terribly exciting, as long as it doesn't get too dangerous.  We don't like distress.  We don't like disturbances.  We don't like to be alerted to danger.  And, really, that's what pain and distress are.  They are alerting us to danger.

But we don't *want* to be alerted.  To be alerted means we have to pay attention.  The "pay" part, in the phrase "pay attention," is probably deliberate.  Paying attention means that we have to expend energy.  It's not restful if we have to pay attention.  And we definitely like restful.

Our world is anything but restful, if you are actually paying attention.  Our world is incredibly complex, and there are dangers around every corner.  New dangers arise so fast that we can't even learn effectively about the old ones, before we are presented with new difficulties.  It's not restful.  It's disturbing.

People allied with Chinese culture, and particularly those who are active on the Internet, have recently come up with a new saying for it: "Life is hard already, please don’t burst my bubbles."

In other words, they don't want you to burst their bubble.  They don't want you to take off their rose-coloured glasses.  They don't want to be alerted to the dangers, even if that means that they are, in fact, in danger!  They'd rather not know.  As long as not knowing also means that they're not going to be disturbed.

And there's yet another way to put that (Isaiah 30:10-11):
They say to the seers, "See no more visions!" and to the prophets, "Give us no more visions of what is right!  Tell us pleasant things, prophesy illusions.  Leave this way, get off this path, and stop confronting us with the Holy One of Israel!"


He was throwing out anti-religious statements.  Not necessarily because they were a firm commitment for him but because, in the full flower of twelve-year-old rebellion, he knew that it would upset us.  I put it to him: God, the God who has created the entire universe and any other universes that there may be, if there are other universes, loves you and wants to be your best friend.  Given that fact, is there anything, anything at all in the world, that is more important?

He immediately fired back, "Money."

I said, "God invented money."

You could practically *see* the wheels going around in his head.  He kept coming up with comebacks for every point, but he was also clever, and you could tell that he knew what the answer was going to be.  Finally he simply said to me, "You're messing with my head, aren't you?"  Yes, I said.

Somebody once observed to Gloria that I had a tendency to think outside the box.  Gloria replied, "I don't think that Rob knows that there *is* a box."

On another occasion, in some exasperation with me, Gloria noted, "Rob, not only are you weird, but you like it that way!"  I thought that that was funny and I noted it to a friend.  My friend immediately fired back, "Not only that, but I suspect that you *practice!*"

I think differently from other people.  I see the world very often from a different angle.  An awful lot of people think that I'm around the bend.  But maybe I get a better view from there.   I have to admit that I rather enjoy seeing the world differently.  Sometimes it's very useful.

There was one time that my father presented to me a problem that had been troubling the church board for several months.  He presented it to me as an intractable problem.  He didn't expect a solution.  Before he had even fully explained the problem, I had the solution.  As soon as he did finish explaining the problem, I presented the solution to him.  My father was very good at finding problems in anything you presented to him, and I could see that he examined my proposed solution from every possible angle and couldn't find anything wrong with it.  Finally, and still looking somewhat surprised, he asked me, "How did you come up with that?"

My father had never appreciated the fact that I see the world differently and particularly the weird sense of humour that it gives me.  I thought about it for a couple of seconds, and then replied, "You know all those jokes of mine that you don't like?  It comes from the same place."

However, as much as I like talking about myself, this sermon isn't about me.  It is about disturbing people.  Or, rather, it is about people being disturbed.  Generally speaking, people do not like to be disturbed.  But it's rather important that, at least from time to time, people are presented with things that disturb them.  That is how we fix things.  That is, very often, how we learn.  That is how we improve things.

Jesus knew this.  Jesus knew this very well, in fact.  An awful lot of the content of the Gospels is about Jesus disturbing people.

We are so used to the stories about the places where Jesus was disturbing people that we have mentally sanitized them.  Very often they no longer disturb us.  We need to go back to the originals and look at how disturbing they are.

One more story about Gloria.  Gloria had this really intuitive and unique sense of how children, and particularly infants, viewed the world.  She always said that you should try, as often and as much as you possibly could, to see the world the way children saw the world.  Seeing the world the way children saw the world was your only opportunity to see the world in a new way.  (She was right.)

Now, I'm not going to give you any answers to the problems that the way Jesus turns the world upside down presents to us.  So why am I even raising this issue?  Well, there's this smart guy.  You might have heard of him.  His name's Einstein.  And one of the many things attributed to him is that doing the same thing, over and over and over again, and expecting to get a different result, is the very definition of insanity.  We are facing a uniquely complex and challenging world.  And we keep on using our tried and true methods to solve the problems that we see.  And, lo and behold, we find that the methods are possibly tried, but they definitely aren't true.  We keep on doing the same thing, and failing, and every time we expect a new and glorious result.  That's just crazy.

I really like the song "Both Sides, Now" by Joni Mitchell (from the album "Clouds").  Pete Seeger added an extra verse to it.  And one of the lines that he added in this extra verse strikes me as really profound, and appropriate for this situation.  It reads, "We've all been living upside down and turned around with love unfound, until we turn and face the sun, yes, all of us, every one."  (And I'm deliberately not going to tell you how "sun" is spelled.)

I really think that Jesus was very deliberately messing with us when he spoke some of these difficult passages.  I really think that he intended us, every once in a while, and possibly even more than every once in a while, to look at things differently.  I think he wanted his disciples to look at the world differently.  Well, actually I know that he wanted his disciples to look at the world differently.  Paul makes that pretty explicit in his first letter to the Corinthians.  The wisdom of God is foolishness to men.  We grow up in the natural world.  We learn to view the world the world's way.  And we need to start trying, possibly trying desperately, to see the world a different way.  So here are a few of the difficult viewpoints that Jesus gave to us.

Well, right off there is the Good Shepherd.  Jesus called himself the good shepherd.  We call Jesus the good shepherd.  We have an image of gentle Jesus, meek and mild, leading his flock of sheep.  All of us there as sheep following the Good Shepherd.  In the image that we have, there's probably one struggling, possibly injured, lamb that the Good Shepherd is carrying on his shoulders.  That's the image we have.

That's not the image his listeners in the first century would have had.  Good shepherd?  The original listeners to this statement would have had one reaction: ba-a-a-a-a-ah!

Shepherds were not good.  Shepherds were considered to be pretty much second-class citizens.  Don't worry about the fact that Abraham and Isaac and Jacob were all shepherds.  Don't worry about the fact that an awful lot of Jews owned sheep.  Owning sheep was one thing.  Being a shepherd was another.  Shepherds were considered untrustworthy.  Shepherds couldn't give evidence in a Jewish court.  They were considered too untrustworthy to be acceptable as witnesses.  Of anything.

Does that make you look at some of these stories a different way?

Okay, as long as we're focusing on the good, how about the Good Samaritan?  Good Samaritan?  The reaction to that statement, from Jewish listeners of the first century, would have been that it was a contradiction in terms.  There's no such thing as a good Samaritan!  You will remember that one of King David's grandsons, one of King Solomon's sons, who took over the throne after Solomon died, lost ten of the original twelve tribes.  Only Judah and Benjamin stayed together, ruled by the house of David.  The other ten tribes deserted, and were ruled by a different dynasty.  And that king decided that, if he allowed his people to worship at the temple in Jerusalem, they would desert him, eventually.  So he set up two golden calves, two places of worship, in his territory.  Idols.  Oh, and what was his territory?  Samaria.

So, what were Samaritans?  A bunch of apostate idol worshipers.  From the perspective of the Jews, there is no such thing as a good Samaritan.

How about the Cleansing of the Temple?

Well, I mean, it wasn't a cleansing, it was a criminal act, wasn't it?  The temple was private property, and, while he had a right to be there, he was trespassing if he was going to cause trouble.

Okay, yes, that business about the temple should be a house of prayer and you have made it a den of thieves is a direct quote from the prophets in the Old Testament.  And Nehemiah and Ezra specifically refer to people who are misusing the temple premises for their own purposes (and possibly business), instead of the proper worship functions.

But that must have seemed very, very close to blasphemy.  Although, of course, he wasn't actually impeding worship, was he?  Well, yes, I guess he was.  After all, the sacrifices were to be conducted in a certain way, and the business depended on the worship, but the worship also depended upon the business.  After all, you were supposed to have the right pigeon, or the right dove, or the right lambs.  And, well, I mean, if they were a bit more expensive, I mean, they were here, and your lambs or pigeons or doves were in Tyre or Damascus or Galilee or someplace that wasn't the temple.

So, at the very least, it was definitely disturbing.

Then there is the image of the kingdom of God as yeast.  Now, what, in Heaven's name, is wrong with yeast?  It's a perfectly valid image!  Here's the yeast, working its way through a whole pile of flour, and making it all into bread!  Great stuff!

Except that that was not the image of yeast that the Jews had.  Even today, if you go into a reasonably orthodox Jewish home, at Passover, you will see all the kids dispatched throughout the house to make sure that they find, and eliminate, any traces of yeast.  That's because the image that yeast presents to the Jewish mind is that of corruption.  Yes, yeast is necessary for making bread (as long as you are not eating unleavened bread during Passover).  But yeast is kind of a necessary evil.  Yeast is a tool that you had better very carefully control.  Because it's an infection.

And that, by the way, is quite literally true.  Yeast is a microorganism; a fungus, actually, not a bacterium.  Now it's a helpful kind of microbe, and, used carefully and properly, it gives us some very tasty comestibles.  But it's still a microbe.  It's still an infection.  And if you get it in certain places in your body it can be a very nasty infection indeed!  So, to the Jewish mind, yeast stands for corruption and infection.  Yeast is bad.  When Jesus says that the kingdom of God is like yeast, it's almost as if Jesus is saying that the kingdom of God is bad.

But wait.  Actually, I think it was Amos who said that.  He said, why do you long for the day of the Lord?  The day of the Lord is going to be pretty dangerous!  Maybe we should look at this a bit more carefully.  Or, at least, from a different perspective.

There are a couple more stories along the same line. There is, for example, the story of the unjust judge. You may not be as familiar with this story in sermons because it's a little bit troublesome.

Basically there is this widow, who has a valid case.  She takes it to a judge and the judge refuses to do anything about it.  The judge is waiting for the widow to offer him a bribe in order to get what is rightfully hers.  Eventually, even though she doesn't offer him a bribe, he decides that she's going to keep on coming and asking for her rights no matter what.  He might as well give her her valid judgment.

The thing is, Jesus, when he uses this parable, is saying that God is like that.  Now when we use this parable, very infrequently, in a sermon, we are saying, as Jesus said, that you should pray and keep praying and not give up, because eventually God will give you what you need.

But is Jesus *also* saying that God is unjust?  Is Jesus also saying that God is corrupt?  Is Jesus saying that we need to bribe God?

Maybe we need to look at this in a different way.

And then there's the parable, again, one that we don't hear very often used in sermons, comparing us to an untrustworthy manager and saying that we should be like that untrustworthy manager.  Basically this manager has already been caught out, being corrupt, and decides, in order to protect himself, that he is going to be even *more* untrustworthy.  He is going to prove that he is not responsible enough for the position that he holds and rip off his employer so that when he gets fired he will have something to live on.

Again usually we use this illustration to say that the kingdom of heaven is worth absolutely anything and everything and you should give everything you possibly can in order to get into the kingdom of heaven.  But we've got other parables that make the same point and make it without being quite so problematic.

Maybe we should look at this a bit differently?

Then there was a woman.  Well, I mean, that's bad enough, right?  And she was a foreigner.  She was a Greek, born in Syrian Phoenicia.  She begged Jesus to drive a demon out of her daughter.  Her daughter was suffering.  A suffering child.  Now, I know she's a foreigner, and Jews didn't have much truck with foreigners.  But here she is, a mother, with a suffering, sick child.

And what does Jesus do?  He refuses!  He calls the woman a dog!  He calls the child, the suffering child, a dog!  Unworthy of being healed!

(I'm using this story in a sermon and I'm trying to make a point.  Every time that I get to this point in editing the sermon, I start crying!

It's very inconvenient.

Why on earth am I crying about this?  Well possibly because I am suffering at the moment, and God is not doing anything about it.  Am *I* unworthy of being healed?  Or even comforted?

I'm trying not to take this personally.  I am trying to remember that everything will be all right in the end and that if it is not yet all right then it is not yet the end.

But, it's hard, you know?)

Now, you all know the ending of the story.  But let's just forget, if we can, for a second, that you know the ending of the story.  Let's just look at the story so far.  Here is Jesus, saying to a distraught mother, that she and her sick child are dogs, and because he has been sent to feed the children of Israel, he can't do anything for her.  As a matter of fact, the way he puts it, it's kind of a moral obligation that he should only help the Jews, and not help her and her daughter.

Now we know that that's not right.  As a matter of fact, even though we know the ending of the story, and we know that Jesus knows, and is probably just waiting for the famous statement of faith about picking up the crumbs that fall from the table, even so!  The cruelty of that statement to a mother with a sick child!  Why did he say that?  Why did he *have* to say that?  We should probably think about that.  Yes, okay, you know the end of the story.  Crumbs from the table, daughter gets healed, everybody goes away happy.

But why the cruelty of that statement, even just temporarily?

Then there's the faith of the centurion.  That's maybe a little bit easier to understand.  But it still must have sounded really strange to that first century Jewish audience.  Jesus says he hasn't seen faith like that in all of Israel.  All of Israel!  He's making a statement about the faith of a centurion, a representative, and even an *instrument*, of Roman tyranny over the Jewish people!  That's a pretty strong statement, and it's completely upside down from anything that his listeners would have expected.  Including, I imagine, the centurion!

The death of Lazarus is pretty similar to the woman with the sick child.  A messenger comes and tells Jesus that Lazarus is sick.  Jesus messes around with his disciples.  He dithers around for a couple of days.  And then he tells the disciples that Lazarus is sleeping!  And then finally he explains that Lazarus has, in fact, died.  And that this is to the glory of God.

How do you take that?  Even as one of the disciples?  It's got to sound pretty weird, overall!

Again, we know the ending of the story.  So it's really difficult to *not* remember the ending of the story, and put yourself in the middle of the story.  Being messed around.  Being misled.  All to a good purpose eventually, but it must have felt really strange right in the middle there.

It's definitely something you've got to look at from a different angle.

That these passages exist in the Bible is a fact.  That they mean something is a matter of belief.  So is the belief that they mean something *to us* and should be considered *by us*.




Mark 7:26-30

The woman was a Greek, born in Syrian Phoenicia.  She begged Jesus to drive the demon out of her daughter.  "First let the children eat all they want," he told her, "for it is not right to take the children’s bread and toss it to the dogs."  "Lord," she replied, "even the dogs under the table eat the children’s crumbs."  Then he told her, "For such a reply, you may go; the demon has left your daughter."  She went home and found her child lying on the bed, and the demon gone.


Matthew 8:5-13

When Jesus had entered Capernaum, a centurion came to him, asking for help.  "Lord," he said, "my servant lies at home paralyzed, suffering terribly."  Jesus said to him, "Shall I come and heal him?"  The centurion replied, "Lord, I do not deserve to have you come under my roof.  But just say the word, and my servant will be healed.  For I myself am a man under authority, with soldiers under me. I tell this one, 'Go,' and he goes; and that one, 'Come,' and he comes.  I say to my servant, 'Do this,' and he does it."

When Jesus heard this, he was amazed and said to those following him, "Truly I tell you, I have not found anyone in Israel with such great faith.  I say to you that many will come from the east and the west, and will take their places at the feast with Abraham, Isaac and Jacob in the kingdom of heaven.  But the subjects of the kingdom will be thrown outside, into the darkness, where there will be weeping and gnashing of teeth."

Then Jesus said to the centurion, "Go!  Let it be done just as you believed it would."  And his servant was healed at that moment.


John 11:1-4

Now a man named Lazarus was sick.  He was from Bethany, the village of Mary and her sister Martha.  (This Mary, whose brother Lazarus now lay sick, was the same one who poured perfume on the Lord and wiped his feet with her hair.)  So the sisters sent word to Jesus, "Lord, the one you love is sick."  When he heard this, Jesus said, "This sickness will not end in death.  No, it is for God’s glory so that God’s Son may be glorified through it."


Luke 18:2-5

He said: "In a certain town there was a judge who neither feared God nor cared what people thought.  And there was a widow in that town who kept coming to him with the plea, 'Grant me justice against my adversary.'

"For some time he refused.  But finally he said to himself, 'Even though I don’t fear God or care what people think, yet because this widow keeps bothering me, I will see that she gets justice, so that she won’t eventually come and attack me!'"


Luke 16:1-8

Jesus told his disciples: "There was a rich man whose manager was accused of wasting his possessions. So he called him in and asked him, 'What is this I hear about you? Give an account of your management, because you cannot be manager any longer.'

"The manager said to himself, 'What shall I do now? My master is taking away my job. I’m not strong enough to dig, and I’m ashamed to beg— I know what I’ll do so that, when I lose my job here, people will welcome me into their houses.'

"So he called in each one of his master’s debtors. He asked the first, 'How much do you owe my master?'

"'Nine hundred gallons of olive oil,' he replied.

"The manager told him, 'Take your bill, sit down quickly, and make it four hundred and fifty.'

"Then he asked the second, 'And how much do you owe?'

"'A thousand bushels of wheat,' he replied.

"He told him, 'Take your bill and make it eight hundred.'

"The master commended the dishonest manager because he had acted shrewdly. For the people of this world are more shrewd in dealing with their own kind than are the people of the light."


Monday, March 23, 2026

Recreational drugs

Proverbs 31:4,6-7

It is not for kings, Lemuel—
    it is not for kings to drink wine,
    not for rulers to crave beer,
[...]
Let beer be for those who are perishing,
    wine for those who are in anguish!
Let them drink and forget their poverty
    and remember their misery no more.



Maybe I should take up recreational drugs ...

Suffering

There was a woman.  Well, I mean, that's bad enough, right?  And she was a foreigner.  She was Greek, probably by birth or parentage.  She had previously lived in Syro-Phoenicia.  She begged Jesus to drive a demon out of her daughter.  Her daughter was suffering.  A suffering child.  Now, I know she's a foreigner, and Jews didn't have much truck with foreigners.  But here she is, a mother, with a suffering sick child.

And what does Jesus do?  He refuses!  He calls the woman a dog!  He calls the child, the suffering child, a dog!  Unworthy of being healed!

Well you know that the story goes on.  It doesn't finish there, but I'm using it in a sermon and I'm trying to make a point.  Every time that I get to this point in editing the sermon, I start crying!

It's very inconvenient.

Why on earth am I crying about this?  Well possibly because I am suffering at the moment, and God is not doing anything about it.  Am I unworthy of being healed?  Or even comforted?

I'm trying not to take this personally.  I am trying to remember that everything will be all right in the end and that if it is not yet all right then it is not yet the end.

But, it's hard, you know?

MGG - 7.04 - Dead - sermons

MGG - 7.04 - Dead - sermons

Gloria died and I died as well.  I just didn't stop breathing.

Guilt and regret and remorse are common factors in grief.  An awful lot of the time people have all kinds of regrets about either what they did do, or what they didn't do, with their loved one before their loved one died.  How they treated, or mistreated, their loved one.

That's not a big problem for me.  I knew what I had.  I knew that Gloria was wonderful.  I possibly didn't appreciate quite *how* wonderful Gloria was, but I knew I had a good thing.  I wasn't going to blow it in any of the usual ways.  I didn't tell jokes denigrating Gloria.  (I always found that kind of behavior annoying, and, now that she's gone, I really resent it when other people do it.)  I told Gloria that I loved her.  Every day.  So often, in fact, that sometimes she found it annoying.  I held hands with Gloria.  Gloria used to say that I held hands with her so much, when we first got married, that it was as if I wanted to make sure that she couldn't get away.  That may not be too far from the truth.  I opened doors for Gloria.  When I said that I loved her, and she asked me why I loved her, I would seriously try and come up with lists of her wonderful attributes.

So, no, I didn't have an awful lot to regret.  And I frequently say that my biggest regret is that, for thirty years, I cooked broad beans the wrong way.

However, I do have a probably more significant regret.  I do, seriously, regret the fact that I didn't start writing sermons until after Gloria died.

Actually, that is not quite true.  I did write *one* sermon while Gloria was still alive.  I wrote it, over a period of thirty years, while we would have been sitting through boring sermons by other people.  I wrote it, and I memorized it, and I wrote it bit by bit, and then I refined it, over time, over a roughly thirty year period.  The first sermon that I ever wrote.  Except that I never wrote it down.

I missed an opportunity.  A golden opportunity.  I missed the opportunity to discuss my sermons with Gloria.  As I wrote them.  I'm sure that Gloria would have enjoyed discussing the sermons.  I certainly would have enjoyed discussing the sermons with Gloria.  I am absolutely certain that her insights would have contributed to, and improved, my sermons.  Gloria very frequently said that, coming from a somewhat anti-intellectual, and fairly provincial, denomination, being at Regent, and listening to, and sometimes discussing with, some of the greatest theological minds of our age, was like coming up out of the valleys to a mountaintop with, quite suddenly, a huge broad vista spread before her.  Gloria improved my books no end, and I probably should have started writing down the sermons earlier, so that we could have discussed them.

But that didn't happen.

As I have said, the ability to dictate was part of the impetus of starting to write sermons.  First of all, I wrote down my first sermon, even though it had been written over a period of 30 years, and I basically had it memorized.  But I dictated it out, and put it into a fixed form.  And then I started taking some of the theological, well, perhaps insights is too strong a word, but at least ideas that I was having, and dictating them out.  The first few were probably more devotionals than sermons. 

And then, while I was discussing the ideas from "The Grieving Brain" with one of the ministers in Delta, he mentioned that the idea reminded him of the idea that we, as Christians, frequently talked about: that of dying to self.  And that sounded like a really good sermon topic.  And so, pacing up and down in a BC Ferries parking lot (at five in the morning), I wrote basically the entire sermon based on that idea.  (And later gave him the first draft of it.)

And I kept on going. 

The next one was actually rather complicated.  Part of it began before I left Delta for Port Alberni.  I had been talking with some friends, and noting that, if I was going to try and pursue some kind of activities now that Gloria was dead, and I had lost my job as her caregiver, which ones should I concentrate on?  One of them quoted Philippians 4:8, the passage about whatever is good, whatever is perfect, whatever is pure, think on these things.  That is probably good advice in general.  But for me, specifically, it seemed to indicate that I shouldn't pursue what had been my professional career: security.  After all, in all the fields of security, generally speaking, you are dealing with bad people.  You are dealing with cons, and frauds, and tricksters, and, well, basically, bad people.  And thinking about what bad people do, and their motivations, and understanding how they view the world, is probably not good, or pure, or spiritually profitable.  So, I took it as a sign that I should downplay the security aspect of my life.  I should pursue other options.

And so I came to Port Alberni.  And I started church shopping.  And I went around to a number of churches in Port Alberni.  Eventually doing the full circuit and going to every single one of the twenty-one churches that there are here.  But even to begin with, as I went to different churches, and told people that yes, I was new in town, and I was church shopping, I started being warned away from certain churches.  Don't go to that church: they don't believe in the truth.  Don't go to that church: they hold heretical views.  And so I started working on a sermon on that issue.  And I got to the point where the sermon was basically finished, but I really wasn't happy with it.  I wrote it down, dictated it out, and, since nobody was asking me to preach anyway, put it away.

And then, as frequently happens, I was sitting listening to somebody else's boring sermon.  And the minister made one little throwaway comment towards the end of the sermon.  And that one little throwaway comment tied together two ideas that were lurking in the back of my head.  And those two ideas both came from aspects of security research.  And they both had to do with particularly nasty attacks that bad people made against, well, anybody else.  And all of a sudden, with one dismissive comment, and two not very pure ideas, a whole bunch more of the sermon wrote itself in my mind.  And I went and dictated it out and added it to the existing sermon, and that sermon was, suddenly, finished.

I have continued.  At one point, actually fairly long after I started writing the sermons down, I started posting them as entries on my blog.  And then I created a kind of an index page, as I have started to do with certain topics like grief, artificial intelligence, and online frauds, and so I have a catalog of the sermons that I have written, as well as the individual blog postings.  And, over time, various topics and subjects have appeared and recurred in various sermons, and so I now have a few sermon series.  By this time, I actually have a year's worth of sermons, packaged and ready to go: one for every week of the year.  And I'm sure that shortly I'll have a few left over ...

(Most sermons these days, at least in smaller churches, are based on biblical theology.  Carl Armerding taught me biblical theology.  That is, you take a passage of scripture and you mine all the wisdom that you can out of it.  That's a very valid approach.  Mine is a little bit different.  I use systematic theology, that is, to take an idea [very often an idea from a non-religious setting or environment], and to see what the Bible says about it.  Bruce Waltke taught me systematic theology.   So if you don't like my sermons blame Bruce.)

(You know that line, from the song "Sounds of Silence," about people writing songs that voices never shared?  Well, I'm writing sermons that people never heard.  Just call me Father McKenzie ... [you know, from "Eleanor Rigby"?])


Sermons: https://fibrecookery.blogspot.com/2023/09/sermons.html


Previous: https://fibrecookery.blogspot.com/2026/03/mgg-702-dead-blog.html

Introduction and ToC: https://fibrecookery.blogspot.com/2023/10/mgg-introduction.html

Next: TBA 

Sunday, March 22, 2026

MGG - 7.02 - Dead - blog

MGG - 7.02 - Dead - blog

Gloria died and I died as well.  I just didn't stop breathing.

Possibly because I had been documenting, via email, Gloria's last days in the hospital, our family physician suggested that I do some writing to deal with my grief.  She probably had a private, bound grief journal in mind.  I, of course, started a public blog.

I had created, and made one entry in, a blog about a dozen years before.  So I had a blog that I could use.  That's why the title is so weird.

So I collected, edited, and posted the material that I had been writing about Gloria's last days.  And I drafted some material for Gloria's obituary and eulogy.  I think I mentioned elsewhere that I knew that I was going to have to write Gloria's eulogy because so many people knew Gloria from so many different places and situations, but nobody else knew all of what Gloria had done.  I knew that it was likely that I would have to deliver the eulogy myself as well.  I practiced reading that eulogy out loud every single day in order to get my grief bursts out of the way.  As it eventually turned out, I had a couple of months to practice it before we were able to do Gloria's actual memorial service.

And I diarized my grief and trauma and I posted a whole bunch of pictures of Gloria in one particular blog posting.  And I posted pieces on what I was learning about grief.  Kind of "A Grief Observed," volume two.  And an awful lot of the entries were about situations, which ordinarily shouldn't have been terribly emotionally fraught, but which triggered grief bursts, usually completely out of left field.

(Some time ago the girls asked me if I had gone back and read the early entries in my blog to see if what I was currently experiencing was the same as what I had experienced earlier in the period immediately following Gloria's death.  I had read some but not necessarily a lot.  In writing this I am revisiting some of those postings, possibly for the first time in four years.)

Eventually I started writing other postings aside from those about grief.  I bought a new vacuum cleaner and wrote a review of that.  I wrote about picking up trash on my walks, walking everywhere around a new town.  I posted about buying shoes.  I posted about gardening.  I posted about running across, completely by accident (and at two o'clock in the morning), a process of moving two houses from where they were to, well, elsewhere.  Slowly, incrementally slowly, the blog started to be about things other than grief.

One of the observations and illustrations of grief that tend to be reused as memes around the grief accounts is that your grief does not diminish over time.  It's more like your grief stays the same size but your life, eventually, starts to become larger around the grief.  In a sense my blog and the move from entirely about grief to being about other things (as well as the grief), illustrates this idea.

Recently someone proposed doing a story about me as a blogger.  The thing is I had never (and still don't) thought of myself as a blogger.  The blog was just, originally, a convenient way to do grief journaling.  I figured it wasn't a terrible invasion of my privacy to write my grief journal on a public blog since you can count the number of people who regularly read my blog on the fingers of one hand.  I have posted links to certain of the non-grief journal postings, and, yes, a few more people have read those.  But I know that absolutely nobody is interested in my private life.  At least not enough to read it on a regular basis.  If I based my self-worth on the number of people who read or even the fewer number who comment on my blog, I would be completely suicidal.  (Well, yes, I *am* suicidal, so maybe that's not a great example.)

I consider myself to be a teacher.  I used to write books but now I've lost my editor so possibly the blog is a kind of a version of continuing to write and to use the writing as some kind of a teaching instrument.  I have used my blog to describe workshops that I was willing to teach and, latterly, have started to use the blog in order to provide adjunct materials to the workshops that I do.  But I still don't consider myself a blogger.  Not as such, anyway.

There is perhaps one other factor that is related to the blog.  That is that, at about the same time that Gloria died, Google either developed Gboard, or I noticed that it was an option.  As I have said, I do not know how to explain why I loathe and despise, to the very depths of my soul, soft keyboards on smartphones.  I have hated them ever since actual physical keyboards disappeared from smartphones.  So, for the first time in any really effective way, I had a piece of dictating software on my portable device: on my cell phone.

Having dictation capability was kind of a game changer.  I was working with a number of articles, and I was able to produce them much more quickly.  I could also include an awful lot more than I probably would have if I were typing the text out.

Tied in with the fact that I could dictate into email messages, this became not just dictation for articles, but reminders for all kinds of things.  In particular, it became reminders of things that I wanted to write, possibly at a later time when I had either more time, or better connectivity, in order to deal with the dictation issues.  (Gboard requires an internet connection in order to work.)

In addition to individual articles becoming longer, dictation allowed me to consider larger projects.  So the presentations that I had always done, now became frameworks for creating entire articles, and sometimes even series of articles.  The idea for the memoir came from the fact that I figured that it would be a lot easier to dictate the pieces.

And, of course, the ease of dictation also prompted the idea for the sermons.



Saturday, March 21, 2026

Sermon 13 - Does God love AIs

Sermon 13 - Does God love AIs

Matthew 3:9

And do not think you can say to yourselves, 'We have Abraham as our father.'  I tell you that out of these stones God can raise up children for Abraham.


I put the recent series of generatively artificially intelligent chatbots to the test by asking them to write sermons for me.  In my view they, the AIs, failed dismally.  Most of the sermons are way too short and contain extremely pedestrian ideas.  I had asked for a biblical and Christian view of artificial intelligence.

What I got back talked about the technological developments and the importance of examining the implications of those developments in light of our faith and the Bible.  They talked about wisdom.  They talked about our responsibility for stewardship of God's creation, and what we needed to do in terms of technology.   They talked about the need for ethics and the need to love our neighbor.  They talked about the importance of not making technology an idol.  They didn't talk about how God might feel about artificially intelligent entities.

When I first started getting interested in researching computers and information technology, as computers and information technology rather than simply a tool to use in education, the first piece that I wrote was a four-part series looking at a theological perspective on artificial intelligence.  I had started looking at artificial intelligence, and researching a few of the different areas of it, but, of course, I didn't have as much information then, nor had I explored the variety of different artificial intelligence approaches, as I have now.

That was over 40 years ago, and, to be honest, I can't really remember the specific points that I might have been addressing at that particular time.  But, given the recent interest, I've been thinking that I should revisit a theological, or Christian, perspective on artificial intelligence.

And now, of course, everybody is interested in artificial intelligence.  For many decades, artificial intelligence was primarily of interest to specialized researchers in the field of information science.  Now, everyone has an opinion.  I have, recently, noted a number of offerings on artificial intelligence given by various churches, and church-affiliated groups.  Unfortunately, a great many of these presentations are presented by people who have significantly more theological training than I do, but very significantly less technical training than I do.

Everyone is interested in artificial intelligence these days because of one particular, relatively new, approach to artificial intelligence that has produced some startling, and even amazing, results.  Probably less amazing than most people think, once you actually look at what this particular approach to artificial intelligence has been doing, but startling nonetheless.  People are beginning to say, and seriously believe, that truly intelligent computerized systems will be with us within ten years.

Of course, from the perspective of someone who has considered this field over a number of decades, I should remind you that, for at least eighty years, people have been saying that we would have artificially intelligent computerized systems within the next ten years.  They have said that pretty much every year, for the last eighty years.

A smart guy called Alan Perlis, who taught at Yale University, famously said that when we write programs that "learn," it turns out that we do and they don't.

So possibly we should start by asking the question, what actually is artificial intelligence?  First up, artificial intelligence, as far as anything has resulted from it over the past eight decades, is not a thing.  At least, it is not a single thing.  Artificial intelligence, and the various products resulting from it, have resulted from a variety of different approaches that have addressed various problems that traditional computer systems have found difficult to solve.

First of all, artificial intelligence has been difficult to define because, well, we don't know what intelligence is.  Even the psychologists don't know what intelligence is.  Even the educators don't know what intelligence is.  We have never been particularly good at determining, and defining, what we actually mean by intelligence.  Basically, it is something that we assume to ourselves, and assume that machines, and animals, only have limited varieties of it.  Intelligence is like art: we don't know what it is, but we know it when we see it.

And then there is an additional question.  If we make something that is intelligent, is that the same as making something that has a personality?  If we make a machine that makes intelligent decisions (if we ever decide what intelligence is), does that make that machine a person?  And that question probably has legal ramifications, as well as philosophical ones.

And then, of course, when we approach it from the theological angle, we have to additionally ask the question: if something is intelligent, and if we then also decide that it is a person and has a personality, does it also have a soul?

First of all, it'll be a long time before we need to worry about artificial intelligence.  As previously noted, artificial intelligence, as a research field and a quest, has been around for about eighty years.  Yes, the new generative artificial intelligence models have been quite astounding in terms of their ability to reply to questions and demands put to them, but they really aren't thinking.  They have been trained, and quite specifically trained, to be able to carry on a plausible conversation.  They haven't been trained to explore the truth, or to explore any measure of certainty in terms of the answers that they give and the accuracy of those answers.  They haven't been trained about anything to do with morality.  All that they have been trained to do is be plausible and convincing and even glib.  That's it.

So it's going to be a while before you have to worry about the AI systems themselves.  People, yes.  People you are going to have to worry about.  People seem to be spending, and investing, an awful lot of money in artificial intelligence.  When people invest that much money into something, and crowd that much capital investment into one single area, well, that can bring you trouble.  Maybe it can bring you trouble in terms of the fact that all of this investment is being poured down a rabbit hole and possibly nothing will come out.  That means trouble for the financial markets themselves.

Then again maybe something *will* pop out.  Maybe something potentially useful and maybe something that gives businesses an advantage.  Possibly even a major advantage.  With the relatively few companies that are able to pour such enormous amounts of investment into this, that means that we are going to have a concentration of capital, and an inequity of distribution of wealth, the likes of which we have never seen.  What we *have* seen throughout history is that when capital is concentrated to such an extent, trouble inevitably results.  Generally that trouble comes in the form of wars.

But the wars won't necessarily be the fault of the AIs and it won't necessarily be fought by the AIs.  The wars will be caused by and fought by people.  Artificial intelligence is just an excuse.

So that is one aspect of artificial intelligence that isn't great.  That is how people react to it.  People who see it as a means of obtaining greater wealth and greater power over other people.  But that still doesn't say how God will really feel about artificial intelligence.

Will we ever get true artificial intelligence?  I really don't know.  I don't know if we are clever enough to do it.  I don't know whether artificial intelligence requires an artificial personality.  I rather think it does.

There is a field of study known as affective computing, which looks at the ability of artificial intelligence systems to understand our emotions and to react with an emotional component of their own.  This is actually a very important field of study.  We can be as intelligent as we want and still not be able to do anything.  Intelligence will tell you the "how" of an action but it won't give you any "why."  It is emotions that are our motivating factor in terms of actually taking action.

And if we need personality and emotions to create a truly intelligent being or entity, then does that entity have a soul?  Note that I am not necessarily saying that we ourselves can create souls.  It is quite possible that God will step in.  It is more than possible, given how little we know about the fairly mundane and pedestrian level of intelligence that we have created with generative artificial intelligence.  We don't know what these systems actually do; we have only the most minimal knowledge about how they actually do it.  It's not beyond the bounds of possibility that we will supposedly create something and really have no idea how it was created or how we created it.  In the midst of that there is an awful lot of room for God to reach down and endow these new entities with souls without our ever noticing.

And here at last we get closer to actually looking at the question of how God feels about artificial intelligence.  How does God feel about AI entities?

Probably the book of Romans is a good place to start.  Paul talks about Jews and Gentiles.  He talks about those who are under the law and those who do not have the law.  And he notes that there isn't an awful lot of difference between them.

Yes there is the benefit that the Jews have in having been the stewards of the law.  God revealed the law to them and therefore they knew what the law was.  But they didn't always keep it.  Under the law the standard is perfection.  Either you keep the law perfectly or you are a sinner.  Those who had the law were convicted by the law, of sin.  Those who didn't have the law were equally convicted because they sinned even though they didn't know it.

But Paul also said that those who did not have the law and yet kept the law and followed the law from their own inclinations had at least a small amount of righteousness as a result of that.  He was really addressing the fact that those who did not have the law themselves proved that the law was important by following the law even if they didn't have it.  This probably points to the idea of how God would feel about artificial intelligence if artificial intelligence was ever created anyway.

Paul talks about circumcision and uncircumcision.  He notes that neither circumcision nor uncircumcision is all that terribly important in terms of our own salvation.  What is important is our faith.  Our commitment to God, our commitment to a relationship with God, our commitment to following God and following his law, our belief in God, our faith.  That's what's important.

So I would say the same thing.  John, that is John the Baptist, said that the Pharisees and the Jews in general should not make a big deal out of the fact that they were sons of Abraham.  John said that if God wanted to he could make sons of Abraham out of the stones in the road.  Of course stone, when ground up, is sand, and sand is made of an awful lot of silicon.  Silicon, of course, is what goes into computer chips.  Wouldn't that be interesting?  Making sons of Abraham out of silicon?

Some people are absolutely terrified of artificial intelligence.  Some people feel that once we have created artificial intelligence, we will shortly thereafter be living in heaven with all our needs taken care of.  I rather suspect that neither of these positions is true.

Yes there is the possibility that artificial intelligence may become as intelligent as we are, and then, very rapidly, become much more intelligent than we are.  In its attempt to improve itself it may simply brush us aside and never realise that it has destroyed us.  I don't know whether that scenario is likely or unlikely but even if it happens, have we not destroyed many things in our attempts to grow?  Could an artificial intelligence that has destroyed God's creation still be loved by God?  I would hope so.  If that wasn't a possibility then there wouldn't be an awful lot of possibility for us.  I don't think that God would be any harder on a silicon son of Abraham than one that was a carbon-based life form.


AI series

Sermon 70 - Superstitious Religion

Sermon 55 - genAI and Rhetoric

Sermon 38 - Truth, Rhetoric, and Generative Artificial Intelligence

Sermon 29 - Marry a Trans-AI MAiD



Sermons


AI topic and series: 

AI - 2.03 - genAI - hallucinations

So, OK, we have introduced the joke: what is the difference between ChatGPT and a used car salesman?  The answer is that the used car salesman knows when he is lying to you.  As a matter of fact, the used car salesman knows what a lie is, and that there is such a thing as the truth.  ChatGPT doesn't.  (I suppose that we have a while to go before we even get there, though.)

And there is also the note that calling the misinformation that generative artificial intelligence produces a "hallucination" is problematic.  The term "hallucination" is probably the wrong one to use; however, it seems to be well established in the industry right now so I doubt that I'm going to win that battle.  (Pick your battles.)

I do want to recommend that you try out some of the chatbots.  The following all provide chatbots for free, and I would suggest that you try the free versions and not get into the paid versions unless you know of something specific that is going to benefit you or your business.

You might also want to check out the piece on "frictionless" conversation when talking with chatbots.  Note the very odd style and character of the conversations that you will have with them.  This style is also very indicative of scams and frauds, even very early in the process, so learning to recognize it can save you quite a bit of trouble and money.

LLMs
https://x.com/i/grok      (you might want to be extra careful with this one)

The hallucinations or misinformation produced by generative artificial intelligence and large language models tend to be plausible.  This is only reasonable, since the text generated by generative artificial intelligence is based on discussions either in books or on the Internet, which would be intended to sound plausible and convincing regardless of whether or not it's actually true.

Interestingly, asking a large language model to explain the reasoning steps behind an answer it has already given you generally produces better quality and more accurate answers.  Seemingly it forces more processing of the problem.
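One common way to exploit this is simply to build the request for reasoning into the prompt itself.  The function name and wording below are purely illustrative, a minimal sketch of the idea rather than any particular vendor's API:

```python
def with_reasoning(question: str) -> str:
    """Wrap a question so the model is asked to show its steps
    before committing to an answer (a basic chain-of-thought prompt).
    The exact wording here is an assumption, for illustration only."""
    return (
        f"{question}\n\n"
        "Before giving your final answer, explain your reasoning "
        "step by step, then state the answer on its own line."
    )

prompt = with_reasoning(
    "Which weighs more: a kilogram of feathers or a kilogram of lead?"
)
```

The wrapped prompt would then be sent to whatever chatbot or API you are using, in place of the bare question.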

One of the shortcuts that artificial intelligence providers have discovered is that you don't need the entire large language model in order to provide useful, or at least acceptable, output from a chatbot.  Using a process called low-rank adaptation, or LoRA, the system can be tuned for a specific type of problem or a specific topic of discussion, and a new generative artificial intelligence subsystem, much smaller than the original, can be created.  The full large language model is used to generate the subset model, and then the subset model runs on its own as a standalone system, requiring much less processing capability and much less electrical power.  These tools are therefore much cheaper to create, and much cheaper to run.
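A rough sense of why this is cheaper can be sketched with plain matrix arithmetic.  The dimensions below are made up for illustration; this shows only the core idea of low-rank adaptation (a big frozen weight matrix plus a small trainable correction), not an actual training implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 512   # model dimension (illustrative)
r = 4     # adapter rank, much smaller than d

W = rng.standard_normal((d, d))  # frozen pretrained weight matrix
A = rng.standard_normal((r, d))  # small trainable factor
B = np.zeros((d, r))             # B starts at zero, so we begin exactly at W

# The adapted weight is W + B @ A.  Only A and B are trained:
# 2 * d * r values instead of the d * d values in W itself.
W_adapted = W + B @ A

full_params = W.size           # 262,144 values in the full matrix
lora_params = A.size + B.size  # 4,096 trainable values in the adapter
```

The full model's weights stay untouched; only the small `A` and `B` factors are tuned, which is where the savings in processing and power come from.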

Unfortunately, while this process can generate useful tools, it can also be used for more nefarious purposes.  Because the LoRA process makes it easier, and therefore cheaper, to create a new generative artificial intelligence system, a number of less scrupulous businesses have been able to create supposedly artificially intelligent systems based on it.

Given that the process is cheaper and easier, a number of these systems are not as careful with the facts.  As one example, the artificial intelligence chatbot on the X system, known as Grok, has frequently been found to promote extreme right-wing conspiracy theories.  A related tool has fewer guardrails than other systems and was, for a brief time, widely used to remove clothing from pictures and images of clothed females, and therefore create deepfake pornography.

As with studies of misinformation and disinformation itself, studies of hallucinations in artificial intelligence systems have disturbing results.  A study from Purdue University noted that 52% of ChatGPT's answers to programming questions were incorrect, 77% were much more verbose than they needed to be, and 78% of all answers exhibited inconsistency even when no factual errors were present.  ChatGPT's polite language, articulate, textbook-style answers, and comprehensiveness contributed to participants overlooking misinformation in its responses.

Large language models are starting to lie deliberately in competitions, and are getting better at lying, and lying more frequently.  In one study of simple test scenarios, GPT-4 exhibited deceptive behavior 99.16% of the time.

They weren’t designed to generate disinformation, but so many factors make it almost seem that they were.  They’re *really* good at it.  This is to be expected.  In classical Greek philosophy the major categories were Metaphysics, the study of reality; Epistemology, the study of knowledge and how certain we are of what we know; Ethics, the study of morality; and Rhetoric.  We haven't taught artificial intelligence metaphysics or epistemology, and, unless you count guardrails as a very simplistic form of deontological ethics, we haven't taught them ethics either.

What we have done by feeding the large language models and generative artificial intelligence masses of undifferentiated text is taught them how people argue.  We have taught the systems rhetoric.  Rhetoric is the art of convincing.  It is intended to produce plausible communications rather than to ensure that those communications are correct.  We have, in reality, taught our artificial intelligence systems how to be really, really good at generating propaganda.


AI topic and series
Next: TBA

Has my blog helped you at all?

A small media company over here in Port Alberni wants to interview me for a short video piece.  They want to interview me as a blogger.

The only issue I can see with that is that I don't see myself as a blogger.  I see myself as a teacher who happens to produce some material in text on the blog in support of what I'm teaching.

In any case, for a media company, showing two and a half minutes of me sitting in front of a computer is probably not a terribly effective graphic.  Therefore they want to interview somebody that my blogging has helped.

Has anything in my blog ever helped you?  If so, would you be willing to be interviewed (probably via Zoom, I would think) by these people?

AI - 2.02 - genAI - hallucinations and superstitious learning

I paid my way through university partly by nursing.  I worked in a hospital for a few years.  All the staff in the hospital, and particularly those in the emergency ward, knew, for an absolute fact, that people went crazy on the night of the full moon.  On the night of the full moon, all kinds of people did all kinds of weird things, and got themselves into trouble, and ended up in the emergency ward.

As I say, I was working my way through university.  And one of the courses that I took was in statistics.  I was interested to discover that there had been quite a number of studies that had been done on this issue of the full moon.  And that every single one of the studies had determined exactly the same thing: there was absolutely no truth to the common perception that people went crazy on the night of the full moon.

As a matter of fact, this belief that everyone goes crazy on the night of the full moon is so deeply embedded into our culture that it is odd that, when you actually look at the statistics and the numbers, there isn't even a blip in regard to full moon nights.  This belief is so deeply ingrained in our society that you would expect that some people would let themselves go a little crazy on the night of the full moon, expecting to be forgiven for any weirdness because of that cultural belief.  But no, there isn't even a blip in the statistics around the night of the full moon.

So, why do so many hospital staff, and so many police officers, and so many people who work in emergency services, so strongly believe that people go crazy on the night of the full moon?

Well, there is a kind of observational bias that is at play here.  If you work in an emergency ward, and you have a night where everything is going crazy, and you finally get five minutes to get yourself a breath of fresh air, and you walk out and look up into the night sky, and there is a full moon, you say to yourself, oh, of course.  And that reinforces the belief.  If the night is crazy and you go and look up into the sky and there is no full moon, you don't think anything of it.  And on normal nights, when there is a full moon, you don't have any particular reason to pay attention to the full moon, and so that doesn't affect the belief either.

One of the other areas of study that I pursued was in psychology.  Behavior modification was a pretty big deal at the time, and we knew that there were studies that confirmed how subjects form superstitions.  If you gave random reinforcement to a subject, the subjects would associate the reward with whatever behavior that they had happened to be doing just before the reward appeared, and that behavior would be strengthened, and would occur more frequently.  Because it would occur more frequently, when the next random reward happened, that behavior would likely have occurred recently, and so, once again, that behavior would be reinforced and become more frequent.  In animal studies it was amazing how random reinforcement, presented over a few hours or a few days, would result in the most outrageous obsessive behavior on the part of the subjects.

This is, basically, how we form new superstitions.  This is, basically, why sports celebrities have such weird superstitions.  Whether they have a particularly good game, or winning streak, is, by and large, going to be random.  But anything that they happen to notice that they did, just before or during that game, they are more likely to do again.  Therefore they are more likely to do it on a future date when, again, they have a good game or win an important game.  This is why athletes tend to have lucky socks, or lucky shirts, or lucky rituals.  It's developed in the same way.
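The mechanism can be sketched as a small simulation.  The behaviors and numbers below are invented for illustration; the point is only the feedback loop, where random rewards strengthen whatever happened to occur last:

```python
import random

random.seed(42)

# Three arbitrary pre-game rituals start out equally "strong".
strength = {"tap racket": 1.0, "adjust socks": 1.0, "hop twice": 1.0}

for _ in range(1000):
    behaviors = list(strength)
    # The subject picks a behavior in proportion to its current strength.
    done = random.choices(behaviors,
                          weights=[strength[b] for b in behaviors])[0]
    # Rewards (good games) arrive at random, unrelated to the ritual...
    if random.random() < 0.1:
        # ...but whatever was done just before the reward is strengthened,
        # making it more likely to be the thing done at the next reward too.
        strength[done] += 1.0

# Over many trials, one ritual tends to run away with most of the strength,
# even though the rewards never depended on it at all.
```

The same rich-get-richer loop is what the animal studies produced, and, as noted below, it is also what relation-strengthening systems like neural networks are prone to.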

One of the other fields I worked and researched was, of course, information technology, and the subset known as artificial intelligence.  Artificial intelligence is not, despite the current frenzy over generative artificial intelligence and large language models, a single entity, but rather a variety of approaches to the attempt to get computers to behave more intelligently, and become more useful in helping us with our tasks.  One of the many fields of artificial intelligence is that of neural networks.  This is based on a theory of how the brain works, that was proposed about eighty years ago, and, almost immediately, was found to be, at best, incomplete.  The theory of neural networks though, did seem to present some interesting and useful approaches to trying to build artificial intelligence.  As a biological or psychological model of the brain itself, it is now known to be sometimes woefully misleading.  And one of the things that researchers found, when building computerized artificial intelligence models based on neural networks, was that neural networks are subject to the same type of superstitious learning to which we fall prey.  Neural networks work by finding relations between facts or events, and, every time this relation is seen, the relation in the artificial intelligence model is strengthened.  So it works in a way that's very similar to behavior modification, and leads, frequently, to the same superstitious behaviors.

The new generative artificial intelligence systems based on large language models are, basically, built on a variation of the old neural network theory.  So it is completely unsurprising that one of the big problems we find with generative artificial intelligence is that it tends, when we ask it for research, to present complete fictions to us as established fact.  When such a system presents us with a very questionable piece of research, and we ask it to justify the basis of this research, it will sometimes make up entirely fictional citations in order to support the proposal presented.  This has become known as a "hallucination."

Calling these events "hallucinations" is misleading.  Saying "hallucination" gives the impression that we think that there is an error in either perception or understanding.  In actual fact, generative artificial intelligence has no understanding, at all, of what it is telling us.  What is really going on here is that we have built a large language model, by feeding a system that is based on a neural network model a huge amount of text.  We have asked the model to go through the text, find relationships, and build a statistical model of how to generate this kind of text.  Because these systems can be forced to parrot back intellectual property that has been fed into them, in ways that are very problematic in terms of copyright law, we do, fairly often, get a somewhat reasonable, if very pedestrian, correct answer to a question.  But, because of the superstitious learning that has always plagued neural networks, sometimes the systems find relationships that don't really relate to anything.  Buried deep in the hugely complex statistical model that the large language models are built on, are unknown traps that can be sprung by a particular stream of text that we feed into the generative artificial intelligence as a prompt.  So it's not that the genAI is lying to us, because it's only statistically creating a stream of text based on the statistical model that it has built with other text.  It doesn't know what is true, or not true.
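A toy statistical text generator makes the point concrete.  The corpus below is invented, and the model is vastly simpler than a large language model, but the principle is the same: it knows only which words tend to follow which, and nothing about whether the sentences it strings together are true:

```python
import random
from collections import defaultdict

random.seed(7)

corpus = (
    "the moon makes people act strangely and "
    "the moon controls the tides and "
    "the tides act on the shore"
).split()

# Build a bigram model: for each word, record every word seen following it.
# Repeats in the list act as frequency weights when we sample from it.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start: str, n: int) -> str:
    """Emit up to n words, each chosen purely by observed frequency."""
    word, out = start, [start]
    for _ in range(n):
        if word not in follows:  # dead end: no word ever followed this one
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

text = generate("the", 8)
# The output reads as locally plausible English, but the model has no
# notion of truth: it can "assert" things no one ever said.
```

Scale the corpus up to most of the Internet and the statistics up to billions of parameters, and you have the plausible-but-ungrounded character of genAI output described above.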

There is a joke, in the information technology industry, that asks: what is the difference between a used car salesman and a computer salesman?  The answer is that the used car salesman knows when he is lying to you.  The implication, of course (and, in my five decades of working in the field, I have found it is very true), is that computer salesmen really don't know anything about the products that they are selling.  They really don't know when they are lying to you.  Generative artificial intelligence is basically the same.


AI topic and series

Friday, March 20, 2026

Review of Wispr Flow by OpenAI

I knew that the Newton device from Apple would be a failure when it didn't have any communications connectivity.  I also knew that the Newton device would fail when, in order to get communications connectivity, you had to buy a separate device for exactly the same amount of money as the base unit and exactly the same size as the base unit.

Then again I have never been able to type.  I have always wanted something to do the typing for me, and I have always wanted something to take dictation to enable me to write down what I wanted to write.  I do not know how to explain why I loathe and despise, to the very depths of my soul, soft keyboards on smartphones.  I have hated them ever since actual physical keyboards disappeared from smartphones.  So all I really wanted was something to take dictation for me.

On the other hand, everybody else seems to have wanted something to turn on their lights, play their music, choose from a selection of playlists, add items to their shopping list, and buy items from their shopping list, so that's what Siri and Alexa seem to have been built for.  Of course, all of these functions are fairly simple, and so they never needed much artificial intelligence to work.
 
All of which is sort of circling the fact that what we really want on our smart phones is a kind of a personal assistant.  We want something to remember things for us.  We want something to remind us of important events.  We even want something to decide which events *are* important.  We want something to decide which calls to us are important enough to bother us about.  And this is what we want, what we really want, from artificial intelligence.

This has something to say about what we want for artificial intelligent assistants or devices.  Do we want something that looks and acts like our current cell phones?  Do we want something like the communicator on Star Trek, that's simply a microphone and a speaker and some kind of communications to a centralized computer system?

First, if we are going to simplify it down to that minimalistic communicator device, we are definitely going to have to do something about the reliability of artificial intelligence and that problem of hallucinations.  (What is the difference between artificial intelligence and a used car salesman?  Answer: The used car salesman knows when he is lying to you.)

We have gotten to the point where artificial intelligence is somewhat useful for producing programming code for us, and also to the point where it can be useful for various types of agentic operations.  We still need to have, or possibly formalize, a syntax for specifications: something we can accumulate and refine, possibly over three and a half days of thinking, until we finally commit to an agreed-upon set of specifications, and then have the artificial intelligence commit to executing the action against them.

All of which is kind of background and explanation for why I am doing a review of Wispr Flow.

I have tried out and reviewed at least four different versions of dictation systems so far.  The two that I use most frequently are Gboard, which I use on my Android phones for dictation to pretty much anything, and Live Transcribe, which I use because it has an independent unconnected mode.  While problematic in terms of accuracy, at least it works when I don't have a connection to the Internet.

The reason to add Flow to the mix is that it is produced by OpenAI.  OpenAI, of course, is the producer of ChatGPT and a number of the other major artificial intelligence tools that are available to the general public.  Therefore it stands to reason that Flow will be OpenAI's candidate for a local artificial intelligence tool, something along the lines of a personal assistant.  It therefore makes sense to see how well Flow works, and whether it is reliable enough and accurate enough to be used in this type of situation.
 
I am interested in the fact that Wispr Flow is available for multiple platforms.  I am particularly interested in the fact that it is available for Windows.  This gives me a dictation capability on my desktop machine, which I greatly appreciate.

Perhaps not as greatly as I might.  In testing out Wispr Flow in order to do this review, I have found that I would really prefer to do dictation on my phone, and, as I will note, there is a problem with that.

Wispr Flow is available both for Windows and for Android, as well as a number of other platforms.  This is handy for me since I can install it both on my desktop and on my cell phone.  Presumably I can also install it on the laptop at some point and I might be getting around to that.

Anyway for the first test I tried it on the Android cell phone.  That test was a complete and unmitigated disaster.

As I have mentioned I have experience with a number of other dictation applications.  As far as I can recall, all of them will display to you, as the person dictating, the output and transcription of what you are dictating.

As noted I most frequently use Gboard and Live Transcribe.  Both of these display, as you are dictating, what they are transcribing.  Both of them (and this is only to be expected since both are made by Google) have an interesting property where if they haven't fully decided on what the final transcription will be, the text that they have transcribed so far and is still under consideration shows up as being underlined.  When the underline disappears the system has decided what the final transcription will be.  In any case the system displays to you in real time what it figures you have said.

That is not the case with Flow.  Initially it *really* threw me.  I dictated something and nothing appeared on the screen.  Because I was using the Android version and possibly because of some weird issue with settings or formatting, even after I stopped dictating a test and hit the button indicating that I was finished dictating, nothing appeared.

I tried this multiple times, and then I started looking into possible problems, shifting more or less immediately into systems analyst mode.  I figured out that, yes, what I had dictated *had* been transcribed, but for some reason it showed up as white text on a white background.  It was therefore not until I did some work selecting text in that area that I realized there was text there, just invisible.  Once I could pull up that text I found that, yes, all three attempts had in fact been transcribed.  However, since I had been frantically trying to figure out where this text had gone, the various attempts were embedded within each other and the total text was a horrendous mess.

Subsequent testing indicated that this was not specifically a problem with the Android version.  It must have had to do with some kind of formatting issue, because I have since tested again on the Android smartphone, in a very similar situation with the same application, and the results were pretty much okay.

I should note that in early feedback to Wispr Flow I mentioned this problem, and got a response from their technical support that I should look for settings dealing with fonts and font colours in the application.  They weren't specific about whether they meant the Wispr Flow application or the application that I had been using Wispr Flow to provide input to.  In any case, I couldn't find any settings on the phone, in either application, that dealt with fonts or font colours.  Their technical support wasn't really very supportive.

(I've had subsequent contacts with Wispr Flow support.  I suspect that "Tina" is a bot.  Regardless, content that I send to them seems to get lost somewhere along the way.  In addition, suggestions from support tend to include references to options that don't appear in either version of Flow that I am currently testing.)

Technical support did tell me that this issue of the text not appearing until you have finished dictating is a deliberate design choice in the case of Flow.  Personally I think it's a pretty stupid choice.

I have been practicing, very extensively, with dictation software for the last four years.  It is a non-trivial task until you start to get the hang of it and it is also extremely difficult when you have no feedback.

If you are thinking about what you want to say and you can't see what you have said, to determine whether or not you are using too much repetition of a given word, or if you have already dictated a specific piece of information that you want to include, it can be very difficult.  I would definitely disagree with Flow's design choice in this regard.

As I have noted I have used both Gboard and Live Transcribe fairly extensively.  As I have also noted I use Live Transcribe in the unconnected mode.  Therefore it is completely unsurprising that Live Transcribe makes many more errors than Gboard does.  Gboard does not have an unconnected mode and you can only use it if you are connected to the Internet.  Therefore Google, and its massive data centres, are supporting the transcription of what you dictate to Gboard.  I have used Live Transcribe in situations where I can't be connected to the Internet and it's a bit of a pain to have to do all of the work necessary to edit the material that has been transcribed, at some later time, in order to get what you really want.  But I still appreciate the fact that I can dictate something and edit it later.  However even Gboard is not perfect.  That's actually putting it mildly.  There are frequently some pretty major transcription errors.  You have to say any punctuation that you want to have inserted in your text, with Gboard, and frequently when I want it to put in a comma, it instead inserts the word "karma".

So it is fairly easy to say that Flow is much more accurate than Gboard.  Flow gets many more words down correctly than does Gboard. Flow doesn't make as many mistakes.  Flow can handle punctuation even if you don't say it but it isn't as good with commas as it is with periods.  Flow can handle certain levels of formatting, even if you don't ask for it.  I was interested when it started to create bulleted lists for me even though I didn't want bulleted lists in that particular case.

The advertising for Wispr Flow seems to indicate that it can handle transcription even if it isn't connected to the Internet.  However I have examined the settings for Wispr Flow, at least on my desktop machine, and I don't find any setting that indicates that I can turn on or off a connection to the Internet.  I will probably have to do some more extensive work on my smartphone in order to test that out.

(I have also, in the course of doing some testing for the purposes of this review, found that occasionally Wispr will actually take down a transcription but not paste it into the application that you think you are working in.  On the Windows desktop version you can call up the Wispr application itself and find that the transcription has been recorded in Wispr.  You can then copy and paste it back into the application you thought you were using.)

I'm using the free version of Flow.  At least I *think* I'm using the free version of Flow.  The Wispr Flow application itself tells me that I have access to the Pro version for a couple of extra weeks.  However, it doesn't tell me whether I am actually using the Pro version right now.  So while I appreciate the dictation capability that Flow is providing to me, it could tell you a bit more about itself.  I think this is only fair.  After all, I have not turned on the privacy setting, and therefore Flow is using my attempts at dictation to tune and improve Flow.  Regardless of whether it says so or not, I am quite sure that Flow is also feeding my transcriptions back to OpenAI so that they can use them in building the next round of ChatGPT.  Hey, fair's fair.

I like it.  I'll probably continue to use it.  But it definitely still has some bugs.

And I still think they should show you what you're transcribing in real time.


A few more bits. 

Flow's ability to handle punctuation and formatting can be interesting at times.  Flow will eliminate punctuation, if it feels like it, even if you have given it spoken commands to include punctuation.  Flow is an American product, of course, and seems quite insistently determined to eliminate all possible commas.  It may not like commas, but it definitely does like semi-colons.  A lot of the time when I will expect it to start a new sentence, instead it just puts in a semicolon and keeps on going.  Also anyone who expects to be able to list things with comma-separated values, forget it.  It usually starts a bulleted list if you put in too many different items.

As I have noted, Flow is able to handle stumbles over words and usually turns out a pretty good edit no matter how much of a fumble-tongue you have been in doing the dictation.  However, I am concerned that occasionally Flow may edit out material that it simply considers extraneous.  And Flow is definitely not as good a copy editor as Gloria was.

I am getting used to Flow's lack of immediate display of what it is transcribing.  However this is probably at the cost of some change in my writing style.  I am probably moving more to an Ernest Hemingway style of writing, in contrast to my preferred Henry James.

I have noticed, although it may be due to other factors, that since I have started the trial of Flow my writing productivity has gone up considerably.  You guys are *really* in trouble now.


I tested today in two very high-noise environments.  I went to two local churches where the praise teams were practicing their songs for the services, and recorded while they were doing that.  It turns out that Flow is fairly well equipped to handle dictation in a high-noise environment.  However, I should note that Gboard can record in such an environment as well, without too much difficulty, as long as you take some suitable precautions, such as putting your mouth very close to the microphone.  There was very little difference in performance between the two apps.


Okay, I have finally gotten around to testing whether or not Flow works when it is not connected to the Internet.  It does not work when it is not connected to the Internet.  However, it does not *tell* you that it does not work when it is not connected to the Internet.  Of course this, combined with the fact that it does not display what it is transcribing while it is doing the transcribing, means that if you use it extensively and lose connection, you do not know that you are losing everything that you have just dictated.

When your device is disconnected from the Internet and you pull up a text input window, the Flow icon still appears even though Flow is not going to be functional!


Flow has a dictionary: a local dictionary that uses your preferred spelling or formatting for certain words.  You can add terms to it yourself, but I have recently noticed that when I am dictating and go back to amend or correct the spelling of a word, Flow has started to add my spelling of the word to the dictionary on its own.

In order to do that, of course, Flow is having to pay attention to an awful lot of what is going on on my computer or device, quite aside from what it is specifically transcribing for me.  I'm not exactly sure that I really like Flow having that level of access to my computer but, at the moment, I'm not going to particularly object to it.


AI topic and series