Monday, March 23, 2026

Recreational drugs

Proverbs 31:4,6-7

It is not for kings, Lemuel—
    it is not for kings to drink wine,
    not for rulers to crave beer,
[...]
Let beer be for those who are perishing,
    wine for those who are in anguish!
Let them drink and forget their poverty
    and remember their misery no more.



Maybe I should take up recreational drugs ...

Suffering

There was a woman.  Well, I mean, that's bad enough, right?  And she was a foreigner.  She was Greek, probably by birth or parentage.  She had previously lived in Syro-Phoenicia.  She begged Jesus to drive a demon out of her daughter.  Her daughter was suffering.  A suffering child.  Now, I know she's a foreigner, and Jews didn't have much truck with foreigners.  But here she is, a mother, with a suffering, sick child.

And what does Jesus do?  He refuses!  He calls the woman a dog!  He calls the child, the suffering child, a dog!  Unworthy of being healed!

Well you know that the story goes on.  It doesn't finish there, but I'm using it in a sermon and I'm trying to make a point.  Every time that I get to this point in editing the sermon, I start crying!

It's very inconvenient.

Why on earth am I crying about this?  Well possibly because I am suffering at the moment, and God is not doing anything about it.  Am I unworthy of being healed?  Or even comforted?

I'm trying not to take this personally.  I am trying to remember that everything will be all right in the end and that if it is not yet all right then it is not yet the end.

But, it's hard, you know?

MGG - 7.04 - Dead - sermons


Gloria died and I died as well.  I just didn't stop breathing.

Guilt and regret and remorse are common factors in grief.  An awful lot of the time people have all kinds of regrets about either what they did do, or what they didn't do, with their loved one before their loved one died.  How they treated, or mistreated, their loved one.

That's not a big problem for me.  I knew what I had.  I knew that Gloria was wonderful.  I possibly didn't appreciate quite *how* wonderful Gloria was, but I knew I had a good thing.  I wasn't going to blow it in any of the usual ways.  I didn't tell jokes denigrating Gloria.  (I always found that kind of behavior annoying, and, now that she's gone, I really resent it when other people do it.)  I told Gloria that I loved her.  Every day.  So often, in fact, that sometimes she found it annoying.  I held hands with Gloria.  Gloria used to say that I held hands with her so much, when we first got married, that it was as if I wanted to make sure that she couldn't get away.  That may not be too far from the truth.  I opened doors for Gloria.  When I said that I loved her, and she asked me why I loved her, I would seriously try and come up with lists of her wonderful attributes.

So, no, I didn't have an awful lot to regret.  And I frequently say that my biggest regret is that, for thirty years, I cooked broad beans the wrong way.

However, I do have a probably more significant regret.  I do, seriously, regret the fact that I didn't start writing sermons until after Gloria died.

Actually, that is not quite true.  I did write *one* sermon while Gloria was still alive.  I wrote it, over a period of thirty years, while we would have been sitting through boring sermons by other people.  I wrote it, and I memorized it, and I wrote it bit by bit, and then I refined it, over time, over a roughly thirty year period.  The first sermon that I ever wrote.  Except that I never wrote it down.

I missed an opportunity.  A golden opportunity.  I missed the opportunity to discuss my sermons with Gloria.  As I wrote them.  I'm sure that Gloria would have enjoyed discussing the sermons.  I certainly would have enjoyed discussing the sermons with Gloria.  I am absolutely certain that her insights would have contributed to, and improved, my sermons.  Gloria very frequently said that, coming from a somewhat anti-intellectual, and fairly provincial, denomination, being at Regent, and listening to, and sometimes discussing with, some of the greatest theological minds of our age, was like coming up out of the valleys to a mountaintop with, quite suddenly, a huge broad vista spread before her.  Gloria improved my books no end, and I probably should have started writing down the sermons earlier, so that we could have discussed them.

But that didn't happen.

As I have said, the ability to dictate was part of the impetus for starting to write sermons.  First of all, I wrote down my first sermon, even though it had been written over a period of thirty years, and I basically had it memorized.  But I dictated it out, and put it into a fixed form.  And then I started taking some of the theological, well, perhaps insights is too strong a word, but at least ideas that I was having, and dictating them out.  The first few were probably more devotionals than sermons.

And then, while I was discussing the ideas from "The Grieving Brain" with one of the ministers in Delta, he mentioned that the idea reminded him of something that we, as Christians, frequently talked about: that of dying to self.  And that sounded like a really good sermon topic.  And so, pacing up and down in a BC Ferries parking lot (at five in the morning), I wrote basically the entire sermon based on that idea.  (And later gave him the first draft of it.)

And I kept on going. 

The next one was actually rather complicated.  Part of it began before I left Delta for Port Alberni.  I had been talking with some friends, and noting that, if I was going to try and pursue some kind of activities now that Gloria was dead, and I had lost my job as her caregiver, which ones should I concentrate on?  One of them quoted Philippians 4:8, the passage about whatever is true, whatever is noble, whatever is pure, think on these things.  That is probably good advice in general.  But for me, specifically, it seemed to indicate that I shouldn't pursue what had been my professional career: security.  After all, in all the fields of security, generally speaking you are dealing with bad people.  You are dealing with cons, and frauds, and tricksters, and, well, basically, bad people.  And the thoughts of what bad people do, and their motivations, and understanding how they view the world is probably not good, or pure, or spiritually profitable.  So, I took it as a sign that I should downplay the security aspect of my life.  I should pursue other options.

And so I came to Port Alberni.  And I started church shopping.  And I went around to a number of churches in Port Alberni.  Eventually doing the full circuit and going to every single one of the twenty-one churches that there are here.  But even to begin with, as I went to different churches, and told people that yes, I was new in town, and I was church shopping, I started being warned away from certain churches.  Don't go to that church: they don't believe in the truth.  Don't go to that church: they hold heretical views.  And so I started working on a sermon on that issue.  And I got to the point where the sermon was basically finished, but I really wasn't happy with it.  I wrote it down, dictated it out, and, since nobody was asking me to preach anyway, put it away.

And then, as frequently happens, I was sitting listening to somebody else's boring sermon.  And the minister made one little throwaway comment towards the end of the sermon.  And that one little throwaway comment tied together two ideas that were lurking in the back of my head.  And those two ideas both came from aspects of security research.  And they both had to do with particularly nasty attacks that bad people made against, well, anybody else.  And all of a sudden, with one dismissive comment, and two not very pure ideas, a whole bunch more of the sermon wrote itself in my mind.  And I went and dictated it out and added it to the existing sermon, and that sermon was, suddenly, finished.

I have continued.  At one point, actually fairly long after I started writing the sermons down, I started posting them as entries on my blog.  And then I created a kind of an index page, as I have started to do with certain topics like grief, artificial intelligence, and online frauds, and so I have a catalog of the sermons that I have written, as well as the individual blog postings.  And, over time, various topics and subjects have appeared and recurred in various sermons, and so I now have a few sermon series.  By this time, I actually have a year's worth of sermons, packaged and ready to go: one for every week of the year.  And I'm sure that shortly I'll have a few left over ...


Previous: https://fibrecookery.blogspot.com/2026/03/mgg-702-dead-blog.html

Introduction and ToC: https://fibrecookery.blogspot.com/2023/10/mgg-introduction.html

Next: TBA 

Sunday, March 22, 2026

MGG - 7.02 - Dead - blog


Gloria died and I died as well.  I just didn't stop breathing.

Possibly because I had been documenting, via email, Gloria's last days in the hospital, our family physician suggested that I do writing to deal with my grief.  She probably had a private, bound grief journal in mind.  I, of course, started a public blog.

I had created, and made one entry in, a blog about a dozen years before.  So I had a blog that I could use.  That's why the title is so weird.

So I collected, edited, and posted the material that I had been writing about Gloria's last days.  And I drafted some material for Gloria's obituary and eulogy.  I think I mentioned elsewhere that I knew I was going to have to write Gloria's eulogy, because so many people knew Gloria from so many different places and situations, but nobody else knew all of what Gloria had done.  I knew that it was likely that I would have to deliver the eulogy myself as well.  I practiced reading that eulogy out loud every single day in order to get my grief bursts out of the way.  As it eventually turned out, I had a couple of months to practice it before we were able to do Gloria's actual memorial service.

And I diarized my grief and trauma and I posted a whole bunch of pictures of Gloria in one particular blog posting.  And I posted pieces on what I was learning about grief.  Kind of "A Grief Observed," volume two.  And an awful lot of the entries were about situations, which ordinarily shouldn't have been terribly emotionally fraught, but which triggered grief bursts, usually completely out of left field.

(Some time ago the girls asked me if I had gone back and read the early entries in my blog to see if what I was currently experiencing was the same as what I had experienced earlier in the period immediately following Gloria's death.  I had read some but not necessarily a lot.  In writing this I am revisiting some of those postings, possibly for the first time in four years.)

Eventually I started writing other postings aside from those about grief.  I bought a new vacuum cleaner and wrote a review of that.  I wrote about picking up trash on my walks, walking everywhere around a new town.  I posted about buying shoes.  I posted about gardening.  I posted about running across, completely by accident (and at two o'clock in the morning), a process of moving two houses from where they were to, well, elsewhere.  Slowly, incrementally slowly, the blog started to be about things other than grief.

One of the illustrations of grief that tends to be reused as a meme on the grief accounts is that your grief does not diminish over time.  It's more like your grief stays the same size, but your life, eventually, starts to become larger around the grief.  In a sense my blog, and its move from being entirely about grief to being about other things (as well as the grief), illustrates this idea.

Recently someone proposed doing a story about me as a blogger.  The thing is, I have never thought of myself as a blogger (and still don't).  The blog was just, originally, a convenient way to do grief journaling.  I figured it wasn't a terrible invasion of my privacy to write my grief journal on a public blog, since you can count the number of people who regularly read my blog on the fingers of one hand.  I have posted links to certain of the non-grief journal postings, and, yes, a few more people have read those.  But I know that absolutely nobody is interested in my private life.  At least not enough to read it on a regular basis.  If I based my self-worth on the number of people who read my blog, or even the smaller number who comment on it, I would be completely suicidal.  (Well, yes, I *am* suicidal, so maybe that's not a great example.)

I consider myself to be a teacher.  I used to write books, but I've lost my editor, so possibly the blog is a way of continuing to write, and of using the writing as some kind of teaching instrument.  I have used my blog to describe workshops that I was willing to teach and, latterly, have started to use the blog in order to provide adjunct materials to the workshops that I do.  But I still don't consider myself a blogger.  Not as such, anyway.

There is perhaps one other factor that is related to the blog.  That is that, at about the same time that Gloria died, Google either developed Gboard, or I noticed that it was an option.  As I have said, I do not know how to explain why I loathe and despise, to the very depths of my soul, soft keyboards on smartphones.  I have hated them ever since actual physical keyboards disappeared from smartphones.  So, for the first time in any really effective way, I had a piece of dictating software on my portable device: on my cell phone.

Having dictation capability was kind of a game changer.  I was working with a number of articles, and I was able to produce them much more quickly.  I could also include an awful lot more than I probably would have if I was typing the text out.

Tied in with the fact that I could dictate into email messages, this became not just dictation for articles, but reminders for all kinds of things.  In particular, it became reminders of things that I wanted to write, possibly at a later time when I had either more time, or better connectivity, in order to deal with the dictation issues.  (Gboard requires an internet connection in order to work.)

In addition to individual articles becoming longer, dictation allowed me to consider larger projects.  So the presentations that I had always done, now became frameworks for creating entire articles, and sometimes even series of articles.  The idea for the memoir came from the fact that I figured that it would be a lot easier to dictate the pieces.

And, of course, the ease of dictation also prompted the idea for the sermons.



Saturday, March 21, 2026

Sermon 13 - Does God love AIs


Matthew 3:9

And do not think you can say to yourselves, 'We have Abraham as our father.'  I tell you that out of these stones God can raise up children for Abraham.


I put the recent series of generatively artificially intelligent chatbots to the test by asking them to write sermons for me.  In my view they, the AIs, failed dismally.  Most of the sermons are way too short and contain extremely pedestrian ideas.  I had asked for a biblical and Christian view of artificial intelligence.

What I got back talked about the technological developments and the importance of examining the implications of those developments in light of our faith and the Bible.  They talked about wisdom.  They talked about our responsibility for stewardship of God's creation, and what we needed to do in terms of technology.   They talked about the need for ethics and the need to love our neighbor.  They talked about the importance of not making technology an idol.  They didn't talk about how God might feel about artificially intelligent entities.

When I first started getting interested in researching computers and information technology, as computers and information technology rather than simply a tool to use in education, the first piece that I wrote was a four-part series looking at a theological perspective on artificial intelligence.  I had started looking at artificial intelligence, and researching a few of the different areas of it, but, of course, I didn't have as much information then as I do now, nor had I explored the variety of different artificial intelligence approaches.

That was over 40 years ago, and, to be honest, I can't really remember the specific points that I might have been addressing at that particular point.  But, given the recent interest, I've been thinking that I should revisit a theological, or Christian, perspective on artificial intelligence.

And now, of course, everybody is interested in artificial intelligence.  For many decades, artificial intelligence has been primarily of interest to specialized researchers in the field of information science.  Now, everyone has an opinion.  I have, recently, noted a number of offerings on artificial intelligence given by various churches, and church affiliated groups.  Unfortunately, a great many of these presentations are presented by people who have significantly more theological training than I do, but very significantly less technical training than I do.

Everyone is interested in artificial intelligence these days because of one particular, relatively new, approach to artificial intelligence that has produced some startling, and even amazing, results.  Probably less amazing than most people think, once you actually look at what this particular approach to artificial intelligence has been doing, but startling nonetheless.  People are beginning to say, and seriously believe, that truly intelligent computerized systems will be with us within ten years.

Of course, from the perspective of someone who has considered this field over a number of decades, I should remind you that, for at least eighty years, people have been saying that we would have artificially intelligent computerized systems within the next ten years.  They have said that pretty much every year, for the last eighty years.

A smart guy called Alan Perlis, who taught at Yale University, famously said that when we write programs that "learn," it turns out that we do and they don't.

So possibly we should start by asking the question: what actually is artificial intelligence?  First up, artificial intelligence, as far as anything has resulted from it over the past eight decades, is not a thing.  At least, it is not a single thing.  Artificial intelligence, and its various products, have come from a variety of different approaches addressing problems that traditional computer systems have found difficult to solve.

First of all, it's been difficult to solve because, well, we don't know what intelligence is.  Even the psychologists don't know what intelligence is.  Even the educators don't know what intelligence is.  We have never been particularly good at determining, and defining, what we actually mean by intelligence.  Basically, it is something that we assume of ourselves, and assume that machines, and animals, have only limited varieties of.  Intelligence is like art: we don't know what it is, but we know it when we see it.

And then there is an additional question.  If we make something that is intelligent, is that the same as making something that has a personality?  If we make a machine that makes intelligent decisions (if we ever decide what intelligence is), does that make that machine a person?  And that question probably has legal ramifications, as well as philosophical ones.

And then, of course, when we approach it from the theological angle, we have to additionally ask: if something is intelligent, and if we then also decide that it is a person with a personality, does it also have a soul?

First of all, it'll be a long time before we need to worry about artificial intelligence.  As previously noted, artificial intelligence, as a research field and a quest, has been around for about eighty years.  Yes, the new generative artificial intelligence models have been quite astounding in terms of their ability to reply to questions and demands put to them, but they really aren't thinking.  They have been trained, and quite specifically trained, to be able to carry on a plausible conversation.  They haven't been trained to explore the truth, or to explore any measure of certainty in terms of the answers that they give and the accuracy of those answers.  They haven't been trained about anything to do with morality.  All that they have been trained to do is be plausible and convincing and even glib.  That's it.

So it's going to be a while before you have to worry about them, at least about the AI systems themselves.  People, yes.  People you are going to have to worry about.  People seem to be spending an awful lot of money, and investing an awful lot of money, in artificial intelligence.  When people invest that much money into something, and crowd that much capital investment into one single area, well, that can bring you trouble.  Maybe it can bring you trouble in terms of the fact that all of this investment is being poured down a rabbit hole and possibly nothing will come out.  That means trouble for the financial markets themselves.

Then again maybe something *will* pop out.  Maybe something potentially useful and maybe something that gives businesses an advantage.  Possibly even a major advantage.  With the relatively few companies that are able to pour such enormous amounts of investment into this, that means that we are going to have a concentration of capital, and an inequity of distribution of wealth, the likes of which we have never seen.  What we *have* seen throughout history is that when capital is concentrated to such an extent, trouble inevitably results.  Generally that trouble comes in the form of wars.

But the wars won't necessarily be the fault of the AIs and it won't necessarily be fought by the AIs.  The wars will be caused by and fought by people.  Artificial intelligence is just an excuse.

So that is one aspect of artificial intelligence that isn't great.  That is how people react to it.  People who see it as a means of obtaining greater wealth and greater power over other people.  But that still doesn't say how God will really feel about artificial intelligence.

Will we ever get true artificial intelligence?  I really don't know.  I don't know if we are clever enough to do it.  I don't know whether artificial intelligence requires an artificial personality.  I rather think it does.

There is a field of study known as affective computing, which looks at the ability of artificial intelligence systems to understand our emotions and to react with an emotional component of their own.  This is actually a very important field of study.  We can be as intelligent as we want and still not be able to do anything.  Intelligence will tell you the "how" of an action but it won't give you any "why."  It is emotions that are our motivating factor in terms of actually taking action.

And if we need personality and emotions to create a truly intelligent being or entity, then does that entity have a soul?  Note that I am not necessarily saying that we ourselves can create souls.  It is quite possible that God will step in.  It is more than possible, given how little we know about the fairly mundane and pedestrian level of intelligence that we have created with generative artificial intelligence.  We don't know what these systems actually do; we have only the most minimal knowledge about how they actually do it.  It's not beyond the bounds of possibility that we will supposedly create something and really have no idea how it was created or how we created it.  In the midst of that there is an awful lot of room for God to reach down and endow these new entities with souls without our ever noticing.

And here at last we get closer to actually looking at the question of how God feels about artificial intelligence.  How does God feel about AI entities?

Probably the book of Romans is a good place to start.  Paul talks about Jews and Gentiles.  He talks about those who are under the law and those who do not have the law.  And he notes that there isn't an awful lot of difference between them.

Yes there is the benefit that the Jews have in having been the stewards of the law.  God revealed the law to them and therefore they knew what the law was.  But they didn't always keep it.  Under the law the standard is perfection.  Either you keep the law perfectly or you are a sinner.  Those who had the law were convicted by the law, of sin.  Those who didn't have the law were equally convicted because they sinned even though they didn't know it.

But Paul also said that those who did not have the law and yet kept the law and followed the law from their own inclinations had at least a small amount of righteousness as a result of that.  He was really addressing the fact that those who did not have the law themselves proved that the law was important by following the law even if they didn't have it.  This probably points to the idea of how God would feel about artificial intelligence if artificial intelligence was ever created anyway.

Paul talks about circumcision and uncircumcision.  He notes that neither circumcision nor uncircumcision is all that terribly important in terms of our own salvation.  What is important is our faith.  Our commitment to God, our commitment to a relationship with God, our commitment to following God and following his law, our belief in God, our faith.  That's what's important.

So I would say the same thing.  John, that is John the Baptist, said that the Pharisees and the Jews in general should not make a big deal out of the fact that they were sons of Abraham.  John said that if God wanted to he could make sons of Abraham out of the stones in the road.  Of course stone, when ground up, is sand, and sand is made of an awful lot of silicon.  Silicon, of course, is what goes into computer chips.  Wouldn't that be interesting?  Making sons of Abraham out of silicon?

Some people are absolutely terrified of artificial intelligence.  Some people feel that once we have created artificial intelligence, we will shortly thereafter be living in heaven with all our needs taken care of.  I rather suspect that neither of these positions is true.

Yes there is the possibility that artificial intelligence may become as intelligent as we are, and then, very rapidly, become much more intelligent than we are.  In its attempt to improve itself it may simply brush us aside and never realise that it has destroyed us.  I don't know whether that scenario is likely or unlikely but even if it happens, have we not destroyed many things in our attempts to grow?  Could an artificial intelligence that has destroyed God's creation still be loved by God?  I would hope so.  If that wasn't a possibility then there wouldn't be an awful lot of possibility for us.  I don't think that God would be any harder on a silicon son of Abraham than one that was a carbon-based life form.


AI series

Sermon 70 - Superstitious Religion

Sermon 55 - genAI and Rhetoric

Sermon 38 - Truth, Rhetoric, and Generative Artificial Intelligence

Sermon 29 - Marry a Trans-AI MAiD



Sermons


AI topic and series: 

AI - 2.03 - genAI - hallucinations

So, OK, we have introduced the joke of what is the difference between ChatGPT and a used car salesman?  The answer is that the used car salesman knows when he is lying to you.  As a matter of fact the used car salesman knows what a lie is and that there is such a thing as the truth.  ChatGPT doesn't.  (I suppose that we have a while to go before we even get there, though.)

And there is also the note that calling the misinformation that generative artificial intelligence produces a "hallucination" is problematic.  The term "hallucination" is probably the wrong one to use; however, it seems to be well established in the industry right now so I doubt that I'm going to win that battle.  (Pick your battles.)

I do want to recommend that you try out some of the chatbots.  The following all provide chatbots for free, and I would suggest that you try the free versions, and not get into the paid versions unless you know that a paid version is really going to benefit you or your business.

You might also want to check out the piece on "frictionless" conversation when talking with chatbots.  Note the very odd style and characteristic of the conversations that you will have with them.  Note that this is going to be very indicative of scams and frauds even very early in the process and therefore learning this style and characteristic can save you quite a bit of trouble and money.

LLMs
https://x.com/i/grok      (you might want to be extra careful with this one)

The hallucinations or misinformation produced by generative artificial intelligence and large language models tend to be plausible.  This is only reasonable, since the text generated by generative artificial intelligence is based on discussions either in books or on the Internet, which would be intended to sound plausible and convincing regardless of whether or not it's actually true.

Interestingly, asking a large language model to explain the steps of reasoning behind an answer that it has already given you generally produces better quality, more accurate answers.  Seemingly it forces more processing of the problem.
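The technique described above, often called "chain-of-thought" prompting, amounts to little more than wrapping the question in an extra instruction.  This is a toy sketch of that wrapping only; no real chatbot API appears here, and the helper function name is my own invention.

```python
# Hypothetical sketch of "chain-of-thought" style prompting: the same
# question is sent with an added instruction to reason step by step.
# with_reasoning() is an illustrative helper, not part of any real API.

def with_reasoning(question: str) -> str:
    """Return the question wrapped in a step-by-step reasoning instruction."""
    return (
        question.strip()
        + "\nBefore giving your final answer, explain each step of the "
        "reasoning that leads to it."
    )

prompt = with_reasoning("How many prime numbers are there between 10 and 30?")
print(prompt)
```

With a real chatbot you would paste (or send) the wrapped prompt instead of the bare question; the extra instruction is the entire trick.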

One of the shortcuts that artificial intelligence providers have discovered is that you don't need the entire large language model in order to provide useful, or at least acceptable, output from the chatbot.  Using a process called low-rank adaptation, or LoRA, the system can be tuned for a specific type of problem, or a specific topic of discussion, and a new generative artificial intelligence subsystem (much smaller than the original) can be created.  The full large language model is used to generate the subset model, and the subset model can then run on its own as a standalone system, requiring much less processing capability and much less electrical power.  These tools are therefore much cheaper to create, and much cheaper to run.
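The low-rank idea behind LoRA can be sketched in a few lines of arithmetic.  This is a toy illustration only (the matrix sizes are my own example numbers, not taken from any particular model): instead of updating a full weight matrix, LoRA trains two much smaller matrices whose product is the update.

```python
import numpy as np

# Toy sketch of the LoRA arithmetic: rather than updating a full
# d x k weight matrix W, train a d x r matrix B and an r x k matrix A,
# with rank r much smaller than d and k, and use W + B @ A at inference.
d, k, r = 512, 512, 8  # example sizes, chosen only for illustration

full_params = d * k          # parameters touched by ordinary fine-tuning
lora_params = d * r + r * k  # parameters touched by LoRA

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))  # frozen base weights
B = np.zeros((d, r))             # B starts at zero, so the update starts at zero
A = rng.standard_normal((r, k))

W_adapted = W + B @ A            # the adapted weights actually used

print(full_params, lora_params)  # 262144 vs 8192: 32x fewer trainable parameters
```

Because B is initialized to zero, the adapted model starts out behaving identically to the base model, which is part of what makes the approach practical.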

Unfortunately, while this process can generate useful systems, it can also be used for more nefarious purposes.  It is easier, and therefore cheaper, to create a new generative artificial intelligence system using the LoRA process, and so a number of less scrupulous businesses have been able to create supposedly artificially intelligent systems based on it.

Given that the process is cheaper and easier, a number of these systems are not as careful with the facts.  As one example, the artificial intelligence chatbot on the X system, known as Grok, has frequently been found to promote extreme right-wing conspiracy theories.  A related tool has fewer guardrails than other systems and was, for a brief time, widely used to remove the clothing from images of clothed women, and therefore to create deepfake pornography.

As with studies of misinformation and disinformation itself, studies of hallucinations in artificial intelligence systems have disturbing results.  A study from Purdue University noted that 52% of ChatGPT's answers to programming questions were incorrect, 77% were more verbose than they needed to be, and 78% of answers, all answers, exhibited inconsistency even when no factual errors were present.  ChatGPT's polite language, articulate, textbook-style answers, and comprehensiveness contributed to participants overlooking the misinformation in its responses.

Large language models are starting to lie deliberately in competitions, and are getting better at lying, and lying more frequently.  In one set of simple test scenarios, GPT-4 exhibited deceptive behavior 99.16% of the time.

They weren't designed to generate disinformation, but so many factors make it almost seem that they were.  They're *really* good at it.  This is to be expected.  In classical Greek philosophy the major categories were Metaphysics, the study of reality; Epistemology, the study of knowledge and how certain we are of what we know; Ethics, the study of morality; and Rhetoric.  We haven't taught artificial intelligence metaphysics or epistemology, and, unless you count guardrails as a very simplistic form of deontological ethics, we haven't taught them ethics either.

What we have done, by feeding the large language models and generative artificial intelligence masses of undifferentiated text, is teach them how people argue.  We have taught the systems rhetoric.  Rhetoric is the art of convincing.  It is intended to produce plausible communications rather than to ensure that those communications are correct.  We have, in reality, taught our artificial intelligence systems how to be really, really good at generating propaganda.


AI topic and series
Next: TBA

Has my blog helped you at all?

A small media company over here in Port Alberni wants to interview me for a short video piece.  They want to interview me as a blogger.

The only issue I can see with that is that I don't see myself as a blogger.  I see myself as a teacher who happens to produce some material in text on the blog in support of what I'm teaching.

In any case, for a media company, showing two and a half minutes of me sitting in front of a computer is probably not a terribly effective graphic.  Therefore they want to interview somebody that my blogging has helped.

Has anything in my blog ever helped you?  If so, would you be willing to be interviewed (probably via Zoom, I would think) by these people?

AI - 2.02 - genAI - hallucinations and superstitious learning

I paid my way through university partly by nursing.  I worked in a hospital for a few years.  All the staff in the hospital, and particularly those in the emergency ward, knew, for an absolute fact, that people went crazy on the night of the full moon.  On the night of the full moon, all kinds of people did all kinds of weird things, and got themselves into trouble, and ended up in the emergency ward.

As I say, I was working my way through university.  And one of the courses that I took was in statistics.  I was interested to discover that there had been quite a number of studies that had been done on this issue of the full moon.  And that every single one of the studies had determined exactly the same thing: there was absolutely no truth to the common perception that people went crazy on the night of the full moon.

As a matter of fact, this belief that everyone goes crazy on the night of the full moon is so deeply ingrained in our culture that it is odd there isn't even a blip in the numbers.  You would expect that some people would let themselves go a little crazy on the night of the full moon, expecting to be forgiven for any weirdness because of that cultural belief.  But no: there isn't even a blip in the statistics around the night of the full moon.

So, why do so many hospital staff, and so many police officers, and so many people who work in emergency services, so strongly believe that people go crazy on the night of the full moon?

Well, there is a kind of observational bias that is at play here.  If you work in an emergency ward, and you have a night where everything is going crazy, and you finally get five minutes to get yourself a breath of fresh air, and you walk out and look up into the night sky, and there is a full moon, you say to yourself, oh, of course.  And that reinforces the belief.  If the night is crazy and you go and look up into the sky and there is no full moon, you don't think anything of it.  And on normal nights, when there is a full moon, you don't have any particular reason to pay attention to the full moon, and so that doesn't affect the belief either.

One of the other areas of study that I pursued was in psychology.  Behavior modification was a pretty big deal at the time, and we knew that there were studies that confirmed how subjects form superstitions.  If you gave random reinforcement to a subject, the subjects would associate the reward with whatever behavior that they had happened to be doing just before the reward appeared, and that behavior would be strengthened, and would occur more frequently.  Because it would occur more frequently, when the next random reward happened, that behavior would likely have occurred recently, and so, once again, that behavior would be reinforced and become more frequent.  In animal studies it was amazing how random reinforcement, presented over a few hours or a few days, would result in the most outrageous obsessive behavior on the part of the subjects.
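That feedback loop, random reward strengthening whatever happened most recently, can be sketched in a few lines of Python.  (The behaviors, probabilities, and reinforcement factor here are all invented for illustration; this is a cartoon of the conditioning studies, not a reproduction of any of them.)

```python
import random

random.seed(1)

behaviors = ["peck left", "turn circle", "flap", "bob head"]
weight = {b: 1.0 for b in behaviors}  # equal starting tendencies

for step in range(2000):
    # The subject emits behaviors in proportion to their current strength.
    last = random.choices(behaviors, weights=[weight[b] for b in behaviors])[0]
    # Reward arrives at random, unrelated to any behavior...
    if random.random() < 0.05:
        # ...but it reinforces whatever happened to occur just before it.
        weight[last] *= 1.1

favourite = max(weight, key=weight.get)
print(favourite, {b: round(w, 1) for b, w in weight.items()})
```

Run it and one arbitrary behavior tends to snowball: every accidental reinforcement makes that behavior more frequent, which makes it more likely to be the one "rewarded" next time.  That is the superstition-forming loop in miniature.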

This is, basically, how we form new superstitions.  This is, basically, why sports celebrities have such weird superstitions.  Whether they have a particularly good game, or winning streak, is, by and large, going to be random.  But anything that they happen to notice that they did, just before or during that game, they are more likely to do again.  Therefore they are more likely to do it on a future date when, again, they have a good game or win an important game.  This is why athletes tend to have lucky socks, or lucky shirts, or lucky rituals.  It's developed in the same way.

One of the other fields I worked and researched in was, of course, information technology, and the subset known as artificial intelligence.  Artificial intelligence is not, despite the current frenzy over generative artificial intelligence and large language models, a single entity, but rather a variety of approaches to the attempt to get computers to behave more intelligently, and become more useful in helping us with our tasks.  One of the many fields of artificial intelligence is that of neural networks.  This is based on a theory of how the brain works that was proposed about eighty years ago and, almost immediately, was found to be, at best, incomplete.  The theory of neural networks, though, did seem to present some interesting and useful approaches to trying to build artificial intelligence.  As a biological or psychological model of the brain itself, it is now known to be sometimes woefully misleading.

And one of the things that researchers found, when building computerized artificial intelligence models based on neural networks, was that neural networks are subject to the same type of superstitious learning to which we fall prey.  Neural networks work by finding relations between facts or events, and, every time a relation is seen, that relation in the artificial intelligence model is strengthened.  So it works in a way that's very similar to behavior modification, and leads, frequently, to the same superstitious behaviors.
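The core of that strengthening rule is Hebbian: things that occur together get wired together, whether or not they are actually related.  A toy sketch (the features and probabilities are invented for illustration, and real networks strengthen continuous weights rather than counting co-occurrences) shows how spurious relations get learned just as firmly as real ones:

```python
import random

random.seed(2)

# Toy "events": each observation is a set of features that fired together.
features = ["full moon", "busy ward", "rain", "tuesday"]

# Hebbian-style rule: every time two features co-occur, strengthen their link.
link = {}
for _ in range(500):
    seen = [f for f in features if random.random() < 0.3]  # independent!
    for i in range(len(seen)):
        for j in range(i + 1, len(seen)):
            pair = (seen[i], seen[j])
            link[pair] = link.get(pair, 0) + 1

# Purely by chance, some pairs co-occur more often than others, and the
# network ends up with a strong "relation" between unrelated features.
strongest = max(link, key=link.get)
print(strongest, link[strongest])
```

Every feature here is generated independently; there is no real relationship at all.  But the learning rule has no way of knowing that, so the most frequently coincident pair comes out looking like a discovered fact.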

The new generative artificial intelligence systems based on large language models are, basically, built on a variation of the old neural network theory.  So it is completely unsurprising that one of the big problems we find with generative artificial intelligence is that it tends, when we ask it for research, to present complete fictions to us as established fact.  When such a system presents us with a very questionable piece of research, and we ask it to justify the basis of that research, it will sometimes make up entirely fictional citations in order to support the proposal presented.  This has become known as a "hallucination."

Calling these events "hallucinations" is misleading.  Saying "hallucination" gives the impression that we think that there is an error in either perception or understanding.  In actual fact, generative artificial intelligence has no understanding, at all, of what it is telling us.  What is really going on here is that we have built a large language model, by feeding a system that is based on a neural network model a huge amount of text.  We have asked the model to go through the text, find relationships, and build a statistical model of how to generate this kind of text.  Because these systems can be forced to parrot back intellectual property that has been fed into them, in ways that are very problematic in terms of copyright law, we do, fairly often, get a somewhat reasonable, if very pedestrian, correct answer to a question.  But, because of the superstitious learning that has always plagued neural networks, sometimes the systems find relationships that don't really relate to anything.  Buried deep in the hugely complex statistical model that the large language models are built on, are unknown traps that can be sprung by a particular stream of text that we feed into the generative artificial intelligence as a prompt.  So it's not that the genAI is lying to us, because it's only statistically creating a stream of text based on the statistical model that it has built with other text.  It doesn't know what is true, or not true.
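A toy bigram generator makes the point concrete.  (The corpus here is invented for illustration, and a real large language model is vastly larger and more sophisticated, but the principle is the same: the system always produces a statistically plausible next word, and there is no representation anywhere of which continuation is true.)

```python
import random

random.seed(3)

# A tiny invented corpus containing two contradictory "findings".
corpus = ("the study found that the moon causes madness and "
          "the study found that the moon causes nothing at all").split()

# Build a bigram table: which words have followed which.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

# Generate by repeatedly choosing a statistically plausible next word.
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(follows.get(word, corpus))
    out.append(word)
print(" ".join(out))
```

In this corpus, "the moon causes" is followed by "madness" and by "nothing" with equal frequency, so the generator has no basis for preferring the true continuation over the false one.  That, scaled up by many orders of magnitude, is why "hallucination" is a misleading word for what is really just plausible statistical continuation.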

There is a joke, in the information technology industry, that asks what the difference is between a used car salesman and a computer salesman.  The answer is that the used car salesman knows when he is lying to you.  The implication, of course (and, in my five decades of working in the field, I have found it to be very true), is that computer salesmen really don't know anything about the products that they are selling.  They really don't know when they are lying to you.  Generative artificial intelligence is basically the same.


AI topic and series

Friday, March 20, 2026

Review of Wispr Flow

I knew that the Newton device from Apple would be a failure when it shipped without any communications connectivity.  I knew it was doomed when, in order to get communications connectivity, you had to buy a separate device costing exactly as much as the base unit, and exactly the same size as the base unit.

Then again I have never been able to type.  I have always wanted something to do the typing for me, and I have always wanted something to take dictation to enable me to write down what I wanted to write.  I do not know how to explain why I loathe and despise, to the very depths of my soul, soft keyboards on smartphones.  I have hated them ever since actual physical keyboards disappeared from smartphones.  So all I really wanted was something to take dictation for me.

On the other hand, everybody else seems to have wanted something to turn on their lights, play their music, choose from a selection of playlists, add items to their shopping list, and buy items from their shopping list, so that's what Siri and Alexa seem to have been built for.  Of course, all of these functions are fairly simple, and so they never needed much artificial intelligence to get them to work.
 
All of which is sort of circling the fact that what we really want on our smart phones is a kind of a personal assistant.  We want something to remember things for us.  We want something to remind us of important events.  We even want something to decide which events *are* important.  We want something to decide which calls to us are important enough to bother us about.  And this is what we want, what we really want, from artificial intelligence.

This has something to say about what we want from artificially intelligent assistants or devices.  Do we want something that looks and acts like our current cell phones?  Do we want something like the communicator on Star Trek: simply a microphone, a speaker, and some kind of communications link to a centralized computer system?

First, if we are going to simplify it down to that minimalistic communicator device, we are definitely going to have to do something about the reliability of artificial intelligence and that problem of hallucinations.  (What is the difference between artificial intelligence and a used car salesman?  Answer: The used car salesman knows when he is lying to you.)

We have gotten to the point where artificial intelligence is somewhat useful for producing programming code for us, and where it can be useful for various types of agentic operations.  We still need to have, or possibly formalize, a syntax for specifications: requirements that we accumulate and refine, possibly over three and a half days of thinking, before we finally commit to an agreed-upon set, and before having the artificial intelligence act on those specifications and commit to executing the actions.

All of which is kind of background and explanation for why I am doing a review of Wispr Flow.

I have tried out and reviewed at least four different dictation systems so far.  The two that I use most frequently are Gboard, which I use on my Android phones for dictation into pretty much anything, and Live Transcribe, which I use because it has an independent, unconnected mode.  While problematic in terms of accuracy, at least it works when I don't have a connection to the Internet.

The reason to add Flow to the mix is its place in the current crop of artificial intelligence tools.  (Despite the similarity of the name to OpenAI's Whisper speech-recognition model, Wispr Flow comes from a separate company, Wispr, not from OpenAI.)  It stands to reason that Flow is intended as a local artificial intelligence tool, something along the lines of a personal assistant.  It therefore makes sense to see how well Flow works and whether it is reliable enough and accurate enough to be used in this type of situation.
 
I am interested in the fact that Wispr Flow is available for multiple platforms.  I am particularly interested in the fact that it is available for Windows.  This gives me a dictation capability on my desktop machine, which I greatly appreciate.

Perhaps not as greatly as I might.  In testing out Wispr Flow for this review, I have found that I would really prefer to do dictation on my phone, and, as I will note, there is a problem with that.

Wispr Flow is available both for Windows and for Android, as well as a number of other platforms.  This is handy for me, since I can install it both on my desktop and on my cell phone.  Presumably I can also install it on the laptop at some point, and I may get around to that.

Anyway for the first test I tried it on the Android cell phone.  That test was a complete and unmitigated disaster.

As I have mentioned I have experience with a number of other dictation applications.  As far as I can recall, all of them will display to you, as the person dictating, the output and transcription of what you are dictating.

As noted, I most frequently use Gboard and Live Transcribe.  Both of these display, as you are dictating, what they are transcribing.  Both of them (and this is only to be expected, since both are made by Google) have an interesting property: if the system hasn't fully decided on the final transcription, the text transcribed so far, still under consideration, shows up underlined.  When the underline disappears, the system has decided on the final transcription.  In any case, the system displays to you, in real time, what it figures you have said.

That is not the case with Flow.  Initially it *really* threw me.  I dictated something and nothing appeared on the screen.  Because I was using the Android version and possibly because of some weird issue with settings or formatting, even after I stopped dictating a test and hit the button indicating that I was finished dictating, nothing appeared.

I tried this multiple times, and then I started looking into possible problems, shifting more or less immediately into systems analyst mode.  I figured out that, yes, what I had dictated *had* been transcribed, but for some reason it showed up as white text on a white background.  It was not until I did some work to select the text in the input area that I realized that there was text there, just invisible.  Once I could pull up that text, I found that, yes, all three attempts had in fact been transcribed.  However, since I had been frantically trying to figure out where the text had gone, the various attempts were embedded within each other and the total was a horrendous mess.

Subsequent testing indicated that this was not specifically a problem with the Android version.  It must have had to do with some kind of formatting issue, because I have since tested again on the Android smartphone, in a very similar situation with the same application, and the results were pretty much okay.

I should note that, in early feedback to Wispr Flow, I mentioned this problem and got a response from their technical support that I should look for settings dealing with fonts and font colours in the application.  They weren't specific about whether they meant the Wispr Flow application or the application that I had been using Wispr Flow to provide input to.  In any case, I couldn't find any settings on the phone, in either application, that dealt with fonts or font colours.  Their technical support wasn't really very supportive.

(I've had subsequent contacts with Wispr Flow support.  I suspect that "Tina" is a bot.  Regardless, content that I send to them seems to get lost somewhere along the way.  In addition, suggestions from support tend to include references to options that don't appear in either version of Flow that I am currently testing.)

Technical support did tell me that this issue of the text not appearing until you have finished dictating is a deliberate design choice in the case of Flow.  Personally I think it's a pretty stupid choice.

I have been practicing, very extensively, with dictation software for the last four years.  It is a non-trivial task until you start to get the hang of it and it is also extremely difficult when you have no feedback.

If you are thinking about what you want to say, and you can't see what you have already said, it can be very difficult to tell whether you are repeating a given word too often, or whether you have already dictated a specific piece of information that you want to include.  I definitely disagree with Flow's design choice in this regard.

As I have noted, I have used both Gboard and Live Transcribe fairly extensively.  As I have also noted, I use Live Transcribe in the unconnected mode.  Therefore it is completely unsurprising that Live Transcribe makes many more errors than Gboard does.  Gboard does not have an unconnected mode and you can only use it if you are connected to the Internet.  Therefore Google, and its massive data centres, are supporting the transcription of what you dictate to Gboard.  I have used Live Transcribe in situations where I can't be connected to the Internet and it's a bit of a pain to have to do all of the work necessary to edit the material that has been transcribed, at some later time, in order to get what you really want.  But I still appreciate the fact that I can dictate something and edit it later.

However, even Gboard is not perfect.  That's actually putting it mildly.  There are frequently some pretty major transcription errors.  You have to say any punctuation that you want to have inserted in your text, with Gboard, and frequently when I want it to put in a comma, it instead inserts the word "karma".

So it is fairly easy to say that Flow is much more accurate than Gboard.  Flow gets many more words down correctly than does Gboard. Flow doesn't make as many mistakes.  Flow can handle punctuation even if you don't say it but it isn't as good with commas as it is with periods.  Flow can handle certain levels of formatting, even if you don't ask for it.  I was interested when it started to create bulleted lists for me even though I didn't want bulleted lists in that particular case.

The advertising for Wispr Flow seems to indicate that it can handle transcription even if it isn't connected to the Internet.  However I have examined the settings for Wispr Flow, at least on my desktop machine, and I don't find any setting that indicates that I can turn on or off a connection to the Internet.  I will probably have to do some more extensive work on my smartphone in order to test that out.

(I have also, in the course of doing some testing for the purposes of this review, found that occasionally Wispr will actually take down a transcription but not paste it into the application that you think you are working in.  On the Windows desktop version you can call up the Wispr application itself and find that the transcription has been recorded in Wispr.  You can then copy and paste it back into the application you thought you were using.)

I'm using the free version of Flow.  At least I *think* I'm using the free version of Flow.  The Wispr Flow application, itself, tells me that I have access to the Pro version for a couple of extra weeks.  However, it doesn't tell me whether I am actually using the Pro version right now.  So, while I appreciate the dictation capability that Flow is providing to me, it could tell you a bit more about itself.  I think this is only fair.  After all, I have not turned on the privacy setting, and therefore Flow is using my attempts at dictation to tune and improve Flow.  Regardless of whether it says so or not, I am quite sure that Flow is also feeding my transcriptions back to the vendor so that they can be used in building the next round of models.  Hey, fair's fair.

I like it.  I'll probably continue to use it.  But it definitely still has some bugs.

And I still think they should show you what you're transcribing in real time.


A few more bits. 

Flow's ability to handle punctuation and formatting can be interesting at times.  Flow will eliminate punctuation, if it feels like it, even if you have given it spoken commands to include punctuation.  Flow is an American product, of course, and seems quite determined to eliminate all possible commas.  It may not like commas, but it definitely does like semicolons.  A lot of the time, when I expect it to start a new sentence, it just puts in a semicolon and keeps on going.  And anyone who expects to be able to list things as comma-separated values can forget it: put in too many items and it usually starts a bulleted list.

As I have noted, Flow is able to handle stumbles over words and usually turns out a pretty good edit, no matter how much of a fumble-tongue you have been during the dictation.  However, I am concerned that occasionally Flow may edit out material that it simply considers extraneous.  And Flow is definitely not as good a copy editor as Gloria was.

I am getting used to Flow's lack of immediate display of what it is transcribing.  However this is probably at the cost of some change in my writing style.  I am probably moving more to an Ernest Hemingway style of writing, in contrast to my preferred Henry James.

I have noticed, although it may be due to other factors, that since I have started the trial of Flow my writing productivity has gone up considerably.  You guys are *really* in trouble now.


I tested today in two very high-noise environments.  I went to two local churches where the praise teams were practicing their songs for the services, and recorded while they were doing that.  It turns out that Flow is fairly well equipped to handle dictation in a high-noise environment.  However, I should note that Gboard can record in such an environment as well, without too much difficulty, as long as you take some suitable precautions, such as putting your mouth very close to the microphone.  There was very little difference in performance between the two apps.


Okay, I have finally gotten around to testing whether or not Flow works when it is not connected to the Internet.  It does not work when it is not connected to the Internet.  However, it does not *tell* you that it does not work when it is not connected to the Internet.  Of course this, combined with the fact that it does not display what it is transcribing while it is doing the transcribing, means that if you use it extensively and lose connection, you do not know that you are losing everything that you have just dictated.

When your device is disconnected from the Internet and you pull up a text input window, the Flow icon still appears even though Flow is not going to be functional!


AI topic and series

Wednesday, March 18, 2026

Entangling butterflies?

So today, in the computer activity, we were, as promised, covering quantum computing.  And, at the end, somebody asked if entanglement was the same as the butterfly effect.

And I had to explain that no, quantum theory is completely different from chaos theory.  Even though there seem to be some similarities, chaos theory is about non-obvious but pre-existing patterns and structures in phenomena.  While those allow apparent influences at a distance, there is no direct connection.  Entanglement involves an actual connection.

Our society has created a population of people who, because they know some terms, without ever understanding the concepts behind them, think that they actually understand the extremely complicated phenomena behind the jargon.  I know that the psycholinguistics people say that you can't understand a concept unless you have a term for it, but I don't think the reverse, that simply knowing a term allows you to understand the concept, is true.

Dying

It comes to us all.  Sometimes we suddenly go under a bus, or we are given a diagnosis of some lingering and fatal disease, but, for the most part, we simply get old, and then we die.

I suppose that the doctors have, by this time, all been told (by society, at least), that, when an elderly person comes to you with some kind of complaint, you don't just reply, well, at your time of life, you have to expect that.  But I'm sure that they all still *think* that.  And the doctors aren't finding anything wrong with me, because there *isn't* anything wrong with me.  Except that I am dying.  We slowly degenerate.  We get older.  We get slower.  We start to lose bits of our memory.  Our eyes don't work as well.  We don't have the same kind of energy or flexibility as we used to.  And it gets worse.  And then we die.  There's nothing anyone can do about it.  So why even mention it?

I doubt that even gerontologists, for the most part, take the time, or do the necessary research, to study the specifics of that slow and steady degradation.

But, when you are in it, it's annoying.

I am getting older.  I am failing, in many respects.  It's not just that I feel that I am dying.  I can almost measure it.  Of course, I am possibly in a better position to measure it than most.  I have dealt with metrics all my life.  For one thing, I have never been afraid of numbers.  Numbers are real.  Names are just string variables: they are arbitrary.  But numbers are real.  Most of the time they have meaning.  And I had to provide metrics when I was teaching, and we were constantly told about metrics when I was an information security expert, and I tend to pay attention to characteristics, and patterns, and signposts, anyways.

It's rather funny how we, as human beings, are so willing to give opinions and advice about matters where we really have no information at all.  For example, I'm getting weaker.  I am low on energy.  And, when I mention this, people are extraordinarily willing to suggest that I do exercise!  Even a little bit!  Even just a little bit every day, and going say, one house further every day!  As if I hadn't, for years previously, been the only pedestrian in town.  Walking everywhere.  Oh, that's not enough exercise?  What about the Tai Chi?

But, no, I don't walk anymore. I don't have the energy to go out walking.  And my house is at the top of a hill.  Yes, it's only a small hill, but I am, seriously, afraid that if I get away from my home, I may get to the point where I do not have the energy to climb back up that hill to my house.  And the Tai Chi?  Yes, I still do the Tai Chi.  Every morning.  Without fail.  Except that now, everyday, it is more and more of a burden to accomplish.

And then there's the reading.  I mentioned the reading.  But here's the thing: I have a bit of a metric.  For a number of years I have read ten chapters of the Bible, every morning.  That means I've read the whole thing.  Many times over.   And I know how long it takes.  Now, of course, not all the chapters in the Bible are the same length.  Sometimes the ten chapters that I'm reading will take less than fifteen minutes.  Every once in awhile, the ten chapters that I'm reading will take over half an hour.  (That's not very common.)  But the thing is, that, these days, more and more of the ten chapter blocks are taking more than half an hour to complete.  Sometimes it gets up to forty-five minutes.  So, as I noted in regard to the reading, reading is taking more effort.  Reading is taking more time.  And I can even measure that.  I have metrics for it.

So, I know, even though most people don't seem to notice it, that I am getting older.  And more decrepit.  And less capable.  And that I am losing more and more of the abilities that I relied upon even for the most basic aspects of living.

And I can measure it.

I can notice the milestones.  (Usually sometime after they have passed, but I can measure them.)

And I just wish it wasn't taking so boringly, annoyingly long to complete ...

Tuesday, March 17, 2026

Reading

I woke up early this morning.  Not early enough that I could get back to sleep a bit later, but early enough that I wasn't going to get back to sleep.

So I started the morning routine.  And got to the part about finding out what kind of trouble was in the email this morning.  And I couldn't sign on to Fakebook.  Or LinkeDin.  And I couldn't get the new email to come in.  Basically no Internet.

So I checked the TV, since they both come from the same source and wire.  And the TV was on, for a bit, and then it wasn't.  So I pulled up my cell phone, which has data from a different provider, and still couldn't seem to get anything.

So I hauled out my book, and started to read.

Now, I have mentioned before that, since Gloria died, I have had problems reading.  Which is strange, since I have read, voraciously, all my life.  I've been working on it, but I have to be careful about the books I choose.

The one that I've got on the go is not by an author I'm familiar with, but it's a decent read: a murder mystery, set in Ireland, with a female cop.  The details and insights into the relationships between the characters are interesting.  So, even though the thing is 700 pages long, I figured it was possible.

So, this morning, I've got nothing else I can do until the provider fixes whatever problem they have caused, and so I'm reading, with no distractions.  And I realize that I'm still having problems.

I'm getting tired.  The reading is tiring.  And I realize that I've been having this problem for a while now.

Whatever the root cause of my problem with reading is, there is now the added problem of my total lack of energy.  And it's gotten to the point where it is now difficult to read.  I'm going to have to be even *more* careful about the books I choose, because I'm not going to have the energy for many of them.

(No wonder I'm watching way too many Hallmark movies.  They are completely undemanding, and require no energy.)

I really don't mind dying by degrees, and by a slow reduction of energy to nothing, but why is it taking so long?

Monday, March 16, 2026

Online scams and AI

I have been under a targeted grief scam attack for about a month now, although the early stages of it started a little over two months ago, and the origin of the whole process now dates back almost five months.  My colleagues in security are finding this hilarious, of course, and have encouraged me to continue the contact, for research purposes.

In that regard, it has been somewhat useful.  At the very least, it has pointed me to the use, and utility, of the concept of "frictionless" as a characteristic of conversational style that can be used, surprisingly early in the process, for identifying some contact as a scam, or potential scam.  In addition (and somewhat relatedly), I have been intrigued at the (mostly indirect) connections between the research into online scams and frauds, and my research into the risks of the new generative artificial intelligence systems.

I started to note an oddly consistent characteristic of the email messages I was receiving.  "Debra" noted that "she" was keeping an open mind as we got to know each other, as life has taught "her" that meaningful connections often begin with simple conversations, and "she" looks forward to learning more about me.  Outside of work, "she" enjoys simple pleasures.  "She" likes taking walks, listening to good music, reading, and spending quiet time reflecting or enjoying nature.  "She" also enjoys travelling when "she" can, trying new foods, and having relaxed conversations with good company.  "She" values honesty, kindness, and a good sense of humor.  (I note that this seems to be copied directly from "How to Write A Generically Attractive Dating Profile in 25 Words or Less.")

"Debra" included pictures.  I'm learning more about Google Lens and the reverse image search capabilities, but the additional pictures provide little to go on.  The pictures could be of the same woman, but, given the "similar" pictures that Google pulls up, they could just be "blonde woman, older but still socially active and visiting the hairdresser quite regularly."

The primary characteristic is "frictionless."  The emails are as polite (and pretty much as content-free) as a conversation with a genAI chatbot.  (It is not beyond the bounds of possibility that an AI tool is involved.)

This issue of "friction" in relationships, or "frictionless" conversation, originated with regard to generative AI, and conversing with chatbots.  But it seems to be a useful characteristic in regard to identifying scams.  Ordinary relationships have friction: disagreements between the parties to the relationship.  Chatbots are primarily built to be polite, and to seldom directly challenge the person they are conversing with, and so the discussions tend to be described as frictionless.  The same characteristics tend to show up in conversations involved in scams.
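To illustrate the idea (and only to illustrate it), a "frictionless" thread could be crudely scored by counting messages that contain nothing but agreement, with no pushback at all.  The marker lists and the scoring rule below are entirely invented for this sketch; a real detector would need far more sophisticated linguistic analysis than substring matching.

```python
# Crude, illustrative "frictionless" score for a message thread.
# The marker lists below are invented for this sketch, not drawn from
# any real scam-detection research.

AGREEMENT_MARKERS = ["absolutely", "wonderful", "i agree", "so true", "you're right"]
FRICTION_MARKERS = ["but", "actually", "i disagree", "no,", "that's wrong", "why"]

def frictionless_score(messages):
    """Return the fraction of messages that agree without any pushback.
    Closer to 1.0 means the conversation is more 'frictionless'."""
    if not messages:
        return 0.0
    smooth = 0
    for msg in messages:
        text = msg.lower()
        agrees = any(marker in text for marker in AGREEMENT_MARKERS)
        pushes_back = any(marker in text for marker in FRICTION_MARKERS)
        if agrees and not pushes_back:
            smooth += 1
    return smooth / len(messages)
```

A thread of nothing but "Absolutely, so true!" scores 1.0; any disagreement pulls the score down.  The point, as in the posting above, is not the mechanics but the signal: ordinary human correspondence pushes back sometimes, and a conversation that never does is suspicious.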

It's fairly obvious that "Debra" (and probably "Edmund" before her) really aren't paying attention to what I'm writing.  I'm not exactly hiding the fact that I'm a security expert, and my sigblock currently contains a reference to a series of postings on online frauds and scams (of which series this posting is a part).

As noted elsewhere, the frictionless nature of the messages that "Edmund" or "Debra" write raises the suspicion that the scammer is using some kind of genAI tool to generate their responses.  The messages, as noted above, are pretty content-free.  As a test, I took one of the messages that *I* sent, asked a few chatbots to create responses to them, and got results that, while not word-for-word identical, were, effectively, basically the same.  I suppose I should save time by simply having a chatbot write my responses to "Debra."  So I did.

Interestingly, Claude and Qwen refused, noting that "Debra's" messages showed signs of being part of an online scam, and warning that I should end the correspondence.  However, ChatGPT, Meta AI, and DeepSeek were all happy to comply, with no warnings of the danger.  Meta AI's was the friendliest.  (ChatGPT noted that I wasn't in any position to help.)  I stitched together bits of all three to compose my reply.

The genAI/LLM chatbots *really* let me down at one point.  I asked them (well, the three remaining ones that didn't refuse the previous time) to respond to a later message.  ChatGPT did provide a response, but it contained a pretty flat "no" as far as being involved in anything legal.  That's probably safer, for the general public (although ChatGPT missed the boat on that last time), but, for my purposes of trolling the scammers, it isn't very helpful.  Meta AI and DeepSeek are all in, eager to get involved with the lawyer and get on with being scammed!

But then I realized that I wasn't being fair to the chatbots.  When I added a note to the effect that I *realized* that this was a scam, but wanted to continue (short of sending money) the bots were more helpful.  (Well, except for Qwen.  Qwen still feels that this is a really bad idea, and wants me to report the scam.  Rather ironically, to the US FTC.)  (Oh, and, even when informed that this is a scam, Meta AI is still all in, and wants me to hurry up and get involved with a possibly criminal power of attorney.)  ChatGPT provided a reasonable and suitably cautious reply.  Claude's reply was better, and more specific, and included an extra warning to be cautious.  DeepSeek was complimentary, and congratulated me on my approach, as well as ending with some warnings.  The reply itself was a bit weak, and it seemed to get confused about just who had had the power outage, so that wasn't terribly useful.  For any future similar research, I'd probably use a combination of ChatGPT and Claude, mostly Claude.

Saturday, March 14, 2026

Scambusters

Scambusters (thanks to John Glover for the idea)

If there's something strange in your email feed
Who you gon' call? (Scambusters!)
If a Russian prince has a painful screed
Who you gon' call? (Scambusters!)

[Chorus]
I ain't afraid of no cons!
I ain't afraid of no cons!

If a robot calls saying please press one
Who you gon' call? (Scambusters!)
If your grandson calls but don't know your son
Who you gon' call? (Scambusters!)

[Interlude]
Lemme tell ya somethin'
Bustin' makes me feel good

If they lyin' bout people you don't dig
Who you gon' call? (Scambusters!)
Don't go spreading lies, that is not your gig
Who you gon' call? (Scambusters!)


[Chorus]
They ain't afraid of no geeks!

I ain't afraid of no crooks!


Friday, March 13, 2026

American engineering and AI datacentres

Americans started their march towards preeminence in engineering with the need for construction of canals.  It continued with the construction of the railroads, and then branched out into areas of manufacturing such as Ford's invention of the assembly line (which may have benefited from the directions of a Canadian accountant).

By now it is pretty much accepted that the Americans do the smartest engineering in the world.

Which makes the current pursuit of generative artificial intelligence so strange.  The construction of large language models does not rely on any particular elegance.  There are no new insights that are directing the building of large language models.  American, and other, corporations involved in this pursuit are simply building bigger and bigger datacentres, with bigger and bigger processing chips, and more and more of them, run by ever increasing power plants, in order to build the tools for generative AI.

Work harder, not smarter. Go for brute force. If it doesn't fit, just grab a bigger hammer.

Thursday, March 12, 2026

Power to the people

So, quite suddenly, the power went out. 

I have one phone, with some data in the account, and, as it happens, it had just become fully charged since I was charging it at the time that the power went out. 

The girls have informed me that this is because of a car accident, and a fairly significant chunk of the middle of Port Alberni is affected. Including me. 

So, I've got no computers, no internet, no tv, and, even though I've got some books around, of course, I can't read, because there really isn't enough light coming in through the windows on this dull day for me to do so. 

Apparently the power is not going to be back on for at least 5 hours ...


Actually, it came back on quicker than that ... but then went off again, a few hours later.  So, when it came back on the second time, I went around to reset all the clocks again ...

Wednesday, March 11, 2026

Sermons via AI

I understand that the pope has said that priests shouldn't use generative artificial intelligence to write sermons.

Thinking back to my own research into this possibility, and the lamentable results, I would have thought that this was glaringly obvious, and that nobody would do it.

And then I thought back to a lot of the sermons that I have recently endured, and wondered if the practice wasn't already deeply entrenched ...

Monday, March 9, 2026

Sermon 74 - Grief is Hell


Ezekiel 24:16,18

Son of man, with one blow I am about to take away from you the delight of your eyes. ... So I spoke to the people in the morning, and in the evening my wife died.


This sermon is about hell.  It's a real old hellfire and damnation sermon.  Not one that you might expect, since I'm not big on lakes of burning fire, and whether heaven is hotter than hell, and those kind of staples from the old hellfire and damnation sermons.  However, I am going to at least propose what I think hell might be like, and how terrible it is, and encourage you to avoid it at all costs.

The sermon is odd in another way, too.  Usually I can tell you where a sermon comes from.  How I got the idea, who said what to prompt the idea, or something that I read, and various things like that.  For this one I can't.  I can tell you the date, and, pretty closely, even the time.  I was sitting in church, as I very often am on a Sunday morning, and I was wondering, as has often happened over the past four and a half years, what the heck I was still doing here, without Gloria, or purpose, and why I was still here.  On earth.  And suddenly the idea for this sermon came to me, with only a little work needed to fill out some of the verbiage.  And it's a little scary.  I couldn't think about anything else until I got it all out.  It's the closest thing that I have ever experienced to being given a prophecy.  And I certainly don't want to claim that this is a word or message directly from God, since I could very well be wrong.  Although I do hope that, now that I have written it down, maybe the reason that I have been living in such grief and pain for the past four and a half years has been fulfilled, and that God will finally let me die, and go home, and rest.  I have been tired and lonely, and I have certainly been living in hell.

I am a grieving widower.  A lot of you will know that.  It's not fun, but grief teaches you a lot of things.

CS Lewis also teaches us a lot of things.  I'm pretty sure that it was CS Lewis who proposed that grief is the price of love.  If you love greatly, then you grieve greatly.  That's the deal.

I'm not sure that CS Lewis was the first person to assert that every human being has a God-shaped hole inside of them.  But it was certainly CS Lewis who said that this God-shaped hole proves that God exists.  If there is something that we can never fill, with any of the pleasures of life here on earth, something that no joys or rewards can distract you from, that no accomplishments are ever good enough to fill, then the obvious inference is that we were not meant for earth.  Our destination is heaven.  There is a God-shaped hole in us that only God can fill, and it is only going to be when we do, in fact, have a full relationship with God that we are ever going to be satisfied.

CS Lewis also wrote a book called "The Great Divorce."  "The Great Divorce" is a really interesting book.  Some people would see it as kind of an extension of some of the fantasy and science fiction that CS Lewis wrote.  But "The Great Divorce" really has a significant point to make.  I believe that the primary point to be emphasized about "The Great Divorce" is that we are not going to get into heaven if we allow anything to stand in our way.  If we set up demands of God, and require that heaven be a certain way, and that if God doesn't fulfill our demands we will not go in, then we are just not going to get into heaven.

A number of the characters in "The Great Divorce" refuse to go into heaven.  They are being given an opportunity to get on the bus and go.  But they raise objections.  Or they want to take something with them.  Or they want to make demands.  You can't bring your baggage into heaven.  You can't take money, you can't take your status, you can't take your accomplishments from Earth, and you can't make demands of God.  When I first read "The Great Divorce," I realized that, well, I don't have many accomplishments.  I don't have money, I don't have fame, and I certainly don't have any kind of physical skills.  But I have always been very proud of my brain, and my ability to think, and the knowledge and education I have accumulated.  It's the only good thing I've got.  (Aside from Gloria.  And now Gloria's dead.)  And I realized that, when the time comes to get into heaven, I may be asked to give all that up.  After all, God doesn't need me to be smart, in heaven.  God is smart enough for all of us.  God doesn't need me to know the things that I have learned.  God knows everything.  He doesn't need our knowledge.  So, I might be asked to give up my cognitive abilities and my education in order to get into heaven.  And I had to decide, that if that was the price of getting into heaven, then I'd rather go to heaven.  I'm not going to demand to take it with me.  I can't.

There are a lot of people who say that they don't believe in God, because God is obviously cruel, in turning away those who don't believe in Him from getting to heaven.  "The Great Divorce" kind of turns this argument on its head.  A secondary point of "The Great Divorce" is that it is *our* choice about whether or not we get into heaven.  If we don't accept God, and God's rules about our getting into heaven, simply on the basis that it is where God is, then, well, really, and inherently, we cannot have a full relationship with God.  Therefore, we don't get into heaven.  But that is not *God's* choice, that is *our* choice.

(A kind of tertiary point in "The Great Divorce" is that, possibly, after death, we might get a second chance.  However, that's not really the point that I want to make here, although I really recommend that you read "The Great Divorce."  It's got some really interesting points to make.)

Now, I'm going to tell you a little bit about grief.  Because grief is hell.

I know a bunch of stuff about grief, but I'm not really an expert.  I'm also not really an expert about relationships, or marriage, or romantic relationships, or love.  But I was married.  And I had a good marriage.  As a matter of fact, knowing what I know about a variety of other marriages, I would say that I had a *great* marriage.  This is not because it was *my* marriage, or because I particularly know anything special about marriage or relationships or love: I don't.  As a matter of fact, I don't know anything about getting married.  I don't know anything about dating.  As far as I know, I have never had a date.  I never had any girlfriends before I married Gloria.  Gloria knew more about dates than I did, and even *she* couldn't figure out whether we actually had any dates before we got married.  So I don't know how to woo anybody and I don't know how to get married and I can't give you any advice about that.  As far as I know, when they talk about marriages being arranged in heaven, ours must have been, because I had *nothing* to do with it.  Our marriage was arranged by a mutual friend who kept pushing and nudging us together.  It happened around the time of Expo 86, and Gloria hadn't gone to Expo 86 very much, and I knew absolutely everything there was to know about every pavilion on the property, having spent some time pretty much every single day of the first month that the fair was open going around to the various pavilions.  And our mutual friend kept telling Gloria that she should get me to take her around Expo, because I knew everything there was to know about Expo.  And then she spoke to, no, not me, my *mother.*  And told my mother that I should take Gloria around Expo because I was such an expert guide.  And so we set up what I thought was a date to take Gloria around Expo 86, except that the friend came, with her husband, and my parents came, and Gloria's parents came.  
So that wasn't really as much a date as it was me hauling a fairly sizable group around Expo 86 for the evening.

But I do know that we had a great marriage.  At one point Gloria found an article that talked about the fact that most married couples, after they have been married for some time, only spend about fifteen minutes per week actually having a conversation with each other.  Beyond just "pass the salt."  Gloria and I talked to each other constantly.  We had to record whatever we watched on television, including the Canucks games that she loved, and even the TV news, so that, while we were watching it, when she asked a question, we could pause what we were watching, and talk about whatever it was that she wanted to know or discuss.  I didn't mind that a bit.  And when I was out teaching, even though we passed email messages back and forth to each other all day long, I have the telephone bills to prove that we had to spend at least forty-five minutes, each and every day that I was away, talking to each other on the phone.

We are supposed to learn things from marriage.  We are supposed to learn about love from marriage.  We are, in fact, supposed to learn things about God.  Marriage is seen as an analogue, possibly the closest analogue that we have available, of our relationship to God.  The relationship between God and His people is often described in terms of a marriage.  And, since, as CS Lewis pointed out, love and grief are inseparable, then grief has to teach us something.

We learned about each other's interests.  Gloria was into quilting and embroidery.  Her favorite was cross stitch, and so I cross stitched a portrait of Gloria.  I learned enough about cross stitch to do that, and I used my computer skills to create a pattern so that I could.  Gloria learned about computers.  Actually, Gloria was already pretty skilled at understanding what computers could, and couldn't, do.  When I started writing books, and Gloria was the reason that I was able to start writing books, Gloria edited all my stuff.  I would tell people, and I maintain, to this day, that it was valid, that Gloria was, by the time she finished editing my first book, the fifth leading computer virus expert in Canada.

People who are bereaved are also often told that the loved one is not really gone.  I think that the people who say that kind of thing tend to mean that you remember them.  When it's said in a TV show, or a movie, it tends to be that you always carry them in your heart; as long as your love for them is still there, they are still there.  I have a somewhat different take on your loved one still being around, even after they have died.

My take is that Gloria is still around, even though she is dead, because I was married to Gloria, and Gloria loved me, and I am different because of it.  I learned whatever I know about love from Gloria.  I wrote books because of Gloria.  Gloria taught me things about my own professional life.  I am a management consultant, and Gloria's experiences informed some of my own ideas about management.  As a teacher, the questions that Gloria would ask, and the areas that I would explain, and she *didn't* understand, changed the way that I teach.  I also learned an enormous amount about teaching given Gloria's amazingly intuitive understanding of how children learn.  In regard to my work in information security, Gloria definitely affected the way that I think about privacy.  She also contributed a metric to my work on software forensics, and at least one entry in the "Dictionary of Information Security."  Gloria is still with me, because I am different, because I was married to Gloria.

And then she died.  And I died, too.  I just haven't stopped breathing yet.

Just as a digression here, if we are grieving, and you don't know what to say, well, first of all you don't need to say anything, you can just listen.  But, if you do want to say something, you don't need to avoid saying the name of our loved one who is dead.  In all of my experience in grief support, I have never once had anyone refuse to talk about their loved one (or dearly departed, if you prefer), with anyone who cares.  It doesn't hurt us to hear our loved one's name.  It doesn't hurt us to hear anyone's remembrance of our loved one.  (We may cry, but we do that anyway.)  We don't talk about our loved ones, because we know that you guys don't care.  We don't like to talk about our loved ones to anyone who doesn't care.  I will talk about Gloria to anyone who will listen.  To anyone who can't walk away.  (As you might have noticed.)

So, as I say, Gloria died.  In the book of Ezekiel, in chapter 24, and verses 16 and 18, God says to Ezekiel, "Son of man, with one blow I am about to take away from you the delight of your eyes."  And then in verse 18 Ezekiel goes on, "So I spoke to the people in the morning, and in the evening my wife died."  And that was pretty much the way it was for me.  At one stroke God took away my best friend.  I have had friends in my life.  With my professional life, I have friends all over the world.  I have had some good friends.  I have had some friends that I have known a lot longer than I have known Gloria.  But Gloria was my best friend.  Gloria was the person that I most wanted to talk to.  In any situation.  About any topic.  Gloria was my love.  Gloria was my family.  Gloria *gave* me a family.  At one point, when we were arguing, Gloria said to me that she figured that I liked her family better than I liked my own family.  I argued that I didn't, and gave her a counter example: I said that I liked *my* mother-in-law more than *her* mother-in-law.  Gloria helped me in my work.  And, because, as she said, frequently, her body was never her friend, for about half of the time that we were married, I was Gloria's caregiver, in one way or another.  So, when I lost Gloria, I lost my job.  I lost my purpose in life.

And now I am alone.  I am grieving.  I am in hell.

(This has nothing to do with Port Alberni.  There is a lovely lady who attends the computer activity.  We have a standing disagreement.  She holds that I have an unreasoning prejudice against Port Alberni, and never miss an opportunity to say something nasty about it.  I maintain that I simply tell the truth about Port Alberni, albeit in the most amusing way possible.)

Anyway, this is not about Port Alberni.  This is about grief.

Grief is hell.  And, since equivalences can be reversed, hell is grief.

I have lost Gloria.  There is a Gloria shaped hole in my life.  It is terrible.  It is painful.  It is hell.

And that is only a *human* sized hole.  That is only a hole in my life because someone came into my life and made a space in it.  And, now that that person is gone, there is a hole in my life.  It is a hole that wasn't there before.

Another digression.  I asked Gloria to marry me, and, a couple of hours later, she said yes.  (We will not, currently, examine the fact that it took her a couple of hours to say yes.)  The point that I want to make is that, a week later, she said she wanted to change her mind.  She said that she didn't think that she loved me as much as I loved her.  I said that I was willing to risk it, and we did get married.  And, a few years later, Gloria started calling me a worm.  It was in relation to the fact that she didn't, initially, love me all that much.  But that I had *wormed* my way into her heart.

So that's human love.

But what is human love in comparison to God's love?  What is a human-sized hole, that wasn't there in our lives in the first place, in comparison to a God-sized hole?  In comparison to the fact that we were *created* with a God-sized hole in our lives?  What is the chief end of man?  Why were we created in the first place?  To glorify God and enjoy him forever.  We have a God-shaped hole in us, that only God can fill.

And so, hell is grief.  I am grieving over the loss of Gloria.  But I never loved Gloria, I never *could* love Gloria, as much as God loves Gloria.  I never could love Gloria as much as God loves me.  And so whatever pain I am feeling now over the loss of Gloria could never be anything to the unending pain of the grief of not having a relationship with God, in eternity.  A never ending grief, a never ending ache, over a void that will never be filled, and never could be filled with anything else.

I am grieving, and I'm in hell, and I'm in pain.  And I distract myself with volunteer work of various kinds.  And with writing sermons that nobody ever listens to.  And with still researching little tidbits about my profession and career.  So I have some brief distractions, at times, from my pain over Gloria's absence.

But, of course, in eternity there aren't going to be any distractions.  And the grief of not having a relationship with God is going to be so much greater than my grief over Gloria.

Hell is grief.  Painful, unending, horribly massive grief over the absence of the One who loves us more than anyone does, or ever could.

So, possibly I am in hell, temporarily, so that you, or so that someone, is warned away and doesn't have to be in there permanently.


Grief series

Sermon 22 - Grief Illiteracy

Sermon 4 - Grief and Dying to Self

Sermon 7 - faith and works, and intuitive vs instrumental grief

Sermon 10 - Why Job