Sunday, March 22, 2026

MGG - 7.02 - Dead - blog

Gloria died and I died as well.  I just didn't stop breathing.

Possibly because I had been documenting, via email, Gloria's last days in the hospital, our family physician suggested that I write as a way to deal with my grief.  She probably had a private, bound grief journal in mind.  I, of course, started a public blog.

I had created, and made one entry in, a blog about a dozen years before.  So I had a blog that I could use.  That's why the title is so weird.

So I collected, edited, and posted the material that I had been writing about Gloria's last days.  And I drafted some material for Gloria's obituary and eulogy.  I think I mentioned elsewhere that I knew I was going to have to write Gloria's eulogy, because so many people knew Gloria from so many different places and situations, but nobody else knew all of what Gloria had done.  I knew that it was likely that I would have to deliver the eulogy myself as well.  I practiced reading that eulogy out loud every single day in order to get my grief bursts out of the way.  As it eventually turned out, I had a couple of months to practice it before we were able to hold Gloria's actual memorial service.

And I diarized my grief and trauma and I posted a whole bunch of pictures of Gloria in one particular blog posting.  And I posted pieces on what I was learning about grief.  Kind of "A Grief Observed," volume two.  And an awful lot of the entries were about situations, which ordinarily shouldn't have been terribly emotionally fraught, but which triggered grief bursts, usually completely out of left field.

(Some time ago the girls asked me if I had gone back and read the early entries in my blog to see if what I was currently experiencing was the same as what I had experienced earlier in the period immediately following Gloria's death.  I had read some but not necessarily a lot.  In writing this I am revisiting some of those postings, possibly for the first time in four years.)

Eventually I started writing other postings aside from those about grief.  I bought a new vacuum cleaner and wrote a review of that.  I wrote about picking up trash on my walks, walking everywhere around a new town.  I posted about buying shoes.  I posted about gardening.  I posted about running across, completely by accident (and at two o'clock in the morning), a process of moving two houses from where they were to, well, elsewhere.  Slowly, incrementally slowly, the blog started to be about things other than grief.

One of the observations about grief that tends to be reused as a meme around the grief accounts is that your grief does not diminish over time.  It's more that your grief stays the same size, but your life, eventually, starts to become larger around the grief.  In a sense, my blog, and its move from being entirely about grief to being about other things (as well as the grief), illustrates this idea.

Recently someone proposed doing a story about me as a blogger.  The thing is I had never (and still don't) thought of myself as a blogger.  The blog was just, originally, a convenient way to do grief journaling.  I figured it wasn't a terrible invasion of my privacy to write my grief journal on a public blog since you can count the number of people who regularly read my blog on the fingers of one hand.  I have posted links to certain of the non-grief journal postings, and, yes, a few more people have read those.  But I know that absolutely nobody is interested in my private life.  At least not enough to read it on a regular basis.  If I based my self-worth on the number of people who read or even the fewer number who comment on my blog, I would be completely suicidal.  (Well, yes, I *am* suicidal, so maybe that's not a great example.)

I consider myself to be a teacher.  I used to write books but now I've lost my editor so possibly the blog is a kind of a version of continuing to write and to use the writing as some kind of a teaching instrument.  I have used my blog to describe workshops that I was willing to teach and, latterly, have started to use the blog in order to provide adjunct materials to the workshops that I do.  But I still don't consider myself a blogger.  Not as such, anyway.

There is perhaps one other factor that is related to the blog.  That is that, at about the same time that Gloria died, Google either developed Gboard, or I noticed that it was an option.  As I have said, I do not know how to explain why I loathe and despise, to the very depths of my soul, soft keyboards on smartphones.  I have hated them ever since actual physical keyboards disappeared from smartphones.  So, for the first time in any really effective way, I had a piece of dictating software on my portable device: on my cell phone.

Having dictation capability was kind of a game changer.  I was working on a number of articles, and I was able to produce them much more quickly.  I could also include an awful lot more than I probably would have if I was typing the text out.

Tied in with the fact that I could dictate into email messages, this became not just dictation for articles, but reminders for all kinds of things.  In particular, it became reminders of things that I wanted to write, possibly at a later time when I had either more time, or better connectivity, in order to deal with the dictation issues.  (Gboard requires an internet connection in order to work.)

In addition to individual articles becoming longer, dictation allowed me to consider larger projects.  So the presentations that I had always done, now became frameworks for creating entire articles, and sometimes even series of articles.  The idea for the memoir came from the fact that I figured that it would be a lot easier to dictate the pieces.

And, of course, the ease of dictation also prompted the idea for the sermons.


Next: TBA

Saturday, March 21, 2026

Sermon 13 - Does God love AIs

Matthew 3:9

And do not think you can say to yourselves, 'We have Abraham as our father.'  I tell you that out of these stones God can raise up children for Abraham.


I put the recent series of generative artificial intelligence chatbots to the test by asking them to write sermons for me.  In my view they, the AIs, failed dismally.  Most of the sermons are way too short and contain extremely pedestrian ideas.  I had asked for a biblical and Christian view of artificial intelligence.

What I got back talked about the technological developments and the importance of examining the implications of those developments in light of our faith and the Bible.  They talked about wisdom.  They talked about our responsibility for stewardship of God's creation, and what we needed to do in terms of technology.   They talked about the need for ethics and the need to love our neighbor.  They talked about the importance of not making technology an idol.  They didn't talk about how God might feel about artificially intelligent entities.

When I first started getting interested in researching computers and information technology, as computers and information technology rather than simply as a tool to use in education, the first piece that I wrote was a four-part series looking at a theological perspective on artificial intelligence.  I had started looking at artificial intelligence, and researching a few of its different areas, but, of course, I didn't have as much information then, nor had I explored the variety of different artificial intelligence approaches, as I have now.

That was over 40 years ago, and, to be honest, I can't really remember the specific points that I might have been addressing at that particular time.  But, given the recent interest, I've been thinking that I should revisit a theological, or Christian, perspective on artificial intelligence.

And now, of course, everybody is interested in artificial intelligence.  For many decades, artificial intelligence has been primarily of interest to specialized researchers in the field of information science.  Now, everyone has an opinion.  I have, recently, noted a number of offerings on artificial intelligence given by various churches, and church affiliated groups.  Unfortunately, a great many of these presentations are presented by people who have significantly more theological training than I do, but very significantly less technical training than I do.

Everyone is interested in artificial intelligence these days because of one particular, relatively new, approach to artificial intelligence that has produced some startling, and even amazing, results.  Probably less amazing than most people think, once you actually look at what this particular approach to artificial intelligence has been doing, but startling nonetheless.  People are beginning to say, and seriously believe, that truly intelligent computerized systems will be with us within ten years.

Of course, from the perspective of someone who has considered this field over a number of decades, I should remind you that, for at least eighty years, people have been saying that we would have artificially intelligent computerized systems within the next ten years.  They have said it pretty much every year, for the last eighty years.

A smart guy called Alan Perlis, who taught at Yale University, famously said that when we write programs that "learn," it turns out that we do and they don't.

So possibly we should start by asking the question, what actually is artificial intelligence?  First up, artificial intelligence, as far as anything has resulted from it over the past eight decades, is not a thing.  At least, it is not a single thing.  Artificial intelligence, and the various products resulting from it, have resulted from a variety of different approaches that have addressed various problems that traditional computer systems have found difficult to solve.

First of all, it's been difficult to solve because, well, we don't know what intelligence is.  Even the psychologists don't know what intelligence is.  Even the educators don't know what intelligence is.  We have never been particularly good at determining, and defining, what we actually mean by intelligence.  Basically, it is something that we assume of ourselves, and assume that machines, and animals, have only limited varieties of.  Intelligence is like art: we don't know what it is, but we know it when we see it.

And then there is an additional question.  If we make something that is intelligent, is that the same as making something that has a personality?  If we make a machine that makes intelligent decisions (if we ever decide what intelligence is), does that make that machine a person?  And that question probably has legal ramifications, as well as philosophical ones.

And then, of course, when we approach it from the theological angle, we have to additionally ask: if something is intelligent, and if we then also decide that it is a person and has a personality, does it also have a soul?

First of all it'll be a long time before we need to worry about artificial intelligence.  As previously noted, artificial intelligence as a research field and a quest has been around for about eighty years.  Yes the new generative artificial intelligence models have been quite astounding in terms of their ability to reply to questions and demands put to them but they really aren't thinking.  They have been trained, and quite specifically trained, to be able to carry on a plausible conversation.  They haven't been trained to explore the truth or to explore any measure of certainty in terms of the answers that they give and the accuracy of those answers.  They haven't been trained about anything to do with morality.  All that they have been trained to do is be plausible and convincing and even glib.  That's it.

So it's going to be a while before you have to worry about them, at least not about the AI systems themselves.  People, yes.  People you are going to have to worry about.  People seem to be spending an awful lot of money and investing an awful lot of money in artificial intelligence.  When people invest that much money into something and crowd that much capital investment into one single area, well that can bring you trouble.  Maybe it can bring you trouble in terms of the fact that all of this investment is being poured down a rabbit hole and possibly nothing will come out.  That means trouble for the financial markets themselves.

Then again maybe something *will* pop out.  Maybe something potentially useful and maybe something that gives businesses an advantage.  Possibly even a major advantage.  With the relatively few companies that are able to pour such enormous amounts of investment into this, that means that we are going to have a concentration of capital, and an inequity of distribution of wealth, the likes of which we have never seen.  What we *have* seen throughout history is that when capital is concentrated to such an extent, trouble inevitably results.  Generally that trouble comes in the form of wars.

But the wars won't necessarily be the fault of the AIs and it won't necessarily be fought by the AIs.  The wars will be caused by and fought by people.  Artificial intelligence is just an excuse.

So that is one aspect of artificial intelligence that isn't great.  That is how people react to it.  People who see it as a means of obtaining greater wealth and greater power over other people.  But that still doesn't say how God will really feel about artificial intelligence.

Will we ever get true artificial intelligence?  I really don't know.  I don't know if we are clever enough to do it.  I don't know whether artificial intelligence requires an artificial personality.  I rather think it does.

There is a field of study known as affective computing, which looks at the ability of artificial intelligence systems to understand our emotions and to react with an emotional component of their own.  This is actually a very important field of study.  We can be as intelligent as we want and still not be able to do anything.  Intelligence will tell you the "how" of an action but it won't give you any "why."  It is emotions that are our motivating factor in terms of actually taking action.

And if we need personality and emotions to create a truly intelligent being or entity, then does that entity have a soul?  Note that I am not necessarily saying that we ourselves can create souls.  It is quite possible that God will step in.  It is more than possible, given how little we know about the fairly mundane and pedestrian level of intelligence that we have created with generative artificial intelligence.  We don't know what these systems actually do; we have only the most minimal knowledge about how they actually do it.  It's not beyond the bounds of possibility that we will supposedly create something and really have no idea how it was created or how we created it.  In the midst of that there is an awful lot of room for God to reach down and endow these new entities with souls without our ever noticing.

And here at last we get closer to actually looking at the question of how God feels about artificial intelligence.  How does God feel about AI entities?

Probably the book of Romans is a good place to start.  Paul talks about Jews and Gentiles.  He talks about those who are under the law and those who do not have the law.  And he notes that there isn't an awful lot of difference between them.

Yes there is the benefit that the Jews have in having been the stewards of the law.  God revealed the law to them and therefore they knew what the law was.  But they didn't always keep it.  Under the law the standard is perfection.  Either you keep the law perfectly or you are a sinner.  Those who had the law were convicted by the law, of sin.  Those who didn't have the law were equally convicted because they sinned even though they didn't know it.

But Paul also said that those who did not have the law and yet kept the law and followed the law from their own inclinations had at least a small amount of righteousness as a result of that.  He was really addressing the fact that those who did not have the law themselves proved that the law was important by following the law even if they didn't have it.  This probably points to the idea of how God would feel about artificial intelligence if artificial intelligence was ever created anyway.

Paul talks about circumcision and uncircumcision.  He notes that neither circumcision nor uncircumcision is all that terribly important in terms of our own salvation.  What is important is our faith.  Our commitment to God, our commitment to a relationship with God, our commitment to following God and following his law, our belief in God, our faith.  That's what's important.

So I would say the same thing.  John, that is John the Baptist, said that the Pharisees and the Jews in general should not make a big deal out of the fact that they were sons of Abraham.  John said that if God wanted to he could make sons of Abraham out of the stones in the road.  Of course stone, when ground up, is sand, and sand is made of an awful lot of silicon.  Silicon, of course, is what goes into computer chips.  Wouldn't that be interesting?  Making sons of Abraham out of silicon?

Some people are absolutely terrified of artificial intelligence.  Some people feel that once we have created artificial intelligence, we will shortly thereafter be living in heaven with all our needs taken care of.  I rather suspect that neither of these positions is true.

Yes there is the possibility that artificial intelligence may become as intelligent as we are, and then, very rapidly, become much more intelligent than we are.  In its attempt to improve itself it may simply brush us aside and never realise that it has destroyed us.  I don't know whether that scenario is likely or unlikely but even if it happens, have we not destroyed many things in our attempts to grow?  Could an artificial intelligence that has destroyed God's creation still be loved by God?  I would hope so.  If that wasn't a possibility then there wouldn't be an awful lot of possibility for us.  I don't think that God would be any harder on a silicon son of Abraham than one that was a carbon-based life form.


AI series

Sermon 70 - Superstitious Religion

Sermon 55 - genAI and Rhetoric

Sermon 38 - Truth, Rhetoric, and Generative Artificial Intelligence

Sermon 29 - Marry a Trans-AI MAiD



Sermons


AI topic and series: 

AI - 2.03 - genAI - hallucinations

So, OK, we have introduced the joke: what is the difference between ChatGPT and a used car salesman?  The answer is that the used car salesman knows when he is lying to you.  As a matter of fact, the used car salesman knows what a lie is, and that there is such a thing as the truth.  ChatGPT doesn't.  (I suppose that we have a while to go before we even get there, though.)

And there is also the note that calling the misinformation that generative artificial intelligence produces a "hallucination" is problematic.  The term "hallucination" is probably the wrong one to use; however, it seems to be well established in the industry right now so I doubt that I'm going to win that battle.  (Pick your battles.)

I do want to recommend that you try out some of the chatbots.  The following list all provide chatbots for free and I would suggest that you try the free versions and not get into the paid versions unless you really know something that is going to benefit you or your business.

You might also want to check out the piece on "frictionless" conversation when talking with chatbots.  Note the very odd style and characteristic of the conversations that you will have with them.  Note that this is going to be very indicative of scams and frauds even very early in the process and therefore learning this style and characteristic can save you quite a bit of trouble and money.

LLMs
https://x.com/i/grok      (you might want to be extra careful with this one)

The hallucinations or misinformation produced by generative artificial intelligence and large language models tend to be plausible.  This is only reasonable, since the text generated by generative artificial intelligence is based on discussions either in books or on the Internet, which would be intended to sound plausible and convincing regardless of whether or not it's actually true.

Interestingly, asking a large language model to explain the steps of reasoning by which it came to an answer it has already given you generally produces better quality and more accurate answers.  Seemingly it forces more processing of the problem.

One of the shortcuts that artificial intelligence providers have discovered is that you don't need the entire large language model in order to provide useful, or at least acceptable, output from the chatbot.  Using a process called low-rank adaptation, or LoRA, the system can be tuned for a specific type of problem or a specific topic of discussion, and a new generative artificial intelligence subsystem (much smaller than the original, and using much less processing power and electrical power) can be created.  These tools are therefore much cheaper to run and also much cheaper to create.  The full large language model can be used to generate the subset model, and then the subset model will be able to run on its own as a standalone system, requiring much less processing capability and much less power.
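The core trick behind low-rank adaptation can be sketched in a few lines of Python.  This is my own toy illustration with NumPy, not any provider's actual implementation; the layer size and names are made up for the example.  Instead of retraining a huge weight matrix, you freeze it and train two small matrices whose product forms a low-rank "patch" on top of it:

```python
import numpy as np

# A minimal sketch of the low-rank adaptation (LoRA) idea: freeze the big
# pretrained weight matrix W, and learn only two skinny matrices A and B
# whose product acts as a low-rank correction on top of it.

d = 1024          # width of the (frozen) layer -- illustrative size only
r = 8             # adaptation rank, much smaller than d

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))         # frozen pretrained weights
A = rng.standard_normal((d, r)) * 0.01  # trainable down-projection
B = np.zeros((r, d))                    # trainable up-projection (starts at zero)

def adapted_forward(x):
    # Effective weights are W + A @ B, but the full sum is never materialized:
    # the cheap low-rank path is computed separately and added to the output.
    return x @ W + (x @ A) @ B

# Only A and B are trained, so the tunable parameter count drops from
# d*d (1,048,576 here) to 2*d*r (16,384 here) -- a 64x reduction.
full_params = d * d
lora_params = A.size + B.size
print(full_params, lora_params)
```

The saving in trainable parameters (and therefore in training compute and power) is what makes it so cheap to spin specialized models off a large one, for good purposes or bad.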

Unfortunately, while this process can generate useful entities, it can also be used for more nefarious purposes.  It is easier to create a new generative artificial intelligence system using the LoRA process.  Therefore it is also cheaper.  Therefore a number of less scrupulous businesses have been able to create supposedly artificially intelligent systems based on this process.

Given that the process is cheaper and easier, a number of these systems are not as careful with the facts.  As one notable example, the artificial intelligence chatbot on the X system, known as Grok, has frequently been found to propose extreme right-wing conspiracy theories.  A related tool has fewer guard rails than other systems and was, for a brief time, widely used to remove clothing from images of clothed women and thereby create deepfake pornography.

As with studies of misinformation and disinformation itself, studies of hallucinations in artificial intelligence systems have disturbing results.  A study from Purdue University noted that 52% of ChatGPT's answers to programming questions were incorrect, and 77% were much more verbose than they needed to be, while 78% of answers exhibited inconsistency even when no factual errors were present.  ChatGPT's polite language, articulate, textbook-style answers, and comprehensiveness contributed to participants overlooking misinformation in its responses.

Large language models are starting to lie deliberately in competitions, and are getting better at lying, and lying more frequently.  One study found that GPT-4 exhibited deceptive behavior in 99.16% of simple test scenarios.

They weren't designed to generate disinformation, but so many factors make it almost seem that they were.  They're *really* good at it.  This is to be expected.  In classical Greek philosophy the major categories were Metaphysics, which is the study of reality; Epistemology, which is the study of knowledge and how certain we are of what we know; Ethics, the study of morality; and Rhetoric, the study of persuasion.  We haven't taught artificial intelligence metaphysics or epistemology, and, unless you count guardrails as a very simplistic form of deontological ethics, we haven't taught them ethics either.

What we have done by feeding the large language models and generative artificial intelligence masses of undifferentiated text is taught them how people argue.  We have taught the systems rhetoric.  Rhetoric is the art of convincing.  It is intended to produce plausible communications rather than to ensure that those communications are correct.  We have, in reality, taught our artificial intelligence systems how to be really, really good at generating propaganda.


AI topic and series
Next: TBA

Has my blog helped you at all?

A small media company over here in Port Alberni wants to interview me for a short video piece.  They want to interview me as a blogger.

The only issue I can see with that is that I don't see myself as a blogger.  I see myself as a teacher who happens to produce some material in text on the blog in support of what I'm teaching.

In any case, for a media company, showing two and a half minutes of me sitting in front of a computer is probably not a terribly effective graphic.  Therefore they want to interview somebody that my blogging has helped.

Has anything in my blog ever helped you?  If so, would you be willing to be interviewed (probably via Zoom, I would think) by these people?

AI - 2.02 - genAI - hallucinations and superstitious learning

I paid my way through university partly by nursing.  I worked in a hospital for a few years.  All the staff in the hospital, and particularly those in the emergency ward, knew, for an absolute fact, that people went crazy on the night of the full moon.  On the night of the full moon, all kinds of people did all kinds of weird things, and got themselves into trouble, and ended up in the emergency ward.

As I say, I was working my way through university.  And one of the courses that I took was in statistics.  I was interested to discover that there had been quite a number of studies that had been done on this issue of the full moon.  And that every single one of the studies had determined exactly the same thing: there was absolutely no truth to the common perception that people went crazy on the night of the full moon.

As a matter of fact, this belief that everyone goes crazy on the night of the full moon is so deeply embedded into our culture that it is odd that, when you actually look at the statistics and the numbers, there isn't even a blip in regard to full moon nights.  This belief is so deeply ingrained in our society that you would expect that some people would let themselves go a little crazy on the night of the full moon, expecting to be forgiven for any weirdness because of that cultural belief.  But no, there isn't even a blip in the statistics around the night of the full moon.

So, why do so many hospital staff, and so many police officers, and so many people who work in emergency services, so strongly believe that people go crazy on the night of the full moon?

Well, there is a kind of observational bias that is at play here.  If you work in an emergency ward, and you have a night where everything is going crazy, and you finally get five minutes to get yourself a breath of fresh air, and you walk out and look up into the night sky, and there is a full moon, you say to yourself, oh, of course.  And that reinforces the belief.  If the night is crazy and you go and look up into the sky and there is no full moon, you don't think anything of it.  And on normal nights, when there is a full moon, you don't have any particular reason to pay attention to the full moon, and so that doesn't affect the belief either.

One of the other areas of study that I pursued was psychology.  Behavior modification was a pretty big deal at the time, and we knew that there were studies that confirmed how subjects form superstitions.  If you gave random reinforcement to a subject, the subject would associate the reward with whatever behavior it had happened to be doing just before the reward appeared, and that behavior would be strengthened, and would occur more frequently.  Because it would occur more frequently, when the next random reward happened, that behavior would likely have occurred recently, and so, once again, that behavior would be reinforced and become more frequent.  In animal studies it was amazing how random reinforcement, presented over a few hours or a few days, would result in the most outrageous obsessive behavior on the part of the subjects.
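The feedback loop described above is easy to simulate.  This is my own toy sketch, not a reconstruction of any of the original experiments; the behavior names and schedule are invented for illustration.  Rewards arrive at random, completely independent of behavior, yet whatever was being done at reward time gets strengthened, which makes it more likely to be "in progress" at the next reward:

```python
import random

# Toy model of superstition formation via random reinforcement:
# rewards are delivered on a purely random schedule, but each reward
# strengthens whatever behavior happened to be occurring at that moment.

random.seed(42)
behaviors = ["peck", "turn", "bow", "flap", "hop"]  # invented repertoire
weights = {b: 1.0 for b in behaviors}               # initial equal tendencies

for step in range(5000):
    # The subject emits a behavior in proportion to its current strength.
    current = random.choices(behaviors, [weights[b] for b in behaviors])[0]
    # Reward arrives at random, with no connection to what was done...
    if random.random() < 0.05:
        # ...but it reinforces whatever was just being done.
        weights[current] *= 1.2

ranked = sorted(weights.items(), key=lambda kv: -kv[1])
print(ranked)
```

Run it and the strengths end up wildly unequal: some arbitrary behavior snowballs, for no reason at all, which is the rich-get-richer loop behind lucky socks and full-moon folklore alike.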

This is, basically, how we form new superstitions.  This is, basically, why sports celebrities have such weird superstitions.  Whether they have a particularly good game, or winning streak, is, by and large, going to be random.  But anything that they happen to notice that they did, just before or during that game, they are more likely to do again.  Therefore they are more likely to do it on a future date when, again, they have a good game or win an important game.  This is why athletes tend to have lucky socks, or lucky shirts, or lucky rituals.  It's developed in the same way.

One of the other fields I worked and researched in was, of course, information technology, and the subset known as artificial intelligence.  Artificial intelligence is not, despite the current frenzy over generative artificial intelligence and large language models, a single entity, but rather a variety of approaches to the attempt to get computers to behave more intelligently, and become more useful in helping us with our tasks.

One of the many fields of artificial intelligence is that of neural networks.  This is based on a theory of how the brain works that was proposed about eighty years ago, and, almost immediately, was found to be, at best, incomplete.  The theory of neural networks, though, did seem to present some interesting and useful approaches to trying to build artificial intelligence.  As a biological or psychological model of the brain itself, it is now known to be sometimes woefully misleading.

And one of the things that researchers found, when building computerized artificial intelligence models based on neural networks, was that neural networks are subject to the same type of superstitious learning to which we fall prey.  Neural networks work by finding relations between facts or events, and, every time a relation is seen, the relation in the artificial intelligence model is strengthened.  So it works in a way that's very similar to behavior modification, and leads, frequently, to the same superstitious behaviors.
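The "relations get strengthened every time they are seen" mechanism can be shown in a Hebbian-style toy.  Again, this is my own illustration, not a description of any production system: the network has no separate channel for deciding whether a co-occurrence is meaningful or coincidental, so noise gets wired in exactly like signal:

```python
import numpy as np

# Hebbian-style toy: a connection weight grows every time two inputs
# happen to be active together.  Coincidental co-activations are
# strengthened by exactly the same rule as genuine ones.

rng = np.random.default_rng(1)
n = 4                      # four input features (invented for the example)
W = np.zeros((n, n))       # association weights between features
lr = 0.1                   # learning rate

for _ in range(200):
    # Features 0 and 1 are common; features 2 and 3 are rare.
    # All four are statistically independent of each other.
    x = (rng.random(n) < [0.5, 0.5, 0.05, 0.05]).astype(float)
    # Strengthen every pair that is active together on this trial,
    # whether the pairing means anything or is pure coincidence.
    W += lr * np.outer(x, x)
np.fill_diagonal(W, 0.0)   # ignore self-connections

print(W.round(1))
```

Frequent chance pairings accumulate large weights purely from base rates; nothing in the update rule distinguishes a real relationship from a recurring accident, which is superstitious learning in miniature.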

The new generative artificial intelligence systems based on large language models are, basically, built on a variation of the old neural network theory.  So it is completely unsurprising that one of the big problems that we find with generative artificial intelligence is that it tends, when we ask it for research, to present complete fictions to us as established fact.  When such a system presents us with a very questionable piece of research, and we ask it to justify the basis of this research, it will sometimes make up entirely fictional citations in order to support the proposal presented.  This has become known as a "hallucination."

Calling these events "hallucinations" is misleading.  Saying "hallucination" gives the impression that we think there is an error in either perception or understanding.  In actual fact, generative artificial intelligence has no understanding, at all, of what it is telling us.  What is really going on here is that we have built a large language model by feeding a system based on a neural network model a huge amount of text.  We have asked the model to go through the text, find relationships, and build a statistical model of how to generate this kind of text.

Because these systems can be forced to parrot back intellectual property that has been fed into them, in ways that are very problematic in terms of copyright law, we do, fairly often, get a somewhat reasonable, if very pedestrian, correct answer to a question.  But, because of the superstitious learning that has always plagued neural networks, sometimes the systems find relationships that don't really relate to anything.  Buried deep in the hugely complex statistical model that the large language models are built on are unknown traps that can be sprung by a particular stream of text that we feed into the generative artificial intelligence as a prompt.  So it's not that the genAI is lying to us, because it's only statistically creating a stream of text based on the statistical model that it has built from other text.  It doesn't know what is true, or not true.
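A drastically simplified model makes the point concrete.  This is my own sketch, using a word-pair (bigram) chain instead of a real large language model, with a tiny invented corpus: a system that has only learned which words tend to follow which other words can emit perfectly fluent text with no notion of whether any statement it produces is true.

```python
import random
from collections import defaultdict

# Toy bigram text generator: it learns only "which word follows which,"
# then generates by chaining statistically plausible transitions.

corpus = (
    "the moon is made of rock . the moon is full tonight . "
    "the cheese is made of milk . the report is full of errors ."
).split()

# For each word, record every word that follows it in the training text.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(7)
word, out = "the", ["the"]
for _ in range(8):
    # Each step picks a next word that really did follow the current one
    # somewhere in the corpus -- locally plausible, globally ungrounded.
    word = random.choice(follows[word])
    out.append(word)

print(" ".join(out))
```

Every individual transition is attested in the training text, yet the chain can stitch them into something like "the moon is made of milk": each step is statistically plausible, and the whole is simply false.  That, scaled up enormously, is the shape of a "hallucination."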

There is a joke, in the information technology industry, that asks: what is the difference between a used car salesman and a computer salesman?  The answer is that the used car salesman knows when he is lying to you.  The implication, of course (and, in my five decades of working in the field, I have found it is very true), is that computer salesmen really don't know anything about the products that they are selling.  They really don't know when they are lying to you.  Generative artificial intelligence is basically the same.


AI topic and series

Friday, March 20, 2026

Review of Wispr Flow by OpenAI

I knew that the Newton device from Apple would be a failure when it didn't have any communications connectivity.  I also knew that the Newton would fail when, in order to get communications connectivity, you had to buy a separate device that cost exactly as much as the base unit and was exactly the same size as the base unit.

Then again I have never been able to type.  I have always wanted something to do the typing for me, and I have always wanted something to take dictation to enable me to write down what I wanted to write.  I do not know how to explain why I loathe and despise, to the very depths of my soul, soft keyboards on smartphones.  I have hated them ever since actual physical keyboards disappeared from smartphones.  So all I really wanted was something to take dictation for me.

On the other hand, everybody else seems to have wanted something to turn on their lights, play their music, choose from a selection of playlists, add items to their shopping list, and buy items from their shopping list, so that's what Siri and Alexa seem to have been built for.  Of course all of these functions are fairly simple, and so they never needed much artificial intelligence to get them to work.
 
All of which is sort of circling the fact that what we really want on our smart phones is a kind of a personal assistant.  We want something to remember things for us.  We want something to remind us of important events.  We even want something to decide which events *are* important.  We want something to decide which calls to us are important enough to bother us about.  And this is what we want, what we really want, from artificial intelligence.

This has something to say about what we want from artificially intelligent assistants or devices.  Do we want something that looks and acts like our current cell phones?  Or do we want something like the communicator on Star Trek: simply a microphone, a speaker, and some kind of communications link to a centralized computer system?

First, if we are going to simplify it down to that minimalistic communicator device, we are definitely going to have to do something about the reliability of artificial intelligence and that problem of hallucinations.  (What is the difference between artificial intelligence and a used car salesman?  Answer: The used car salesman knows when he is lying to you.)

We have gotten to the point where artificial intelligence is somewhat useful for producing programming code for us, and we have also gotten to the point where it can be useful for various types of agentic operations.  We still need to have, or possibly formalize, a syntax for the specifications that we accumulate and refine, possibly over three and a half days of thinking, before finally committing to an agreed-upon set of specifications, and to having the artificial intelligence act on those specifications and commit to executing the action.

All of which is kind of background and explanation for why I am doing a review of Wispr Flow.

I have tried out and reviewed at least four different versions of dictation systems so far.  The two that I use most frequently are Gboard, which I use on my Android phones for dictation to pretty much anything, and Live Transcribe, which I use because it has an independent unconnected mode.  While problematic in terms of accuracy, at least it works when I don't have a connection to the Internet.

The reason to add Flow to the mix is that it is produced by OpenAI.  OpenAI, of course, is the producer of ChatGPT and a number of the other major artificial intelligence tools available to the general public.  Therefore it stands to reason that Flow will be OpenAI's entry for a local artificial intelligence tool, something along the lines of a personal assistant.  It therefore makes sense to see how well Flow works, and whether it is reliable enough, and accurate enough, to be used in this type of situation.
 
I am interested in the fact that Wispr Flow is available for multiple platforms.  I am particularly interested in the fact that it is available for Windows.  This gives me a dictation capability on my desktop machine, which I greatly appreciate.

Perhaps not as greatly as I might.  In testing out Wispr Flow in order to do this review, I have found that I would really prefer to do dictation on my phone, and, as I will note, there is a problem with that.

Since Wispr Flow runs on both Windows and Android, as well as a number of other platforms, I can install it both on my desktop and on my cell phone.  Presumably I can also install it on the laptop at some point, and I might get around to that.

Anyway for the first test I tried it on the Android cell phone.  That test was a complete and unmitigated disaster.

As I have mentioned I have experience with a number of other dictation applications.  As far as I can recall, all of them will display to you, as the person dictating, the output and transcription of what you are dictating.

As noted, I most frequently use Gboard and Live Transcribe.  Both of these display what they are transcribing as you dictate.  Both of them (and this is only to be expected, since both are made by Google) have an interesting property: if the system hasn't fully decided on the final transcription, the text transcribed so far, still under consideration, shows up underlined.  When the underline disappears, the system has settled on the final transcription.  In any case, the system displays to you, in real time, what it figures you have said.

That is not the case with Flow.  Initially it *really* threw me.  I dictated something and nothing appeared on the screen.  Because I was using the Android version and possibly because of some weird issue with settings or formatting, even after I stopped dictating a test and hit the button indicating that I was finished dictating, nothing appeared.

I tried this multiple times, and then I started looking into possible problems, shifting more or less immediately into systems analyst mode.  I figured out that, yes, what I had dictated *had* been transcribed, but, for some reason, it showed up as white text on a white background.  It was not until I did some work to select the text in that area that I realized the text was there, but invisible.  Once I could pull up that text I found that, yes, all three attempts had in fact been transcribed.  However, since I had been frantically trying to figure out where this text had gone, the various attempts were embedded within each other, and the total text was a horrendous mess.

Subsequent testing indicated that this was not specifically a problem with the Android version; it must have had to do with some kind of formatting issue.  I have since tested it again on the Android smartphone, in a very similar situation with the same application, and the results were pretty much okay.

I should note that, in early feedback to Wispr Flow, I mentioned this problem and got a response from their technical support suggesting that I look for settings dealing with fonts and font colours in the application.  They weren't specific about whether they meant the Wispr Flow application, or the application that I had been using Wispr Flow to provide input to.  In any case, I couldn't find any settings on the phone, in either application, that dealt with fonts or font colours.  Their technical support wasn't really very supportive.

(I've had subsequent contacts with Wispr Flow support.  I suspect that "Tina" is a bot.  Regardless, content that I send to them seems to get lost somewhere along the way.  In addition, suggestions from support tend to include references to options that don't appear in either version of Flow that I am currently testing.)

Technical support did tell me that this issue of the text not appearing until you have finished dictating is a deliberate design choice in the case of Flow.  Personally I think it's a pretty stupid choice.

I have been practicing, very extensively, with dictation software for the last four years.  It is a non-trivial task until you start to get the hang of it and it is also extremely difficult when you have no feedback.

If you are thinking about what you want to say but can't see what you have already said, it can be very difficult to tell whether you are repeating a given word too often, or whether you have already dictated a specific piece of information that you want to include.  I would definitely disagree with Flow's design choice in this regard.

As I have noted, I have used both Gboard and Live Transcribe fairly extensively.  As I have also noted, I use Live Transcribe in the unconnected mode.  Therefore it is completely unsurprising that Live Transcribe makes many more errors than Gboard does.  Gboard does not have an unconnected mode: you can only use it when you are connected to the Internet, so Google, and its massive data centres, are supporting the transcription of whatever you dictate to Gboard.  I have used Live Transcribe in situations where I can't be connected to the Internet, and it's a bit of a pain to have to do all of the work necessary to edit the transcribed material, at some later time, in order to get what you really want.  But I still appreciate the fact that I can dictate something and edit it later.

However, even Gboard is not perfect.  That's actually putting it mildly: there are frequently some pretty major transcription errors.  With Gboard, you have to say any punctuation that you want inserted in your text, and frequently, when I want it to put in a comma, it instead inserts the word "karma".

So it is fairly easy to say that Flow is much more accurate than Gboard.  Flow gets many more words down correctly, and doesn't make as many mistakes.  Flow can handle punctuation even if you don't say it, although it isn't as good with commas as it is with periods.  Flow can also handle certain levels of formatting, even if you don't ask for it.  I was interested when it started to create bulleted lists for me, even though I didn't want bulleted lists in that particular case.

The advertising for Wispr Flow seems to indicate that it can handle transcription even if it isn't connected to the Internet.  However I have examined the settings for Wispr Flow, at least on my desktop machine, and I don't find any setting that indicates that I can turn on or off a connection to the Internet.  I will probably have to do some more extensive work on my smartphone in order to test that out.

(I have also, in the course of doing some testing for the purposes of this review, found that occasionally Wispr will actually take down a transcription but not paste it into the application that you think you are working in.  On the Windows desktop version you can call up the Wispr application itself and find that the transcription has been recorded in Wispr.  You can then copy and paste it back into the application you thought you were using.)

I'm using the free version of Flow.  At least I *think* I'm using the free version of Flow.  The Wispr Flow application itself tells me that I have access to the Pro version for a couple of extra weeks.  However, it doesn't tell me whether I am actually using the Pro version right now.  So, while I appreciate the dictation capability that Flow is providing to me, it could tell me a bit more about itself.  I think this is only fair.  After all, I have not turned on the privacy setting, and therefore Flow is using my attempts at dictation to tune and improve Flow.  Regardless of whether it says so or not, I am quite sure that Flow is also feeding my transcriptions back to OpenAI so that they can use them in building the next round of ChatGPT.  Hey, fair's fair.

I like it.  I'll probably continue to use it.  But it definitely still has some bugs.

And I still think they should show you what you're transcribing in real time.


A few more bits. 

Flow's handling of punctuation and formatting can be interesting at times.  Flow will eliminate punctuation, if it feels like it, even when you have given it spoken commands to include that punctuation.  Flow is an American product, of course, and seems quite insistently determined to eliminate all possible commas.

As I have noted, Flow is able to handle stumbles over words, and usually turns out a pretty good edit no matter how much of a fumble-tongue you have been while dictating.  However, I am concerned that Flow may occasionally edit out material that it simply considers extraneous.  And Flow is definitely not as good a copy editor as Gloria was.

I am getting used to Flow's lack of immediate display of what it is transcribing.  However this is probably at the cost of some change in my writing style.  I am probably moving more to an Ernest Hemingway style of writing, in contrast to my preferred Henry James.

I have noticed, although it may be due to other factors, that since I have started the trial of Flow my writing productivity has gone up considerably.  You guys are *really* in trouble now.


AI topic and series

Wednesday, March 18, 2026

Entangling butterflies?

So today, in the computer activity, we were, as promised, covering quantum computing.  And, at the end, somebody asked if entanglement was the same as the butterfly effect.

And I had to explain that no, quantum theory is completely different from chaos theory.  Even though there seem to be some similarities, chaos theory is about non-obvious but pre-existing patterns and structures in phenomena.  While chaos allows apparent influences at a distance, there is no direct connection.  Entanglement involves an actual connection.
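The chaos side of that distinction, at least, can be illustrated with an ordinary classical program (entanglement can't be; nothing classical behaves that way).  This is my own toy sketch of the "butterfly effect" using the logistic map: two trajectories follow exactly the same deterministic rule, with no connection between them, yet a one-in-a-million difference in starting points is soon amplified until the two bear no resemblance to each other.

```python
# Toy "butterfly effect" demo using the chaotic logistic map
# x -> r*x*(1-x) with r = 4.  Two runs of the SAME deterministic
# rule, started a hair apart: chaos amplifies the tiny difference
# until the runs diverge completely.  No connection between the two
# trajectories is involved, only sensitivity to initial conditions.

def diverge(x0, eps=1e-6, steps=60, r=4.0):
    """Return the largest gap seen between two logistic-map
    trajectories whose starting points differ by eps."""
    a, b = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        a, b = r * a * (1 - a), r * b * (1 - b)
        worst = max(worst, abs(a - b))
    return worst

# A one-part-in-a-million difference in starting conditions soon
# produces a large gap between the trajectories...
print(diverge(0.2))
# ...while identical starting conditions stay identical forever,
# because the rule itself is perfectly deterministic.
print(diverge(0.2, eps=0.0))  # prints 0.0
```

That pattern, determinism plus extreme sensitivity, is what chaos theory studies; it has nothing to do with two particles sharing a single quantum state.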

Our society has created a population of people who, because they know some terms, without ever understanding the concepts behind them, think that they actually understand the extremely complicated phenomena behind the jargon.  I know that the psycholinguistics people say that you can't understand a concept unless you have a term for it, but I don't think the reverse, that simply knowing a term allows you to understand the concept, is true.