Wednesday, April 2, 2025

Griefbots - 1 - intro and AI

Griefbots, thanabots, and "Restoration" systems

At about the same time that Gloria died, Replika started making the news.  Replika was, at that time, based on text chat only.  You could train a Replika account with email from your deceased loved one.  I had plenty of email from Gloria, and still do.

I decided against trying the system.  I wasn't sure whether I was more afraid of it being disappointing, or of getting hooked on it.

So I still don't know which is the greater danger.  I don't know whether those people who use griefbots, or thanabots, or "restoration" systems, are simply fooling themselves, and thinking that this chat does, in some way, reproduce their conversations with their loved one, before the loved one's death.  Possibly they receive some kind of comfort, in having conversations with some facsimile of their loved ones.  Then again, possibly they experience cumulative grief, when they finally realize that their loved one is, in fact, dead, and that the facsimile isn't, in fact, the loved one.

Possibly there is some kind of cumulative grief involved in the fact that the loved one dies, and then is "restored", and then possibly "dies" again, at some later date, when the company that runs the system goes bankrupt, or the system simply gets too old, or is updated and their account doesn't survive the transfer, or they simply run out of money to pay for the account.

Or, maybe, the system runs along, and they don't really discriminate between whatever the system produces, and whatever kind of conversation their loved ones did produce, and they just carry on the illusion until they themselves die.

And maybe they never really get over their grief, because they have this artificial anodyne, and the artificial chatbot, or griefbot, or thanabot, is sufficient for them, and they never do form a new relationship with any actual carbon-based lifeform that might be more suitable for them.

I don't know.  As I say, I never dared to try the experiment with Gloria and Replika, and I don't know how I, personally, would react or have reacted.  And the information that I have been able to find is basically anecdotal, and the plural of anecdote is not data.

So I don't know how real the benefits are.  I don't know how real the risks and dangers are.  But I am definitely aware of the potential risks, and, I strongly suspect that too few people are aware of the risks, or have given much thought to them.

(The CBC has made available a documentary entitled "Eternal You."  It is not comprehensive, and doesn't address all the risks associated with griefbots and related systems, but those that it does cover are covered well.  It is available at https://gem.cbc.ca/eternal-you or https://www.youtube.com/watch?v=4Koqc2aPUK4 )

Initially, as noted, the idea to explore griefbots came from Gloria's death, and the increasing presence of Replika in this space.  Then came the explosion of interest in artificial intelligence, and the proposed applications, driven by the large language models.  I created a presentation on griefbots as a kind of specialized extension of a broader presentation on AI.  However, as I explored the field, and in association with volunteer work in grief support, I was astounded by the number of companies that have started to enter this field, with a variety of products.  Given the lack of understanding of the limits of AI in general, and the increasing work on the psychological dangers of a variety of areas of information technology (including social media), I felt more urgency in getting this article, and series, out to a broader audience.

Today I was asked for which audience I am writing this article.  I think it's a pretty broad audience.  My colleagues in information technology will have a greater understanding of artificial intelligence, and of the oversimplifications that I am making in order to ensure that this article is not too lengthy for the general public.  For those involved in grief counseling and support, my lack of training and specialization in this field will no doubt show.  However, I hope that you can understand the concerns that I am trying to raise, and will, if asked by your clients, be able to provide some detail, and possibly a balanced opinion in regard to whether or not griefbots are a good idea for the bereaved, either in general or in specific cases, and at least raise the issues of risk or danger.  For those in the general public, some of you may be bereaved, and might be considering griefbots for yourselves, or may have friends among the bereaved who might be considering signing up for these systems.  Again, hopefully this piece will provide some realistic assessment of what griefbots are or are not, and what benefits, balanced against the risks and dangers, there may be.

Given that this is a bit about artificial intelligence, or AI, I asked ChatGPT to opine on the psychological dangers of artificial intelligence, and of the use of artificial intelligence, particularly in counseling and psychological situations.  The number one point that ChatGPT listed was "a lack of understanding."  Indeed, this was borne out by a situation where, at an event for the public, I set up a computer to allow people to interact with one of the LLM systems.  Anyone could try it out.  Nobody did.  So probably very few people have, actually, taken advantage of the opportunities to get to know how these systems work.  (And how they don't.)  Therefore it is probably a good idea to provide at least a terse outline of what artificial intelligence is, and is not.

First of all, artificial intelligence is not a thing.  It is *many* things.  Artificial intelligence is a general term given to a number of approaches to getting computers to perform functions which we have come to expect from people.  Unfortunately, as well as there being a number of different approaches to tackling this task, the task itself is ill-defined.  Alan Turing, who is considered one of the fathers of modern computing, and computing machinery, did once specify what has come to be known as the Turing Test.  The test goes something like this: you put a subject (which we might call the tester) in front of a terminal, and the wire to the terminal goes off through a wall.  The tester carries on a conversation, via the terminal, with the system that is to be tested (which we can call, for example, the testee).  If the tester cannot, after carrying on a conversation for some length of time, decide whether behind the wall is another person, or a computer running an artificial intelligence program, and it is, in fact, an artificial intelligence program, then that program is considered to have passed the Turing test, and is, therefore, intelligent.

The thing is, we don't really know if Alan Turing actually meant this to be a determining test about whether or not someone has, in fact, written a program which is artificially intelligent.  It is equally possible that Alan Turing was making a statement about the difficulty of creating artificial intelligence, when we can't even define what real intelligence is.  The Turing test is, in fact, a measurable test.  But it doesn't really define, to everyone's satisfaction, whether or not we have created a truly intelligent artificial personality.

For example, how intelligent is the tester?  Does the tester have experience with assessing other artificial intelligence programs, as to their level of intelligence?  Does the tester have a broad range of knowledge of the real world?  Has the artificial intelligence program been fed data based upon questions and conversations that the tester has had with artificial intelligence programs in the past?

And this is just about generating a conversation.  What about making a computer see?  What about getting the computer to look at an image, either still or video, and identify specific objects in that image?  What about being able, from an image, to plot a way to navigate through that field, without destroying various objects that might be in it?  What about teaching the computer to hear?  All of these are things that the field of artificial intelligence has been working on, but they have nothing to do with carrying on a conversation over a terminal with some unknown entity.

In the interest of keeping this article reasonably short (I don't want to risk TL;DR), I won't go through the sixty- or seventy-year effort to create artificial intelligence, and the various successes and failures.  No, I'll keep this reasonably short, and just pick on the one that has, over the past three years, been much in the news, and much in demand in business circles, and which everyone tends to talk about.

This is the approach known as the large language model, or LLM, or generative artificial intelligence, or generative AI, or genAI.

As I say, this has created a great stir.  ChatGPT, and Claude, and Perplexity, and DeepSeek, and Qwen, and Meta AI, and Gemini, are all examples of generative AI.  They have astounded people with their ability to answer questions typed into them, and give reasonable answers, sounding realistic and lucid, and do, for many people, seem to pass the Turing test.  The reality is a bit different.

Large language models are descendants of a process called neural networks.  Neural nets are based on an idea about the human brain, an idea which we now know to be somewhat flawed, and definitely not comprehensive.  In practice, a neural net is a very complicated kind of statistical analysis.  You feed neural nets a lot of data.  When the neural net notices a correlation between items within the data, it flags that correlation, and, every time it finds an example that meets the correlation, it strengthens the connection.

Unfortunately, this leads to an example of what is called, in psychology, superstitious learning.  That is, the system notices a correlation which isn't, in fact, a real correlation.  It builds on a kind of confirmation bias: the system will keep on strengthening a correlation every time it finds, even if only randomly, some data that seems to fit it.  The negative cases, a lack of evidence, or even relationships in the data that contradict the correlation, are ignored.  So, neural nets can make mistakes.  And this is only one example of the types of mistakes that they (and we) make.
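
For the technically inclined, here is a deliberately tiny sketch, in Python, of that "strengthen the connection every time you see a match" idea, and of how a purely coincidental pairing gets reinforced just as readily as a real one.  This is an invented toy, nothing like a production neural network; the data, the update rule, and the numbers are all made up for illustration.

    # Toy illustration of correlation-strengthening, not a real neural network.
    # Every time two features appear together, the "connection" between them
    # gets a little stronger; absences and counterexamples are simply ignored.
    from collections import defaultdict
    from itertools import combinations

    connection_strength = defaultdict(float)   # (feature_a, feature_b) -> weight
    LEARNING_RATE = 0.1

    def observe(example):
        """Strengthen the link between every pair of features seen together."""
        for a, b in combinations(sorted(example), 2):
            connection_strength[(a, b)] += LEARNING_RATE

    # Hypothetical observations: "black cat" and "bad luck" happen to co-occur.
    training_data = [
        {"black cat", "bad luck"},
        {"black cat", "bad luck"},
        {"black cat", "good luck"},   # a contradicting case: nothing is weakened
        {"ladder", "bad luck"},
    ]

    for example in training_data:
        observe(example)

    print(connection_strength[("bad luck", "black cat")])   # 0.2 -- "learned"
    print(connection_strength[("black cat", "good luck")])  # 0.1 -- also "learned"

The point of the toy is only that the bookkeeping never asks whether a pairing means anything; it just accumulates whatever it happens to see.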

Large language models feed the neural net a great deal of text.  You will have seen news reports about those who are building large language models being sued by the owners of intellectual property, which gets shoveled into the large language models.  There is also, of course, an enormous trove of text which is available at no cost, and so is widely used in feeding the large language models.  This is, of course, social media, and all the various postings that people have made on social media.  However, this text is not exactly high quality.  So we are feeding the large language models with a great deal of data which can teach the large language model how to structure a sentence, or a paragraph, and even possibly to use punctuation (if, indeed, social media users can be forced somehow to use punctuation), but any meaning may be rather fragmented, disjointed, and quite possibly incorrect.  So, we have taught genAI rhetoric, but we haven't taught it anything about epistemology, or metaphysics.

And this business of saying that we are asking a question, and getting an answer, is an example of misleading the public by the use of our terminology.  You may think that you are asking a question.  The system doesn't understand it as a question.  It is simply, to use the term that the generative artificial intelligence people use, themselves, a prompt.  This prompt is parsed, statistically, with the very complex statistical models that the large language model has created for itself.  Then the genAI will generate a stream of text, once again, based simply on the statistics, and probability, of what the next word is going to be.  Yes, it is certainly impressive how this statistical model, complex though it may be, is able to spit out something that looks like considered English.  But it isn't.  It's just a statistically probable string of text.  The system didn't understand the question, or even that it *is* a question.  And it doesn't understand the answer.  It's just created a string of text based on statistics.
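
To make the "statistically probable next word" idea a little more concrete, here is a minimal sketch in Python.  The probability table is entirely invented, and a real large language model does not keep an explicit table like this at all (its "table" is implicit in billions of learned parameters, and it conditions on far more than one preceding word), but the shape of the operation is the same: look at what came before, and emit a likely continuation.

    import random

    # Invented toy table: for a given preceding word, the probability of each
    # possible next word.  (These numbers are made up for illustration.)
    next_word_probs = {
        "how": {"are": 0.6, "is": 0.3, "do": 0.1},
        "are": {"you": 0.8, "we": 0.2},
        "you": {"feeling": 0.5, "today": 0.5},
    }

    def next_word(previous):
        """Pick a continuation weighted by probability -- no 'understanding' involved."""
        options = next_word_probs.get(previous, {})
        if not options:
            return None
        words, weights = zip(*options.items())
        return random.choices(words, weights=weights)[0]

    prompt = "how"           # the "question" is just a seed string
    output = [prompt]
    while (word := next_word(output[-1])) is not None:
        output.append(word)

    print(" ".join(output))  # e.g. "how are you feeling"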

It doesn't understand anything.

And if you think anything different, you're fooling yourself.

Now, some of you may be somewhat suspicious of the proposition that a mere statistical analysis, no matter how complex, can generate lucid English text.  Yes, I am oversimplifying this somewhat, and it's not just the probability of the next word that is being calculated, but the next three words, and the next seven words, and so forth.  The calculation is quite complex, but it still may sound odd that it can produce what seems to be a coherent conversation.

Well, this actually isn't very new.  There are types of statistical analysis, such as Bayesian analysis and Markov chain analysis, which have been used for many years in trying to identify spam, for email spam filters.  And, around twenty years ago, somebody did this type of analysis (which is much simpler and less sophisticated than the large language model neural net analysis) on the published novels of Danielle Steel.  Based on this analysis, he wrote a program that would write a Danielle Steel novel, and it did.  This was presented to the Danielle Steel fan club, and, even when they knew that it was produced by a computer program, they considered it quite acceptable as an addition to the Danielle Steel canon.  And, as I say, that was two decades ago.  And done as a bit of a lark.  The technology has moved on quite a bit since then, particularly when you have millions of dollars to spend on building specialized computers in order to do the analysis and production.
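
As a rough illustration of that older and much simpler approach, here is a sketch of a word-level Markov chain in Python: count which word follows which in some training text, then generate new text by repeatedly picking one of the observed successors.  It is the same next-word idea sketched earlier, except that here the table is built from example text rather than written by hand.  The "corpus" below is a few invented romance-novel-flavoured phrases, standing in for the much larger body of text a real experiment would use.

    import random
    from collections import defaultdict

    def build_chain(text):
        """Record, for each word, the words that were seen to follow it."""
        chain = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain, start, length=12):
        """Walk the chain, picking each next word at random from the observed successors."""
        word, output = start, [start]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    # Invented stand-in for a training corpus.
    corpus = ("her heart ached with longing . his eyes met hers across the room . "
              "her eyes filled with tears . his heart raced with longing .")

    chain = build_chain(corpus)
    print(generate(chain, "her"))
    # e.g. "her eyes met hers across the room . his heart raced with longing"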

A lot of the griefbots, or thanabots, or "restoration" systems are based on this kind of technology.  Sometimes they are using even simpler technologies, which have even less "understanding" behind them.

Some of the chatbots are based on even simpler technologies.  For example, over sixty years ago a computer scientist devised a system known as ELIZA.  This system, or one of the popular variants of it, called DOCTOR, was based on Rogerian psychological therapy, one of the humanistic therapies.  The humanistic therapies, and particularly Rogerian, tend to get the subject under therapy to solve his or her own problems by reflecting back, to the patient, what they have said, and asking for more detail, or more clarity.  That was what ELIZA did.  If you said you were having problems with family members, the system would, fairly easily, pick out the fact that "family members" was an important issue, and would then tell you something like "Tell me more about these family members."  Many people felt that ELIZA actually did pass the Turing test, since many patients ascribed emotions, and even caring, to the program.

(If you want, you can find out more about ELIZA at https://web.njit.edu/~ronkowit/eliza.html )
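
For a sense of how little machinery is involved, here is a sketch, in Python, of an ELIZA-style exchange: match a keyword pattern, swap a few pronouns, and reflect the statement back as a request for more detail.  This is a loose imitation of the idea, written for illustration; it is not the original program, and the patterns and responses here are invented.

    import re

    # Pronoun swaps so the reflection reads naturally ("my" -> "your", and so on).
    REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

    # A few keyword-triggered templates, in the spirit of ELIZA's Rogerian style.
    RULES = [
        (r"i am (.*)",   "How long have you been {0}?"),
        (r"i feel (.*)", "Tell me more about feeling {0}."),
        (r"my (.*)",     "Tell me more about your {0}."),
        (r"(.*)",        "Please go on."),
    ]

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

    def respond(statement):
        for pattern, template in RULES:
            match = re.match(pattern, statement.lower().strip(".!"))
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I am having problems with my family members"))
    # -> "How long have you been having problems with your family members?"

There is no model of the patient, no memory, and certainly no empathy involved; just pattern matching and string substitution.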

Other chatbots have been developed, based on simple analysis and response mechanisms, and sometimes even simpler than those underlying ELIZA.  Chatbots have been used in social media all the way back to the days of Usenet.  Yes, Virginia, there was social media before Facebook.

Next: Griefbots - 2 - Dating apps and AI "friends"
