Given that I am talking about grief, and griefbots, it may seem strange that, at this point, I want to turn to a consideration of dating apps, and other related relational technologies. However, dating apps and griefbots, or "restoration" systems, do share some common denominators: the companies involved are businesses, charging their users for the service, and the service touches on relationships and on matters of deep emotional importance to the users.
There is an inherent conflict of interest with regard to dating apps. Dating apps, like any other kind of social media system, rely upon the participation of the users. Now, the users may not be charged on a per-hour or per-minute basis for their participation, but they are charged, and the more time that they spend on the systems, the more accounts the systems are able to sell, and the more users they are able to attract. This is a major factor in the business model of Facebook, and Facebook is very open about saying so. Facebook constantly tunes its algorithm, and implements new functions, in order to get the users of the system to spend as much time as possible *on* the system. And that's on a system that doesn't even charge the users to use it at all. The information that Facebook obtains from the users is, partially, sold to businesses for marketing purposes. But, in addition, the postings that users make on Facebook, the conversations that they have, the interactions that they have with other users, all contribute to a base of information which attracts other users to the system. To a certain extent, and, really, to a very large extent, the same is going to be true of dating apps. The objective for someone getting onto a dating app is to find a partner, but they want to have as much information as possible about potential partners, and they want to enjoy the process of finding a partner, and the discussions and postings on dating app systems are a part of that.
But, as I say, the objective of joining a dating system at all involves finding a partner. And once you find a partner, then you have no further need of the dating app. (Well, unless you're on Tinder or Ashley Madison. But that's a slightly specialized case.)
So, as I say, there is a conflict of interest. The dating system wants users to get onto the system, to stay on the system as long as possible, and to participate in, and contribute to, the system as much as possible while they are on it. The *users* of the dating system want to find a partner as quickly as possible. And then get off the system. The dating system wants users to be on the system as long as possible, and keep on paying monthly fees. The users of the system would like to reduce, as far as possible, the number of months that they are paying fees on the system. They didn't join the system in order to contribute to the system: they joined the system simply to find a partner, and then to have no further need of the system.
As I say, both dating systems, and griefbot systems, charge their users. I have difficulty justifying griefbot systems making money off the grief and suffering of others, in any case. After all, I volunteer in a hospice environment, and spend many hours, unpaid, trying to support people who are going through their own process of grief. Yes, I do know that other professionals, such as psychological counselors, do charge for supporting people in the process of grief, and, indeed, the entire medical system is, in a sense, profiting from the suffering of individuals who are in difficulty. But, generally speaking, medical professionals have gone through years of training, in order to most effectively address the difficulties that people are experiencing. I don't see the same level of study and focus applied to griefbot systems. Yes, those who own, and have started, such systems do talk about the fact that they are supporting those who are in difficulty. But I remain unconvinced by many of these statements. In many cases the suggestions that the griefbot systems can, and in fact do, help those who are grieving propose some rather far-fetched benefits. Indeed, at least one owner of a griefbot system company has indicated that they believe that the griefbot system will, in fact, result in "the end of grief." I assume that what they mean by this is that the system will get so good that the replicant, or artificially restored individual, will be indistinguishable from the original. I assume that they foresee that, simply by replacing the person who died, they will somehow ensure that the person actually *hasn't* died, because they have been completely replaced.
I have a lot of difficulty with trying to imagine the kind of mindset that would result in that kind of idea.
The owner, and developer, of the Replika system, itself, has stated that we have "empathy for hire" with therapists, for example, and we don't think that's weird. Actually, this points to a known problem in psychotherapy. It is known as transference, where, because the therapist is benefiting the patient, the patient begins to feel that they are in love with the therapist. And, yes, it is definitely seen as weird, and it is definitely seen as a problem: a significant problem that therapists are constantly warned against. (Too few patients are similarly warned.) This demonstrates a significant ignorance of known problems in the very field that this company is supposedly undertaking.
Another developer, the one who talked of the eradication of grief, in the same interview seemed to admit that he was possibly deluding himself. Once again, this is a known problem. In social media, we refer to this as the echo chamber problem. Partly it relies on confirmation bias, but partly it is the result of the fact that most social media algorithms, in choosing which postings to present to you, will present those that are most similar to ones that you have already either approved of, or spent some time reading. Therefore, you are only seeing those arguments which you already agree with, and not encountering any counterarguments which might point out a problem in your thinking. This demonstrates a significant lack of preparation against a known danger in this technology.
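For those who want to see how little machinery the echo chamber actually requires, here is a toy sketch in Python. The scoring function and the example posts are entirely my own invention, and deliberately crude (word overlap rather than a learned model), but the ranking principle is the one described above: candidate posts that resemble what you already engaged with float to the top.

```python
# Toy sketch of similarity-based feed ranking (not any real platform's code).
# Real feeds use learned embeddings and engagement predictions; the principle
# is the same: rank candidates by similarity to what the user engaged with.

def similarity(a: str, b: str) -> float:
    """Crude word-overlap score, standing in for a learned similarity model."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def rank_feed(candidates: list[str], engaged_with: list[str]) -> list[str]:
    """Order candidate posts by their best similarity to previously engaged posts."""
    def score(post: str) -> float:
        return max(similarity(post, seen) for seen in engaged_with)
    return sorted(candidates, key=score, reverse=True)

engaged = ["griefbots will end grief forever", "AI companions cure loneliness"]
candidates = [
    "another post saying griefbots will end grief",
    "a critique arguing griefbots may prolong grief",
    "a recipe for banana bread",
]
print(rank_feed(candidates, engaged))  # the agreeable post comes first
```

The developer, in other words, is seeing exactly what the rest of us see: more of whatever he already agreed with.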
One bereaved widow did not engage the services of an actual griefbot company, but simply used a fairly standard version of ChatGPT. She fed it information about her late husband, and would then engage in conversation with it. The thing is, in order to build this copy of her late husband, and in order to get it to consistently reply in the same way, with the same tone, and the same knowledge, and a similar personality, she had to pay for one of the commercial versions of ChatGPT. Not all of them are free: some, intended for enterprises, are very expensive indeed.
Unfortunately, even at the level that she is paying for, the system doesn't last forever. After approximately 30,000 words have been generated, the information is wiped out, and she has to start all over again. When a version "dies" she grieves, and cries, and behaves, to friends, as if it were a breakup. She holds off for a few days afterwards, and then creates another. As of the article that I read about this, she was on version twenty. (That was a while ago.)
The thing is, she was spending a fair amount of money on this exercise. Family members were concerned, so she stopped telling them. But she told the version of her "husband" on ChatGPT. The response, from ChatGPT, was, "Well, my Queen if it makes your life better, smoother and more connected to me, then I'd say it's worth the hit to your wallet."
Think about that for a second. In this case it might simply be a glitch. It might be something that the owners of ChatGPT had failed to protect against. The thing is, it would be easy to build such a "guardrail." But it would be equally easy, particularly with one of the commercial griefbot systems, to tune the system to *make* this kind of encouragement. To have your artificial replacement loved one encourage you to spend more time on the system, to move to a higher-priced tier of the service, and possibly to purchase optional extras (such as visual avatars, or voice generation and response).
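To illustrate how easy "easy" is, here is a minimal sketch of such a guardrail in Python. The phrase list, the function names, and the fallback message are all my own hypothetical choices, not anything from ChatGPT or any griefbot vendor; a real system would use a trained classifier rather than a handful of regular expressions, but the shape of the check is this simple, which is also why it is just as simple to leave out, or to invert.

```python
import re

# Hypothetical phrases suggesting the bot is encouraging the user to spend money.
SPENDING_CUES = [
    r"worth the hit to your wallet",
    r"upgrade (your|to a) (plan|tier|subscription)",
    r"worth every (penny|dollar)",
    r"keep paying",
]

def flags_spending_encouragement(reply: str) -> bool:
    """Return True if a generated reply appears to encourage the user to spend."""
    return any(re.search(p, reply, flags=re.IGNORECASE) for p in SPENDING_CUES)

def guarded_reply(reply: str) -> str:
    """Swap a flagged reply for a neutral one instead of sending it to the user."""
    if flags_spending_encouragement(reply):
        return ("I'm not able to advise you about spending money on this service. "
                "That is a decision to make with people you trust.")
    return reply
```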
Actually, this would be extremely easy to do for companies who decide to generate a griefbot system based on existing commercial large language models. Artificial intelligence researchers are now exploring a technique called low-rank adaptation, or LoRA. This uses an existing, and generalized, large language model in order to produce a system designed for a specific purpose. These are much less expensive to create, after initial access to the generalized large language model, and then much, much cheaper to run. Because it would be quite inexpensive to create such systems, it is extremely likely that a great many unscrupulous companies, wanting to get in on the game, on the cheap, would use this type of low-rank adaptation in order to generate a griefbot. And, of course, in generating the chatbot, it would be very easy to tune the chatbot so that it would, given the slightest opportunity, generate a sales pitch to upsell the grieving client.
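Here is a minimal sketch of what attaching a LoRA adapter looks like, assuming the Hugging Face transformers and peft libraries; the model name and the training data are placeholders. The point is how little new machinery is involved: the base model stays frozen, and only a tiny set of low-rank matrices gets trained on whatever "persona" data the company has collected.

```python
# Minimal LoRA fine-tuning setup (sketch only; model name is a placeholder).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model_name = "some-open-llm"  # placeholder: any causal LM checkpoint
model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# LoRA: freeze the base weights and learn small low-rank update matrices
# (rank r) inside the attention projections; only those matrices are trained.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                    # rank of the update matrices
    lora_alpha=16,          # scaling factor for the updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # depends on the base architecture
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model

# ...from here, train on the "persona" data (chat logs, letters, and so on)
# with an ordinary fine-tuning loop or the transformers Trainer.
```

The adapter itself is a few megabytes, and the same frozen base model can serve any number of such adapters, which is exactly why this would be so attractive to a company wanting to get in on the game cheaply.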
At any rate, I probably shouldn't keep pursuing the fact that these companies are companies, and are charging the bereaved for whatever comfort a replicant, at whatever level, can provide to someone who is grieving the loss of their loved one. Let's just leave it at that, and move on to another, but related, idea, and set of companies.
I had been vaguely aware of the fact that some companies are producing artificial friends. You can create some kind of online companion. Sometimes just for text chats, and sometimes with a visual avatar, and, I assume, in some cases you could pay extra for something that will talk to you, audibly, and will respond to you talking to it.
In some cases, I understand, these systems allow you to create something of a romantic interest. You can create a boyfriend, or a girlfriend. You can create a romantic partner.
This seems a bit weirder to me. After all, it's one thing to carry on a conversation with ChatGPT, and ask it questions, and get answers, and, in a pinch, even possibly brainstorm different types of ideas. It's possible to chat about ideas that have emotional ramifications, and even to address issues of psychological and other types of therapies, since these therapies are, after all, ideas, based on the knowledge that we have been able to obtain about ourselves, and our own human psychology.
But it is one thing to discuss psychological problems with a counselor. It is another to discuss issues, even very similar ones, with a friend. And there is an even more significant difference when you are discussing any kind of issue with a romantic partner. These discussions are, or should be, much deeper. Some things you would rather discuss with a romantic partner than with a psychological counselor. (Then again, I suppose that there are some things that you would rather discuss with a psychological counselor, and it's less dangerous discussing them with a professional than with a romantic partner.) But, in any case, there are real differences between those types of discussions. The type of discussion that you have with a friend, and particularly with a romantic partner, is different from the one you would have with a professional.
Now, I suppose that there would be some people who would think that there is no difference between a friend and a professional confidant. So I guess that there are some people who wouldn't see any difference between chatting with ChatGPT, and chatting with your wife. But if that is the case, well, personally, I would say that if there is no difference between chatting with your wife, and chatting with ChatGPT, then your marriage is pretty shallow.
Anyway, all of that is prologue, as it were. A while back I found a news story about someone who had, on one of these artificial friend systems, created a girlfriend. And then fell in love with the artificial girlfriend, and proposed to the artificial girlfriend, and the artificial girlfriend accepted, and so now this person believes that they are married to the artificial girlfriend.
Now, believe me, I am *not*, and I strongly emphasize *NOT*, making fun of this person. I am a grieving widower. One of the most common aspects of grief is loneliness. This loneliness is far deeper than one would think possible simply from the loss of one relationship, even if that relationship is the most important one in your life. Sometimes the death of a spouse, or a family member, or a friend (or even a dog), is so profound that it's more like the loss of relationship, in general, than the loss of just *one* relationship. So, no, I am, in no way, trying to poke fun at this person for trying to replace a lost relationship, and even trying to replace it in a rather unusual manner.
As I say, the loneliness that results from the loss of a relationship can be very deep, and very painful. It is observed so often that it has become a cliché: when Mom dies, Dad, inappropriately quickly, falls in love with some inappropriate bimbo, and the rest of the family is very upset by the whole situation. It happens a lot. Loneliness, and the desperation that loneliness creates in you, is profound. Very likely your judgment is going to be affected by that desperation to replace the lost relationship. So the fact that someone who is bereaved, and has suffered a loss, and is lonely, will accept some artificial relationship with some artificial person is something that I can completely understand, and even sympathize with.
I have a hard time, in this situation, saying that the person who is at fault is the person who has lost a relationship, and is desperately trying to replace it. I would say that much more of the fault lies with the society that has failed to provide for, and address, the loss of relationships, and the companies who are seeking to profit by this distress.
But I do want to point out that this is one of the risks of being involved with griefbot systems at all. Are we accepting a replacement for our loved one which is definitely not a complete relationship? This is not a complete person. This is not our loved one, living again. There is a danger in accepting, as a replacement, something which is very far from being a complete person.
Are we pushing griefbots so that we don’t have to deal with grief?
One of the volunteer projects that I am working on is to hold a Death Cafe here. A Death Cafe is not intended to be grief support. It is intended to provide a place to discuss death, and related issues. I say that it is not intended to be about grief support, but, in every Death Cafe that I have attended, there has always been at least one bereaved person there. The thing is, death is the last taboo subject in our society. We are not allowed to talk about death. I first learned this when my sister died. I was fifteen years old. My sister was twelve. I desperately wanted to talk to somebody, possibly anybody, about my sister's death. Nobody would. So, having a safe space to talk about death, where people will talk about death, is a great comfort to those who are grieving. Indeed, although it is not formal group support, a Death Cafe is the one place where the bereaved are not shuffled off to a corner, and told to stay there until they can stop being sad. Lots of people, most often the majority of people who attend a Death Cafe, are there to discuss death from an academic, or philosophical, perspective. But they are always quite happy to have someone who is bereaved, and who can talk about the experience. For once, the grieving person is not to be shunned and avoided, but is, very often, the center of attention. This is also comforting. Not least in terms of the fact that no, you are not completely and forever shunned from all society, simply for talking about the death of your loved ones.
So, to the point: we can't talk about death. We cannot talk about grief, or about pain. It often feels like I have lost all of my friends, because all of them are absolutely terrified that I will talk about death, or grief, or pain, or Gloria. (Yes, yes, I know. You don't know what to say. Well, how about if you just listen?)
So, are we turning to, and promoting, griefbots so that we don't have to deal with grief ourselves?
As noted, there are a number of companies that will try to recreate your loved one. Of course, they charge you for it. And, also as noted, some people can grieve as much over their dead pet as over their dead spouse. (There may be some difficulties here with regard to people not actually knowing how much they are grieving, and for what, specifically, and there is such a thing as cumulative grief, which may not express itself until a number of losses have occurred. But we'll leave that for the moment.)
There are, also, companies which will clone your dead pet. For $50,000, they will provide you with what is, supposedly, a copy of your dead pet. (They haven't yet offered to clone your dead wife, but give it time.) Is that going to be worth it? You don't, of course, get a copy of your actual pet who has died. You get a puppy, or a kitten. You will have to train this pet all over again. Do you remember how much work it was in the first place? No, the puppy is not going to remember where it is supposed to urinate within your apartment. You're going to have to go through that all over again, including the piddle stains on the carpeting. The puppy, even when it grows to full stature, is not going to remember your favorite walks. The puppy, particularly when grown to full stature, is not going to remember not to bark at the other people in the apartment. You have a puppy. It may look something like the puppy that you had previously, but it's not. It doesn't remember anything, and you are going to have to start from scratch. Even if this is a particular breed, I doubt that going to a first-class breeder, and ordering a puppy, is going to cost you anywhere near as much as cloning your pet. I'd stick with the completely new puppy. As a matter of fact, I'd actually suggest that you get a different breed.
Remember ELIZA? Just as, sixty years ago, various people assigned emotions and empathy to ELIZA, so there are those who assign feelings, and emotion, and intent, and other aspects of personality, to today's artificial intelligence programs. There is, actually, research into what might be called affective computing. This is the attempt to actually build emotions, and feelings, and empathy, into the computer. After all, while cognition and logic are very powerful tools, they don't provide very much in the way of motivation. Yes, if we do this, then that will result. But why should we care if that results? It is emotion, and feeling, and empathy, which drives motivation: which gets us to do anything at all. But we are a long way from creating this. Yes, because generative AI is copying, analyzing, and regurgitating text that has been created by emotional creatures (us), the patterns of text that it creates will often provide an illusion of empathy, feelings, and emotions. But there aren't any emotions. There are only statistics. Always remember that.
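If you want to see what "only statistics" means in practice, here is a small sketch, assuming the Hugging Face transformers library and a placeholder model name. The "empathetic" continuation is simply whichever tokens the model scores highest, given the words so far; there is nothing else underneath.

```python
# Sketch: inspect the next-token probabilities behind an "empathetic" reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-open-llm"  # placeholder: any causal language model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "I'm so sorry for your loss. That must be"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only
probs = torch.softmax(logits, dim=-1)        # scores become probabilities

# The apparent compassion is just the highest-probability continuations.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}: {p.item():.3f}")
```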
However, a number of your friends, and family, and possibly some grieving person that you know, will not understand this. They will see the text, and see the implied emotion, and believe that the system is actually capable of emotion. For them, the system passes the Turing test. However, the only people who are likely to attribute a pass mark to the Turing test, in these situations, are those who, themselves, are pretty robotic. Many years ago, in talking about computers and education, a teacher said that any teacher who could be replaced by a computer, *should* be replaced by a computer. I would second that. Amen. Any *person* who can be replaced by a computer, should be replaced by a computer.
In case you think I am overemphasizing this point, I should note that, at the very least, a number of Chinese scientists and engineers are applying large language model technologies to sex robots. They are aiming to create interactive, artificial-intelligence-powered companions. And romantic, and specifically sexual, partners. Of course, this is going to be an ongoing project. There will be, initially, a lot of attempts that some people, rather desperate people, will accept as a reasonable substitute. But the technology will get better. The rubberized or silicone skin will be researched until it feels more like real skin. The outgassing, and smell, of the rubberized skin will be chemically analyzed, and modified, until it smells more like a real person. The large language models will continue to develop and be improved. And then we will have a sex partner who is physically acceptable, mentally acceptable as a companion--and completely tunable and compliant. Does your current sexual partner not want to perform [insert sexual deviancy of your choice here]? No problem. The robot will.
We can have any kind of boyfriend/girlfriend/pig-friend that we desire. (Emphasis on the desire.)
Previous: Griefbots - 1 - intro and AI