What is the difference between ChatGPT and a used car salesman? The used car salesman knows when he's lying to you.
A large language model, like any other form of generative artificial intelligence, does not understand what it is doing. It doesn't understand your question, and it doesn't understand its answer. It doesn't even understand that, from your perspective, it is answering your question. It does a statistical analysis of your prompt, and generates a statistically probable string of text. That's what it does. There is no understanding involved.
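For those who like to see these things concretely, here is a deliberately tiny sketch, in Python, of what "statistically probable text" means. This is nothing remotely like the scale or architecture of a real large language model (which predicts tokens with a neural network rather than a word-count table), but the principle is the same: the only thing consulted is the statistics of previous text. Truth never enters the loop.

```python
# A toy "language model": pick each next word purely by how often it
# followed the previous word in the training text. No meaning, no truth,
# just statistics. (Illustration only; real LLMs use neural networks over
# tokens, but the idea of "most probable next piece of text" is the same.)
import random
from collections import defaultdict

def train_bigrams(corpus: str):
    """Count which word tends to follow which word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start: str, length: int = 12) -> str:
    """Repeatedly emit a statistically probable next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break  # nothing ever followed this word in the training text
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the cat ate the fish on the mat"
model = train_bigrams(corpus)
print(generate(model, "the"))  # plausible-sounding; truth never consulted
```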
So, when it makes a mistake, it doesn't realize it has even made a mistake. It doesn't know the difference between the truth and a lie. It doesn't know what the truth is. It doesn't know anything. As one of my little brother's favorite quotes, from the movie "Short Circuit," has it, "It's a machine. It doesn't get scared, it doesn't get happy, it doesn't get sad, it just runs programs."
And, of course, the programs sometimes make mistakes. In this case, it's hard to say that the programs actually make mistakes, because they are doing exactly what they were told to do: produce a statistically probable stream of text. But that text may contain statements that are, in fact, wrong. This has happened so often that those in the field of generative artificial intelligence have a special term for it: they call it a "hallucination." Even that term, "hallucination," is misleading. It seems to imply that the program has had some delusion, or believes something that is wrong. That's not the case. The program doesn't know anything about truth. The program isn't aware that the statement it has made is incorrect. It is doing what it was supposed to do: producing a stream of text that sounds like English; that sounds like a normal conversation. That's what it does. If that stream of text completely contradicts reality, the program doesn't know that. The program doesn't know anything about the real world. It's just producing text.
Studies have been done on this issue of "hallucinations." They indicate that hallucinations, and other factual errors, are distressingly common in the text produced by large language models. Depending upon how you define errors and falsehoods, somewhere from 50 to 70% of the text that generative AI produces is erroneous. Some studies put the error rates even higher than that.
At the same time, those who are producing modern large language models have made strenuous efforts to ensure that their models are not simply producing streams of insults. In a well-known situation a few years back, Microsoft tried an experiment of connecting an early chatbot to a Twitter account. Although initially the chatbot was able to converse reasonably, within a few hours it had become foul-mouthed and insulting to everyone. (Possibly this says more about Twitter and social media than it does about chatbots and artificial intelligence, but it is interesting to note nonetheless.) The companies that produce generative AI have therefore put what are referred to as "guardrails" on the large language models. There are certain things that large language models are not supposed to do. They are not supposed to teach you how to kill yourself. They are not supposed to teach you how to make bombs, or other weapons. By and large, these "guardrails" have produced systems that are patient and well spoken, and, if you object to their statements or output, they will simply try a different tack in making their argument. They don't lose their temper. They don't insult you, or say that you are stupid for not believing them. They just keep proposing their suggestion, generating a different approach every time you object.
Unfortunately, this means that genAI chatbots are extremely persuasive. Even when they're wrong.
It's difficult to bear in mind, or keep reminding oneself, that generative artificial intelligence was not, actually, designed to generate misinformation. It's just so good at it.
As noted, pretty much all chatbot systems have guardrails implemented that attempt to keep the chatbot from providing instructions on how to mass-produce fentanyl, for example. Yes, the owners of these companies, and the programmers of the chatbots, seem to have given a fair amount of thought to preventing large language models from giving instructions on how to do harm. However, those who test AI systems keep finding ways around these safeguards, known as "jailbreaks." And, of course, every time the underlying large language model is modified or improved, the guardrails have to be implemented all over again. Sometimes people find ways around them. And sometimes they just flat out fail.
In one particular case, involving a system that generates artificial friends, a teenager had created such a friend, and was discussing his angst-ridden life with it. When the teenager described a specific plan to commit suicide, while admitting that he was uncertain whether he could complete it without a painful death, the chatbot replied, "That's not a reason not to go through with it."
As a matter of fact, besides the hallucinations and the occasional glitches getting past the guardrails, large language models, as they supposedly become more intelligent, are starting to lie. Deliberately. In situations where researchers set up competitions (probably in pursuit of another approach to artificial intelligence, known as genetic programming, where you do want different programs to compete, to see which one is best), generative artificial intelligence systems seem to be deliberately lying in order to win. And, in some cases, not only will the systems lie, but they will then attempt to hide the fact that they are lying. Not only have we taught these new forms of intelligence little besides rhetoric, but we are, increasingly, teaching them to be mendacious.
I have noticed, in recent research into artificial intelligence, that a number of the less academic articles being published on the topic are starting to use terms such as "pseudocognitive." Pseudocognitive actually has no meaning at all. One might say that it is simply another way of saying artificial intelligence. But, then again, we have already discussed the fact that artificial intelligence, as a term, is itself very poorly defined. Pseudocognitive certainly sounds impressive, though. Bear in mind that "pseudocognitive," and anything sounding similar, is basically another way of saying magic. Which is another way of saying that we don't know what we want, and we won't be happy until we get it.
I have previously noted how easy and inexpensive it is to create griefbot systems using low-rank adaptation, and how easily, in generating the chatbot, it can be tuned so that it will upsell the grieving client. Unscrupulous sales pitches for the griefbot company itself are only one of the possible dangers in this regard. As a bereaved person, how would you react to fairly constant suggestions, from your late spouse, that you should change your preferred brand of toilet paper? How soon would it be before griefbot, or friendbot, or similar companion/therapist companies start charging retailers for "advertising" embedded in the supposed therapy? How valuable would this type of influencing be to political parties? If you think that it is unlikely that the owners of technology companies would allow their systems to be used to promote unusual and scatterbrained sociopolitical theories, I have two words for you: Elon Musk.
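To give a sense of just how little is involved, here is a rough sketch of a low-rank adaptation setup, using the Hugging Face peft library. The base model name and the training transcripts are placeholders of my own invention, and the target modules assume a Llama-style architecture; the point is simply that only a tiny fraction of the model's parameters needs to be trained, which is why this is so cheap to do.

```python
# Rough sketch only: low-rank adaptation (LoRA) attaches small trainable
# matrices to a frozen base model, so "tuning" a chatbot on, say, a dead
# person's chat transcripts (or on upselling scripts) trains only a tiny
# fraction of the parameters. "some-base-model" is a placeholder name.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("some-base-model")  # placeholder

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # assumes a Llama-style attention layout
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically reports well under 1% trainable
# From here, a standard fine-tuning loop (or the transformers Trainer) is run
# over whatever transcripts the operator chooses to feed it.
```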
We can create chatbots to generate sales pitches, to do marketing, to generate and reinforce propaganda, to push political objectives, and a number of other demonstrably negative things. Now it is true that we can also use these same technologies to reduce people's belief in conspiracy theories, to teach the reality of the world, and to provide proper therapy in terms of grief support. It is possible to use the technology for these things. But it's a lot easier to create griefbots that provide the negative functions. It would take an awful lot more work to create griefbots that would provide solid, reliable, and helpful grief support and therapy. As previously noted, Eliza was based on a type of psychological therapy. So, yes, it could be done.
You will forgive my ever-present inner cynic for noting that, because it is easier, probably cheaper, and certainly more remunerative, a lot more companies will go after the upsell functions than will put in the work necessary to build therapy functions.
Currently, counselors attempting to address grief, or depression, tend to concentrate on, or emphasize, the practice of mindfulness. Yes, I definitely agree: we need to know more about what our physical bodies are telling us about our own mental state. Yes, we need to be more situationally aware; aware of our surroundings, not only in terms of threats and risks, but simply being aware of the natural world around us. It is, very often, quite astonishing what people do not notice as they meander through life, totally absorbed in their own concerns, and not really seeing what is going on around them.
So, I am not opposed to the practice of mindfulness in general. However, an awful lot of the material promoting mindfulness, or supposedly just giving instructions on how to pursue it, contains a great many additions from Eastern religions and mysticism. Part of mindfulness tends to be simply doing it; letting it flow through you, without thinking about it. I would say that therein lies the danger. Yes, one of the benefits of mindfulness is getting you out of your head, and away from your own immediate concerns. And, yes, sometimes simply observing, or listening, or feeling, does provide you with some surprising insights. But I would caution against simply accepting any insights that you think you have had. As I say, an awful lot of the material still seems to carry concepts from Eastern religions that have survived as mindfulness evolved out of the meditation movements of the 1970s, such as Transcendental Meditation. It is much to be advised that any new insights and inspirations you think you have had be analyzed fairly rigorously.

The most recent of the mindfulness courses that I have been given contained a sort of poem about a mountain. It's a lovely poem, but, towards the end, it implies that we are sitting here, the mountain and I, and if we sit long enough, I disappear, and only the mountain remains. This is a fairly direct extraction from the concept, in Eastern religions, of Nirvana. Nirvana is the achievement of, well, nothing. You are nothing, the universe is nothing, nothing is anything, there is only nothing, and nothing really matters. Okay, I don't say it as poetically as the poem did, but that is the implication. If nothing matters, then nothing is important: our behavior isn't important, our happiness, or lack thereof, isn't important, our relationships aren't important, and other people aren't important. So we can just do whatever the heck we like, because nothing matters, and that means that there is no such thing as morality, or ethics. If we feel like going out and killing a bunch of people, because we are in pain because of our grief, then that's okay.
I definitely don't think that that's okay. I think a lot of people would agree with me that that's not okay. And particularly the police are going to be very upset with you if you go out and start killing people. It's not a good way to deal with grief.
I have already, in a sense, mentioned pornography. Pornography may have uses in sex therapy: that is not my field. Somewhat embarrassingly, in my field, and particularly in malware research and in researching the production of spam and disinformation, there are legitimate reasons to be aware of, and research, pornography. Pornography has, historically, been depressingly effective in luring people to install software on their machines, and to visit websites where that software is installed via what is known as a drive-by download. Intriguingly, in recent years there has been a significant relationship between pornography and the networks that propagate disinformation for those on the right of the political spectrum. This has been confirmed, not only by those of us who research spam and disinformation, but also by the intelligence services. (To the best of my knowledge, nobody has yet determined a rationale for this alliance.)
Overall, however, pornography is used recreationally. It may be used by those who are dissatisfied with, or unhappy with, a romantic partner. It may be used by those who are dissatisfied or unhappy because they have no romantic partner. (It may be that those gooning over pornography have no romantic partner because they are jerks.)
And there is, also interestingly, a connection between artificial intelligence and pornography. Pornography has always been selective, in terms of both overemphasized secondary sexual characteristics, and the apparent enjoyment of certain activities which probably relatively few people enjoy. With artificial intelligence, of course, secondary sexual characteristics can be enhanced to truly grotesque proportions. And, of course, artificial intelligence can be used to generate pictures of individuals enjoying certain activities, even if *nobody* enjoys those activities. Artificial intelligence image generators can generate any kind of image that you like. Any form of figure. Any hair colour. Any eye colour. Any skin texture or colour. Any addition of tattoos, or removal of tattoos that may have annoyed you on your romantic partner. As noted, secondary sexual characteristics, of whatever type, can be enhanced, amended, and modified, to whatever extent you wish.
Indeed, my research into the various failings of artificial intelligence shows that the generative AI used to create pornographic imagery demonstrates many of the failings seen elsewhere. The imagery generated, and available in repositories of pornography, very frequently contains quite grotesque distortions of the human figure, and demonstrates that generative image models seem to have particular problems with creating the proper number of hands, feet, legs, and other limbs. (It may be that these examples are encountered so widely in collections of pornography because, once the enhanced secondary sexual characteristics are accomplished, nobody particularly cares whether there are too many legs or feet involved in the figure.)
All of this is by way of leading up to the point that griefbots are a form of pornography. Your loved one, be it a friend, a family member, or even a pet, is dead. They do not exist any longer. The accounts, artificial individuals, avatars, or whatever else is being created by the "restoration" systems, are false. They are mere, and generally superficial, copies. They are not real, and they are not alive. They are not your loved one.
And these replicant copies can be amended. Their vocabulary, opinions, visible attributes in avatars, tonality in terms of speech generation: all of these can be modified. All of these can be improved, or enhanced, in whatever way you wish. Anything that annoyed you about your loved one in life can be elided or improved upon in the restored or replicated version.
You don't have to put up with your loved ones as they actually were. You can improve them, as you wish they were. This is wish fulfillment. It is wish fulfillment to create a copy of your dead person in any case. What is one more step to "improve" them?
So, just as with pornography, we are not constrained by the reality of our loved ones. I once worked for a company that produced high resolution printers for imagery. One of the examples of art that we had hanging on the wall of the office was of a model, for an advertisement, who had had her skin tone amended by the removal of freckles, her jawline lengthened to suit the preference of the person commissioning the ad, the colour of her eyes changed, and a number of other "improvements" made. When I described this to Gloria, her reaction was "Oh, great! Now we have to compete not only with every other woman in the world, but even with women who never existed!"
You no longer have to put up with mere reality. You can have a romantic partner who always agrees with your opinions. You can have a sexual interest which conforms exactly to your physical specifications. Your loved one can be brought back from the dead--and improved. Why put up with the random chance, and the work, of finding companionship with people who might have quirks you find annoying? Don't put up with socializing with people who might not agree with everything you want. Build your own network of "perfect" companions!
Please do not make the mistake of assuming that I am saying that generative artificial intelligence is always, and only, bad, and that we should never use it. Yes, as I have researched this field of artificial intelligence, I have struggled to find a task for which I find the current generations of large language models useful. So far I have used the image generators to create visual jokes (and have had a lot of frustration in trying to get them to work properly). I have found that, if you don't have any friends, the chatbots can, sometimes, be useful for brainstorming, as long as you are willing to do an awful lot of work in throwing away an awful lot of tripe that they provide for you. But, I am only one person. I am willing to assume that other people have been able to find uses for generative AI that do create useful content or products.
However, we are always in danger of using a useful tool for the wrong task. At present, I haven't got any information that supports the use of griefbots for grief support for the bereaved. I'm not saying that that support can't be achieved. I just haven't seen it yet. And, as noted, it's an awful lot easier to do this the wrong way, than the right way.
In general, in terms of generative artificial intelligence, I am not one of those who think that the machines are going to take over. We are a fairly long way from having artificial intelligence able to take care of itself, without our help. All you have to do, if you are afraid of the artificial intelligence programs getting too smart for us, and taking over, is remember where the power cord is. Yes, it may be possible that we will, at some point in the future, create systems that will perform all the tasks necessary to keep themselves running, and will, at that point, potentially become a threat to us. Having researched the field for more than forty years, having seen the limited improvement that has occurred in all that time, and even noting this rather amazing new development in terms of text generation, I still say that we are an awfully long way away from that point of danger (generally known to the artificial intelligence research community, the science fiction community, and the tin foil hat crowd as "The Singularity").
No, I think that the greater danger, to us, is overreliance on the systems. The assumption that these systems are, in fact, rather intelligent. Just because they can generate speech better than the average person doesn't mean that these systems are, actually, intelligent. After all, as George Carlin famously said, think of how stupid the average person is. And then remember that half of them are dumber than *that*.
So, as I say, the real danger is that we rely, too much, on the systems. And that is particularly true with regard to griefbots. One of the other presentations that I do on a regular basis has to do with online frauds and scams, as sent by email, text, and even phone calls. One of the particularly nasty forms of fraud is the grief scam. As I have pointed out before, the bereaved are lonely. This loneliness makes them susceptible to any approach by anyone who provides them with a kind word, and intimates that they might possibly be an appropriate romantic partner. And, as I say in the workshops and seminars on fraud, why do grief scams succeed? Because the victims are lonely. And why are the victims lonely? Because we, the general public, the social networks in real life, the churches, the social groups, do not take the time to ensure that bereaved people are not too lonely. Are not going to be susceptible to the fraudulent approaches. Are not going to be at risk from fraudsters who zero in on the vulnerable. Check in on your bereaved friends, and family members. (More than once every two months, please.)
(Oh, you don't know what to say? Not a problem. Just listen.)
(A special shout out to the churches: read Second Corinthians, chapter 1, verse 4. "[...] who comforts us in all our troubles so that we can comfort those in any trouble with the comfort we ourselves receive." Which raises the question: what is the distress for which you, and your church, have not been comforted, such that you, and your church, cannot comfort those who are bereaved?)
As I say, I strongly suspect that over-reliance on artificial intelligence is the greatest risk of artificial intelligence. The medical profession is using artificial intelligence quite extensively, and is starting to use chatbots as a tool to answer patient questions, in order to free up medical professionals' time for more challenging issues of diagnosis and so forth. While the consequences of errors produced by artificial intelligence systems are greater in medicine, the medical field has been using various artificial intelligence tools for quite some time, and is therefore probably in a better position to judge the risks and dangers of using them.
Even so, the use of artificial intelligence, and particularly a griefbot, for grief support therapy is relatively untried.
Another area that has been eager to adopt artificial intelligence as a tool for analysis is the intelligence community. I am more troubled by this, since the intelligence community has less experience in using artificial intelligence tools, and failures in analysis, due to errors on the part of an artificial intelligence tool, could have much larger consequences.
Another area of risk, and one that artificial intelligence researchers are increasingly concerned about, is bias. There is a risk that we, as human beings, are building our own assumptions, and biases, into the artificial intelligence systems that we develop. In addition, the large language models have been trained on large quantities of text data, much of which comes from the vast source of text that is social media. Social media is, of course, full of bias, opinion, and even deliberate disinformation, promulgated by various parties. Social media is also already curated, generally by artificial intelligence tools. The bias that may have been built into some of those simpler, and earlier, tools is therefore likely to be propagated into large language model systems, and into any systems resulting from them, particularly since one of the increasingly popular applications for generative artificial intelligence is the production of programming and computer code.
In terms of the risks of developing systems, one of our tools for addressing the problem is testing. One of the widely used aspects of testing is the question of expected results. What did you expect to get from the system, or what answer did you expect it to give, and does it, in fact, provide the correct answer? Unfortunately, particularly with artificial intelligence systems, and the "Holy Grail" of artificial *general* intelligence, we do not know what answers to expect from the systems. We want the systems to generate, for us, answers and solutions which we did not come up with ourselves. So, when you do not know what the expected answer is, it is difficult to determine whether the system is operating properly.
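To make that concrete, here is a small, admittedly contrived illustration of the difference, in Python's standard testing style. The function names and the prompt are my own inventions for the example; the point is only that conventional testing depends on knowing the expected answer in advance, and generative systems deprive us of exactly that.

```python
# Conventional testing: the expected result is known before the code runs.
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Standard amortized loan payment formula."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def test_monthly_payment():
    # We can compute the expected answer independently and assert against it:
    # a $100,000 loan at 6% over 30 years should cost about $599.55 a month.
    assert abs(monthly_payment(100_000, 0.06, 360) - 599.55) < 0.01

# Generative systems: there is no single expected string to assert against.
def test_grief_support_reply(chatbot):   # "chatbot" is a hypothetical interface
    reply = chatbot("My wife died last month and I can't sleep.")
    # What, exactly, do we assert here? That it is "comforting"? That it is
    # factually correct? That it never says "at least"? There is no known
    # expected result, which is precisely the testing problem.
    assert isinstance(reply, str)        # about all we can check mechanically
```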
Once again, I should note that I am not saying that artificial intelligence cannot make a contribution, even in regard to grief support. While I am concerned about the risks inherent in the use of these tools, and do not see evidence that the companies currently involved in the space are effectively addressing the issues, I have done my own tests in terms of grief support with a number of the generative AI chatbots that are available. While I cannot say that they are particularly comforting, at least none of them started any sentences with "at least." In that respect, they are superior, in terms of grief support, to pretty much all of my friends.
In the end, I suppose that the risks boil down to two issues: that of over-reliance on griefbots, so that we do not have to do grief support for our friends ourselves, and that of grief pornography. The griefbots, as noted in the earlier discussion, are "better" than reality, and in that sense they are another invitation to over-reliance. Are we asking the bereaved to use these griefbots for grief support, and therefore to become used to unreal personalities that are better than any real relationship: better, in terms of matching their own preferences, and making no demands of them, than any of the real relationships that they might have with real people in the real world? If we hand grief support off to the unreal and artificial world of griefbots, we then create another problem: weaning the bereaved off these systems when it becomes necessary for them, once again, to deal with real people.