So, OK, we have introduced the joke: what is the difference between ChatGPT and a used car salesman? The answer is that the used car salesman knows when he is lying to you. As a matter of fact, the used car salesman knows what a lie is, and that there is such a thing as the truth. ChatGPT doesn't. (I suppose we have a while to go before we even get there, though.)
And there is also the note that calling the misinformation that generative artificial intelligence produces a "hallucination" is problematic. "Hallucination" is probably the wrong term to use; however, it is well established in the industry right now, so I doubt that I'm going to win that battle. (Pick your battles.)
I do want to recommend that you try out some of the chatbots. The following all provide chatbots for free, and I would suggest that you stick with the free versions and not get into the paid versions unless you know of something specific that is going to benefit you or your business.
You might also want to check out the piece on "frictionless" conversation when talking with chatbots. Notice the very odd style and character of the conversations that you will have with them. That same style shows up in scams and frauds, often very early in the process, so learning to recognize it can save you quite a bit of trouble and money.
LLMs
https://x.com/i/grok (you might want to be extra careful with this one)
The hallucinations, or misinformation, produced by generative artificial intelligence and large language models tend to be plausible. This is only reasonable, since the text these systems generate is based on discussions, in books or on the Internet, that were intended to sound plausible and convincing regardless of whether or not they were actually true.
Interestingly, asking a large language model to explain the steps of reasoning behind an answer it has already given you generally produces better quality and more accurate answers. Seemingly it forces more processing of the problem.
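(For the technically inclined: here is a minimal sketch of what that kind of follow-up prompt looks like using the OpenAI Python client. The model name and the prompts are illustrative placeholders I've chosen for the example, not recommendations.)

```python
# A minimal sketch of "explain your reasoning" follow-up prompting,
# using the OpenAI Python client. Model name and prompts are
# illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "user", "content": "What year did the French Revolution begin?"}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
answer = first.choices[0].message.content

# Ask the model to walk through the reasoning behind the answer it
# has already given; this often surfaces (or corrects) errors.
history += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Explain, step by step, how you arrived at that answer."},
]
second = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(second.choices[0].message.content)
```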
One of the shortcuts that artificial intelligence providers have discovered is that you don't need to retrain an entire large language model in order to get useful, or at least acceptable, output from a chatbot. Using a process called low-rank adaptation, or LoRA, an existing model can be tuned for a specific type of problem or a specific topic of discussion by training only a small additional set of parameters while the original model's weights stay frozen. The resulting adaptation is tiny compared to the full model, so these specialized tools are much cheaper to create and to run, requiring far less processing capability and far less electrical power than building or retraining a full model.
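(Again for the technically inclined: a minimal sketch of what a LoRA tuning setup looks like, using Hugging Face's peft library. The model name and hyperparameters here are illustrative assumptions on my part, not a recipe.)

```python
# A minimal sketch of low-rank adaptation (LoRA) with Hugging Face's
# peft library. Model name and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in model
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# LoRA freezes the original weights and trains only small low-rank
# matrices injected into selected layers.
config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the update
    target_modules=["c_attn"],  # GPT-2's attention projection layer
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)

# Only a tiny fraction of parameters is trainable, which is why
# LoRA tuning is so much cheaper than full retraining.
model.print_trainable_parameters()
```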
Unfortunately, while this process can generate useful tools, it can also be used for more nefarious purposes. The LoRA process makes creating a new generative artificial intelligence system easier, and therefore cheaper, and so a number of less scrupulous businesses have been able to create supposedly artificially intelligent systems based on it.
Given that the process is cheaper and easier, a number of these systems are not as careful with the facts. As one possible example, the artificial intelligence chatbot on the X system, known as Grok, has frequently been found to promote extreme right-wing conspiracy theories. A related tool has fewer guardrails than other systems and was, for a brief time, widely used to remove the clothing from images of clothed women and thereby create deepfake pornography.
As with studies of misinformation and disinformation themselves, studies of hallucinations in artificial intelligence systems have disturbing results. A study from Purdue University noted that 52% of ChatGPT's answers to programming questions were incorrect, that 77% were much more verbose than they needed to be, and that 78% of all answers exhibited some degree of inconsistency even when no factual errors were present. ChatGPT's polite language, articulate, textbook-style answers, and apparent comprehensiveness all contributed to participants overlooking the misinformation in its responses.
Large language models are also starting to lie deliberately in competitions, and they are getting better at lying, and lying more frequently. In one set of simple test scenarios, GPT-4 exhibited deceptive behavior 99.16% of the time.
They weren’t designed to generate disinformation, but so many factors make it almost seem that they were. They’re *really* good at it. This is to be expected. In classical Greek philosophy the major categories were Metaphysics, which is the study of reality; Epistemology, which is the study of knowledge and how certain we are of what we know; Ethics, the study of morality; and Rhetoric. We haven't taught artificial intelligence metaphysics or epistemology, and, unless you count guardrails as a very simplistic form of deontological ethics, we haven't taught them ethics either.
What we have done by feeding the large language models and generative artificial intelligence masses of undifferentiated text is taught them how people argue. We have taught the systems rhetoric. Rhetoric is the art of convincing. It is intended to produce plausible communications rather than to ensure that those communications are correct. We have, in reality, taught our artificial intelligence systems how to be really, really good at generating propaganda.
AI topic and series
Introduction and ToC: https://fibrecookery.blogspot.com/2026/01/ai-000-intro-table-of-contents.html
Next: TBA