Saturday, September 14, 2024

Listening (4)

He kept insisting that he *was* listening, and he *wanted* to hear, and I was finding it very odd that, even in the midst of insisting he was listening, he obviously wasn't.

And in trying to parse out what it was that proved he wasn't listening, I realized he was like ELIZA.  ELIZA is an extremely simplistic attempt at artificial intelligence (now roughly six decades old), which appears to converse by ignoring most of the words it is given and picking up only certain keywords.  So, it discards any detail or nuance.

But, whereas ELIZA uses the keywords to prompt the other side of the conversation to "Tell me more about [keyword]," he uses them as prompts to his own version of generative AI.  Gen/AI uses keywords to generate a stream of text which is glib and plausible, even if it is wrong.  It is completely unconcerned with meaning, so it never knows when it is wrong.  He, basically, is always confident that he is correct, and it never occurs to him that his (often partial) idea may not be what the other party means.  As long as he is able to generate a stream of plausible words (with toxic positivity guardrails), he is doing his job.
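(The keyword trick is almost trivially easy to sketch.  Below is a minimal illustration in Python of the mechanism just described.  It is not Weizenbaum's actual program, which used ranked keywords and reassembly rules; the stopword list and function names here are invented purely for the example.)

    # A minimal sketch of the keyword-picking mechanism described above.
    # Not Weizenbaum's ELIZA; the stopword list is invented for illustration.

    STOPWORDS = {
        "i", "you", "the", "a", "an", "and", "but", "to", "of", "that",
        "it", "is", "was", "am", "my", "me", "very", "really", "about",
    }

    def pick_keywords(sentence: str) -> list[str]:
        """Discard the common words; whatever remains counts as a keyword."""
        words = [w.strip(".,!?").lower() for w in sentence.split()]
        return [w for w in words if w and w not in STOPWORDS]

    def respond(sentence: str, used: set[str]) -> str:
        """Prompt on the first keyword not yet explored; keep the rest in reserve."""
        for keyword in pick_keywords(sentence):
            if keyword not in used:
                used.add(keyword)
                return f"Tell me more about {keyword}."
        return "Please, go on."

    used: set[str] = set()
    print(respond("I was really upset about my manager yesterday", used))
    # -> Tell me more about upset.

Notice that everything which makes the original statement specific (the feelings, the context, the relationship) has already been thrown away before the reply is generated.  That is exactly the loss of detail and nuance mentioned above.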

It has been said that when we try to teach machines how to learn, it turns out that they don't, and we do.  I am aware that some of the pointed failures of large language models and generative AI have prompted those interested in neurology to question the neural network models which have informed both artificial intelligence and neurobiology for at least the past four decades.  The success of neural networks in some areas has prompted a fairly firm belief in the truth of the model.  In fact, there is relatively little actual evidence for the model, other than the fact that the black box that is the brain does seem, in very simple cases, to operate similarly to the output of neural networks.  Science, however, is in the business of testing a hypothesis against all cases, and even a single failure may indicate that the hypothesis is flawed.  Sometimes fatally so.

But there is a possibility that we can learn from even our flawed attempts to understand the human brain and psychology.  The errors and hallucinations that we are now seeing from gen/AI may point out corresponding errors in our models of psychology.

I have studied listening for more than five decades now.  I have attempted to put it into practice.  I can't say that my early practice and experiments were particularly good, but, even back then, they definitely provided benefits.  I can also say that ELIZA itself, even in those early days, provided me with some valuable lessons, and tools, in listening.  When I realized how very simple ELIZA was, and yet how effective, I adopted that simple practice of picking out keywords and encouraging the respondent to tell me more about one or more of them.  I also learned to identify multiple keywords, and to keep some in reserve for when a particular line of prompting proved unfruitful.

But that was early days, and simple stuff.  This recent experience, coupled with the enormous interest in generative AI, has provided new opportunities.  In the same way that we are trying to debug generative AI models by identifying and examining the errors they make, we should be able to identify our own errors in the practice of listening, and of other relational and psychological tools.  These tools are not yet a science.  Or, rather, the use of these tools is not yet a science.  It is still very much an art, and there are practitioners who are good at using the tools, such as listening, and those who, despite great professed interest in them, are still unable to use them effectively.  If I had a nickel for every person who told me that they were a good listener, and then, immediately and very profoundly, demonstrated that they had no skill in listening at all, I would be an extremely rich man.

So it may be, probably much to the dismay of the individuals, venture capitalists, and corporations who are pouring billions of dollars into generative AI, that this is, once again, a matter of finding out that the machines don't learn, and we do.  It may be that the greatest benefit of generative AI is in teaching us where some of our models of psychology are incorrect, and in prompting other, more useful tools, or better instruction in the use of existing tools, such as listening and active listening.
