Wednesday, January 28, 2026

AI - 0.04 - intro - who

So, why me?  Well, for one thing, I was asked.  I am a teacher, so I know how to design courses and material to provide what people need to know, rather than just a whole bunch of random facts that might be related to the topic.  Also, I'm a writer, so I know how to write.

I am old, and therefore crotchety and curmudgeonly.  In addition, I am bereaved, and a depressive.  That means that I am an unhappy person, and therefore unlikely to be swayed by any promotional puff pieces by those who want to promote the artificial intelligence industry.  I test things.  To destruction, if necessary.  I have no problem with pointing out problems.

However, I also know what I'm talking about.  I have looked at at least one version of the programming code for ELIZA.  I have studied functional languages, the programming languages used to create expert systems.  I know about neural nets, and the weaknesses that that model of the brain has.  I know about a number of the problems in setting up programs for genetic programming.  While I am not an expert in the field, I know the different approaches to artificial intelligence, and that artificial intelligence is not a singular thing.

I have been learning, programming, supporting, testing, teaching, troubleshooting, securing, and researching computers, communications, and information technology for over five decades.  I have taught about the field on six continents.  I was on the Internet before it was called the Internet, when only about a thousand people were on it.  I understand the field very deeply, and can take a box of transistors and build a working computer.  I understand the implications of the technology: what it can do, and what it cannot do.  Because I understand it at such a foundational level, I can understand the dangers and implications of a new technology, such as quantum computing, and generative artificial intelligence, very quickly.  I also understand people, social engineering, human factors engineering, and how people and technology interoperate.

Given the complexity of the hopes and fears that people have about artificial intelligence, quite apart from any objective realities of what the field actually does, or is, I suppose that my personal beliefs also come into this.

It certainly would be nice to have a reliable friend, who would never be exasperated at being asked to listen to, and supportively critique, our ideas, thoughts, beliefs, or opinions.  It would be nice to have someone who was smart enough to assist us with our work, but would not necessarily be a challenge, in terms of stealing our ideas and running away with them.  So, I understand the hopes that people have about artificial intelligence.  It would be nice to have someone, or something, who could reliably be counted upon to assist us with all kinds of mundane tasks that we don't want to have to bother with ourselves.

But I know what the realities are.  This hope has been around since ancient times, when one of the gods had a kind of mechanical owl as a friend or helper.  It has certainly been around ever since we had machines that would do some addition for us.  And, pretty much for exactly that long, the idea was that we would have some kind of artificial intelligence resulting from our computers, certainly within the next ten years.

We have believed that for eighty years now.

So, I am not holding my breath.  Someone once said about artificial intelligence that, when we try to make machines that learn, it turns out that they don't, and we do.  So, yes, the attempt to create artificial intelligence has taught us an awful lot, and continues to teach us an awful lot.  Sometimes more about psychology than about computers.

There are also a great many fears about artificial intelligence.  There are always those who are afraid of anything that is not us, and they are, very often, terrified of the possibility that the machines will rise up and kill us.  We have created many works of fiction, both books and movies, that express this fear.  I think that this particular fear is just as unlikely as the possibility that, within the next ten years, we will have helpful and reliable artificial friends readily available to us.

At the moment, what I see as the greatest risk and danger to us, from artificial intelligence, is that, in our desperation for reliable artificial helpers, we will come to rely on imperfect, unreliable, and just plain bad tools that the artificial intelligence industry chooses to foist upon us.  We are already seeing AI slop flooding social media, wasting our time and giving us neither entertainment nor education in return.  I fear that we will see the same type of production infiltrating all aspects of our lives, crowding out and depriving us of thought, consideration, value, and actual fact.

At any rate, I have been asked to help warn you, all of you, about what the real risks are, what you might reasonably be able to expect, and what you probably should never expect.

Oh, you guys want a bio?  Recently, when I was doing a presentation on AI, the group wanted one, too.  So I thought it appropriate to ask the chatbots to do that for me.  This is a compilation of what they came up with:

Robert Slade is renowned, with a career spanning several decades, has made significant contributions to the field of cybersecurity, authoring numerous books and papers, with a solid foundation for his expertise, is influential and his publications have served as essential resources for both novices and seasoned professionals, gives engaging presentations with an ability to demystify complex security concepts making him a sought-after speaker and educator, with a career marked by significant achievements and a commitment to advancing the field of information security, his work has been instrumental in shaping the understanding of digital threats and has left an indelible mark on the information security landscape.  His legacy serves as a testament to the importance of dedication, expertise, and innovation in the ever-evolving landscape of information security.

You will note that none of these claims are really verifiable, and so they are also basically unchallengeable.  This is the kind of quality and content that genAI currently produces.  We'll go into details elsewhere.



AI topic and series
