Tuesday, December 30, 2025

Loneliness, capability models, and genAI

Today I ran across a quote, from Carl Jung, to the effect that loneliness does not come from having no people around you, but from being unable to communicate the things that seem important to you.


How am I supposed to address this in a town where almost nobody understands any of those terms?

Maturity Models and genAI

I've just had a notification from LinkedIn exhorting me to keep up with cybersecurity and artificial intelligence frameworks and maturity models.

I assume that when they say artificial intelligence, they really mean generative artificial intelligence, since the world, at large, seems to have forgotten the many other approaches to artificial intelligence, such as expert systems, game theory, and pattern recognition.  (Computers, at least until we get quantum computers, seem to be particularly bad at pattern recognition.  I tend to tell people that this is because computers have no natural predators.)

I have no problems with frameworks.  I have been teaching about cybersecurity frameworks for a quarter of a century now.  Since I've been teaching about them, I have also had to explore, in considerable depth, frameworks in regard to capital risk (from the finance industry), business analysis breakdown frameworks, checklist security frameworks, cyclical business improvement and enhancement frameworks, and a number of others.  I've got a specialty presentation on the topic for conferences.  I include maturity models.  In a fair amount of detail.  It's an important model within the field of frameworks.  It not only tells you where you are, but also, in strategic terms, what type of steps to take next in improving your overall business operations.

But a capability and maturity model?  For a technology, and even an industry, that didn't even exist four years ago?

Okay, let's set aside, for a moment, the fact that the entire industry is only four years old.  We needn't argue about that.  I've got a much stronger case to make that this is a really stupid idea.

Capability maturity models, in general, have five steps.  (Yes, I know, there are some people who add a sixth step, and sometimes even a seventh, usually in between the existing steps.)  But let's just stick with the basic maturity model.

The first step is usually "chaotic."  Some models now call this first step "initial," rather than "chaotic," since nobody thinks that they work in a chaotic industry.  But, let's face it: when a new industry starts up, it's chaos.  You really don't know what you're doing.  If you are really lucky, you succeed, in that you make enough revenue, or you have patient enough investors, to continue on until you find out what you are doing, and how to make enough revenue to survive, by doing it.  That's chaotic.  It doesn't mean that you aren't working hard.  It doesn't mean that you don't have at least some idea of what you are doing, and the technology, or the business model, that you are working with.  But, that's just the nature of a startup.  You don't have a really good idea of what you are doing.  You don't have a really good idea of what the market is.  You may have some idea of what your customers are like, but you don't have an awful lot of hard information about that.  It's basically chaos.

That's basically where generative artificial intelligence is right now.

Building upon the idea of neural networks, which has been around for eighty years (and was deeply flawed even to begin with), about a dozen companies have been able to build large language models.  These LLMs have been able to pass the Turing test.  If you're chatting with a chatbot, you're not really sure whether you're chatting with a chatbot, or some really boring person who happens to be able to call up dictionary entries really quickly.  We know enough about neural networks, and Markov chain analysis, and Bayesian analysis, to have a very rough idea of how to build these models, and how they operate.  But we still don't really know how they are coming up with what they're coming up with.  We haven't been able to figure out how to get them not to simply make stuff up, and tell us wildly wrong "facts."  We haven't been able, sufficiently reliably, to tell them not to tell us stuff that's really, really dangerous.  We try to put guard rails on them, but we keep on getting surprised by how often they present us with particularly dangerous text, in ways we never expected.
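(If you want the flavour of the statistical idea underneath all of this, here is a minimal sketch, in Python, of a first-order Markov chain: pick the next word based purely on the statistics of the text you have already seen.  The training text and the choices here are all made up for illustration; real large language models use neural networks and attention, and are enormously more sophisticated, but the "next token from statistics" flavour is similar.)

    import random
    from collections import defaultdict

    # A toy "training corpus"; real models munch on trillions of words.
    training_text = "the cat sat on the mat and the cat slept on the mat"

    # Record, for each word, every word that ever followed it.
    chain = defaultdict(list)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        chain[current_word].append(next_word)

    # Generate text by repeatedly sampling a statistically likely next word.
    word = "the"
    output = [word]
    for _ in range(8):
        word = random.choice(chain[word])
        output.append(word)
    print(" ".join(output))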

We don't know what we're doing.  Not really.  So it's chaotic.

We don't really know what we're doing.  So, we don't really know, quite yet, how to make money off of what we're doing.  Yes, some businesses have been able to find specific niches where the currently available functions of large language models can be rented, and then packaged, to provide useful help in some specific fields.  Some companies that are on the edges of this idea of genAI are able to rent LLM capabilities from the few companies that have built large language models, and have been able to find particular tasks, which they can then perform for businesses, and get enough revenue to survive.  And yes, through low-rank adaptation, either the major large language model companies, or some companies that are renting basic functions from them, are able to produce specialty generative AI functions, and make businesses out of them.  But the industry as a whole, overall, is still spending an awful lot more money building the large language models than the industry, as a whole, is making in revenue.  So we still don't know how generative artificial intelligence works, and we still haven't figured out how to make money from it.  It's chaotic.

But another point about capability maturity models is that the second step is "repeatable."  The initial step, chaotic, is where you don't know what you're doing.  The second step is when you know that you can do it again (even if you *still* don't know what you're doing).

And even the companies, the relatively few companies, who have actually built large language models from scratch, haven't done it again.

Oh yes, I know.  The companies that have made large language models keep on changing the version numbers.  And each version comes out with new features, or functions, and becomes a bit better than the one with the version number before it.

The thing is, you will notice that they still keep the same basic name for their product.  That's because, really, this is still the same basic large language model.  It's just that the company has thrown more hardware at it, and more memory storage, and possibly even built data centres in different locations, and shoveled in more, and more, and more data for the large language model to munch on, and extend its statistical database further and further.  Nobody has built another, and completely different, large language model, after they have built the first one.

In the first place, it's bloody expensive.  You have to build an enormous computer, with an enormous number of processing cores, and an enormous number of specialty statistical processing units, and enormous amounts of memory to store all of the data that your large language model is crunching on, and it requires enormous amounts of energy to run it all, and it requires enormous amounts of energy, and probably an awful lot of water, to take the waste heat away from your computers so that they don't fry themselves.

And you've now got competitors, nipping at your heels, and you can't waste time risking enormous amounts of money, even if you can get a lot of investors eager to give you that money, trying a new, and unproven, approach to building large language models, when you already have a large language model which is working, even if you don't know how well it's working.  So nobody is going to repeat all the work that they did in the first place, when they've got all this competition that they have to keep ahead of; when they have a large language model, which they really don't understand, and they are trying desperately to figure out what the large language model is doing, so that they can fix some of the bugs in it, and make it work better.  Even if they don't really know how it works.

Okay, yes, you can probably argue that the competitors are, in fact, repeating what you're doing.  Except that they don't know what *they're* doing, either.  All of these companies have the generative artificial intelligence tiger by the tail, and they aren't really in charge of it.  Not until they can figure out what the heck it is doing.

I'm not sure that that counts as the "repeatable" stage of a maturity model.

And the third stage is "documented."  At the "documented" stage, you definitely *do* have to understand what you're doing, so that you can document what you are doing.  And yes, all of the generative artificial intelligence companies are looking, as deeply as they can, as far as they can, into the large language model that they have produced, and are continuing, constantly, to enhance.  The thing is, while, yes, they are producing some documentation in this regard, it's definitely not the whole model that is completely documented.  Yes, they are starting to find out some interesting things about the large language models.  They are starting to find out, by analyzing the statistical model that the large language models are producing, what might be useful, and what might be creating problems.  But nobody's got a really good handle on this.  (The way you can tell that people really don't have a good handle on this, is that the large language model companies are spending so much money, all over the world, lobbying governments to try and prevent the governments from creating regulations to regulate generative artificial intelligence.  If the genAI companies knew what they were doing, they would have some ideas on what kind of regulations are helpful, what kind of regulations would help make the industry safer, and what kinds of business and revenue such regulations might affect.  But they don't actually know what they're doing, and therefore they are terrified that the governments might [probably accidentally] cut off a profitable revenue stream, or even just a potentially useful function for generative artificial intelligence.)

So, no.  You can't have an artificial intelligence capability maturity model.  Yet.  Because we don't know what generative artificial intelligence is.  Yet.

Monday, December 29, 2025

HCW - 1.02 - logic - gates

Okay, yes, I know some of you think it's been quite a wait, but at last we are getting close to how computers actually work.

Computers work by logic.

Some of you may be thinking that I have promised to tell you how computers work, and that I have broken my promise.  I tell you that computers work by logic, and you have possibly heard that before: that computers work by logic, or that computers are logic, or that computers run by logic circuits.  And you think that just by saying "logic" I'm not explaining anything to you.

Well, that's maybe because you don't know what logic actually is.

Actually, logic does come in a variety of forms, but very few people actually use pure logic, at least not on a daily basis.  To be quite honest, most people probably never do use pure logic, or don't realize it if they are using it.  All logic does come from the same place, and it does work out in much the same way.  But you have to get a lot of complications out of the way before you can actually see what pure logic is.

The type of logic that computers actually use is best illustrated in the form of truth tables.

(I should note that a lot of this next bit is going to be illustrated.  That is, the best way that I can explain it is with a diagram.  And, generally speaking, I don't do graphics.  So I have gone out to the Internet and looked for images and diagrams that I think illustrate what I am talking about, but I haven't been too terribly careful about anybody's intellectual property rights.  So the illustrations and diagrams that I am using here very possibly belong to somebody else, and I can only apologize to anyone who feels that I should have asked permission before using them to illustrate the points that I'm trying to make.)

When you want a computer to do something, you have to build a circuit to perform that function.  In order to design and build a circuit to perform that function, you first have to figure out a truth table for the function.  A truth table is a table that lets you specify what output you would expect from this function, for a given input, or set of inputs.

Now, I am not going to go through all the possible basic logic circuits.  For the purposes of this initial illustration of how logic works, I am going to show you a circuit for an AND gate.  (We'll go into more detail about different logic circuits a little bit later.)

So we are building the logic circuit and the truth table for an AND gate.  We want a circuit that will tell us that both the inputs for this AND gate are true.  And, actually, since everything, in terms of computers, is either one or zero, we can say that one is true, and zero is false.  Or, since we are dealing with electricity, we can say that on is one or true, and that off is zero or false.  And the gate that we want to make will tell us if the first input AND the second input are both true, or on, or one.  (For the purposes of this discussion of logic gates, on, one, and true are all interchangeable.)

So the first thing that we do is to build the truth table such that if the first input is false, and the second input is false, the output is false.  If the first input is true, and the second input is false, the output is false.  If the first input is false, and the second input is true, the output is also false.  But if the first input is true, AND the second input is also true, then the output is true. This is the truth table for an AND logic gate.
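If it helps, here is that same truth table worked out in a few lines of Python.  (This is just a sketch for illustration: the and_gate() function here is made up, not part of any actual circuit design tool.)

    # A made-up and_gate() function: the output is one only when
    # both inputs are one.
    def and_gate(a, b):
        return 1 if (a == 1 and b == 1) else 0

    # Print every row of the truth table.
    print("A B | A AND B")
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "|", and_gate(a, b))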


The illustration here shows that truth table.  It also shows the symbol for having an AND gate in a circuit.  It also shows how you use electrical components, including a couple of transistors, in order to build an actual AND gate.  This is how you build a computer.  You figure out the functions that you need, and the truth tables for those functions, and using electronic components you actually build that function.

As the functions get more complicated, you build on the simpler functions, and use the basic logic gates to create the functions that you need.  In order to build somewhat more complicated functions, you can use something called Boolean algebra.  This is a useful tool for getting from the very simple logic gate circuits to more complicated functions that you may need.
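As one small example of Boolean algebra at work: De Morgan's law says that NOT (A AND B) is exactly the same function as (NOT A) OR (NOT B).  You can verify that kind of identity by brute force, simply checking every row of the truth table, which is what this little Python sketch does (the gate functions are, again, made up for illustration).

    # Basic gates as tiny functions (for illustration only).
    def NOT(a): return 1 - a
    def AND(a, b): return a & b
    def OR(a, b): return a | b

    # De Morgan's law: NOT (A AND B) == (NOT A) OR (NOT B), for every input.
    for a in (0, 1):
        for b in (0, 1):
            assert NOT(AND(a, b)) == OR(NOT(a), NOT(b))
    print("De Morgan's law checks out for all input combinations")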

Okay, now you've got truth tables, logic gates, and Boolean algebra.  I'm not going to go very much further in telling you how to design circuits.  If you want to design more complicated circuits, these are the terms that you should plug into a search engine, and the internet will provide you with illustrations, diagrams, courses, writeups, and even videos to teach you how to use these and, if you wish, how to start designing and building your own computer.

Once you have figured out the truth table for the function that you want, then you have to build some kind of physical device that will actually do it.

As previously mentioned, the device that you build doesn't require that it be electronic.  It doesn't have to use diodes or transistors.  There are many ways to make a device that will accomplish the function that you have figured out from your truth table.  Charles Babbage did it with gears, for the most part.  There are many ways to do it.  And, just like with the Boolean algebra, you can plug a query such as "making an AND gate circuit" into a search engine, and get an awful lot of results.  You might want to specify strings and pulleys, or columns of water, or some other variation, in order to see how to make a particular logic circuit using physical devices.  (You also may wish to request image results for your search, so that you can quickly, and at a glance, see which ones might be most informative to you.)  An awful lot of these results that you get from the search engine might be videos, and a lot of people find those very helpful.

I will give you one example here.  I am going to stick with the AND gate, since we have already talked about it, and given you an image of how it looks in an electronic circuit.  For this example, I found some pictures of levers and sticks, in a setup that will give you an AND logic gate result.


Our first image, here, gives you the kind of resting state.  Both of the inputs are zero, or off, or false.  You will note that, over to the right, the result is zero, or off, or false.


In the next image, one of the inputs has been set to one, or true.  Because the other input is still zero, the result is zero.


The third image is pretty similar, except that it is the *other* input that has been set to one.  The result is still zero.


In our final image, both inputs have been set to one.  And, over to the right, you will note that the result is, finally, one.

So, here we have a physical device that will give us the results that we want for an AND gate, or an AND logic circuit.  At the moment this may not seem too terribly useful.  We need to get more and different types of logic circuits that we can use, and start using those very basic logic circuits to build more functional logic circuits that will do what we want them to do.

So we will start with some other basic logic circuits which will do some other logic for us.  Here is a table of the symbols for those logic circuits, and I'll explain what they do.


The first one might not seem to be too terribly useful.  It is known, in this table, simply as YES.  It has one input, and one output.  When you put in an input of zero, the output is zero.  When you put in an input of one, the output is one.  You could be forgiven for thinking that, particularly in an electronic circuit, this particular logic gate could simply be replaced by a piece of wire.

The second one is possibly a little bit more useful.  It is known as NOT.  If you put in an input of zero, the output is one.  If you put in an input of one, the output is zero.  This is also known as an inverter.  It's mostly useful in forming some of the other basic logic circuits which we will be covering in a moment.

We've already covered the AND gate, so I won't repeat that again.

The OR gate will give us an output of one if either of the inputs is one.  So if both inputs are zero, the output is zero.  If the first input is one, then the output is one.  The same thing happens for the second input.  And if both inputs are one then the output is one.

But there is another version of OR, known as XOR, or exclusive OR.  For this gate, if either of the inputs is one, then the output is one.  But if *both* inputs are one, then the output is zero.  The operation of this gate may be a bit weird, but it is very useful.  It is particularly useful in cryptography, which is why I love this particular logic circuit so much.
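In fact, here is a tiny sketch of why cryptographers love XOR: XORing with the same key twice gives you back exactly what you started with, so one and the same operation both encrypts and decrypts.  (The message and key here are made up, and this toy is *not* a secure cipher on its own.)

    # XOR each byte of the message with the matching byte of the key.
    message = b"ATTACK AT DAWN"
    key     = b"SECRETSECRETSE"  # a made-up key, as long as the message

    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))

    print(ciphertext)  # unreadable gibberish
    print(recovered)   # b'ATTACK AT DAWN' again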

The remaining logic gates, NOR, XNOR, and NAND, we have, in a sense, already covered.  All three of them are simply existing gates, that we have already described, with an inverter added to the end of them.  It's simply the opposite of the AND, OR, and XOR for which we have already described the truth tables.  The inverter simply switches the result from the gate to the opposite of what it would be ordinarily.  So, having an OR gate, and then adding a NOT after it, is exactly equivalent to a NOR gate.

The NAND gate may seem to be just a variation on this theme, but it's actually pretty special.  You can use NAND gates to build any other logic circuit.  The NAND gate is, therefore, a sort of a universal component.  Whatever you want to build, you can build it with a sufficient number of NAND gates.  It is possible to build an entire computer, or an entire integrated circuit, using only NAND functions in a variety of ways.  Here is a diagram that illustrates this.  You will notice that all of the basic logic gates that are used here are all, and only, NAND gates, yet we have been able to build AND, OR, and NOR circuits, using combinations of NAND gates.
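If you would rather see that in code than in a diagram, here is a minimal Python sketch: one nand() function, and every other basic gate built out of nothing but calls to it.  (The function names are made up for illustration.)

    # The one and only building block.
    def nand(a, b):
        return 0 if (a == 1 and b == 1) else 1

    # Everything else, built from NAND alone.
    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def nor_(a, b): return not_(or_(a, b))
    def xor_(a, b): return and_(or_(a, b), nand(a, b))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "| AND", and_(a, b), "OR", or_(a, b),
                  "NOR", nor_(a, b), "XOR", xor_(a, b))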




Sunday, December 28, 2025

HCW - 0.3 - Introduction - history

I suppose we should start with a little history.  Not much about this history will explain to you actually how current computers work, but the idea of information, and information processing, and the different ways that people have developed, over time, to deal with this, hopefully might expand your idea of what information is a little bit.  And some of the aspects of the history of information processing technology are just kind of interesting in their own right.

I suppose that we should really start with the invention of writing.  Written language allowed people to record information, and data, and their ideas, in some external form.  This is important, particularly when you think of the development of computer memory.  Written language is a form of memory that is not subject to distraction or simply forgetting something that you once knew.

Unfortunately, we don't know an awful lot about how written language developed.  We can infer a few things, given different types of written notation that we can date to different prehistoric times.  I say "prehistoric" because, by definition, history is *written* history.  Prehistoric simply means that it was before the invention of writing.  So, we don't have an awful lot of information, historical information, about the invention of writing, since before they invented writing, they had no way of writing down the history of the development of the idea of writing.

There is a sort of a subset of ideas here, and that is the invention of writing for numbers.  Most people who have a written language also have notation for numbers.  This is probably important, since most people seem to find numbers much more frightening than words.  (I tend to be an exception here: I frequently say that I'm not much of a one for remembering names or faces, but I *never* forget a number.)  But most people either are interested in numbers, or are positively afraid of the idea of arithmetic and processing numbers, so written notation for numbers tends to come fairly early in the development of information processing.

One of the earliest forms of calculation goes back thousands of years.  It's simply a stick, with some grooves or pits in it, into which presumably somebody would place a pebble when they were counting, or tallying, objects or entities that they wanted to total up.  So, obviously our fear of numbers goes way back.

Now, I'm going to jump ahead a few thousand years.  Somebody, and I'm sorry that I can't remember his name at the moment, invented a device for adding.  It was simply two sticks, that could be placed side by side, kind of like a slide rule, or a couple of rulers.  By moving the sticks with respect to each other, you could do simple addition.  Okay, it's not really terrific in terms of fancy calculators, but it is a help, and an aid, in terms of doing arithmetic calculations.

Then we come to a guy named John Napier.

Napier invented something very useful, in terms of doing arithmetic, called logarithms.  Mathematicians know that Napier invented logarithms, but, if you have ever heard of Napier you probably know him as the inventor of Napier's Bones.  Napier's Bones were very similar to the sticks that somebody else had invented in order to help with adding, except that Napier's Bones would help you with multiplication.  Napier's Bones tended to come in sets, but what they would really do is to give you a sort of a lookup table to decide what one number, multiplied by another number, would result in.

And this is one of those stories about the development of information processing technology, that's kind of interesting in its own right.  Napier's Bones were kind of clunky.  You might have a box of sticks, and, when you wanted to multiply one number by another number, you would rummage through the box, and pick up the right stick, and then look up the other number that you wanted in the multiplication, and get the result.  They don't do anything in terms of fractions, and, depending on the number of bones you have in your set, you're probably limited in terms of how many digits you can have, and the size of the numbers that you can multiply.  The more useful tool in this regard is the slide rule that I have mentioned earlier.

The thing is, Napier developed both Napier's Bones, this clumsy multiplying device, and logarithms.  And logarithms are what we use to make the slide rules that we used, quite extensively, before pocket calculators came along.  But Napier apparently didn't have the technology to develop the actual slide rule itself.  He's the guy who gave us the idea that was the basis of making slide rules, but he never made a slide rule himself.  Just the clunky boxes of sticks.
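The principle that makes a slide rule work can be stated in one line of arithmetic: adding the logarithms of two numbers multiplies the numbers themselves.  A slide rule just does that addition mechanically, by sliding two logarithmic scales along each other.  Here is a quick Python illustration of the idea (the numbers are, of course, arbitrary):

    import math

    a, b = 6.0, 7.0

    # Adding logarithms is the same as multiplying the numbers:
    # log(a) + log(b) == log(a * b)
    product = math.exp(math.log(a) + math.log(b))
    print(product)  # 42.0, give or take floating-point rounding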

I'm rather fond of slide rules.  I used them a lot when I was starting out studying physics.  All of us in the physics department, and all of the engineering students, had slide rules, and used them very extensively.  And I was still at the university, studying physics, when the first, very basic, four function electronic calculators came out.  We did of course have adding machines, and some of the adding machines could even do multiplication, but adding machines were big, and heavy, and you couldn't carry them around in your pocket or backpack like you could a slide rule.  So, when the portable electronic calculators came out, a lot of people were very excited.  A friend of mine, also studying physics, was very proud when he was able to get one because, while the standard price at the time for a calculator was $108, he had managed to get one for only $100.  He was so proud of the fact that he had this calculator that at one point he challenged me to a race, with one of our complicated physics calculations: him with his calculator, and me with my slide rule.  I actually won the race.  Not only did I get the answer faster; I got the correct answer, while he, because of a mistake he made in one step of the process, got a wildly wrong answer.  And I also had enough time to pay attention to what he was doing, so I knew exactly where he had made the mistake that gave him such a massively incorrect answer.

A little while after Napier, there was a fellow named Blaise Pascal.  Like many scientists of the time, he was primarily a philosopher.  But he also did some science, and he developed a device which is, basically, one of the first calculators.  It was known as the Pascaline.  It was, of course, mechanical, and used gears and dials.  But it did do calculations for you, and so helped with dealing with numbers and calculations.  Mechanical calculators of similar types developed, got better, and had more functions added to them over the years.


A couple of hundred years later, along came Charles Babbage.  At the time, in England, the Royal Navy had been very useful to the British for a number of years.  Everyone was well convinced of the importance of having a Navy.  And the Navy knew that the biggest advantage that you could have, in terms of sea battles, was to have bigger cannons on your ships.  Bigger cannons meant that you could fire at ships farther away.  Therefore, you could sail up to a ship, just inside the range of your cannons, and, if they didn't have guns that were as large as yours, you could fire at them all day, and eventually damage their ship, and there wasn't anything that they could do about it, because everything that they fired at you would fall short.

The thing is that when you fire cannonballs at objects on a flat surface, like the surface of the ocean, the projectiles travel in ballistic arcs.  It's difficult to calculate exactly how much you have to raise the cannon, and angle it up, in order to fire cannonballs and projectiles into the air, such that they fall down where the ship you want to damage is located.  There's a lot of calculation involved.  Fortunately, for a given gun, charge, and elevation, the projectile always follows the same path.  This means that you can do all the calculations ahead of time, and record them in a book, and then, in a battle at sea, the gun crews just look up the range to the target in the book, and see how much angle they have to put on the cannon.
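To give the flavour of what went into those tables, here is a toy Python version of one such calculation, using ideal, drag-free ballistics.  (Real gunnery tables had to account for air resistance, wind, powder charges, and much more, so take this as a sketch of the idea, not actual naval gunnery.)

    import math

    g = 9.81  # gravitational acceleration, metres per second squared

    def elevation_for_range(muzzle_velocity, target_range):
        # In a vacuum, range R = v**2 * sin(2 * theta) / g, so the
        # elevation angle is theta = 0.5 * asin(g * R / v**2).
        x = g * target_range / muzzle_velocity ** 2
        if x > 1:
            return None  # the target is beyond this gun's maximum range
        return math.degrees(0.5 * math.asin(x))

    # One line of the table: a 400 m/s gun firing at a ship 10 km away.
    print(elevation_for_range(400.0, 10_000.0))  # roughly 18.9 degrees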

So, the Royal Navy was doing a lot of calculations, on dry land, in preparation for sea battles, and writing up these tables that could be sent with the ships at sea.  And they were doing an awful lot of calculations, and hiring a lot of people who were good at calculations, and it was fairly time-consuming and expensive.  Not to mention occasionally inaccurate, which, in a battle at sea, could ruin your whole day.  So Charles Babbage suggested to the Navy that they should pay him to develop a really good calculator, which would allow you to calculate the ballistic arcs for all the different cannons that they had, and do it very accurately, and do it very reliably.  He created a design for what he called the Difference Engine, which was, essentially, a very sophisticated calculator.

He also designed another machine, which he called the Analytical Engine.  The Analytical Engine was not simply a calculator.  It was, in fact, a fully functional computer.  It would be able to do pretty much anything that a modern computer would be able to do.  And it did it with gears and levers.

This is an important point to note.  Computers do not need to be electronic.  You can make logic circuits with sticks and levers, or with pulleys and string, or with columns of water, or with gears.  There is nothing magical about electronics and transistors.  The reason that we use electronics and transistors for the computers that we are using nowadays is simply because they tend to work faster than gears, and running them takes somewhat less energy.  Well, actually, an awful lot less energy.  This may sound surprising in terms of all the complaints these days that data centers are using too much power.  The thing is, if we were running mechanical computers, like the Analytical Engine, we would be using much, much more energy to run those computers.

Oh, there is one more reason that we use electronics in our computers today.  That is that we have developed the technology to make transistors to the extent that we can package them in much, much smaller sizes than we could if we were still using gears and levers.

(In terms of the amount of energy that mechanical computers would use, I am very much amused by the first sketch in the Monty Python movie "The Meaning of Life."  It shows an office with a bunch of people using mechanical calculators, and pulling a hand crank to get the answers.  At one point in the sketch the people who are working on the mechanical calculators actually transform into galley slaves, pulling on oars, and working terribly hard to do so, and I think that the point is quite apt.  Watch from about 25 seconds in to about a minute and a half in this clip https://www.youtube.com/watch?v=ecFBcpY9NHI )

The next stop in the historical journey that we want to make is in 1888.  This involves telephones, and telephones are going to come back again a bit later.

I suppose that I should tell the story first.  This goes way back.  1888, in terms of the telephone, was pretty primitive.  Now I'm not going to go into a lot of the technology of telephones, partly because the technology has changed immensely since then.  But, at that time, there weren't any automated telephone switches.  Actually, when telephones were first invented, if you wanted a telephone, you wanted *two* telephones.  Possibly one at home, and another one at your office, so that when you were home you could call the office.  That's all that you could call.  The two phones were connected to each other, and there was no switching involved.  (We're going to come back to the importance of switching.)

Later on, as more people got telephones, they realized that it was more efficient, and more reasonable, to have all of the telephones connected to a central office, and when you wanted to call somebody, you actually called the central office, and then the central office would connect you with the telephone of the person that you wanted to call.

So this was what was happening in, or slightly before, 1888.  A fellow called Strowger ran one of the two funeral parlours in town.  Strowger wasn't getting as much business as he thought he should get, and so he decided (and I don't know if there's actually any evidence of this) that somebody who worked at the telephone central office was related to someone at the other funeral parlour.  So, Strowger decided to put all of the operators at the telephone office out of business, by inventing an automated telephone switch.  And he did.


Okay, you may be thinking that this is a possibly interesting story, but what does it have to do with how computers work?  Well, it deals with computer technology in two ways.  One is that this is an early instance of data communication.  The second is that this is one of the first instances of automated control.

The early version of the Strowger switch simply had an electrical post, on your telephone, which you could tap with a pin that was electrically connected and close the circuit.  This would send a short signal down the line to the telephone office.  At the telephone office the Strowger switch would jump one position for every tap that you applied to the signaling line.  If you wanted to phone a subscriber who had telephone 361, you would tap three times, wait for a few seconds, and then tap six times, wait for a few seconds, and then tap once.  The switch would jump three positions in the first row, and then jump to the second row and move six positions in the second row, jump to the third row, and then move one position in the third row.  This would automatically connect the circuit that would connect you with subscriber 361.
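Here is a minimal sketch, in Python, of the logic that the stepping switch is performing: turning groups of taps, separated by pauses, into a subscriber number.  (The decode_taps() function and the tap counts are made up for illustration; real Strowger gear was, of course, electromechanical, not software.)

    def decode_taps(tap_groups):
        # Each group of taps steps the switch that many positions in one
        # row; the pause between groups moves on to the next row.
        return "".join(str(taps) for taps in tap_groups)

    # Three taps, pause, six taps, pause, one tap:
    print(decode_taps([3, 6, 1]))  # -> "361", connecting subscriber 361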

Later on this became more fully automated, and we got rotary dial telephones.  We used rotary dial telephones and Strowger type switching gear for almost a hundred years.

Our last stop in the history of computers is in the 1940s, during the Second World War.  The Germans had invented a new form of encryption.  (I *love* teaching about cryptography, but not much of it teaches us about how computers actually work, so I'm really going to try and resist the temptation, and not cover that here.)  Actually, it wasn't a completely new form.  The basic idea of this particular device for encryption was reasonably well known, and, in fact, manual devices that did this kind of encryption for soldiers in the field were used by both German forces and the allies during World War II.  However, the Germans had improved on the process, and made it more complicated to figure out the patterns that were being used, by adding electrically wired rotors, and a series of internal plugs that could be rearranged.  These machines were generally referred to as Enigma by the British.  The early versions that the German military was using used three rotors.  A later version used four, to make things more complicated, and less susceptible to decryption.  By the end of the war the devices being used on German Naval vessels, and regiments in the field, were using five rotors.  These devices were portable, but were roughly the size of a briefcase, although a trifle heavier.

A fellow named Alan Turing, working for the British, initially designed a device to help figure out what the settings of the plugs and the rotors were, building on an approach that Polish mathematicians had discovered, which made it easier to decipher the Enigma encryption.

However, the Germans had a different device, which was used for the German high command, and the very important communications which were being used in the development of strategy to pursue the war.  This device used twelve rotors, and was fiendishly difficult to try and decipher.  To attack it, the Bletchley Park team (with the engineering led by Tommy Flowers, rather than Turing) built another device, called Colossus.  This was electronic, and much closer to what we would think of today as a computer.  It still used motors to drive its paper tape, but it was reading the data electronically.  Also, because of the complexity of the encryption used by the German high command, it was programmable, rather than just being a purpose-built machine for a specific function.

So now we are much closer to the general computers that we have today, and, in fact, very shortly people started to build computers that were generalizable, and programmable, and would pretty much be recognizable as similar to what we would consider computers to be today.

We are also going to come back a bit later to Alan Turing.


Saturday, December 27, 2025

How Computers Work - 0.2 - Introduction - Why and Who

How Computers Work

So, I'm teaching about computers, to a bunch of seniors, and ask them what they would like to learn next, and one of them asks, "Can you teach us how computers work?"

Since I have been asking myself the same question (that is, how computers work) for over five decades, and I'm not completely certain that I know for sure; and since I have been teaching about how computers work for over forty years; this seems to be a non-trivial task.  Adding to that level of difficulty is the fact that I am teaching seniors, who are already having problems operating their iPhones, and that they all live in a town where extremely little teaching about computers goes on, and the task starts to seem almost impossible.

But I thought about it.  (Mostly at three in the morning, when I couldn't sleep.)  And I started to get a few ideas about what had been particularly useful for me to know, over the past fifty years, in terms of dealing with any type of computer system.  And the more I thought about it, the more I started to get excited about the possibilities here.

The thing is, there are an awful lot of courses, and seminars, and workshops, that promise to teach you how computers work.  And most of these seminars do not, in fact, teach you how computers work.  They may teach you how to use Microsoft Word, or they may teach you how to use a Web browser, or they may teach you how to use a search engine.  But they don't really teach you how computers work.

As a matter of fact, I have learned, over the past fifty years, that an awful lot of people who do use computers every day, and may even program computers every day, and may get paid an awful lot of money for what they do with computers, still don't actually know how computers work.  They know that computers *do* work, and can be very useful, and they have learned a number of tricks that other people don't know, which make them useful to other people, and worth very large salaries.  But most of them actually do not know how computers work.

I do.


This is the point in the presentation where the presenter or instructor makes all kinds of boastful comments about how important they are, and how much money they make, and the important job titles that they have held, and possibly even the important companies that they have worked for.  I hate this part of the program, but I'm going to do it anyway, to make a point.

I am not important.  I am world famous--amongst a vanishingly small percentage of the population.  It is a weird kind of fame; not enough to get you a good seat in a restaurant, but enough to surprise the heck out of your family every once in a while.  I have worked for Fortune 50 companies.  Yes, Fortune 50, not Fortune 500.  I have taught members of the NSA, the CIA, and the FBI.  I have literally taught rocket scientists.  I have a friend and colleague who was a civilian employee of the RCMP, and when the FBI's technical people couldn't get the information they wanted out of a hard drive, they sent it up to Ottawa to him.  I know a lot of the big "Names" in the information security field, and, more importantly, they know me.  Like I said, I am unimportant, and very few people know me, but if you don't know me, then that is an indication that you still have higher to climb in the field of information security.

For a quarter of a century I have facilitated seminars for people who want to get their professional certification in information security.  I have taught every level from kindergarten, to grade 12, to colleges and universities, to the post graduate level, and to commercial training for businesses.  In some of those seminars, I have had half a dozen candidates who have had twenty years' experience, not only in information security, but in particular and specific esoteric subject areas within information security.  And I have had to stand up in front of groups like that, eight hours a day, all week, and not look like an idiot.

I know that this sounds like boasting.  I am not trying to boast about this.  I am trying to make a point about what I have observed in doing this kind of stuff for this long.  That is: when I am teaching these very abstruse and highly technical topics, it is not the latest issue with cryptocurrency, or generative artificial intelligence, or lists of settings for firewalls, that is important.  What I have found to be most important, and most helpful for those people that I am teaching, is the basic concepts.  How computers work: from the ground up.

And these concepts, and principles, and basic foundational topics, are what I have tried to put into this course.

Computers run just about everything in this world.

You may not work directly on, or with, a computer.  If you do work with a computer, it's possible that you have someone, on call, to come and fix it for you, if it goes wrong.  You probably don't write programs telling the computer what you want it to do.  You, or, most likely, somebody else, probably just buys a computer, and a program, that, you are promised, will do something that will help you with your job.  And your job, so you think, may have nothing to do with computers.

Take farming, for example.  If you want to grow wheat, and feed thousands, and maybe even hundreds of thousands, of people, you just need to get miles of open land on the prairie, and plant a bunch of wheat seeds, and wait for it to grow, and then harvest the wheat once it's grown.

No computers involved, right?  Well, no.  In order to plant the wheat, across miles and miles of prairie, you have to have a tractor.  And, these days, the tractors pretty much all have computers.  As a matter of fact, it's quite possible to start up a number of tractors in the morning, and tell one where you want the seeds planted, and that tractor will tell the other tractors, and the whole bunch of them will take off, without you, trundling across the prairie, planting the wheat seeds.  And it's the computer that is allowing that to happen.

So, computers are often involved, even if you don't think they are.  Even if you are handcrafting furniture, and selling your lovely handcrafted furniture to people who want to get away from this modern technological society, and therefore want to buy your hand crafted furniture, which has not been touched by any kind of automated milling machine, and it's all very off-the-grid.  Except that, in order to make enough money selling your hand crafted furniture, when you spend such a large amount of your time actually handcrafting the furniture, you probably have to sell to a very niche market, and you probably have to do that online.  Well, maybe you have somebody else do the online part for you, but a computer is going to be involved there someplace.

I'm not saying that computers are innately good, or that the world is a better place because we have computers, and I, dealing extensively with information technology, and information security, still do my household accounts in a ledger book with paper pages.  People ask me why I don't use accounting software, and I reply that I deal with information security, and why would I ever trust computers?

And I'm not saying that you can't try your hardest to avoid computers.  Go ahead: it's not going to hurt my feelings.  The thing is, it's already really hard, and it's going to become even harder, to do anything significant in this world without a computer being involved someplace.  And I'm not saying that you have to learn how to program a computer, or how to fix a computer, or any of quite a number of technical topics that you may have no interest in, and I don't really see any reason why you should.

The thing is that computers are running the world.  Possibly you may want to argue that computers are messing up the world.  I am not going to argue that point with you.  But what I *am* saying is that computers are running the world, whether the people who use the computers are running the world well, or badly.  But not knowing how computers work means that you are at a disadvantage in trying to figure out whether a particular problem is because of a computer, or can be fixed by a computer.  If you know how a computer works, then you have a better understanding of what a computer can do, and what a computer *can't* do.  What computers do, and what computers don't do.  And when somebody comes to you and says that their computer, or their computer program, will do something that computers just can't do, well, if you understand how computers work, you know when they are lying to you.

(There is a joke in the information technology field that asks what the difference is between a computer salesman, and a used car salesman.  The answer is that a used car salesman knows when he is lying to you.  Most computer salesmen don't understand how computers work, either.  Therefore, most computer salesmen don't know when they are lying to you.)

I have been working with computers, and poking at them, and prying into them, and figuring out how they actually do work, for a long time now.  So obviously, I am not going to teach you absolutely everything that I have learned.  For example, I know how to take a box of transistors (or a box of diodes, come to that) and make a computer.  It wouldn't be a terribly good computer, but I could do it.  I am not going to teach you how to take a box of transistors and make your own computer.  But I am going to teach you, for example, how to use transistors to make logic circuits, and then how to get logic circuits to make circuitry that will do arithmetic, and will store information in memory, and a few things like that.  And once I've given you those pointers you can, yourself (if you are really interested), go to a search engine on the Web, and find write-ups, and courses, and YouTube videos, and all kinds of things that will teach you actually how to take a box of transistors and build a computer.  If you want to.  If you don't want to, at least you will know what is possible, and what isn't possible, in terms of how computers work.
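As a small taste of where this is going, here is a sketch, in Python, of a half adder: a little circuit that adds two one-bit numbers using nothing but an XOR gate and an AND gate.  (The function names are made up for illustration; we will get to the actual gates later in the course.)

    # Toy gate functions, standing in for real circuits.
    def xor_gate(a, b): return a ^ b
    def and_gate(a, b): return a & b

    def half_adder(a, b):
        # The sum bit is the XOR of the inputs; the carry bit is the AND.
        return xor_gate(a, b), and_gate(a, b)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} = carry {c}, sum {s}")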

And that will give you a better understanding of what it is possible to do with computers, and what it is *not* possible to do, and what issues might be wrong with a computer which is messing up whatever it is that you are trying to do.  That's all that I am trying to do in this course--get you on the right track, and give you a better understanding, when a computer won't do what you think it should do.


Thursday, December 25, 2025

The Thursday Murder Club

I am thoroughly enjoying The Thursday Murder Club series.  Most book series are, at best, merely decent, or else lose steam as they go along.  The Thursday Murder Club books just get better as they go along.  I'm up to the fourth, so far, and I'm already worried that, since there are currently only six, I'll run out of them soon.  I'm definitely going to have to see the movie, even if only to see how badly they butchered the first book.

Tuesday, December 23, 2025

D J

Your mother is astoundingly expert about children's equipment.  She also bakes tremendously delicious, and fantastically beautiful cakes.

Her mother, your grandmother, regularly cooks up meals that you could only get in the world's finest restaurants, *and* is also a terrific CFO, running companies that consist of many other companies, and knowing, to the penny, when to let a project or company go, rather than keeping on the prior course and hoping for the best.  (Which is what most people do.)

*Her* mother, your great-grandmother, was "just" a secretary.  But she was always the secretary to the CEO, or the Board, so that meant that she was really an underpaid manager, and she has a college, a mixed use tower, and the shape of the West Coast fishing fleet as her legacy.  And despite having no formal credentials, she understood management better than most actual managers.  On a meager secretary's salary, she raised, pretty much by herself, two beautiful and capable women.  She had an understanding of how children viewed the world that exceeded that of most educators.  She was also a unique and beautiful singer and soloist.

Her mother, your great-great-grandmother, was hardworking (mostly for the benefit of others), humble, self-effacing, and probably a literal saint.  She was much smarter than anyone ever gave her credit for (like her daughter), and got abstruse jokes when an entire roomful of people didn't.

Her mother, your great-great-great-grandmother, was the keeper of the family history, and had an endless fund of (true) funny stories about those who had gone before.

This is the beginning of your maternal line.  Regular DNA changes in each generation, but mitochondrial DNA descends, unchanged, in the maternal line.

Since mitochondria are considered to be the powerhouse of the cell, their strength is, literally, your strength ...

Sunday, December 21, 2025

Facebook famous

Andy Warhol famously said that, in the future, everyone would be famous for 15 minutes.

(He said this a while ago.)

Recently someone referred to me as "Facebook famous."

I figure that "Facebook famous" probably means that you are famous for approximately 1.5 seconds before the world is distracted by something else.

(Probably generated by AI ...)

Friday, December 19, 2025

No, I *don't* want Gemini to run my life, thanks all the same.

Recently, most of the systems that you might commonly use have been attempting to add artificial intelligence features to their products or services.  They have also been trying to ensure that as many users as possible use the genAI features.

Sometimes the artificial intelligence features are simply not apparent.  An upgrade will come out, and the artificial intelligence features are already added in.  The artificial intelligence features are often pushed, as a recommended add-on.  The companies producing these products and services will note that you can get a free trial, or that the basic service is free, and you might as well try it.

It probably comes as no particular surprise that I am not an unreserved fan of artificial intelligence, and particularly the most recent generative artificial intelligence services and products.  For the most part, I have been able to avoid getting connected, automatically, to the AI features.  For one thing, I tend not to use the defaults and products that everybody else does.  For example, although I use MS Windows, I have never purchased licenses for MS Word or MS Office, or the more recent variations on the Office suite.  I also tend to adjust settings myself, and not simply accept the defaults that are handed to me.  So a number of the generative AI products that are being pushed at us simply don't show up on my machine.

I also tend to know the different artificial intelligence services and what they are called.  So I know not to accept any come-ons for Copilot, or Grok, or Gemini, or Meta AI, even though I do use services and products from the companies that have created these large language model products.

However, a recent upgrade to one of my phones caught me by surprise.  The latest update for the phone, which runs on Android, may not have installed Gemini (it was already installed on the phone), but it enabled Gemini by default, and turned it on at startup.  As soon as I turned the phone on, Gemini was running.  Gemini was supposed to assist me in all kinds of activities that I did on my phone.

I didn't want to completely disable Gemini, or to uninstall it from the phone.  But neither did I want Gemini to interfere with what I was doing on my phone.  So I was rather annoyed at this arrogant presumption that I needed Gemini's help with my phone, whether I wanted it or not.

This was made all the more annoying by the fact that my ability to turn off my phone suddenly disappeared. The particular key press that I used to shut down my phone was disabled by the fact that Gemini now used this same key press as the function to call it up and start asking me what I wanted.  So I was no longer able to power off my phone.  It took a little bit of research, using my computer, to find out how I was supposed to shut down my phone, now that Gemini was in the way.

It was a few days later that I finally got around to more seriously researching this upgrade, and to figuring out how to eliminate this interference.  It took a bit of research, plus the fact that I know a bit more about technology than most people do.  The initial suggestions that I found were simply to disable Gemini, which, as previously noted, I didn't want to do.  But, yes, I figured out how to stop Gemini from demanding to assist me with absolutely everything on my phone, and I also figured out how to redefine the key press, so that it did, once again, shut down the phone as I wanted it to.

But I still consider it rather arrogant of Google, and I am not particularly pleased with this choice of forcing a service on their customers, whether they want it or not.

Wednesday, December 10, 2025

Review of "House of David"

Making Biblical epics suddenly seems to have become fashionable again.  "The Chosen" project has been going on for a few years now.  There have been two animated movies recently, "The King of Kings" (for some reason structured as Charles Dickens telling his son about Jesus' life), and "Light of the World" (Jesus' ministry told from the perspective of a strangely pre-teen John the Evangelist).

And now we have "House of David."

First of all, even the title is pretty misleading.  David was not only a king, he was the founder of a dynasty.  His son was Solomon, famous for his wisdom.  (His grandson, Solomon's son, was not exactly exemplary, and a number of other kings in his dynasty were less than stellar monarchs.)  Two of the gospels in the New Testament go to considerable length to demonstrate that Jesus was, in fact, a descendant of David.

Actually, in a number of senses, the House of David starts even before David was born.  There was, for instance, Ruth, who was David's great-grandmother.  (Ruth is also probably my favorite book in the entire Bible.  But I may be biased about that.)

And the movie (mini-series?), "House of David," really only covers the story of David and Goliath, and a little bit leading up to that.  The movie doesn't even really cover David's reign.  So, the "House of David" movie ends even before the House of David, as a dynasty, even begins.  (OK, it claims that it's season one, so if we get a season two we may go further.  But, at the turgid pace that it moves, we may have to wait for season five before we even get to David's *first* coronation.)

But in another sense, the movie "House of David" is about so, so much more than the House of David.  The movie script is about so much more than can be attested to by scripture.  Did you know that David was a bastard?  Neither did I!  And I have read the Bible, cover to cover, at least twenty times.

The thing is that, like "The Chosen," and the two animated movies, "House of David" has decided to give us background.  And backstories.  And explanations.  And all kinds of details that cannot be verified from scripture.  In fact these details aren't even reasonable inferences from what we do know about scripture, or the historical and social facts that we know about the times.  For the most part, these additional details are pretty much purely speculation.  To put it plainly, they're just fiction.  They're made up.  The way that the scripts for these movies and series are written is what Gloria's family would have called sewing a coat around a button.  You take a fact, usually a small fact, from the Bible, and then you embroider.  Heavily.  The first episode of "The Chosen," for example, relies on half of a verse in the New Testament.  From this half of a verse they have created an entire backstory for Mary Magdalene.  They have also created a backstory for Matthew (or Levi), the tax collector, and a backstory for the centurion who sent to Jesus asking for his servant to be healed.  None of these backstories have any support from scripture.

Islam, Judaism, and Christianity are all known as "people of the book."  The books are slightly different, and the specific definition of what "the people of the book" might mean is probably not precisely agreed upon by anyone in any of the three religions.  But all of them would agree that you mess with the established canon of scripture at your peril.  You even have to be careful when you do interpretations.  Adding things to the canon, or taking things away from the canon, is dangerous.  In fact, pretty much the final words of the final book of the Christian Bible make the point that if anybody adds to the book, all of the plagues described in it are going to be added to them.  Adding to scripture is dangerous.

This insistence on the canon is something that I, as an information security maven, understand all too well.  One of the three central pillars of information security is integrity.  It's why we ask people to sign written contracts, and why we have witnesses attesting to the signatures of those signing contracts, wills, and marriage certificates.  It's why we digitally sign documents when they are electronic.  It's why we have the business proverb that pale ink is better than the strongest memory.  Ensuring that the canonical document, or collection of documents, is unchanged is how you ensure that you keep the intent of the original.  It's the reason that, in translating the Bible into English, we look at many different documents, and even at tiny fragments of documents or pages, that represent the oldest samples available to us.  It is all too easy, when you are translating a document, or moving it to a new medium, to start reinterpreting it toward some reading that you would prefer, because the original is not quite convenient for you.
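
For the curious, here is a minimal sketch, in Python, of the integrity idea: a cryptographic hash of the canonical text changes completely if even one character changes.  (The text here is just an illustrative stand-in, not any particular manuscript.)

    import hashlib

    # A stand-in "canonical" text, and a copy with one character altered.
    canon = "In the beginning was the Word"
    altered = "In the beginning was the Ward"

    # Hash both versions.  Any change at all, however small,
    # produces a completely different digest.
    digest_canon = hashlib.sha256(canon.encode("utf-8")).hexdigest()
    digest_altered = hashlib.sha256(altered.encode("utf-8")).hexdigest()

    print(digest_canon)
    print(digest_altered)
    print("unchanged" if digest_canon == digest_altered else "TAMPERED")

A digital signature is, in essence, a signed copy of such a digest: as long as the recomputed digest matches the signed one, you know the canonical document has not been changed.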

So, no, I can't say that I'm a really big fan of the "House of David" movie.

Tuesday, December 9, 2025

How Computers Work [From the Ground Up]

In the seniors' computer club, when I asked for new topics to cover in the new year, somebody asked me to address how computers work.

Since I've been asking myself the same question for over five decades, and I'm still quite sure that I don't fully understand how it all works, and since I've been teaching how computers work for at least four decades, I initially thought this might be a non-trivial task.  Remember that this is for a bunch of seniors, from a town where neither the high school nor the college has any computer courses aside from graphics for games.  So it can't require any technical sophistication, and it should be accessible to the general public, to business people, to students, even at the elementary grades, and to anybody who is interested in how the computers that run our world actually work.  I have seen lots of attempts, by various people, to explain how computers work, and mostly what they demonstrate is that the instructor really doesn't understand how computers work.  The results tend to be unilluminating, and generally pretty boring.

I once took a course on computer architecture with a bunch of doctoral students in computer science.  We were divided up into groups, and the groups gave presentations on different aspects of computer architecture, with the individual members of the groups covering particular topics.  My group was addressing the most fundamental aspects of computer architecture, and I was addressing the use of electrical circuits to create logic circuits, and why the seemingly most straightforward circuit was sometimes the wrong choice: complicated gates were often, counterintuitively, faster and more power-efficient than the obvious route.  After my presentation I started to get what I considered to be really strange questions, and at one point I got frustrated and burst out, "You do know that when you create an electrical circuit you have to have a source, and a sink, and a constant and continuous path between them, don't you?"  After the class in which our group had done our presentations, I was talking with the leader of our group, and I apologized for losing my temper, saying that I shouldn't have assumed that they didn't know such an obvious and basic fact.  "Oh no," she replied, "that was useful!  I didn't know that!"
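
To make the counterintuitive point concrete, here is a sketch in Python.  (The transistor counts are the standard static-CMOS figures; they are not specific to that course.)  In CMOS, the *inverting* gates are the cheap primitives, so the "obvious" AND gate actually costs more than the NAND it contains:

    # In static CMOS, a NAND gate needs 4 transistors and a
    # dedicated inverter needs 2.  A non-inverting AND gate has
    # to be built as NAND followed by NOT, so it costs 6
    # transistors and adds an extra gate delay: the
    # "straightforward" gate is the expensive one.

    def NAND(a, b): return 1 - (a & b)      # 4 transistors
    def NOT(a):     return 1 - a            # 2 transistors
    def AND(a, b):  return NOT(NAND(a, b))  # 4 + 2 = 6 transistors

    # Print the truth tables to confirm the built-up AND behaves.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "NAND:", NAND(a, b), "AND:", AND(a, b))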

I have learned a bit in all that time.  And, mulling it over, using all that I had learned over more than fifty years, I started to get a few ideas of how this might be done, and done effectively.

So, as an addition to my (generally failed) attempts to provide seminars and workshops to the churches in town, so that they could then provide presentations (such as security for seniors, the Jesus Film Festival, dealing with depression, grief resources, and public art in Port Alberni) that might draw in the unchurched, allow me to propose to you "How Computers Work [From the Ground Up]."  (I even have a sermon that might start it off.)

Computers run our lives.  We use computers for our work, pretty much regardless of what our work is.  We carry computers around in our pockets, pretty much all the time.  Computers handle our communications with each other, our social activities with each other, our reservations for restaurants, hotels, and airlines; computers mediate pretty much everything that goes on in our lives.  It would probably be a good idea to find out how they work.  Starting with how to build devices and circuits to do logic, and how to use logic to do calculations and to hold information in memory, a series of possibly eight to ten one-hour presentations covers how computers work, how they do what they do, what they can do, and what they can't do.  This isn't just how to use common computer tools.  This is a basic understanding that lets you know what tools can be built with computers, and what can't.  And how they operate, right from the ground up.  When this series is over, you may not be able to take a box of transistors and build your own computer, but you will have enough information to go and learn how to do that if you want to.
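
As a small taste of the "logic to do calculations" step, here is a sketch (in Python, with gates simulated as functions, purely for illustration) of a half adder, the circuit at the bottom of all binary arithmetic:

    # A half adder adds two one-bit numbers using just two gates:
    # XOR produces the sum bit, AND produces the carry bit.

    def XOR(a, b): return a ^ b
    def AND(a, b): return a & b

    def half_adder(a, b):
        return XOR(a, b), AND(a, b)   # (sum, carry)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} = carry {c}, sum {s}")

Chain enough of these together (with full adders handling the carries) and you can add numbers of any size, which is most of what a computer's arithmetic unit does.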

This is still a work in progress, but topics include:
 - In the beginning ...
 - COMPUTERS ARE NOT MAGIC!!!
 - Logic
 - Memory (see the sketch just after this list)
 - Computers do two things ...
 - Programs
 - Data Communications
 - Networks (and how to do *everything* *MUCH* cheaper!)
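
And, for the Memory topic, the core trick is feedback: two gates holding each other in place.  Here is a sketch, again with Python standing in for the hardware, of an SR latch built from two cross-coupled NOR gates:

    # An SR (set/reset) latch: two NOR gates, each one's output
    # feeding the other's input.  The feedback loop is what lets
    # pure logic *remember* a bit after the inputs go away.

    def NOR(a, b): return 1 - (a | b)

    def sr_latch(s, r, q=0, qn=1):
        for _ in range(4):     # re-evaluate until the loop settles
            q = NOR(r, qn)
            qn = NOR(s, q)
        return q, qn

    q, qn = sr_latch(s=1, r=0)              # set: store a 1
    print(q)                                # -> 1
    q, qn = sr_latch(s=0, r=0, q=q, qn=qn)  # hold: inputs gone, bit stays
    print(q)                                # -> 1
    q, qn = sr_latch(s=0, r=1, q=q, qn=qn)  # reset: store a 0
    print(q)                                # -> 0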

This series is going to be technical, in the sense that I'm providing technical information and explanation, but it's not going to be technically demanding.  There aren't any prerequisites.  I'm going to begin with the supposition that the audience is not going to know anything about how computers work.


Saturday, December 6, 2025

Review of "Sense, Sensibility, and Snowmen"

Hallmark has done a movie called "Sense, Sensibility, and Snowmen."

I don't know whether you would call it an "adaptation" or a "based on."  Mainly it seems to be "based on" merely the names of some of the characters in "Sense and Sensibility."

I really like the books of Jane Austen, but I am not simply and automatically opposed to adaptations.  I think "Clueless" is actually rather underrated as an adaptation of "Emma."  I don't think "Bridget Jones's Diary" is as good an adaptation as "Bride and Prejudice."  I don't think anyone expected "Pride and Prejudice and Zombies" to be a very faithful adaptation, and I didn't mind it.

Marianne and "Ella" are sisters.  They are in business together.  Both of their parents are dead.  Marianne is the sensible one, and probably the elder.  Ella is the sensitive one, and a bit of a flake.  We never meet Willoughby: he has dumped Marianne before the movie even begins, and she doesn't care much.  Edward Ferris is firmly established as head of the family company, his father is still alive, and his mother is in favour of the match with Ella.  Brandon is his cousin and close friend.  There is a Lucy Steele, but her character is so irrelevant that one wonders why.  Most of the rest of the characters in the book don't exist in the movie, and there is pretty much no travel.  Marianne never gets her heart broken, and Ella never has to keep any secrets.

"Sense, Sensibility and Snowmen" is far from the worst movie Hallmark has ever produced.  (But there's a lot of competition.)

Thursday, December 4, 2025

Interested?

He is in the helping professions.  As a matter of fact, not only in the profession that he is in right now, but also in some side hustles that he has had previously.

And we were talking about training, and education, and degrees that we have held.  Well, for the most part, of course, about the training that *he* has had, and the degrees that *he* holds, and the fact that a not particularly distinguished degree that he holds allows him to do the job that he currently has: even though the degree is not in the field that he should have had in order to hold this particular position, the fact that it's a master's degree technically fulfills a checkbox, and that means that he gets to hold this particular job.

But, as I say, he is in a helping profession.  And we were having this particular conversation, in significant part, because he thought that he was helping me.  Even though he wasn't really asking all that much about me, and was talking mostly about himself.

It would have been nice, and would have been helpful to me, if he had, in fact, asked about me.  Expressed some sort of interest in me.  Instead, whenever I mentioned anything, it just reminded him of an experience that he had had, or a job that he had done, or various other aspects of his life and experience.

And the really funny thing was, that, at one point in the conversation, he started talking about how *interested* he was in other people.  And, specifically, how interested he was in people like me, as I've had a wide and varied background.  He is so terribly interested in people like me, who have had an awful lot of interesting experience.  It is so interesting to find out about people, like me, who have had all kinds of interesting experiences, in all kinds of interesting fields.  They have so many stories to tell.  *I* have so many stories to tell.  He was so very interested in people like me.

And he kept talking.  About himself.  About his background.  About his qualifications.  About his experiences.  About the jobs that he has had.  Throwing in very occasional questions about me.  And every time that I answered one of the questions, my answer prompted him to remember a number of his own experiences that he needed to tell me about.

He was, of course, completely oblivious to the contradiction here.  The fact that he was talking about how interested he was in other people.  The fact that he was interested in people, specifically, like me.  And the fact that he really wasn't asking all that many questions about me, and he couldn't wait to interrupt whatever I was saying with his own stories and experiences.

Wednesday, December 3, 2025

Nobody listens (2)

At one point here I included a post noting, in a joking way, that nobody actually listens to what I say.  I have, at times, wondered if I exaggerate in some of these notes or comments.

Recently I had a couple of conversations that indicate that I do not exaggerate.

On the same day I had two conversations, both with people who know me reasonably well.  In the one case it was somebody with whom I have extensive experience doing volunteer work.  In the other it was somebody I have known for a long time.

Both conversations turned to my recent health issues.  In both conversations, the other party asked for details of what had been going on, and I gave details of my experiences.  In both conversations, slightly later on, the other party raised the topic again and, once again, asked all the same questions.  I gave all the same answers.  In neither case did the other party recognize that they were asking the same questions, that we had previously covered the topic, or that they were getting all the same answers, all over again.

I wasn't wrong.  Nobody listens.