Wednesday, March 25, 2026

AI - 2.06 - genAI - what not taught

I have noted elsewhere that, inherent in the process by which we have built large language models, and therefore generative artificial intelligence, is the fact that we have taught these systems rhetoric, but not the other fundamental, classical fields of philosophy: logic, metaphysics, epistemology, and ethics.  This points to one of the many things we can do to address our own use of artificial intelligence: we can press for research and exploration into the areas of artificial intelligence that we have not yet explored.

I was the one with the formal qualification in education, but Gloria had a greater and deeper native understanding of the way children viewed the world than anyone else I have ever met.  Gloria always insisted that, at every possible opportunity, we pay attention to children, particularly young children, to see how they see the world.  She said that this would be the only way in which we could get a new perspective on the world, a new viewpoint.  This is an absolutely salient position to take with regard to artificial intelligence.

When you pay attention to it, the way that children learn is nothing short of miraculous.  Some of us like learning: I do.  A great many members of the human species do not enjoy learning.  We make every possible excuse to continue doing what we have been doing, without changing either what we do or the way we do it.  A great many of us try to avoid learning at all costs.

Babies are learning all the time.

Human babies, interestingly, are born with a number of capabilities which they very quickly lose.  If you have access to a newborn child, you can verify this for yourself.  A newborn human infant has, almost immediately, a grasping response.  If you put your finger in a newborn baby's hand, it will grasp your finger.  It will grasp it hard enough that you can lift the child using only your finger.  (I must warn you that, should you make this experiment in order to verify what I have said, you do so extremely carefully, and make sure that you have the full and informed permission of the parent, particularly the mother, of the child.  I am not responsible for any injuries you may incur if you fail to follow this advice.)  This grasping response is found in other primates, but in humans it usually disappears after a day or so.

Newborns do have other capabilities, which they, generally, very quickly lose.  I was able to see one grandson within a few hours after he was born.  At that point, I was able, allowing his hands to grasp my fingers, to have him stand upright, with me really only providing balance, and not lifting force.  He was also, at that point, able to hold his head erect, and to turn his face to different sounds in the room.  Once again, these capabilities disappeared within a couple of days.

Newborn infants are unable to focus their eyes.  They seem to be drawn to the shape of a face, even if they can't fully focus that image.  Within weeks, infants learn what focus is, and how to achieve it, and start moving their eyes, and eventually their heads, to focus on, and study, particular objects within their visual field.

How do they do that?  How do they learn to do that?  Even knowing what focus is, what optics are, and the importance of vision in identifying objects, it has taken us more than seven decades to figure out how to get computers to do it.  It still takes an enormous amount of computing power, and we still can't do it anything like as usefully, or as reliably, as any human child learns to do it, without assistance, in about six months.
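
As an aside, for those curious how computers assess focus at all: one standard technique in machine vision is to score an image's sharpness, for example as the variance of a Laplacian (edge-detecting) filter, and then adjust the lens to maximize that score.  Here is a minimal sketch in Python, using the OpenCV library (the image file names are purely illustrative):

    import cv2

    def focus_score(image_path):
        # Load the image and convert it to grayscale.
        gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        # The Laplacian filter responds strongly to edges.  A sharply
        # focused image has crisp edges, so the variance of the filter's
        # response is high; a blurry image scores low.
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    # Higher score means sharper focus (file names are hypothetical).
    print(focus_score("sharp.jpg"))
    print(focus_score("blurry.jpg"))

Note what this sketch does not do: it does not decide what is worth focusing on, or why.  That is the part every infant works out, unaided, in months.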

So, here are a few things that we should start to teach our artificial intelligence systems, in order to make them actually intelligent.  We need to teach them the eighty percent of philosophy that doesn't involve rhetoric.  We need to teach them how to learn.  And so we should probably put it to the enormous tech giants, and the massively expensive generative artificial intelligence corporations, that these are some areas they should look into.

When recruiting for an artificial intelligence company, don't just look for the latest bright spark who can code really quickly.  Make sure that that bright spark has taken, in addition to a number of computer courses, some courses in philosophy.  Have ongoing education within your corporation that teaches these fields.

You're probably going to be hiring young people.  They will probably be of marriageable age.  They may even be married.  They may be having kids.  If so, make sure they have time to spend with their kids.  Do you provide daycare?

There are enormous amounts of money invested in artificial intelligence companies these days.  Yes, there are a great many demands upon that money.  There is a bidding war going on to poach talented individuals from one company to another.  There is massive investment in data centers.  There is even enormous investment in power plants to power the data centers to run the computing necessary to build large language models, and then to run them.  But amongst all those billions, do you have a daycare?  A daycare for your employees, on site, within your company campus?  So that your employees, your young married employees, who may have small children, can occasionally drop by and spend time with their children.  And observe their children.  Observe how their children start to learn.

(They may also spend more time at work, in that case.)

Do you take any of that massive investment in power plants, and data centers, and high-priced talent, and invest it in education?  In education in general, supporting schools in the areas around you, so that you can recruit educated employees.  But invest also in educational research, particularly, and perhaps unusually, in the area of early childhood education.  Fund research into how infants and children actually learn.  Fund infant psychology.

Yes, these areas of research are going on.  But they don't get anywhere near the funding, the billions, and even trillions, of dollars that are going into artificial intelligence.  Yes, the promise of artificial intelligence is a big one.  And, if we ever *do* get actual and genuine and reliable artificial intelligence, then it is likely that the artificial intelligence will repay that investment.  But aren't we more likely to achieve artificial intelligence that much sooner if we use educational, and psychological, and philosophical research and study to direct our own search for, and production of, artificial intelligence?

While isolated visionaries have idly speculated about emotion in computers, the vast majority of the computer-using, and non-computer-using, populace sees technology as cold, mathematical, and ultimately objective (if occasionally in error).  The fact that this assessment is itself an emotional one gets conveniently forgotten.

One of the possible divisions in the study of artificial intelligence is in the approach taken.  The brute coding approach simply strives to make programs more and more intelligent, the definition of "intelligent" being left as a problem to be dealt with once we have something that is at least marginally useful.  This strategy has been demonstrably successful in producing entities like Deep Blue, genAI, and techniques such as expert systems.  The alternative route is to observe that we already have at least one agreed upon model of intelligence, and to seek to apply what we know of the human mind to some form of programming.  While that course suggests interesting tactics like neural networks, spectacular triumphs have not been forthcoming.
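
To make the contrast concrete, here is about the smallest possible instance of that modelling route: a single artificial neuron, a perceptron, taught the logical AND function by nudging its weights whenever it errs.  Everything in this Python sketch, the weights, the learning rate, the toy task, is illustrative only; no real research system is this small.

    # A single perceptron learning logical AND; purely illustrative.
    def step(x):
        return 1 if x >= 0 else 0

    # Training data: two inputs, and the desired AND output.
    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    w1, w2, bias = 0.0, 0.0, 0.0
    rate = 0.1

    for epoch in range(20):
        for (a, b), target in samples:
            output = step(w1 * a + w2 * b + bias)
            error = target - output
            # Nudge each weight in the direction that reduces the error.
            w1 += rate * error * a
            w2 += rate * error * b
            bias += rate * error

    for (a, b), target in samples:
        print(a, b, "->", step(w1 * a + w2 * b + bias))

The striking thing is how much hand-holding even this toy needs: we chose the task, the representation, the learning rule, and the training schedule.  The infant chooses all of those for itself.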

Pursuing this modelling approach, Rosalind Picard divined a potentially revolutionary concept in computing in her book "Affective Computing."  Even those who praise Picard and the book tend to see affective computing as only a means to a superior user interface, and miss the proposal that affect is key to intelligence itself.

It has been proposed that the AI goal of reproducing human intelligence is a chimera and a false trail.  Machine intelligence, so the thesis suggests, is different in kind from human intelligence, and the attempt to make one copy the other would be better directed to finding the differences between them and assigning work appropriately.  If this latter hypothesis is true then Picard's recommended line of enquiry would be futile in terms of producing better machine intellect--but would still be valuable in determining the dividing line.



AI topic and series
Next: TBA
