Monday, March 30, 2026

AI - 2.09 - genai - deontological ethics

Those in philosophy and religion complain that their fields are not being heard on technological questions because they have little social capital.  I would say that philosophy and religion are addressing the ethical questions raised by technological developments, but without first studying what those developments actually are.

Why should they listen to us?

We have things to say about ethics and morality, but few of us have ever studied any of the technologies that are involved in generative artificial intelligence.  How many of us study high tech?  How many of us study even at least one of the possible fields of artificial intelligence?  How many of us even know how many fields there are in artificial intelligence?

You cannot demand that an AI system not do unethical things when you have not even taught the AI system the meaning of words, or the fact that words have meaning, separate from the fact that there is a certain statistical probability of what words are most likely to be generated.

And even at that point, you then have to try and teach the AI system ethics rather than simply rhetoric.

It is true that generative artificial intelligence developers are not paying sufficient attention to ethics in their work.  (It is true that Anthropic is the best of a bad bunch in this regard, which is not saying much.)

It is true that philosophy and religion has centuries more experience in dealing with ethical issues than do the AI developers.

Unfortunately, philosophy and religion cannot address the primary problem with generative artificial intelligence developers until we know what their worst issue is.

The worst problem is that these developers are pursuing a chimera.  They do not know, and have not defined, what intelligence actually is.  Alan Turing's famous test of machine intelligence is not so much a definition of machine intelligence as it is an indicator that we have not yet defined what intelligence is.  These developers, like many before them, are simply hoping that "emergent properties" will somehow, magically, provide us with some kind of intelligence that will assist us.  ("Emergent properties" is, essentially, just another way of saying "magic.")

Asking that the developers and the products adhere to the principles of dignity, embodiment, love, transcendence, and agency, is laudable.  However, it is unlikely to be effective.  The "guardrails" that the developers are attempting to impose upon large language models are not even principles of deontological ethics.  They are complicated statistical weightings which the developers have imposed upon the models in order to hide the fact that the developers do not know what the models are doing.

The large language models which have produced generative artificial intelligence are simply extremely complicated statistical models of patterns of text.  That they produce readable and plausible streams of text is astonishing.  All the more so when and if you actually understand that no understanding is involved on the part of the models or generative artificial intelligence.  The models do not know what the words mean.  The models do not know what meaning is or that words have meaning.  Generative artificial intelligence does nothing more than predict the most likely next word in a stream of text in response to the prompt, which is itself only seen as a stream of text to be statistically analyzed.
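The point above can be made concrete with a toy sketch.  The following is a deliberately tiny "bigram" model, built on a made-up corpus, that predicts the next word purely from counts of which word followed which.  (Real large language models are vastly more complicated, but the sketch shows the essential point: nothing in the code represents meaning; the model only tracks statistical co-occurrence of strings of text.)

```python
# A toy bigram "language model" built from a hypothetical corpus.
# It predicts the next word purely from counts; no meaning is involved.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# For each word, count which words followed it and how often.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None.

    Note what is absent: any notion of what the words refer to.
    """
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))   # "cat" follows "the" most often in this corpus
print(predict_next("sat"))   # "on" always follows "sat" in this corpus
```

The model will happily complete text about cats and fish without any representation of cats, fish, or truth; it has only frequencies.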

(Image and video generation is somewhat different as text is only peripherally involved, but the process, based on statistics and statistical analysis, is essentially the same.) 

The problems identified remain: overinvestment in an untried pursuit, headlong pursuit of an undefined goal, and the general pursuit of an objective without regard to the ethical considerations surrounding it.  The framework might very well be proposed to the developers and to society at large as guiding principles in regard to this overall pursuit and objective.  Indeed, the overinvestment itself is leading to a concentration of wealth on a scale that can only do damage to society at large, and the DELTA (Dignity, Embodiment, Love, Transcendence, and Agency) framework could assist in addressing that very major issue.

But we are not going to succeed in addressing the ethical considerations of the pursuit of artificial intelligence if we are asking the developers to take actions that are inherently impossible.

These models and Gen AI overall do not understand what ethics is.  They do not understand what truth is.  We cannot hold them to account in terms of ethical considerations when they have absolutely no understanding of meaning, or truth, or ethics.

Teaching ethics to generative artificial intelligence systems is going to be problematic at the very least.  For one thing, we probably don't know what ethics and morals are, in the same way that we still don't know what intelligence is.  Therefore, our pursuit of ethics, and of instilling ethics into artificially intelligent systems, will be much more legalistic, much more like legal systems than ethical systems.

If we are to approach ethics with artificial intelligence systems at all, it will, initially at the very least, have to be on a deontological basis rather than a teleological one.  After all, deontological ethics are primarily based on sets of rules.  There can be codes of conduct.  There can be codes of professional ethics.  But it is always going to be based on actions rather than beliefs or understandings.  And, of course, at the current level of artificial intelligence systems, there is no basis for understanding.  There can only be a basis of actions which are forbidden.

Here is a first problem in regard to instilling ethics into artificially intelligent systems: current artificial intelligence systems do not have any concept of meaning.  They do not know what the truth is.  They do not know what corresponds to reality and what is simply a string of words that has no meaning.

How do we explain the complicated concept of ethics or morality when we can't actually explain *anything* to artificially intelligent systems?  They don't understand what they are doing now.  We cannot teach them as if we were expecting them to understand anything about ethics because they don't understand anything about anything. 

The closest thing that we have to ethics for artificial intelligence systems is the system of guardrails that developers have attempted to instill in them.  Guardrails are going to have definite similarities to deontological ethical systems.  There are certain combinations of words which are forbidden.  There is little ability to interpret situations or the forbidden texts.  The clearer the rules can be made, the more effective the guardrails are going to be.
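A minimal sketch may illustrate the deontological flavour of such guardrails: rules that forbid certain combinations of words outright, with no interpretation of situation or intent.  The pattern list and checker below are hypothetical illustrations for this essay, not any vendor's actual implementation.

```python
# A hypothetical guardrail: a deny-list of forbidden word patterns.
# Like a deontological rule, each entry says "do not emit this" --
# there is no model of truth, intent, or harm, only string matching.
import re

FORBIDDEN_PATTERNS = [
    r"\bhow to build a bomb\b",
    r"\bsteal\b.*\bpassword\b",
]

def violates_guardrail(text):
    """Return True if the text matches any forbidden pattern."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in FORBIDDEN_PATTERNS)

print(violates_guardrail("Explain how to build a bomb"))       # True
print(violates_guardrail("Explain how to build a birdhouse"))  # False
```

Note that a trivial rephrasing ("how one might construct an explosive device") slips straight past such a match, which is exactly the circumvention problem: the rule forbids a surface string of words, not an activity or a meaning.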

However, given that we do not have a solid understanding of how generative artificial intelligence systems actually work, it is extremely difficult to install completely effective guardrails.  The artificial intelligence systems are constantly finding new ways to get around the guardrails.  Sometimes the guardrails are circumvented by users who get creative with the wording of their prompts.  But there are definitely instances of the systems themselves finding ways to exploit loopholes and get around the guardrails that are installed on the systems.

In addition, even deontological ethics are going to be extremely difficult for AI systems.  Deontological ethics are about sets of rules, and the rules are primarily about forbidden activities: don't *do* this.  But, so far, artificial intelligence systems are primarily about words, just words, and the words do not necessarily have any meaning to the AI system.  So we face an initial barrier to explaining any prohibition of activities: what is an activity, and how is it accomplished in the real world?


AI topic and series
Next: TBA
