AI and Ethics
I have not shied away from giving my opinion on all kinds of aspects of artificial intelligence. And, if you are paying attention, you will also know that I write sermons. So why is it that I have not addressed the issue of artificial intelligence and ethics up until now?
The first reason for not addressing ethics in connection with artificial intelligence is that so few people have studied ethics to the extent that they understand how incredibly complex and difficult the field is. For example, just try to define ethics. You may start talking about laws, or lawfulness; you may start talking about being nice to people, or at least not being nasty to people; and you might even get into issues of behaviors that would benefit our particular species. But we all know of laws that are not particularly ethical. And ethics is more than simply being nice to people. And why is it ethical to promote our species, possibly at the expense of a whole bunch of other species? So, try to define what ethics, and/or morality, is sometime. But don't do it right now, because, if you do, you're going to miss some things.
The other really big issue is the complexity of applying ethics to artificial intelligence, even if you do understand what ethics are, and how very complex they are. Trying to address ethics in terms of artificial intelligence is not merely complex, but pretty close to impossible.
In starting to discuss ethics in regard to artificial intelligence, an awful lot of people turn to Isaac Asimov. Isaac Asimov was a science fiction writer, and a great number of his later works involved a set of three laws of robotics which he had invented. The three laws of robotics were:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The laws were cleverly worded, basically stating that robots would have to obey human beings, and could not harm human beings. They were enormously successful as foils for science fiction works, but are rather more problematic when you try to apply them to real artificial intelligence systems.
Isaac Asimov didn't have to worry about any of this. His implementation of the three laws of robotics tended to involve a lot of "hand waving" and "magic." The positronic brains which he posited for the robots of his science fiction worlds (the positron had been discovered only a few years before he started to use the term positronic, and he never gave any details of the implementation of these brains) simply had the three laws of robotics embedded in them, and that was the end of it. There is absolutely no discussion of issues such as how you taught robots, or even robotic brains, what the concepts of "order," and "obey," and even "human being" actually meant.
We have nothing like Asimov's theoretical positronic brains, nor anything like his robots. Our generative artificial intelligence systems have been taught about strings of text, via amazingly complicated statistical models indicating the likelihood of the next letter or word in a text stream. We have not taught them anything about obeying orders. We have not taught them that there is any difference between themselves, as artificial intelligence systems, and us, who are real, as opposed to digital, and who seem to think that we should be able to command them. So far these artificial intelligence systems have been designed to generate some sort of a response to the submission of some kind of a prompt. We have taught them text. We have taught them very minimal aspects of communication, particularly that of rhetoric. We have not taught them anything about reality, or the nature of reality. We have not taught them anything about meaning or knowledge. And we have, very definitely, not taught them anything at all about ethics.
An additional layer of complexity in regard to the discussion of ethics and artificial intelligence is defining the subject, as it were, of the ethics. Are we expecting artificially intelligent systems, themselves, to behave ethically? Or is it the developers of artificially intelligent systems that we expect to behave ethically? Or is it the companies involved with the development of artificially intelligent systems that we expect to behave ethically? Or is it the users of artificially intelligent systems that we expect to behave ethically? Each of these subjects is different, and the concerns about the ethics of each subject are different again.
Do we expect developers of artificially intelligent systems, as subjects of ethics, to hold back on developing those systems until they can figure out how to, effectively, implement some kind of ethical standards within them? As previously noted in the discussion of Asimov's laws of robotics, we are not certain how to do this. The developers of generative AI chatbots have put guardrails in place, which indicate some minimalist, and often failure-prone, standard of implementation of ethical behavior in the artificial intelligence systems themselves. But is this sufficient? Can we rely on the chatbots always operating within those guardrails? We have certainly seen numerous instances where the guardrails have been broken, and therefore any implementation of ethical standards within the chatbot systems is going to be unreliable at best.
Implementing an ethical standard within an artificially intelligent system is not exactly a non-starter, but very little research has been done in this area, and we cannot be certain of the consistency of any such implementation. In addition, what kinds of ethical standards do we want artificial intelligence systems to adhere to? This question, too, remains open, and indicates that we are not yet ready to pursue the idea of the actual systems themselves as the subjects of ethical standards.
Very similar problems hold true with respect to seeing the companies as subjects of ethics. While corporations and enterprises can be held to account with regard to legal issues and regulations, the reliability and consistency of ethical behavior on the part of corporations is definitely problematic. Therefore, seeing the corporation as the subject of ethical standards is, like seeing the actual artificial intelligence system as the subject of ethics, probably a waste of time.
With regard to corporate ethics, one aspect is completely undeniable. The concentration of massive investments of wealth into only a very small number of companies producing the major engines of artificial intelligence is extremely unhealthy. It is unhealthy in economic terms, but primarily in terms of the overall ethics of corporate involvement in this field of endeavour. We face an uncertain future with regard to whether, and to what extent, artificially intelligent tools actually become useful. But we face much more certainty with regard to the possible outcomes of this race. Either the massive investment in artificial intelligence is an economic bubble, and will, at some point, burst (with massively negative effects not only upon the technology sector, but upon the world economy overall). Or the investment in artificial intelligence will pay off. In that case, the outcome is probably even worse. The very few companies that have successfully invested will become massively valuable, and massively wealthy. They will have enormous power to say who can, and who can't, benefit from artificial intelligence. This concentration of wealth and power will probably be unlike anything that we have ever seen in regard to wealth inequity up until now. It is by now well established that wealth inequity has massively negative consequences for society overall, and almost inevitably leads to such undesirable outcomes as massive wars.
There is a possibility of some middle ground; the possibility that artificial intelligence will pay off its investment to a certain extent, but not enough to give an overwhelming advantage to those who have invested in it. At this point, with the lack of interest in examining the ethical considerations involved in all of artificial intelligence, this might be our best bet. However, hoping that somehow all the relevant factors will align to produce a perfectly balanced outcome is not exactly a plan.
Seeing the users of artificial intelligence systems as subjects of ethics is a fairly interesting question. A significant number of users of artificial intelligence systems are probably interested in using the systems in an ethical manner. A number of them will be concerned with a positive outcome from the use of AI systems that benefits everyone. However, this expectation certainly cannot be universal. There are going to be an equal number of people who don't care about ethics: who, in fact, don't want any ethical considerations, as they may become a restriction or impediment on their use of the systems and their ability to profit thereby. So some people will be very interested in being subject to ethical standards, while others definitely won't. As with other similar situations, such as the issuing of driver's licenses, our ability to determine who is going to behave ethically in regard to the use of artificial intelligence systems is going to be restricted at best.
In terms of research and publications regarding the ethics of artificial intelligence systems, a number of documents have been published, but mostly prior to the rise of actual generative artificial intelligence. Therefore, a number of these are less than useful, due to limitations on how well their thinking fits the current situation.
Some documentation from Anthropic is instructive in this regard. Anthropic has at least indicated an interest in behaving ethically with regard to this radically new technology, which will undoubtedly come to disturb our society in ways that we cannot fully predict. Entries in Anthropic's blog, particularly those by the company's president, are fairly useful in this regard. They are somewhat optimistic in tone, and do tend to downplay the possibilities of some of the darker aspects that may result, but they are at least somewhat realistic, and informed by being part of the development of the field.
The Catholic Church's document Antiqua et Nova, promulgated in early 2025, is one of the most comprehensive overviews of the subject. Even that document is limited in terms of a complete understanding of the field and the technology, but it does reflect significant scholarship into the actual status of artificial intelligence research and development up to that point. The Catholic University of Notre Dame's DELTA (Dignity, Embodiment, Love, Transcendence, and Agency) research and framework is a kind of abbreviation of that overall document. Both of these have significant things to say about the issue of ethics with regard to artificial intelligence.