I just came across a piece noting that a certain type of innovation may suffer because genAI tools will not have enough data on novel approaches to accept them.
The thing is, that concern is not limited to the one specific area the piece covers. The AI industry has been pushing all kinds of tools to make "suggestions" about creative work such as research, or to do a "first pass" elimination of ideas that might not bear fruit. We use AI tools to relieve ourselves of the tedium of paring down a pile of job applications. These tools do statistical matching against patterns found in the data they have been fed. But if that data wasn't extremely carefully curated in the first place, they won't return the best results.
And if anything is truly novel, it won't appear in the old data at all, and so it will be rejected out of hand.
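To make that concrete, here is a minimal sketch of what a similarity-based "first pass" screen amounts to. Everything in it is hypothetical: the past proposals, the word-overlap (Jaccard) similarity measure, and the 0.5 threshold are stand-ins for whatever a real system would use. A production screen would use learned embeddings rather than word sets, but the failure mode is the same.

```python
# A minimal sketch (no particular vendor's product) of a similarity-based
# "first pass" screen. The past proposals, the Jaccard measure, and the
# 0.5 threshold are all hypothetical stand-ins.

def words(text: str) -> set[str]:
    """Reduce a proposal to its set of lowercase words."""
    return set(text.lower().split())

def similarity(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: shared words relative to all words used."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# The "old data": past proposals the screen was built around.
PAST_SUCCESSES = [words(s) for s in [
    "fine tune a language model on domain specific data",
    "scale up training data for better benchmark scores",
    "incremental refinement of an existing architecture",
]]

def first_pass(proposal: str, threshold: float = 0.5) -> bool:
    """Accept a proposal only if it resembles something seen before."""
    candidate = words(proposal)
    return max(similarity(candidate, p) for p in PAST_SUCCESSES) >= threshold

# A familiar idea sails through; a truly novel one matches nothing in
# the old data, scores near zero, and is rejected before anyone reads it.
print(first_pass("fine tune a language model on new domain data"))    # True
print(first_pass("a wholly unprecedented approach nobody has tried")) # False
```

The point is not the particular measure. Any screen that scores candidates by resemblance to past data will, by construction, score genuine novelty lowest.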
I am fond of the quote that creativity is allowing yourself to make mistakes, but that art is knowing which ones to keep. Large language models make mistakes. Some of those mistakes are so awful that they are actually kind of amusing, but those instances are the exception. It is not for nothing that AI-generated content is coming to be universally referred to as "slop." And even so, "amusing" can be a far cry from "useful."
But my concern is not just that genAI produces much that is useless. It is that in using AI to prune, pare, correct, and edit our *own* creative work, we may be allowing AI to restrict us to what has gone before and been fed into the large language model.
If we do that, we may well ensure that there is nothing new under the artificially intelligent sun.