TL;DR: We should not compare the abilities of AI with the abilities of humans, because computers can’t take responsibility for their actions. Every AI system should have a clear name behind it.
Recently I have been starting to think that I’m too grumpy about AI/ML research. Sometimes it feels like every piece of technology news fills me with more dread than wonder, and new research leaves me more tired than excited. I had been chalking this up to attitude, maybe even burnout. But a recent hype release made me think about why it is actually important to be grumpy from time to time (or more often than not, actually).
Recently a young student and their advisor made waves on Twitter by announcing that they had “discovered” a secret language in the Dall-E image generator. Of course, in the end, it was much ado about nothing, encouraged by an advisor who wanted to ride the hype and avoid getting scooped. They even “published” their original Twitter thread as a paper on arXiv (which tastes sour, given how picky arXiv can be with actual research from other people).
Ok, so some people were silly on the internet again. Why does this matter? Well, seeing a lot of people running around saying things like “Dall-E has its own language!” made me think a bit about why it is actually important to be grumpy about how much we hype AI research in public spaces (AI has language! AI can do art! AI can replace professionals in field X!).
The thing is that when we don’t make the limitations of our AI systems clear, people naturally start to see AI as human (see the recent talk about the consciousness of deep learning systems), forgetting one VERY important thing that AI systems can’t do: take responsibility for their actions.
Although there is of course a component of corporate greed in this, it seems that a lot of people honestly think we can leave AI-based systems to do things that affect us without a human being ultimately responsible for their actions. To put it another way, quite a few people seem to think that we can just let AI-based systems loose on our problems and use their results unquestioningly.
To give a concrete example, at my university we often discuss what to do to increase the amount of information available in English for international students (and, by extension, in Chinese as well). Almost always, someone will honestly suggest just applying machine translation to everything we want to make available in other languages.
Now, I love machine translation. I use DeepL every day to make sure that the Japanese sentences I write are somewhat correct. BUT it is far from being the solution to this kind of problem: maybe part of a solution, but definitely not all of it.
Why is that? Any machine translation will get a LOT of small things wrong. When we’re talking about translating official materials, we’re talking about documents that some student will use to decide their life path, and there needs to be someone there to make sure the translation is correct and, more importantly, to take responsibility for its correctness. In the end, a human is still needed in the loop.
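What “a human in the loop” means here can be made concrete. Below is a minimal sketch in Python of a publication gate where a machine-translated draft cannot go out until a named human signs off on it. The `Translation` class, the reviewer name, and the documents are all hypothetical, invented for illustration, not a real university system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Translation:
    source: str                      # original text (e.g. Japanese)
    draft: str                       # machine-translated draft
    reviewer: Optional[str] = None   # the named human who signs off
    approved: bool = False

def publish(t: Translation) -> str:
    """Refuse to release a translation until a named human has approved it.
    The machine output is always a draft, never the final word."""
    if not (t.approved and t.reviewer):
        raise ValueError("translation needs a named human reviewer")
    return t.draft

t = Translation(source="入学案内", draft="Admissions Guide")
t.reviewer, t.approved = "J. Tanaka", True
print(publish(t))
```

The point of the sketch is that the name attached to the approval is the one taking responsibility; the translation model itself cannot fill that field.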
This reminds me of a question I asked during ALIFE 2018 to a professor giving a keynote about androids. I asked him what ethical issues he thought were important in android research. He answered that he thought we should make more robots that make purposeful mistakes. For example, we should make vending machines that sometimes give the wrong change back (of course, you could ask the machine to give you the correct change). This would help break the stereotype that machines are always correct, fair, and neutral. It would make people trust machines less and check them more, which is super important as AI systems take over more and more parts of our social systems.
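The thought experiment fits in a few lines of code. A toy sketch of the purposefully fallible vending machine, where the error rate and the size of the slip are my own invented parameters, not anything the professor specified:

```python
import random

def give_change(price, paid, error_rate=0.05, rng=None):
    """A purposefully fallible vending machine: with probability
    error_rate it deliberately short-changes the customer."""
    rng = rng or random.Random()
    correct = paid - price
    if rng.random() < error_rate:
        return max(0, correct - 10)  # deliberately short by 10 units
    return correct
```

A customer who knows the machine can err has a reason to count their change every time; that habit of verification, rather than blind trust, is the whole point.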
One thing that I like a lot about this answer and thought experiment is that it reminds me of how we should think about evolutionary computation and other search-based optimization algorithms. These algorithms are able to solve problems that are too hard for traditional optimizers by abandoning the idea of finding the *optimal* solution and settling for a “best effort” solution.
In practical cases, best-effort solutions are often good enough! But sometimes they’re not, so the user needs to check the solution that comes out of the algorithm. I think this is a very positive thing, and it should be standard procedure for any service powered by AI.
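To make “best effort plus verification” concrete, here is a minimal sketch: a (1+1) evolution strategy on a toy objective, whose answer is only accepted after an explicit, human-chosen check. The objective function, step size, generation count, and tolerance are all my own choices for illustration:

```python
import random

def evolve(fitness, initial, generations=500, sigma=0.3, seed=42):
    """Minimal (1+1) evolution strategy: mutate one parent, keep whichever
    of parent and child scores lower. Returns a best-effort solution,
    with no guarantee of optimality."""
    rng = random.Random(seed)
    best = list(initial)
    best_fit = fitness(best)
    for _ in range(generations):
        child = [x + rng.gauss(0.0, sigma) for x in best]
        child_fit = fitness(child)
        if child_fit <= best_fit:
            best, best_fit = child, child_fit
    return best, best_fit

# Toy objective: squared distance from the point (1, 2); the true optimum is 0.
def sphere(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

solution, fit = evolve(sphere, [0.0, 0.0])

# The part that matters for this post: the algorithm's answer is used
# only if it passes a human-defined acceptance check; otherwise it is
# flagged for a person to look at.
TOLERANCE = 1e-2
if fit <= TOLERANCE:
    print("accepted:", solution)
else:
    print("flagged for human review, fitness =", fit)
```

The search itself never promises optimality; the acceptance check and the decision about what to do when it fails belong to a human, who is the one ultimately answerable for the result.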