AI Responsibility

TL;DR: We should not compare the abilities of AI with the abilities of humans, because computers can’t take responsibility for their actions. Every AI system should have a clear name behind it.

Recently I have been starting to think that I’m too grumpy about AI/ML research. Sometimes it feels like every piece of technology news fills me more with dread than wonder, and new research makes me feel more tired than excited. I had been chalking this up to attitude, maybe even burnout. But a recent hype release made me think about why it is actually important to be grumpy from time to time (or more often than not, actually).

Recently a young student and their advisor made waves on Twitter by announcing that they had “discovered” a secret language in the Dall-E image generator. Of course, in the end, it was much ado about nothing, encouraged by an advisor who wanted to ride the hype and avoid getting scooped. They even “published” their original Twitter thread as a paper on arXiv (which tastes sour when arXiv is so picky about actual research from other people).

Ok, so some people were silly on the internet again. Why does this matter? Well, seeing a lot of people running around saying things like “Dall-E has its own language!” made me think a bit about why it is actually important to be grumpy about how much we hype AI research in public spaces (AI has language! AI can do Art! AI can replace professionals in field X!).

The thing is that when we don’t make clear the limitations of our AI systems, people naturally start to see AI as human (see the recent talk about consciousness of deep learning systems), forgetting one VERY important thing that AI systems can’t do: Take responsibility for their actions.

Although there is of course a component of corporate greed in that, it seems that a lot of people honestly think that we can leave AI-based systems to do things that influence us without a human being ultimately being responsible for their actions. To put it another way, quite a few people seem to think that we can actually just let AI-based systems go wild at our problems, and use their results unquestioningly.

To give a concrete example, at my university we often discuss what to do to increase the amount of information available in English for international students (and, by extension, in Chinese as well). Almost always someone will honestly suggest just applying machine translation to everything that we want to make available in other languages.

Now, I love machine translation. I use DeepL every day to make sure that the Japanese sentences I write are somewhat correct. BUT, it is far away from being the solution for this kind of problem — maybe part of a solution, but definitely not all of it.

Why is that? Any machine translation will get a LOT of small things wrong. When we’re talking about translating official materials, we’re talking about things that are going to be used by some student to decide their life path, and there needs to be someone there to make sure the translation is correct and, more importantly, take responsibility for its correctness. In the end, a human is still needed in the loop.

This reminds me of a question I asked during ALIFE 2018 to a professor giving a keynote about androids. I asked him what were some ethical issues that he thought were important in android research. He answered that he thought we should make more robots that make purposeful mistakes. For example, we should make vending machines that sometimes give the wrong change back (of course, you could ask the machine to correct it). This would help break the stereotype that machines are always correct, fair, and neutral. It would make people trust machines less and check them more, which is super important as AI systems take over more and more parts of our social systems.

One thing that I like a lot about this answer and thought experiment is that it reminds me of how we should think about evolutionary computation and other search-based optimization algorithms. These algorithms are able to solve problems that are too hard for traditional optimizers by abandoning the idea of finding the *optimal* solution, and settling for a “best effort” solution.

In practical cases, best effort solutions are often good enough! But sometimes they’re not, so the user needs to check the solution that comes out of the algorithm. I think this is a very positive thing, and it should be standard procedure for any service that is powered by AI.
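To make the “best effort” idea concrete, here is a minimal sketch of this family of algorithms: a hill climber minimizing a toy function. This is my own illustrative example, not code from any particular library; function names are invented for the sketch.

```javascript
// Toy objective: minimize f(x) = (x - 3)^2. The true optimum is x = 3,
// but the search below never promises to find it exactly.
function f(x) {
  return (x - 3) ** 2;
}

// A simple hill climber: propose small random moves, keep improvements.
function hillClimb(objective, start, steps = 1000, stepSize = 0.1) {
  let best = start;
  let bestScore = objective(best);
  for (let i = 0; i < steps; i++) {
    // Random candidate near the current best solution.
    const candidate = best + (Math.random() * 2 - 1) * stepSize;
    const score = objective(candidate);
    if (score < bestScore) {
      best = candidate;
      bestScore = score;
    }
  }
  return { best, bestScore };
}

const result = hillClimb(f, 0);
// result.best is usually close to 3, but nothing guarantees optimality --
// which is exactly why a human should still sanity-check the output.
console.log(result);
```

The point of the sketch is the contract: the algorithm returns the best solution it happened to find, not a certified optimum, so the caller is responsible for checking whether it is good enough.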

(Review) “The Demon-Haunted World: Science as a Candle in the Dark”

tl;dr: The good parts are really good, the bad parts are really bad, but more than anything, the book felt way too long.

This past week I’ve read “The Demon-Haunted World: Science as a Candle in the Dark”, by Carl Sagan. This book has been highly recommended by several friends I admire, and has generally very good reviews. From the outside, it is an essay about how scientific thinking is necessary for the well-being of society as a whole, which is a topic that is close to my heart, so this book has sat near the top of my “to read” pile for ages.

The book begins with a harsh admonishment of the kind of thinking that leads people to believe in stories about UFOs and astrology, as well as other pseudosciences. The first chapter of this was interesting. The second was okay. By 100 pages in, I was like “okay, believing in UFOs is bad, I got it, can we talk about something else now, like how science is amazing and super useful for society?”. I mean, the first half of the book is not all UFOs, but it sure FELT like it. It dragged.

By page 200, I was ready to give up on the book. I decided to check reviews to see if anyone had had similar experiences to mine, and found a lone two-star review (among a sea of weird “BEST BOOK EVER!” reviews) that echoed my feelings. They agreed that the first half dragged, but recommended the last few chapters as really interesting. So I decided to keep reading, with liberal page skipping whenever Dr. Sagan started talking about UFOs again, and I’m glad I did.

There is a chapter around the middle of the book describing a “skeptic toolkit”: really practical advice about how to think about knowledge in a rigorous manner, which I might integrate into my “Experiment Design” lectures.

Then there was one terribly colonialist short chapter which hand-waves away the scientific achievements of non-European civilizations as somehow “non-scientific”, without much thought as to why, in order to claim that “ancient Greece” was somehow the only place where “true science” was born. This was another point where I felt like dropping the book. “Question everything, except that thought in the back of your mind that tells you your civilization is certainly better than what you don’t understand”.

Right after that there is a chapter about science education, where he laments the small amount of time that American children spend in school and praises bubble-era Japanese education for sticking kids in school for 33 hours a week. That doesn’t seem to have worked very well for us in terms of scientific achievement these days, though…

The book does improve after that, though. The chapter on Maxwell is very interesting and fun — with beautiful, wondrous, and poetic descriptions of the invisible waves in our daily lives. This sense of honest wonder feels a bit out of place compared with the tone of the book up to that point, and I would have much preferred it if this had been the main tone from the beginning. Being excited about science is much more interesting than bemoaning the lack of it.

The final chapter about science and politics has its heart in the right place, when it talks about the necessity of ethics to be involved in science, and science to be involved in politics: “The unprecedented powers that science now makes available must be accompanied by unprecedented levels of ethical focus and concern by the scientific community—as well as the most broadly based public education into the importance of science and democracy.”

However, he does dabble in “Freeze Peaches” by arguing that “even the most odious argument should be allowed, and faced with debate, not suppression of ideas”, which I guess is easy to say when you’re not part of a demographic that is often denied a platform or free access to information about itself.

All in all, I have mixed feelings about the book. The good parts are really good. The bad parts are really bad. But above all I felt that the book was long and dragging, with way too much UFO bashing. I wouldn’t recommend the entire book to someone else, but it would be really nice if there was an abridged version of the good parts.

In the end, I leave this read feeling a bit personally disappointed. I haven’t really read any new books in the last 4-5 months, other than obligatory readings of textbooks to prepare classes, the minimum of scientific papers I needed for work, and the occasional narrative game. I guess I’ll reach for my fiction pile for my next read.

How is streaming like teaching?

Today I watched a conversation between game streamers “Jorbs” and “DirtyRobot” about streaming as a career, relationships with the audience, and mental health. (Note: the video is super long (over 2 hours!) and I don’t really recommend that you watch it, unless you’re SUPER interested in these topics.)

Anyway, a lot of what they said about their experiences with streaming made me think about my own experiences teaching at the undergraduate and graduate level, especially in these last two years, when 99% of my teaching went online.

I guess one thing that comes to the front is the separation between personal and public life. It is interesting that as the years pass, I feel an increasing need to separate my public life from my personal life. There is a variety of reasons for that, and not all of them are obvious at first glance. For example, to protect students from the various power imbalances in our relationship, and to allow me the distance necessary to require more effort from the students when necessary. Also, even though I know this public/private separation is necessary, I’ve come to resent it a lot. Many of the students are now just names on a list in an online content distribution system, and that cuts off one of the positive feedback loops that I used to enjoy most in teaching.

Another thing they talked about was the need to hire people to take on the administrative tasks that drain away your ability to do creative labor. Not only is creative labor highly time-consuming, but the mind space necessary for administrative tasks is not compatible with the mind space for creative labor, and the context switch between the two is expensive.

This is something that I’m trying to come to terms with and balance in my duties as an educator, researcher, and professor. The first two are creative endeavors, with quite different inputs and outputs, and I could say that they compete with each other, while the third is an administrative time sinkhole that has a way of growing and sucking up all my time for the creative tasks I should be focusing on. The ironic part is how I can usually get grants for research, and sometimes grants for teaching, but it is so damn hard to get a grant that would let me hire someone to do the administrative tasks that would give me more time to do research and teaching…

They also had a discussion about hiring people to do things you don’t want to do, and how that is a super weird feeling, one I have had myself every time I had the opportunity to pay someone to do a subtask in my work that I did not want to deal with.

Anyway, sometimes it is good to have some prompt for introspection and thinking about how you see yourself and your life.

Lata Velha Discord Bot

I’ve recently spent some time writing a simple discord bot, which I call “Lata Velha”.

A cute tin-can robot

The bot was inspired by a discord server where, in the “random” channel, a bot would serve drinks to users (usually tea) and respond to “good day/good night”. I wanted to reproduce that in my own personal server. I know that there are several very powerful “general-purpose” discord bots, but it makes me happy to try to code these things myself from scratch, so here we are.
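The core of this kind of bot boils down to mapping message text to a reply. Here is a hypothetical sketch of that logic, written as a pure function so it can be tested without connecting to Discord; the function name and trigger phrases are my own inventions, not necessarily what Lata Velha actually uses.

```javascript
// Decide how the bot should respond to a chat message.
// Returns a reply string, or null when the bot should stay silent.
function pickReply(message) {
  const text = message.toLowerCase();
  if (text.includes("good morning") || text.includes("good day")) {
    return "Good day! ☀️";
  }
  if (text.includes("good night")) {
    return "Good night! 🌙";
  }
  if (text.includes("tea, please")) {
    return "Here is your cup of tea! 🍵";
  }
  return null;
}
```

In a discord.js bot, a function like this could be called from a `messageCreate` event handler, sending the reply only when it returns a non-null string; keeping the matching logic separate from the Discord plumbing makes it easy to test and extend with new drinks.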

You can check the code for the bot in its GitHub repository. Fair warning: I’m not very good at JavaScript, so the code can be janky in places.