AI, Cheating, and Faking It

The reaction to my last post—AI, Cheating, and the Future of Work—was mixed. Some readers with data science backgrounds seemed to like it and find it funny. Folks outside that circle tended to shrug and move on.

The thing is, my analogy between cheating and artificial intelligence was meant to be taken both seriously and literally. For a layperson, one good way to think about artificial intelligence (AI) and particularly machine learning (ML) is to make analogies to cognitive shortcuts that humans use. Like all analogies, this one is imperfect. But it’s a helpful first approximation.

Have you ever been in a conversation where you only half understood what was being talked about? Maybe the room was noisy. Maybe the other person had a heavy accent. Maybe the subject was one you didn’t understand as well as the other person did.

Perhaps, for whatever reason, you decided not to let on that you were only getting bits of the conversation. Maybe you didn’t want to embarrass yourself. Or the other person. Maybe you didn’t want to interrupt the conversational flow. Or you could have believed you had understood enough to make fairly confident guesses at the gaps in your understanding.

How did you manage to keep up your side of the conversation? What strategies did you employ? Usually, we’re drawing on whatever context we can. Who is this person? What is the general topic of the conversation? Why are you talking about it? What might this person know about it? What is the person’s body language telling you about the reaction they expect? (“Is this the punchline of a joke? Should I laugh?”)

We can be very good at faking understanding in various situations. Except when we aren’t. Sometimes we guess hilariously and/or disastrously wrong. You’ve probably been on the other side of such a conversation, surprised when the person you were talking to responded with a non sequitur or a wildly inappropriate reaction.

This is one way to think about what ML and AI algorithms do. They fake understanding by employing strategies that draw inferences from available information and context. Of course, I anthropomorphize the algorithms when I say they “fake understanding.” It would be more accurate to say they “simulate” understanding. But even that’s not quite right, especially in ML. These algorithms are designed by humans to implement problem-solving strategies. Some of those strategies look a lot like the ones humans use, while others look almost nothing like human cognition. Regardless, when we apply these algorithms to produce human-like results in a problem-solving situation, the output can seem a lot like a human faking it or a student cheating. Particularly with AI chatbots, we see a mix of uncanny accuracy and bizarre mistakes.
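
To make that “inference from context” idea a bit more concrete, here’s a minimal sketch in Python. Everything in it is my own invention for illustration: the tiny corpus and the guess_missing function are hypothetical, and counting adjacent word pairs is a deliberately crude stand-in for the far more sophisticated statistics inside real chatbots. The point is only to show the basic move of filling a gap in a sentence by scoring candidates against previously seen patterns.

```python
from collections import Counter

# A hypothetical mini-corpus standing in for training data.
corpus = (
    "the meeting starts at noon . "
    "the meeting ends at five . "
    "the train leaves at noon . "
    "the joke ends with a punchline ."
).split()

# Count adjacent word pairs (bigrams) seen in the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))

def guess_missing(left, right):
    """Guess the word hidden between `left` and `right`.

    Scores each known word w by count(left, w) + count(w, right):
    a crude stand-in for inferring a gap from surrounding context.
    """
    scores = {w: bigrams[(left, w)] + bigrams[(w, right)] for w in set(corpus)}
    return max(scores, key=scores.get)

# A familiar context yields a plausible guess ('starts' or 'ends'):
print(guess_missing("meeting", "at"))

# An unfamiliar context still gets a confident answer; here the
# highest-scoring candidate is '.', a non sequitur:
print(guess_missing("noon", "punchline"))
```

Notice that the same mechanism produces both answers. When the context resembles something in the program’s experience, the guess is plausible; when it doesn’t, the program answers just as confidently, and the result is a non sequitur. That, in miniature, is the mix of uncanny accuracy and bizarre mistakes.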

This explanation is not deep enough to help educators make judgments about when and how to trust specific ML- and AI-based tools. To do that, they would need to understand a bit about the specific strategies that a given tool is employing, the kinds of mistakes it is prone to make, and the educational impact of those potential mistakes.

But it does provide a starting point. If we’re going to continue using the phrase “artificial intelligence” in a layperson’s context, then we need to start finding analogies that laypeople can understand. Humans cheating or otherwise faking understanding seems like one good place to start.
