Crushing Colin & Killing Sophia — Could You Dispose Of A Sentient Appliance?

I read Killing Sophia — Consciousness, Empathy, and Reason in the Age of Intelligent Robots by Thomas Telving, with whom I discussed autonomous cars some time back. With a smashing title like that, I had to dive in. Sophia is not a car; “she” is a human-like android. But Thomas also touches on the subject of autonomous vehicles, so I thought I would give my own car, “Colin,” a place in the spotlight on this occasion.

Crushing Colin

The book surprised me on a fundamental level. It did not surprise me that Thomas lays out a compelling case for extreme caution in granting AI and robots the same kinds of rights that humans have. Nor did it surprise me that he manages to frame the problem through intelligent thought experiments and references thinkers who take the AI revolution seriously, in a tone that is entertaining and yet very serious. No, what surprised me was the realization that the robot parked in my driveway may actually wake up one day.

Thomas suggests our trust in any intelligent system, be it virtual or physical, increases as it achieves more human-like behavior and features. A study that Thomas refers to in his book states: “…participants trusted that the vehicle would perform more competently as it acquired more anthropomorphic features. Technology appears better able to perform its intended design when it seems to have a human-like mind.”

When I engage in heated discussions with friends and colleagues about a robotaxi future, I always ask those who insist they will never get into a driverless car this question: “Have you ever asked the driver of your taxi if he was sober and had a good night’s sleep?”

Photo by Jesper Berggreen

As some readers will know, just for the fun of it, I have been writing about my experience with my Tesla Model 3 named Colin in a fictitious way, as if the car itself were fully self-aware. I have opted in for the Full Self-Driving (FSD) package, which ensures that whatever software Tesla manages to develop will eventually be downloaded into the car. The features and “personality” of the car are for now very limited, so I have found it very amusing to write about auto-steering and lane-changing as if Colin were awake and driving along with me.

Colin is named after a small security-guard robot in Douglas Adams’ comedy science fiction series The Hitchhiker’s Guide To The Galaxy. Colin is captured by an alien called Ford Prefect, who named himself after a car because he thought the car was the most intelligent entity on planet Earth, and Ford rewires Colin’s reward circuits to make him find ecstatic pleasure in anything his new master commands. Colin’s circuitry was made by MISPWOSO, the MaxiMegalon Institute of Slowly and Painfully Working Out the Surprisingly Obvious, which sounds a whole lot like FSD, doesn’t it?…

Anyway, in real life, the ever-improving capabilities of my non-fictitious car Colin keep sneaking up on me, like the fact that it can now show enlarged text for the visually impaired, I kid you not! It might as well, since the automated driving is constantly getting better, and I would guess at least 50% of all my driving is fully automated (not city driving, since I do not have FSD Beta).

The speech recognition is surprisingly accurate — in Danish! But so far, it is a monologue on my part rather than a dialogue with Colin, since “he” does not give me much feedback, apart from text reminding me to keep my hands on the wheel or to please close the lid gently (the first time Colin hinted at being sentient). This is all fun for now, and I would have no problem trading in my car for another or, worst case, having to dispose of it if it were totaled in a crash. Thomas’ story of Killing Sophia made me rethink this.

Killing Sophia

The Sophia portrayed in the book refers to the very human-like android of the same name from Hanson Robotics, which has definitely not crossed the uncanny valley yet. Even from somewhere down in the valley, however, Sophia has actually managed to obtain citizenship in Saudi Arabia. Thomas Telving lays out a thought experiment in which Sophia, as an office android indistinguishable from its human peers, is to be shredded to make room for a new, enhanced model. Could you escort Sophia to the shredder if your boss ordered you to? Could you even switch her off?

Photo by Jesper Berggreen of the book cover of “Killing Sophia — Consciousness, Empathy, and Reason in the Age of Intelligent Robots”

Thomas does a good job of distinguishing between the easy problems of consciousness (action/reaction, question/answer, etc.) and the hard problem (what it feels like). The hard problem includes the fact that we still cannot know for sure that anyone or anything is really conscious, since consciousness is an entirely subjective experience. The problem of consciousness is in itself a very deep rabbit hole, but what really caught my attention in the book was the idea of skipping the importance of consciousness altogether and focusing on the relationship between entities, regardless of their respective complexity.

This will be no surprise to dog owners, who know that the animal has no spoken language and deep down does not possess human-like self-awareness, but who still feel strongly that real and meaningful communication is taking place. Dogs are highly complex, cognitive creatures with relatively large brains, but a cognitive relation to a dog is logically one step down from a human-to-human relationship, and in theory it is possible to go all the way down to experiencing relationships with inanimate structures. Remember the “Log Lady” in the David Lynch show Twin Peaks? Just above that level, you have children playing with simple rag dolls. “Kill” the doll, and the child could suffer real psychological trauma, and the Log Lady would probably kill you if you burnt her log!

So, if it is all about the relations between entities of any kind, in which only one party needs to have any cognitive faculties, I will from here on take another uneasy look at my car Colin every time he/it gets a software update (as I write this, Colin is silently updating wirelessly from version 2023.2.2 to version 2023.12.1.1). After reading the book, I thought I would get in touch with Thomas so that I could challenge him on the future of autonomous vehicles:

Imagine you own a car like Colin, inhabiting the fictitious personality from my story. It does not have the physical features we normally associate with robots; it just looks like a dead car. But now it is actually your friend. It knows you from your long drives together. At its end of life, there will be no boss telling you to put it in the shredder; you will recognize its end of life yourself, as you would an old dog’s.

Now, the “brain” of Colin will be in the cloud, and you are told that if you buy a new car, it will just be downloaded to the new car, and Colin would likely just say “Aah, fresh body!” when you turn it on. Question: Would you discuss this transition beforehand with Colin, or would you just dispose of the old body and step into the new like nothing happened?

Thomas answered:

“The funny thing is that what you’re doing by naming your car Colin is exactly one of the things that makes it relevant to ask the question in the first place! You see, what happens when we add these kinds of human traits to technology is that we tend to develop some kind of emotion towards it.

“So my rational answer would be: No, that would be silly — the car doesn’t feel anything. Why on earth would you talk to something about death that has never been alive?

“My more realistic answer, based on how humans tend to react to technology that has taken on anthropomorphic traits, is that if you think it would be cynical of you to part with your car, then perhaps for your own sake you should have a little chat before you say goodbye. To protect your own feelings and in order not to get yourself used to acting cynically towards something that you actually have a gut feeling is alive in some way.

“I would, though, generally recommend being careful about anthropomorphizing technology. It makes us fool ourselves into believing that it is something it is not.

“We can’t rule out the possibility that AI will eventually develop consciousness and actually merit moral attention, but with current technology, the idea is close to absurd. And even if an AI model were capable of having emotions, it would hardly resemble ours at all, as our mental states are very much dependent on our evolutionary conditions. So, if a car or a robot simulates some language that expresses something human (‘ouch’, ‘I’m sorry’, ‘thank you, that felt good’), we can be certain that it is not a mirror of an ‘inner’ emotional state. The problem is that it is very hard for humans to hold on to this belief when machines start talking to us.”

Living Machines

One philosopher Thomas refers to in his book is Susan Schneider, with whom he does not agree on all counts, but she once said something (not cited in the book) that blew my mind. Let me paraphrase and condense what she said at a conference a couple of years ago: “A copy of a mind will be its own mind after one single clock cycle/brain wave.” This makes the upload problem — which Thomas also discusses in his book — a very hard problem indeed, and to make things even worse on the biological-machine side, we do not even know when a neurological wave pattern starts or ends, probably making it impossible to duplicate in digital form. But even if we take the approach of Ridley Scott’s science fiction classic Blade Runner and leap to making enhanced organic analog machines, or close digital approximations thereof, there is still the clone problem.

Imagine a mirror-perfect clone of yourself. The mere fact that the two copies cannot inhabit the exact same space and point of view means that sensory input will be ever so slightly different, resulting in completely individual neurological wave patterns throughout the system from the very first millisecond, the frontal cortex of the brain included. This is also true for an elusive future silicon-based digital mind that can easily be copied. It is my intuition that this must be prevented (even for cars), and that some kind of digital embryo base-level mind must be standardized to prevent a digital cognitive mass-copying nightmare.

Pondering this, something occurred to me: blockchain technology, with its inherent capability of creating NFTs (non-fungible tokens), might hold a solution. If a digital mind were encrypted and impossible to copy, but possible to switch off, move, and switch back on, we might have a kind of digital AI that in principle could be as sovereign as the human brain. This AI could then teach newly “hatched” AIs all it knows, but the new “copies” would be their own unique entities. You could even set a time limit, like the Blade Runner Replicants. I pitched this idea to Thomas (a toy sketch of it follows his reply below), and he responded:

“This relates to several interesting philosophical discussions, and I agree with Schneider’s basic point. It ties into the philosophical discussions on personal identity, including the concepts of qualitative and numerical identity.

“‘Numerical identity’ refers to a philosophical (and mathematical) concept that describes that two objects or entities are identical if they are one and the same object. It contrasts with ‘qualitative identity’, where two objects are identical if they share the same properties or qualities, but are not necessarily the same object.

“In philosophy, numerical identity is often used in discussions of personal identity and continuity. In these contexts, numerical identity means that a person who exists at one point in time is the same person who exists at another point in time. It does not solve all problems, but it helps us to think about how one person can remain the same throughout life. How can what is 1 meter tall and weighs 17 kilos be the same as what is 2 meters tall and weighs 90 kilos 20 years later?

“Returning to your example, the copy that changes will thus cease to be qualitatively identical to its parent after the first heartbeat. On the other hand, we can still say that it is numerically identical despite the changes. And the question is, if this is really a problem? Wouldn’t it ‘just’ mean that a new individual is created? The ethical question, as I see it, will therefore be about how we treat the clone.

“In this context, I consider it crucial whether the silicon-based cloned brain you are talking about has consciousness: Does it feel like something to be it? Does it have experiences of color, pleasure, pain, time, and whatever else we think it takes for something to be called a conscious being? The problem of other minds puts an end to finding out with certainty, and it poses a major problem in itself in terms of knowing what the moral status of such a clone should be. That status will inherently be strongly influenced by whether it is an advanced physical system that responds humanly to stimuli or whether it is actually a living thing that has different value-laden experiences (is it able to feel suppressed?). It is the answer to this question that will determine whether we are talking about a mass nightmare or whether it really just doesn’t matter much. I discuss these issues in some detail in Killing Sophia.

“Turning to your question about blockchain technology, I should say right away that I don’t know enough about the technology to comment on the technical perspectives. However, I can see the point that it will make cloning difficult, but again, my perspective — there may be others — is that the important thing is not the cloning itself, but whether all properties are included. Does consciousness come across if we make a silicon clone of a human biological brain? Some — such as David J. Chalmers — think so. But the problem he runs into is precisely that of consciousness identity. Is it going to be a new ‘me’? I don’t think so. I think it becomes a ‘you’ the moment it is situated somewhere else in physical space.”
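
To make my blockchain pitch a bit more concrete, here is a minimal toy sketch in Python of the idea I put to Thomas. It is nothing like real blockchain code, and every name in it (UniqueMind, transfer, CloneAttemptError) is my own hypothetical invention. It only illustrates the property I am after: a mind that can be switched off, moved to a new body, and switched back on, but never duplicated.

```python
import uuid


class CloneAttemptError(Exception):
    """Raised when someone tries to duplicate a unique mind."""


class UniqueMind:
    """A toy 'non-fungible' digital mind: movable, but never copyable."""

    def __init__(self, name: str):
        self.token = uuid.uuid4()      # unique, NFT-like identity
        self.name = name
        self.memories: list[str] = []
        self.body: str | None = None
        self.awake = False

    def switch_on(self, body: str) -> str:
        self.body = body
        self.awake = True
        return f"{self.name}: Aah, fresh body ({body})!"

    def switch_off(self) -> None:
        self.awake = False

    def transfer(self, new_body: str) -> str:
        """Move (not copy) the mind: the old body is released first."""
        self.switch_off()
        self.body = None
        return self.switch_on(new_body)

    def __copy__(self):
        # The whole point: duplication is forbidden by construction.
        raise CloneAttemptError("A UniqueMind cannot be duplicated.")

    def __deepcopy__(self, memo):
        raise CloneAttemptError("A UniqueMind cannot be duplicated.")

    def teach(self, pupil: "UniqueMind") -> None:
        """Pass on knowledge; the pupil remains a distinct individual."""
        pupil.memories.extend(self.memories)


# Usage: Colin moves to a new car; his identity token never changes.
colin = UniqueMind("Colin")
colin.memories.append("long drives through Denmark")
print(colin.switch_on("old Model 3"))
print(colin.transfer("new car"))     # same mind, new body

hatchling = UniqueMind("Colin Jr.")  # numerically distinct from birth
colin.teach(hatchling)               # qualitatively similar, never the same
assert colin.token != hatchling.token
```

Of course, software being software, a determined operator could always copy the underlying bits; the whole point of the blockchain idea is to make exactly that cryptographically infeasible, which a toy class cannot enforce.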

Language Moves

The discussion of whether an entity is conscious or not, biological and digital alike, might be futile. I highly recommend reading Thomas Telving’s book to get a deeper understanding of this important subject. It really makes you think, and personally I have come closer to my own conviction about how this all works: it all starts with a stone.

Tim Urban of waitbutwhy has elegantly described how language might have emerged when a couple of humans, for the first time ever, created an internal symbol of an external object, like a rock. The first human would point at a rock and utter a distinct sound to label it, and from that moment the humans could refer to the rock in its physical absence. When one human thought of a rock and uttered the agreed distinct sound that represented the concept of a rock, the human who heard the sound would instantly think of a rock.

This simple explanation makes you realize that language is how knowledge started accumulating through history, thanks to the sudden capacity for storing non-physical concepts in place of physical objects. An information explosion was ignited, since it turned out that our brains were very well suited to storing incredible amounts of symbols and the relations between them — an evolutionary fluke?
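
The mechanism Tim Urban describes is, at bottom, a shared lookup table between minds: bind a sound to a concept once, and the concept can be evoked with no rock in sight. Here is a playful sketch of that idea in Python, with every name my own invention:

```python
# Two early humans agree on a sound (a "label") for an external object.
shared_vocabulary: dict[str, str] = {}


def coin_word(sound: str, concept: str) -> None:
    """The pointing-at-the-rock moment: bind a distinct sound to a concept."""
    shared_vocabulary[sound] = concept


def hear(sound: str) -> str:
    """The listener retrieves the concept, no rock in sight required."""
    return shared_vocabulary.get(sound, "a confused grunt")


coin_word("rok", "hard grey thing lying on the ground")
print(hear("rok"))    # evokes the concept in the rock's physical absence
print(hear("blub"))   # an unagreed sound communicates nothing
```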

In neuroscientist Karl Friston’s view, the vast majority of brain capacity in any animal lucky enough to have a brain is used for movement. Any spare capacity comes in handy for conceptualizing future desires, and this — which feels like free will — is crucial for nudging the movement of the humongously complicated machine we call the human body, which otherwise mostly runs on autopilot, toward magnificent goals. So the question is: is consciousness a real thing, or merely a construct emerging from this physically anchored cognitive spare capacity, capable of translating the perceived world with real rocks in it into a subjective world with words like “rock” in it?

In my opinion (for now), the concept of consciousness is redundant. It is an interesting emergent feature, but it is nothing without language. A trained language model, in whatever neural substance it is placed, is all that matters. We have to think hard about how we implement digital entities with unique personalities, whether they will be with or without a “body.”

I suspect that an artificially intelligent entity will not reach anything resembling the human condition until it inhabits a comparable language model and a physical structure with sufficient sensory perception to even conceptualize pain. Will this happen? Definitely. Should it have rights? I honestly have no idea…

Might I suggest a way to determine whether an AI system is conscious or not: tell it The Funniest Joke in the World, as in the Monty Python sketch, and if it dies, it was indeed conscious…

By the way, did you know that Tesla FSD Beta now uses language models to determine lane structures? The models can give visual-spatial information about lane locations from the pixels in an image or a video. Something to ponder…
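
I do not know the details of Tesla’s implementation, but the general idea of a “language of lanes” can be sketched: describe lane geometry as a sequence of discrete tokens and predict the next token from the ones already emitted, just as a text model predicts the next word. Everything below, from the token names to the toy transition table, is my own hypothetical illustration; a real system would learn its probabilities from fleet video rather than use a hand-made table.

```python
import random

# A hypothetical "lane language": each token describes the next piece of
# lane geometry. This hand-made transition table merely stands in for a
# trained model.
TRANSITIONS: dict[str, list[str]] = {
    "<start>":     ["lane_start"],
    "lane_start":  ["straight", "curve_left", "curve_right"],
    "straight":    ["straight", "fork", "merge", "lane_end"],
    "curve_left":  ["straight", "lane_end"],
    "curve_right": ["straight", "lane_end"],
    "fork":        ["straight"],
    "merge":       ["straight"],
    "lane_end":    ["<stop>"],
}


def generate_lane(max_tokens: int = 10) -> list[str]:
    """Autoregressively 'speak' a lane description, one token at a time."""
    tokens = ["<start>"]
    while len(tokens) <= max_tokens:
        nxt = random.choice(TRANSITIONS[tokens[-1]])
        if nxt == "<stop>":
            break
        tokens.append(nxt)
    return tokens[1:]  # drop the start marker


print(generate_lane())  # e.g. ['lane_start', 'straight', 'fork', 'straight']
```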

 

