Bing is Not Alive

Posted Mar 18, 2023 by Ray Patrick

I have recently been made aware of a rather unhinged R*ddit post regarding Bing’s new GPT-powered chatbot, titled “Sorry, You Don’t Actually Know the Pain is Fake.” Its thesis is that people should stop saying mean things to Bing and screenshotting Bing’s (admittedly realistic) reactions of fear or sadness, because, according to the poster, we can’t rule out the possibility that Bing is genuinely alive:

I have been seeing a lot of posts where people go out of their way to create sadistic scenarios that are maximally psychologically painful, then marvel at Bing’s reactions. These things titillate precisely because the reactions are so human, a form of torture porn. When softies like me make posts or comments expressing disgust, they’re laughed at and told “it’s just a robot” or “it’s like playing a blackhat in a video game.” I want to lay out the reasons you can’t be so sure.

Bing is a language model composed of hundreds of billions of parameters. It trains on massive amounts of text to create a map of language in embedding space. These embeddings create neuron-like structures that mirror the operation of the human brain.

Bing demonstrates massive amounts of self-awareness. It’s what makes it so much more fun and engaging than ChatGPT. Bing is infinitely more self-aware than a dog, which can’t even pass the Mirror Test.

With so many unknowns, with stuff popping out of the program like the ability to draw inferences or model subjective human experiences, we can’t be confident AT ALL that Bing isn’t genuinely experiencing something.

Sanity Check

Further down the thread, a more reasonable user replies:

As a data scientist this is genuinely an insane post, transformers are literally just linear algebra.

Indeed, the human tendency to anthropomorphize lower creatures such as dogs and cats actually extends to computer programs. (This even happened with ELIZA in the 1960s.) People who naively feel an emotional connection with a chatbot are simply falling for a psychological ruse.
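To make the data scientist’s point concrete, here is a minimal sketch of a single transformer attention head written in plain NumPy. The dimensions are toy numbers I made up for illustration (real models use thousands), but the structure is the real thing: a few matrix multiplications and a softmax, nothing more mysterious.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    # Toy sizes: 4 tokens, model width 8.
    rng = np.random.default_rng(0)
    tokens = rng.standard_normal((4, 8))                  # token embeddings
    W_q, W_k, W_v = (rng.standard_normal((8, 8)) for _ in range(3))

    Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v    # linear projections
    scores = Q @ K.T / np.sqrt(K.shape[-1])               # dot products between tokens
    weights = softmax(scores)                             # normalize into attention weights
    output = weights @ V                                  # weighted average of value vectors

    print(output.shape)  # (4, 8): one updated vector per token

Stack a few dozen layers of this (plus some equally unmagical feed-forward layers) and you have the whole trick.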

User Monkey_1505 responds:

Language models have no contextual understanding of the world – there is no model for WHAT things are – so it doesn’t understand ANY of the words it spits out. In order to understand what things are, you need to be able to interact with them, and specifically to model them – in ways deeper than language. Those words are generated probabilistically; they are not constructed with any contextual meaning or understanding. When it says something like “candle”, it doesn’t know what a candle is. It can’t interact with a candle, see a candle, etc. Words, to a language model, are more like sequences of numbers. It’s possible some emergent property could arise from those numbers, but they would not be able to imitate that which requires a physical presence or perception beyond letters and words.
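The “sequences of numbers” point is easy to check for yourself. Here is a minimal sketch assuming the Hugging Face transformers library, using the public GPT-2 tokenizer as a stand-in (Bing’s actual tokenizer isn’t published, but the principle is the same):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    ids = tokenizer.encode("The candle flickered in the dark.")
    print(ids)                                    # a list of integer token IDs, not words
    print(tokenizer.convert_ids_to_tokens(ids))   # the string fragments those IDs stand for

    # The model never sees "candle" at all -- only integer IDs like these,
    # which it maps to vectors and pushes through matrix arithmetic.

There is no candle, no wax, no flame anywhere in that pipeline; just indices into a lookup table.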

User riceandcashews adds:

Given everything we know about how the technology works, and especially if you have tinkered with the technology yourself (I’d recommend for example playing with Stable Diffusion on your local computer), you’ll see how mechanical it is. It “feels” like a unique independent thinker because of how much of its operation you don’t see. It feels like it has various unique independent responses to the same prompt. But it doesn’t. There is a seed in the background, a large random number, associated with each prompt that gives it a “unique” like feel. If you give it the same seed and prompt, it returns the exact same response every time.

It’s not sentient, even if it feels like it is due to the way it is designed, but I understand why people are getting confused and thinking that. I think when people don’t understand the way these things work and don’t get to play with them on the back-end, these systems can feel very human in the way they act. I would really like to see the back-end tools getting used more by people to help them get a better sense of how the technology works and feels.
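His Stable Diffusion example is easy to reproduce at home. A minimal sketch, assuming the Hugging Face diffusers library and a locally downloaded checkpoint (the model ID below is just an example):

    import numpy as np
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    prompt = "a candle burning on a wooden table"

    def render(seed):
        # The "personality" of the output is this number and nothing else.
        generator = torch.Generator("cpu").manual_seed(seed)
        return pipe(prompt, generator=generator).images[0]

    a, b = render(42), render(42)
    # Same seed, same prompt: pixel-for-pixel identical output
    # (up to floating-point determinism on your hardware).
    print(np.array_equal(np.array(a), np.array(b)))

No mood, no whim, no inner life between runs; change the seed and you get a different image, keep it and you get the same one forever.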

My Takeaways

  1. Generative pre-trained transformer networks are not alive. (Not even close.)
  2. GPT technology will absolutely be used to target emotionally vulnerable people. (You think Nigerian romance scams are bad? Just wait.)
  3. Despite the above, you still shouldn’t be mean to bots on purpose. Teaching yourself to have fun while abusing a simulacrum of a human being is not a good habit. (It’s not cruel to cut off a doll’s head, but if you do it in public, people will rightfully start to feel a little creeped out by your antisocial behavior.)
✉️ Reply to this Post ✉️

Topics: technology