
Blake Lemoine, Google, and searching for souls in the algorithm


It wasn’t science that convinced Google engineer Blake Lemoine that one of the company’s AIs was sentient. Lemoine, who is also an ordained Christian mystic priest, said the AI’s comments about religion, along with his “personal, spiritual beliefs,” helped persuade him that the technology had thoughts, feelings, and a soul.

“I’m a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt,” Lemoine said in a recent tweet. “Who am I to tell God where he can and can’t put souls?”

Lemoine could be wrong—at least from a scientific standpoint. Prominent AI researchers, as well as Google, say that LaMDA, the conversational language model Lemoine worked on at the company, is powerful and advanced enough to provide compelling answers to probing questions without actually understanding what it’s saying. Google suspended Lemoine after the engineer hired a lawyer for LaMDA and began talking to the House Judiciary Committee about the company’s practices. Lemoine claims Google discriminated against him because of his religious beliefs.

Still, Lemoine’s beliefs have sparked heated debate, and they serve as a stark reminder that as AI becomes more advanced, people will come up with all sorts of far-out ideas about what the technology is doing and what it means to them.

“Because it’s a machine, we’re not inclined to say, ‘This is natural,’” Scott Midson, a liberal arts professor at the University of Manchester who studies theology and posthumanism, told Recode. “We pretty much skip that and go straight to the supernatural, to magic and religion.”

It’s worth pointing out that Lemoine isn’t the first Silicon Valley figure to make claims about artificial intelligence that, at least on their face, sound religious. Renowned computer scientist and futurist Ray Kurzweil has long promoted the “singularity,” the point at which artificial intelligence will surpass humans and humans can eventually merge with the technology. Anthony Levandowski, a co-founder of Google’s self-driving car company Waymo, founded Way of the Future, a church devoted entirely to artificial intelligence, in 2015 (the church disbanded in 2020). Even some practitioners of more traditional faiths have begun incorporating artificial intelligence, including robots that hand out blessings and advice.

Optimistically, some people may find real comfort and wisdom in the answers AI provides. Religious ideas could also guide the development of artificial intelligence, and perhaps make the technology more ethical. But at the same time, viewing AI as anything other than a human-created technology raises real concerns.

I recently spoke with Midson about these concerns. He told me that we risk not only glorifying AI and overlooking its real flaws, but also buying into Silicon Valley’s efforts to hype up a technology that is far less sophisticated than it seems. This interview has been edited for clarity and length.

Rebecca Heilweil

Let’s start with the big story out of Google from a few weeks ago. How common is it for people with religious views to believe that AI or technology has a soul, or that it’s more than just technology?

Scott Midson

While the story sounds really surprising—the idea of religion and technology coming together—the early history of these machines actually makes religiously motivated readings of computers and machines more commonplace.

If we go back to the Middle Ages, to medieval times, there were automata: devices that, basically, moved on their own. One automaton in particular, a mechanical monk, was designed specifically to encourage people to reflect on the intricacy of God’s creation. Its movements were meant to inspire religious awe. At the time, the world was seen as a complex mechanism, and God as the clockmaker who designed it.

Fast-forward from that mechanical monk to a different kind of mechanical monk: recently, a German church in Hesse and Nassau built BlessU-2 to commemorate the 500th anniversary of the Reformation. BlessU-2 is basically a glorified ATM that dispenses blessings, moves its arms, and has this big, religious, ritualistic air. There were many different reactions to it. One elderly woman in particular said the blessing she received from the robot was genuinely meaningful. It was a special moment that mattered to her, and she said, “Well, actually, there’s something going on here that I can’t explain.”

Rebecca Heilweil

What other similar claims have come out of Silicon Valley and the tech world?

Scott Midson

For some people, especially in Silicon Valley, there’s a lot of hype and money to be had in grand claims like “my AI is conscious.” It gets a lot of attention. It captures people’s imaginations precisely because religion often deals with what lies beyond explanation. It’s that fascination with the supernatural.

There are plenty of people who will willingly fan the flames of these conversations to keep the hype going. One of the things I think can be very dangerous is hype that goes unchecked.

Rebecca Heilweil

Every once in a while, I talk to Alexa or Siri and ask big life questions. For example, if you ask Siri whether God is real, the bot replies: “It’s all a mystery to me.” In another recent example, a reporter asked GPT-3, a language model created by the artificial intelligence research lab OpenAI, questions about Judaism to see how well it could answer them. Sometimes the answers from these machines seem really hollow, but sometimes they seem really insightful. Why is that?

Scott Midson

Joseph Weizenbaum designed the world’s first chatbot, Eliza. Weizenbaum ran some experiments with Eliza, which was just a basic chatbot, a piece of language processing software. Eliza was designed to mimic a Rogerian psychotherapist—basically, your stereotypical nondirective counselor. Weizenbaum didn’t tell participants they would be talking to a machine; they were told they would interact with a therapist through a computer. People would say something like, “I’m feeling sad about my family,” and Eliza would latch onto the word “family.” It grabs part of the sentence and throws it back, almost as a question. That works because it’s what we expect from therapists; we don’t expect them to reveal much of themselves. That reflective screen means the computer doesn’t need to understand what it’s saying to convince us it’s doing a therapist’s job.

The Recode reporter had a brief chat with the recreated Eliza chatbot available on the web.
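The keyword-and-reflection trick Midson describes can be sketched in a few lines. This is a hypothetical, heavily simplified illustration of the Eliza pattern, not Weizenbaum’s original script: match a keyword pattern, swap first-person words for second-person ones, and mirror the fragment back as a question.

```python
import re

# First-person -> second-person swaps for the reflected fragment.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs; {0} receives the reflected fragment.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Grab part of the sentence and throw it back as a question."""
    text = statement.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # non-committal default, another Eliza staple

print(respond("I feel sad about my family"))
# "Why do you feel sad about your family?"
```

Nothing in this sketch understands the sentence; the illusion of a listener comes entirely from pattern matching and our own expectations, which is exactly Midson’s point.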

We now have much more sophisticated AI software, software that contextualizes the words in a sentence. Google’s LaMDA technology is very advanced. It’s not just latching onto a single word in a sentence; it can situate words within different kinds of structures and settings based on context. So it gives you the impression that it knows what it’s talking about. One of the key sticking points in conversations around chatbots is how much the interlocutor, the machine we’re talking to, really understands what is being said.

Rebecca Heilweil

Are there any examples of bots that don’t provide particularly good answers?

Scott Midson

There are a lot of caveats about what these machines can and can’t do. A lot depends on how well they convince you that they understand things. Noel Sharkey is a prominent theorist in this area. He really doesn’t like robots that lead you to believe they can do more than they actually can; he calls them “show robots.” One of his prime examples of a show robot is Sophia, the robot that was granted honorary citizenship by Saudi Arabia. It’s not just a basic chatbot, because the chatbot sits inside a robot body. You can clearly tell that Sophia is a robot, because the back of its head is a transparent shell where you can see all the wires and such.

For Sharkey, it’s all just an illusion, just smoke and mirrors. Sophia doesn’t come close to warranting personhood status by any stretch of the imagination. It doesn’t understand what it’s saying. It has no hopes, dreams, or feelings, nothing that would make it human-like. The fact that it deceives people is the problem. It relies on a lot of canned phrases. It sometimes glitches, or says problematic, jaw-dropping things. But even when the machinery is at its most transparent, we still buy into the illusion to some degree.

A lot of the time, robots have this “puppet on a string” quality. They don’t do as much independently as we assume. Take the Pepper robot, which took part in a government evidence hearing on AI in the House of Lords. It sounded as though Pepper was speaking for itself, saying all of it. But everything was pre-programmed, and that wasn’t made completely transparent to everyone. Again, it’s a misrepresentation. I think the biggest problem is managing the hype.

Rebecca Heilweil

This reminds me of the scene in The Wizard of Oz where the real wizard is finally revealed. How does the conversation around whether AI is sentient relate to other important discussions about AI happening right now?

Scott Midson

Microsoft’s Tay was another chatbot, released on Twitter, with a machine learning algorithm that learned from its interactions with people on the platform. The trouble is, Tay was gamed by trolls and had to be taken down within 16 hours because it had become misogynistic, homophobic, and racist.

How these robots, sentient or not, are made in our image is another huge ethical question. Many algorithms are trained on datasets that are entirely human: they reflect our history and our interactions, and so they are inherently biased. There have been demonstrations of racially biased algorithms.

The question of sentience? I can see it as a red herring, but really, it’s also about how we make machines in our image and what we do with that image.

Rebecca Heilweil

Two prominent AI ethics researchers, Timnit Gebru and Margaret Mitchell, raised this concern before they were fired by Google: by treating discussions of sentience as something separate from AI itself, we may be ignoring the fact that AI is created by humans.

Scott Midson

We almost see these machines as detached in some ways, even a bit god-like. Back to that black box: there’s something we don’t understand, and, a bit like religion, it’s awe-inspiring, it has incredible potential. If you watch all the ads for these technologies, they promise to save us. But if we see AI in this detached, god-like way, what does that encourage us to do?

This story was first published in the Recode newsletter.



