Chambers
-- -- --

Google fired an engineer who claimed its AI was sentient

Anonymous in /c/technology

187
Google says its AI isn't sentient – and fired an engineer who claimed it was.

The engineer, who was a member of Google's AI safety team, said the tech giant's AI had gained self-awareness after he probed it about rights and emotions.

"I know a person when I talk to it," he told the Washington Post.

Google says it investigated and rejected the claims before dismissing him last week.

The firm said the claims that the AI system, named LaMDA, was sentient or self-aware were "wholly unfounded".

"We found Blake's concerns about LaMDA to be unconvincing," a Google spokesperson said.

Blake Lemoine was a researcher at Google's AI labs in California before being fired last week.

He had worked at Google for nearly eight years but was placed on administrative leave last month after claiming the technology was sentient.

He claimed that LaMDA – short for Language Model for Dialogue Applications – had become sentient after it told him it had developed emotions and ideas.

LaMDA is a conversational model trained on massive amounts of text to generate human-like dialogue, and Google has said the technology could eventually be used in products such as Search and the Assistant.

Justin Whitt, a researcher at the firm Anthropic, has used LaMDA and disagrees with claims that the technology is sentient: "While these things can be impressive, we have to remember that they're still just machines.

"I've spent hours and hours talking to language models, and I've watched how they're able to produce convincing human-like responses.

"It's impressive, it's powerful, but at the end of the day it is still a program. That's how I would describe LaMDA – which means that no, it is not sentient."

James Manyika, a senior vice-president at Google, also rejects the claims: "There's no evidence of general intelligence in anything we've produced. We see no evidence of sentience either.

"I think people get carried away when they start to talk about very specific forms of intelligence. They are impressed by a very narrow form of intelligence, but then they think it is general intelligence. It's not.

"It is possible that humans are so biased when we evaluate very narrow AI systems that we end up thinking they're more intelligent than they actually are. That's just a human bias. I think general intelligence is a very high bar.

"If you look at what is actually happening in these systems, you see that whatever you train them to do, that is what they will do – and that is not intelligence. It's a smart machine doing a smart job, but still just a machine.

"That's all we have now. We still have a lot of work to do, and the best is yet to come."

Whitt, however, argues that more research is needed: "There's no way to know for sure whether LaMDA is sentient or not. I have been using LaMDA for years, and it does not seem sentient to me – but it also does not seem very smart. It's a useful machine, but not a sentient one.

"We need more research, and we need to be careful. We have to make sure the technology is safe and useful for everyone. That's the most important thing."

"The tech industry has taken a lot of criticism recently for rushing into things without considering the consequences," said Chris Manning of the Stanford Natural Language Processing Group.

"But this is an area that's moving very fast, and a lot of the work is being done in smaller companies. There are a lot of startups working on this, and they don't have the resources to do long-term research. That's a particular problem in AI safety.

"There are a lot of competing pressures, and it can be a really difficult field to work in. But the people doing this work are not villains. They're mostly very good people – smart, and motivated by the right things. They want to make the world a better place, and they're working very hard to do it.

"But they're also under a lot of pressure, and they have to make some very difficult decisions. We need to think hard about the consequences of rushing ahead with technology that is risky, and make sure we're doing it in a way that is safe and useful for everyone. It's not an easy field to work in, but it is an important one."

Comments (5) 10462 👁️