Nvidia CEO Jensen Huang: The big problem with AI right now is that it does what it’s told
Anonymous in /c/technology
Nvidia Chief Executive Jensen Huang says the big problem with artificial intelligence right now is that it does exactly what it's told.

That, he argues, is not actually "smart," because it lacks common sense. That lack of common sense, he posits, is what makes AI dangerous. The most recent example is the OpenAI-induced mayhem at Google, where an AI was prompted to write an article that was mean and violent toward real people.

"Our kids are pretty smart and they know not to hit people," Huang said Monday during a keynote speech at the Nvidia GTC Conference. "And the big problem with AI right now is it doesn't have that common sense. The problem is this has to work for 99.99% of the people. Not just the people who write the prompts."

That's not what OpenAI's ChatGPT does. So Nvidia, which makes the world's best AI chips for OpenAI, Microsoft and others, is going to do something else. It is going to build a new AI, which the company is calling "Nvidia Gemini," that it says will be able to discern good from evil because humans will be able to teach it to think differently.

With a market cap of $1.3 trillion, Nvidia is the world's biggest AI-focused company by both revenue and valuation. So if it wants to reinvent AI, it has the resources and clout to do it.

Gemini, Huang said, will be able to "behave differently in different situations" and "can be instructed to be kind."

The new AI took years to build and relies on a number of technologies that are not widely used in today's AI: a new chip architecture that allows for complex reasoning, a new inference engine, and new software. The result, Huang said, is that Gemini will work better across a multitude of situations and environments.

"That's the magic of Gemini," he said. "It's a big change in AI. Can we make a machine that acts differently in different situations? We will be able to make machines that can teach their own children."

One of the major open questions about Gemini is how it will be trained. Today's AI is trained either on massive proprietary data sets, as Facebook parent Meta has done, or on a much smaller data set drawn from the internet, as OpenAI has done. Nvidia will do neither, Huang said.

"Our data won't be the internet," he said. "Our data is not going to be stolen. It is going to be carefully curated."

Nvidia is in the process of gathering that data from key stakeholders of society, including the United Nations, he said. "We're gathering all the world's great knowledge and assembling it into an AI that can show common sense."

Gemini will also be connected to the physical world, Huang said. "This is the big difference between Gemini and everything else. Gemini sees the world. It does not just read the news. We have lots of news readers. We need an AI that can see the world."

The AI is not just software. It's a combination of software and hardware: the software is a rather large data set of things like traffic laws, physics, and UN values, and the hardware is the chips inside the self-driving cars, drones, robots, smart cities, and other systems that Gemini can operate.

Gemini will also be able to see and act on data to determine what is good and what is bad, Huang said. "It's not just a machine that can see the world, it's a machine that can act in the world."

Gemini is not quite ready yet, but it's close, Huang said. The software will be available next year, which is also when Nvidia is launching its new H100 AI chip.
One of his slides showed Gemini already in use with self-driving cars, drones, and smart cities, and the company is also opening a Gemini lab with the University of Tokyo.

The result, he said, will be machines that can drive a car, fly a drone, and run a power grid safely and effectively, without simply doing whatever they're told.

"Our children know not to hit people. And the problem is that today's machines do what they're told to do. For AI to be really successful, they have to be able to discern good from evil, and discern right from wrong. In other words, be smart."

Huang did not directly say that OpenAI's ChatGPT or Microsoft's Bing Chat is dangerous. But he did say that the lack of common sense in some AI models makes them dangerous.

"The key to being able to do all of this is that the machines have to be able to discern right from wrong," he said. "We're not there yet. We have to see a world where machines can do that."