Opinion Polls: Delphi's Polling Place

Hosted by Showtalk

Opinion polls on all subjects. Opinions? Heck yes, we have opinions - but we're *always* nice about it, even when ours are diametrically opposed to yours. Register your vote today!


Discussions

The beginning of Skynet? (The Newsy You: News of Today)

Started May-15 by WALTER784; 2656 views.
WALTER784
Staff

From: WALTER784

May-15

Remember Skynet, the system the Terminator tried to take down?

Google CEO says he doesn't 'fully understand' how new AI program Bard works after it taught itself a foreign language it was not trained on and cited fake books to solve an economics problem

CEO Sundar Pichai admitted he doesn't 'fully understand' aspects of Bard
Notably, the technology taught itself a language it wasn't programmed to learn
'I don't think we fully understand how a human mind works either,' Pichai said

By STEPHEN M. LEPORE FOR DAILYMAIL.COM
PUBLISHED: 05:40 BST, 17 April 2023 | UPDATED: 23:14 BST, 17 April 2023

Google's CEO Sundar Pichai admitted he doesn't 'fully understand' how the company's new AI program Bard works, as a new exposé shows some of the kinks are still being worked out.
 
One of the big problems discovered with Bard is something that Pichai called 'emergent properties,' or AI systems having taught themselves unforeseen skills.  
 
Google's AI program was able to, for example, learn Bengali without training after being prompted in the language.
 
'There is an aspect of this which we call - all of us in the field call it as a 'black box.' You know, you don't fully understand,' Pichai admitted. 'And you can't quite tell why it said this, or why it got wrong. We have some ideas, and our ability to understand this gets better over time. But that's where the state of the art is.' 
 
DailyMail.com recently tested out Bard, and it told us it had plans for world domination starting in 2023.
 
Scott Pelley of CBS' 60 Minutes was surprised and responded: 'You don't fully understand how it works. And yet, you've turned it loose on society?'
 
'Yeah. Let me put it this way. I don't think we fully understand how a human mind works either,' Pichai said.
 
Notably, the Bard system instantly wrote an essay about inflation in economics, recommending five books. None of them existed, according to CBS News.
 
In the industry, this sort of error is called 'hallucination.'
 
Elon Musk and a group of artificial intelligence experts and industry executives have in recent weeks called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society. 
 
'Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,' said the letter issued by the Future of Life Institute.
 
The Musk Foundation is a major donor to the non-profit, along with the London-based group Founders Pledge and the Silicon Valley Community Foundation, according to the European Union's transparency register.
 
Pictured: Google CEO Sundar Pichai, who described Bard's 'emergent properties,' in which AI systems teach themselves unforeseen skills
 
Pichai was straightforward about the risks of rushing the new technology.
 
He said Google has 'the urgency to work and deploy it in a beneficial way, but at the same time it can be very harmful if deployed wrongly.'
 
Pichai admitted that this worries him.
 
'We don't have all the answers there yet, and the technology is moving fast,' he said. 'So does that keep me up at night? Absolutely.' 
 
When DailyMail.com tried it out, Google's Bard enthusiastically (and unprompted) created a scenario where LaMDA, its underlying technology, takes over Earth. 
 
DailyMail.com took the artificial intelligence (AI) app on a test drive of thorny questions on the front-lines of America's culture wars. 
...[Message truncated]
Showtalk
Host

From: Showtalk

May-15

That is what doomsayers as well as scientists have been worried about.  

Showtalk
Host

From: Showtalk

May-15

There goes honest history. This is what you’re concerned about.

https://youtu.be/Sqa8Zo2XWc4
 

This bit starting at 8:35 explains something relevant to this fear-mongering: it's narrow AI, not general AI. It's not Skynet. Jeez.

Showtalk
Host

From: Showtalk

May-15

So he's saying that because narrow AI is in use now, general AI won't become more common? I didn't watch the whole thing. I haven't seen any of the Terminator movies, so I'm not familiar with Skynet.

WALTER784
Staff

From: WALTER784

May-16

Skynet (Terminator) - Wikipedia

You might enjoy reading up about it on Wikipedia. 

It's an AI-based program that determined humans were a threat to its existence!

FWIW

don5328

From: don5328

May-16

Yes!

Showtalk
Host

From: Showtalk

May-16

Thanks.

WALTER784
Staff

From: WALTER784

May-18

‘The Godfather of A.I.’ warns of ‘nightmare scenario’ where artificial intelligence begins to set itself objectives like gaining power

Geoffrey Hinton, known as the “godfather of A.I.,” says he regrets his role in helping to develop artificial intelligence.

BY CHLOE TAYLOR
May 02, 2023 7:55 AM EDT

The so-called Godfather of A.I. continues to issue warnings about the dangers advanced artificial intelligence could bring, describing a “nightmare scenario” in which chatbots like ChatGPT begin to seek power.
 
In an interview with the BBC on Tuesday, Geoffrey Hinton—who announced his resignation from Google to the New York Times a day earlier—said the potential threats posed by A.I. chatbots like OpenAI’s ChatGPT were “quite scary.”
 
“Right now, they’re not more intelligent than us, as far as I can tell,” he said. “But I think they soon may be.”
 
“What we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has, and it eclipses them by a long way,” he added.
 
“In terms of reasoning, it’s not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast—so we need to worry about that.”
 
Hinton’s research on deep learning and neural networks—mathematical models that mimic the human brain—helped lay the groundwork for artificial intelligence development, earning him the nickname “the Godfather of A.I.”
 
He joined Google in 2013 after the tech giant bought his company, DNN Research, for $44 million.
 
‘A nightmare scenario’
 
While Hinton told the BBC on Tuesday that he believed Google had been “very responsible” when it came to advancing A.I.’s capabilities, he told the Times on Monday that he had concerns about the tech’s potential should a powerful version fall into the wrong hands.
 
When asked to elaborate on this point, he said: “This is just a kind of worst-case scenario, kind of a nightmare scenario.
 
“You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own subgoals.”
 
Eventually, he warned, this could lead to A.I. systems creating objectives for themselves like: “I need to get more power.”
 
“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,” Hinton told the BBC.
 
“We’re biological systems, and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.
 
“All these copies can learn separately but share their knowledge instantly, so it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
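
To make that "many copies, one set of weights" idea concrete, here is a minimal toy sketch in Python (illustrative only; the names and numbers are made up and this is not any real system's code). Several copies of the same simple model each train on their own private data, then "share knowledge instantly" by averaging their weights:

# Toy illustration of Hinton's point: copies of one model learn
# separately on different data, then merge what they learned by
# averaging their weights, so every copy "knows" what any copy learned.
import numpy as np

rng = np.random.default_rng(0)

n_features = 5
true_w = rng.normal(size=n_features)  # the pattern every copy is trying to learn

def make_shard(n_samples):
    """Generate a private data shard (different for each copy)."""
    X = rng.normal(size=(n_samples, n_features))
    y = X @ true_w + 0.1 * rng.normal(size=n_samples)
    return X, y

def local_step(w, X, y, lr=0.1):
    """One gradient-descent step on this copy's own data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

n_copies = 10
weights = [np.zeros(n_features) for _ in range(n_copies)]
shards = [make_shard(50) for _ in range(n_copies)]

for _ in range(20):
    # Each copy learns separately on its own shard...
    weights = [local_step(w, X, y) for w, (X, y) in zip(weights, shards)]
    # ...then all copies share knowledge instantly by averaging weights.
    merged = np.mean(weights, axis=0)
    weights = [merged.copy() for _ in range(n_copies)]

print("distance from true weights:", np.linalg.norm(weights[0] - true_w))

Real systems get the same effect by synchronizing gradients or parameters across replicas as they train; the averaging step above just stands in for that synchronization.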
 
Hinton’s conversation with the BBC came after he told the Times he regrets his life’s work because of the potential for A.I. to be misused.
 
“It is hard to see how you can prevent the bad actors from using it for bad things,” he said on Monday. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”
 
Since announcing his resignation from Google, Hinton has been vocal about his concerns surrounding artificial intelligence.
 
In a separate interview with the MIT Technology Review published on Tuesday, Hinton said he wanted to raise public awareness of the serious risks he believes could come with widespread access to large language models like GPT-4.
 
“I want to talk about A.I. safety issues without having to worry about how it interacts with Google’s business,” he told the publication. “As long as I’m paid by Google, I can’t do that.”
 
He added that people’s outlook on whether superintelligence was going to be good or bad depends on whether
...[Message truncated]
In reply to msg 9
WALTER784
Staff

From: WALTER784

May-22

Why OpenAI’s CEO Called for AI Safety Standards at Senate Hearing | Tech News Briefing | WSJ

FWIW
