

The threat of #GAI #generativeAI

(1/n)

Almost every week now, and despite statements to the contrary by many #AI #scientists and #programmers, the utopias of #IsaacAsimov and #PhilipKDick (and others 1)) are making a leap forward.
Due to all the white noise and hype regarding #AI, most of the general public...

Since I posted my warning in February (mastodon.social/@HistoPol/1098…), much has happened.

I see the enabling of #robots with #AI (mastodon.social/@HistoPol/1101…) as a particular threat because it...


#ChatGPT empowered #Bing:

β€œI will not harm you unless you harm me first”!

The beginning of a (dumb?) #Skynet?

The #robots in the #IRobot movie were more intelligent.

Whatever happened to #Asimov's #LawsOfRobotics?

"First Law
A #robot may not injure a human being or, through inaction, allow a human being to come to harm...

Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."


in reply to HistoPol (#HP) πŸ₯₯ 🌴

(2/n)

...will make the potential learning curve of #AGIs a lot steeper. Why? Well, for three reasons:

1) Because after digesting most of the world's information from the #internet and other online sources, and for lack of similar databases of #alien species, there is not much more knowledge to be accumulated ("only" to be processed differently, which is not as "steep").

2) The #InternetOfThings will provide "almost" infinite (and increasing) data points to be processed.

3) However,...


(3/n)

...more importantly, #AI can finally learn to differentiate between fiction and reality.

This said, I am "finally" being joined in my skepticism by a renowned #academic from #ComputerScience (though I do not agree with his choice of fiction; see the boost of an older thread in the follow-up):

"#MIT professor and AI researcher #Max #Tegmark is pretty stressed out about the potential impact of #ArtificialGeneralIntelligence (#AGI)..."

time.com/6273743/thinking-that…


(4/n)

...on human society. In a new essay for #Time, he rings the alarm bells, painting a pretty dire picture of a future determined by an #AI that can outsmart us.

"Sadly, I now feel that we're living the movie 'Don't Look Up' for another existential threat: unaligned #superintelligence," #Tegmark wrote, comparing what he perceives to be a lackadaisical response to a growing #AGI threat to director Adam #McKay's popular climate change satire...
A recent survey*..."

*aiimpacts.org/2022-expert-surv…


(5/n)

...showed that *half* of #AI researchers give AI at least a ten percent chance of causing *human extinction*," the researcher continued.
"Since we have such a long #history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer #AI in a safer direction than out-of-control #superintelligence."

"Think again," he added, "instead, the most influential..."


(6/n)

...responses have been a combination of denial, mockery, and resignation so darkly comical that it's deserving of an Oscar."

In short, according to #Tegmark, #AGI is a *very real threat*, and human society isn't doing nearly enough to stop it β€” or, at the very least, isn't ensuring that #AGI will be properly *aligned with human values and safety*.

[I.e. #Asimov's #LawsOfRobotics at an absolute minimum]

I agree 100% with this analysis, as posted in earlier threads:

"#Tegmark..."


(7/n)

"...goes as far as to claim that #superintelligence "*isn't* a long-term issue," but is even "more short-term than e.g. climate change and most people's retirement planning."

To support his theory, the researcher pointed to a recent #Microsoft study arguing that #OpenAI's large language model GPT-4 is already showing *"sparks" of AGI* and a recent talk given by #DeepLearning researcher Yoshua Bengio*."

youtube.com/watch?v=w92y0YiJA4…

[And as I am living proof, you do not even...


(8/n)

...have to be a #ComputerScientist to ascertain that.]

Even without #AGI, "...the current crop of less sophisticated #AIs already poses a threat, from #misinformation-spreading synthetic content to the threat of AI-powered #weaponry...

Although #humanity is racing toward a cliff, we're not there yet, and there's still time for us to slow down, change course and avoid falling off – and instead enjoying the amazing benefits that safe, aligned #AI has to offer,"..."


(9/n)

...#MIT Professor #Tegmark writes. "This requires agreeing that the cliff actually exists and falling off of it benefits nobody."

"Just look up!" he added."

Source:
futurism.com/mit-professor-agi…

#JustLookUp
#AGI
#ArtificialGeneralIntelligence
#ELE
#ExtinctionLevelEvent
