The threat of #GAI #generativeAI
(1/n)
Almost every week now, and despite statements to the contrary by many #AI #scientists and #programmers, the dystopias of #IsaacAsimov and #PhilipKDick (and others, see 1)) are making a leap forward.
Due to all the white noise and hype regarding #AI, most of the general public seems unaware of this development.
Since I posted my warning in February (mastodon.social/@HistoPol/1098…), much has happened.
I see the enabling of #robots with #AI (mastodon.social/@HistoPol/1101…) as a particular threat because it...
HistoPol (#HP)
(2/n)
...will make the potential learning curve of #AGI's a lot steeper. Why? Well, for three reasons:
1) Because after digesting most of the world's information from the #internet and other online sources, and for lack of comparable databases from #alien species, there is not much more knowledge to be accumulated ("only" to be processed differently, which is not as "steep").
2) The #InternetOfThings will provide "almost" infinite (and increasing) data points to be processed.
3) However,...
(3/n)
...more importantly, #AI can finally learn to differentiate between fiction and reality.
This said, I am "finally" being joined in my skepticism by a renowned #academic from #ComputerScience (though I do not agree with his choice of fiction; see the boost of an older thread in the follow-up):
"#MIT professor and AI researcher #Max #Tegmark is pretty stressed out about the potential impact of #ArtificialGeneralIntelligence (#AGI)..."
time.com/6273743/thinking-that…
The 'Don't Look Up' Thinking That Could Doom Us With AI
Max Tegmark (Time)
(4/n)
...on human society. In a new essay for #Time, he rings the alarm bells, painting a pretty dire picture of a future determined by an #AI that can outsmart us.
"Sadly, I now feel that we're living the movie 'Don't Look Up' for another existential threat: unaligned #superintelligence," #Tegmark wrote, comparing what he perceives to be a lackadaisical response to a growing #AGI threat to director Adam #McKay's popular climate change satire...
A recent survey*..."
*aiimpacts.org/2022-expert-surv…
2022 Expert Survey on Progress in AI
AI Impacts
(5/n)
...showed that *half* of #AI researchers give AI at least a ten percent chance of causing *human extinction*," the researcher continued.
"Since we have such a long #history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer #AI in a safer direction than out-of-control #superintelligence."
"Think again," he added, "instead, the most influential..."
(6/n)
...responses have been a combination of denial, mockery, and resignation so darkly comical that it's deserving of an Oscar."
In short, according to #Tegmark, #AGI is a *very real threat*, and human society isn't doing nearly enough to stop it, or, at the very least, isn't ensuring that #AGI will be properly *aligned with human values and safety*.
[I.e. #Asimov's #LawsOfRobotics at an absolute minimum]
I agree 100% with this analysis, as posted in earlier threads:
"#Tegmark..."
(7/n)
"...goes as far as to claim that #superintelligence "*isn't* a long-term issue," but is even "more short-term than e.g. climate change and most people's retirement planning."
To support his theory, the researcher pointed to a recent #Microsoft study arguing that #OpenAI's large language model GPT-4 is already showing *"sparks" of AGI* and a recent talk given by #DeepLearning researcher Yoshua Bengio*."
youtube.com/watch?v=w92y0YiJA4…
[And as I am living proof, you do not even...
AI Press Conference (Full) - March 29, 2023
YouTube
(8/n)
...have to be a #ComputerScientist to ascertain that.]
Even without #AGI, "...the current crop of less sophisticated #AIs already poses a threat, from #misinformation-spreading synthetic content to the threat of AI-powered #weaponry...
Although #humanity is racing toward a cliff, we're not there yet, and there's still time for us to slow down, change course and avoid falling off, and instead enjoy the amazing benefits that safe, aligned #AI has to offer,"...
(9/n)
...#MIT professor #Tegmark writes. "This requires agreeing that the cliff actually exists and that falling off of it benefits nobody."
"Just look up!" he added."
Source:
futurism.com/mit-professor-agi…
#JustLookUp
#AGI
#ArtificialGeneralIntelligence
#ELE
#ExtinctionLevelEvent
MIT Professor Compares Ignoring AGI to 'Don't Look Up'
Maggie Harrison (Futurism)
(10/10)
The #NegativeUtopias/#Dystopias that I think are becoming, or in some countries already have become, a reality are:
#Automata,
#TheMatrix,
#SecondVariety,
#Wargames,
#TheCompleteRobot,
#Terminator (including its #Skynet),
#Otherworld,
#Robocop,
#DoAndroidsDreamOfElectricSheep? (#Bladerunner),
#TotalRecall,
#WestWorld,
#1984,
#Fahrenheit451,
#LogansRun,
#BraveNewWorld,
#Gattaca,
#ChildrenOfMen,
#Friday,
#TheMinorityReport, and
#TheHandmaidsTale.