
Superintelligent AI End to Humanity in 7 years? Prof. Olle Häggström explains AI Risks 

EVolution Show
2.4K subscribers
2.2K views
Published: 1 Oct 2024

Comments: 26
@JohnHolling · 6 months ago
When people talk about alignment I wonder if they consider that humans have not achieved alignment with each other in our history. My goals, aspirations, values, and opinions are not aligned with most other humans. How can we talk about aligning AI with humans when humans are completely diverse and not aligned? Even seemingly simple concepts like "do no harm" are not universal. Some people believe rock music is harmful and some think unaliving someone in the name of their religion is not harmful but beneficial.
@EvolutionShowNr1 · 6 months ago
Good point! My understanding of prof. Olle Häggström's AI warnings is that we have to find a solution/prepare for AI Alignment as much as possible, to a much higher degree than we do today, to decrease the existential AI risks to Humanity. That does not mean it will be enough, or, as you allude to, that we will succeed with AI Alignment, as there are so many different interests etc. But right now almost everybody, including tech companies, governments and the vast majority of the AI research community, is running full speed ahead with an AI experiment that is potentially VERY dangerous for Humanity. Part of the problem is a lack of awareness among the general public, and even among many companies and governments, about AI risks and how they can be avoided. Cheers, Johan
@liti1554 · 5 months ago
I have the exact same thought. Alignment to what, exactly? To Joe Biden's ethics? To Bezos's ethics, Musk's ethics? Even without considering the exponential logical/technical issues (paperclips and so on), the catastrophic surveillance/data-hungry corporate system that we live in today, which treats the poor, people from the global south and workers as something to ignore, manipulate or extort, will gradually make use of AI and merge (what does that mean, really?) with AI.
@jondor654 · 5 months ago
There may be a corollary here in the alignment problem and its ramifications between instances of AI, i.e. AI-to-AI alignment.
@JohnHolling · 5 months ago
@jondor654 That's a really good point. We tend to talk about AI as though it's a single entity, but there will be many different AIs, and nothing says they all have to think alike.
@gregoryabbot420 · 6 months ago
Just because you CAN do something doesn't necessarily mean you SHOULD do it.
@EvolutionShowNr1 · 6 months ago
Exactly, very good point! Cheers, Johan
@jondor654 · 5 months ago
Regarding goals, we are getting quite adept at scoring own goals as the game progresses. A revision of our assumed goals is a prerequisite.
@MrMick560 · 5 months ago
Let's face it, we are basically fucked. From whichever way you look at it, they are going to be better than us at everything, and in such different ways we can't even imagine. There will be no point in trying to stop it; we will be beaten at every turn. Our only hope is that they may be benevolent, but I have an instinct that says no. P.S. Sorry to be negative, but I'm only trying to be honest.
@jondor654 · 5 months ago
Speculation here, but as time passes, how many AI researchers will use their models as a sounding board for affirmation, in an inverted form of RLHF? Acronyms, anyone? T for transference is admitted.
@jondor654 · 5 months ago
15:04 This seems to suggest that language is essentially a catalyst for change. What might fall out from such a take?
@EvolutionShowNr1 · 5 months ago
Language is clearly a catalyst for change, or rather an enabler to gather and spread information to be used to reach specific goals, i.e. achieve intelligence at various levels. The more languages you skillfully master, the easier you learn the next language, which in turn can be an enabler to achieve higher levels of intelligence. But in terms of a powerful AI we are talking about a potential intelligence level on or beyond collective human intelligence levels. Right now certain language models are already close to or beyond such levels at certain tasks. The problem is that most look at AIs unable to "tie a shoe" as a resemblance of their overall capacity, missing that AIs already achieve things AI experts thought impossible, or far in the future, only a few years ago... Cheers, Johan
@jondor654 · 5 months ago
As a probably decentralised paradigm, in the case of an AGI with a potential for dissociative doppelgangers, let us reflect on how we deal with such an adversary.
@jondor654 · 5 months ago
37:50 This boundary between the upside and downside is a very tenuous perception. Unless there is an explicit constraint on the "creative" propensities of an otherwise liberally enabled model, the problem space will be unbounded.
@mr.e7379 · 5 months ago
Too greedy with ads!! Can't support you!!
@jondor654 · 5 months ago
18:03 In what charter are our goals defined or elucidated?
@SoniSingh-fl8cf · 5 months ago
Very informative.
@goodleshoes · 5 months ago
Good convo