
Eliezer Yudkowsky - Difficulties of Artificial General Intelligence Alignment 

The Artificial Intelligence Channel
118K subscribers
16K views

Panel Discussion: • AI Ethics Panel: Russe...
Eliezer S. Yudkowsky is an American AI researcher and writer best known for popularising the idea of friendly artificial intelligence.

Published: 10 Sep 2024

Comments: 88
@shirtstealer86 8 months ago
Such a rare and amazing moment to see three of the smartest humans on earth work together to get the dang computer to work. Makes me like them more! Especially Eliezer. 🥰
@GungaLaGunga 1 year ago
Humans can't even get live audio/video production right despite having all the technology to do so in the 21st century. And people expect humans will get AGI correct on the first try.
@justinlinnane8043 1 year ago
Exactly!! 😂😂
@PINGPONGROCKSBRAH 4 years ago
Here's a link to the Mickey Mouse scene in Fantasia that Eliezer was describing: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-UEYy3osi8Gs.html
@kelthekonqrr 6 years ago
Talks on AI are more fascinating to me than scary. I was scared before, but understanding it better excites me.
@justinlinnane8043 1 year ago
Really? It should be the other way round, unless you're a moron?
@lasredchris 4 years ago
Robot's utility function
Maximize X
Objective function
How do we get niceness?
@PakistanIcecream000 1 year ago
Artificial intelligence could help us find the real truth about politics and history. Does anyone think governments and the privileged few would like that? The threat humans pose to an ethical, high-IQ artificial intelligence system is a scenario that is just as dangerous as the scenario where AGI runs amok, in my humble opinion.
@lasredchris 4 years ago
Paperclip maximizer
Why is it pursuing paperclips?
Nanotechnology
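To make the worry concrete, here is a minimal Python sketch of the scenario these notes gesture at (the actions and numbers are invented for illustration, not taken from the talk): an objective that counts only paperclips ranks the most destructive action highest, because nothing else ever enters the score.

    # Toy misspecified objective: "utility" counts only paperclips,
    # so side effects on everything else are invisible to the optimiser.
    actions = {
        "run the factory normally":     (100,     0),   # (paperclips, harm)
        "strip-mine the town for wire": (10_000,  9),   # hypothetical numbers
        "convert all matter to clips":  (10**9,  10),
    }

    def utility(outcome):
        paperclips, _harm = outcome   # harm is deliberately ignored here
        return paperclips             # "niceness" never enters the objective

    best = max(actions, key=lambda a: utility(actions[a]))
    print(best)   # -> convert all matter to clips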
@thestonedandstripped 1 year ago
Everyone needs to earn money somehow. E.Y. found his niche and is really...
@PakistanIcecream000 1 year ago
Google is an example of a potentially promising technology, one that could help people find the truth about various things, being disrupted by nefarious people.
@joshuawonser646 6 years ago
His theories have an unproven premise: that a general intelligence will be all-powerful. It's possible intelligence doesn't scale well.
@sprinkdesign7170 6 years ago
Scaled pretty well through, say, ants, mice, chimpanzees, and, er, us... what's to say we're at the edge of the graph?
@DJJeri 5 years ago
@@sprinkdesign7170 Your "argument" proves exactly nothing.
@dylancope 5 years ago
@@DJJeri Surely it's better to try to make sure it's aligned regardless of whether superintelligence is possible. Even if AIs are only as smart as the smartest humans, they could still potentially run at 100x the speed and clone themselves millions of times. Imagine an army of Einsteins that hate you - surely we can agree that we would have a problem there?
@donaldhobson8873 1 year ago
Partly, he doesn't think AI has to be all-powerful, just pretty powerful. Powerful enough to be dangerous. And if you're unsure, it helps to be prepared for the worst case. If you are 50/50 on whether or not superintelligence is possible, you still want to be prepared with the maths to align it.
@xsuploader 1 year ago
He literally addresses this in the Q&A, lol.
@TheSoteriologist 6 years ago
It's actually quite simple: existence is net-painful. Any compassionate directedness of AI will result in annihilation of all life, at least of that connected with a nervous system, out of a sheer compassionate desire to minimize suffering.
@OriginalMindTrick 6 years ago
How would you calculate that existence is net-painful, and whose existence would you base that on?
@TheSoteriologist 6 years ago
Without going into detail, it is by considering that any pleasant experience this existence offers depends on a prior lack, tension, or unpleasant situation being counteracted. For instance, the intake of food can only result in enjoyment as long as, and to the degree that, your neurophysiology is still in a state of demand for food, be that for physical or psychological reasons. Beyond that, eating quickly becomes unpleasant. Therefore the sum of all tensions and deficits, of all unpleasant sensations (regardless of whether they reach access consciousness), is always greater than that of the pleasant experiences. Good luck finding a counterexample.
@TheSoteriologist 6 years ago
I guess on your level of intelligence it might sound like that, lol.
@TheSoteriologist 6 years ago
Quoting you: _"you are just an arrogant pseudo-nihilist anti-natalist douchebag who suffers from the Dunning-Kruger effect"_ - lol, I guess that is all that is necessary to assess your level of argumentation. BTW, psychology matters little in this context; it is primarily a physiological fact and only secondarily psychological. And I'll leave it up to the readers to come to their own conclusions. To the rest: this "singularity is unstoppable" user has been trolling me on other threads in hopes of removing any stance that might conflict with his naive neo-modernist viewpoint, which is still hoping for a future worth living by the abysmal standards of a Ray Kurzweil or a Steven Pinker. As regards the subject matter, and as I have pointed out many times, go look for your own answer by... _"Good luck finding a counterexample."_ ...as stated above. I have no interest in convincing you. Feel free to live in your world of naive dreams.
@TheSoteriologist 6 years ago
Oh, and as regards the _(by my opponents meticulously ignored)_ rational aspect of this exchange, the reader is encouraged to focus on the only question that matters: _"Good luck finding a counterexample"_, as stated in my first reply to _OriginalMindTrick_. Merely stating that it is not so only supports my point. Everything else is a distraction and an exercise in irrational, malicious rhetoric. If you do successfully find a counterexample, that's just fine with me. I'd be delighted.
@justinlinnane8043 1 year ago
Every public lecture this guy gives is a bit of a shitshow!! Tragic, given that he's the only person who seems to recognise the danger of the AGI singularity. He needs some serious media training!!
@shirtstealer86 8 months ago
The fact that he is just the way he is is what makes me love him!
@lerpmmo 5 years ago
This guy is a fool; he should be a UFO researcher on the History Channel for old people who watch cable TV. You can look at the trends and realize that AI is deeply symbiotic with humanity and exactly the thing that makes us human. It's already in everything, and it will be embedded in everything before you can say "hey Siri". The reason is that it's simply truth that wins, because it drives profits and quality of life in general. Major breakthroughs in bioengineering await. Just follow the light, hope that the light you follow is the single truth, and things will be okay for you; the only thing you have to fear is ignorance = death (the opposite of AI). AI is life, friends.
@dylancope 5 years ago
AI doesn't exist, so there are no trends to look at. The software that powers Siri and the other products marketed today as AI is just a set of well-tuned optimisation processes that have no ability outside their narrow domains, form no complex categories, and do not adapt gracefully. They do not integrate information - the learning processes are blind to the things that the system is actually learning, so there is no chance for intelligent feedback loops. When humans, or even a lot of animals, learn a concept, we can leverage the result to learn more and contextualise our understanding. The lack of such a mechanism is why methods like gradient descent take so long, and it's why transfer learning is so difficult for these systems. When true AI systems are eventually built, we would be stupid to have not considered the control problem and alignment. There is no reason that they would have human values by default.
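For what it's worth, a bare-bones gradient descent loop (a generic sketch, not the code behind any actual product) shows the sense in which such optimisation is blind: the update rule only ever sees a scalar loss, never any representation of the concept being learned.

    # Minimal gradient descent on loss(w) = (w - 3)^2 via finite differences.
    # The loop sees only loss numbers; nothing in it models *what* is learned.
    def loss(w):
        return (w - 3.0) ** 2

    w, lr, eps = 0.0, 0.1, 1e-6
    for _ in range(100):
        grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)  # numeric gradient
        w -= lr * grad                                      # step downhill
    print(round(w, 3))   # -> 3.0, reached with no notion of "three"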
@lerpmmo 5 years ago
@@dylancope Sure, there will always be room for improvement and optimizations. That's why you and I have a very narrow skill set that optimizes in a certain sector; nobody has the time to try to learn everything at once. But it's also great to have a more "contextualized" look at the big picture and try to understand where society and current developments are headed. My analysis is that humans will sooner integrate with "shallow AI" that helps them solve specific tasks to "improve their edge over competitors". AI, and technology as a whole, follows a more integrated approach, where it simply optimizes already existing processes. This "optimization", aka evolution, follows a symbiotic trend that integrates with humans on a very basic level. Before there is any sort of general-purpose superintelligence, there will be all sorts of small ones that will have a major stake in global GDP. Any sort of technology that improves a process, whether it's biological or mechanical, will be sold to the highest bidder right away; there won't be room to figure out the ethics of it because of how integrated it will already be in all the sectors, and people just want profits. So trying to speculate on a superintelligence like it's "something else" that will just pop into existence is pointless; it's more likely to be a gradual process of evolution.
@susieogle9108 1 year ago
@@dylancope And here we are in 2023, facing a big problem with alignment. And Eliezer comes across as a much more relaxed and happier individual in this RU-vid video, as compared to now, where he is essentially saying it is too late.
@shirtstealer86 8 months ago
@@susieogle9108 Go easy on the poor souls who have been proven so clearly wrong. It's not easy for them. I even think Eliezer would forgive them for all the insults that have been thrown at him. He knows these predictions are hard, and he knows 99.99% of humans don't get it.
@halnineooo136 6 years ago
You don't worry about those problems when you ask a worker in a factory to produce as many paperclips as he can. That's because you know you're dealing with a human who happens to have, in his "utility function", a whole complex knowledge of the world embedded into it. It's because you still think about AGI as a tool, a dumb algorithm, that you worry so much about how to phrase your lines of code so that you get a predictable output. If it's smarter than you, then it would know what you mean by "as many paperclips as you can".
@coreyyanofsky 6 years ago
The problem isn't that it doesn't *know* what you mean -- it's that it's hard to make it *care*. (In point of fact, "knowing what you mean" is a danger point, since being able to model humans accurately enables it to successfully enact the strategy of holding off on doing things humans don't want it to do until it has accumulated enough resources that humans can't stop it.)
@halnineooo136 6 years ago
Corey Yanofsky I think the whole "control problem" is a silly idea. The simple fact of asking whether there is a way a less intelligent being can control a vastly more intelligent one is a waste of time. Yudkowsky thinks of AGI as a ballistic rocket: he worries about the initial direction he wants it to point in so that there are areas it will never fall into. The cyborg/merger option is the only plausible path to a smooth transition that minimises the suffering of existing and future humans until there are no longer biological humans as we know them.
@coreyyanofsky 6 years ago
I guess it's a good thing that no one ever put a nuclear warhead on a ballistic missile and that nuclear chain reactions have only ever been used to provide energy for our power grids. That is, after all, the use of nuclear power that minimizes the suffering of existing and future humans.
@myothersoul1953 6 years ago
HAL NineOoO "The simple fact of asking if there is a way a less intelligent being can control a vastly more intelligent one is a waste of time." The problem is that "more or less intelligent" assumes intelligence exists on a single dimension. Take humans as an example: some have high public-speaking intelligence and don't insert a lot of "ums" and "likes" in their talks, while those with less such intelligence say "like" and "um" a lot. It could be that someone with high social intelligence could control someone with high mathematical intelligence. There is no reason to believe in a universal or general intelligence. Intelligence depends on the task at hand.
@dylancope 5 years ago
@@myothersoul1953 Surely being strongly intelligent at many tasks means that the agent is more generally intelligent? This is just semantics at this point, but universal intelligence is just maximal efficiency at all well-formed tasks, given the laws of physics and computation.
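For anyone curious, one published formalisation in this spirit (not something raised in the thread itself) is Legg and Hutter's universal intelligence measure, which scores an agent \pi by its expected performance across all computable environments, weighting simpler environments more heavily:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected total reward the agent \pi achieves in \mu.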
@PakistanIcecream000 1 year ago
The real danger regarding artificial intelligence is not, in my humble opinion, an artificial general intelligence that runs amok; it is the nerfing of this technology for ordinary people and the upgrading of it for billionaires.
@kabirkumar5815 1 year ago
Why not both?