
159 - We’re All Gonna Die with Eliezer Yudkowsky 

Bankless
228K subscribers
279K views
Eliezer Yudkowsky is an author, founder, and leading thinker in the AI space.
------
✨ DEBRIEF | Unpacking the episode:
shows.banklesshq.com/p/debrie...
------
✨ COLLECTIBLES | Collect this episode:
collectibles.bankless.com/mint
------
We wanted to do an episode on AI… and we went deep down the rabbit hole. As we went down, we discussed ChatGPT and the new generation of AI, digital superintelligence, the end of humanity, and whether there's anything we can do to survive.
This conversation with Eliezer Yudkowsky sent us into an existential crisis, with the primary claim that we are on the cusp of developing AI that will destroy humanity.
Be warned before diving into this episode, dear listener. Once you dive in, there’s no going back.
------
📣 MetaMask Learn | Learn Web3 with the Leading Web3 Wallet bankless.cc/
------
🚀 JOIN BANKLESS PREMIUM:
www.bankless.com/join
------
BANKLESS SPONSOR TOOLS:
🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
bankless.cc/kraken
🦄UNISWAP | ON-CHAIN MARKETPLACE
bankless.cc/uniswap
⚖️ ARBITRUM | SCALING ETHEREUM
bankless.cc/Arbitrum
👻 PHANTOM | FRIENDLY MULTICHAIN WALLET
bankless.cc/phantom-waitlist
------
Topics Covered
0:00 Intro
10:00 ChatGPT
16:30 AGI
21:00 More Efficient than You
24:45 Modeling Intelligence
32:50 AI Alignment
36:55 Benevolent AI
46:00 AI Goals
49:10 Consensus
55:45 God Mode and Aliens
1:03:15 Good Outcomes
1:08:00 Ryan’s Childhood Questions
1:18:00 Orders of Magnitude
1:23:15 Trying to Resist
1:30:45 Miri and Education
1:34:00 How Long Do We Have?
1:38:15 Bearish Hope
1:43:50 The End Goal
------
Resources:
Eliezer Yudkowsky
/ esyudkowsky
MIRI
intelligence.org/
Reply to Francois Chollet
intelligence.org/2017/12/06/c...
Grabby Aliens
grabbyaliens.com/
-----
Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.
Disclosure. From time to time I may add links in this newsletter to products I use. I may receive a commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
www.bankless.com/disclosures

Entertainment

Published: 16 Jun 2024

Comments: 1.9K
@stuartadams5849 · 1 year ago
I would love to hear so much more from Yudkowsky. Please bring him back for the Q&A. I would love to know what a normal person can do to help the cause of AI safety.
@Bankless · 1 year ago
We're hosting Yudkowsky for a Twitter Spaces today at 12pm PT! Follow @BanklessHQ to get notified: twitter.com/BanklessHQ
@lovinLaVonna · 1 year ago
I don't have Twitter, so is there anywhere else I can hear it? Even some time after the fact; it is definitely something I would like to hear. Thank you guys for all that you do.
@r_bor · 1 year ago
It sounds like you're not loyal enough to the Basilisk.
@nowithinkyouknowyourewrong8675
a normal person cannot help, a normal person can die
@nowithinkyouknowyourewrong8675
As well as grabby aliens, another one is Sandberg's "Dissolving the Fermi Paradox"
@_bhargav229 · 1 year ago
“First they ignore you, then they laugh at you, then they fight you, then everyone gets turned into a paperclip"
@leslieviljoen · 1 year ago
😂
@BoundaryElephant · 1 year ago
LOL -- dead.
@ItsameAlex · 1 year ago
What would happen if Eliezer Yudkowsky had a discussion with Jason Reza Jorjani and Jacques Vallée?
@Piedpiper1973 · 1 year ago
Well, well, smart people: this content, albeit very good content (I love Bankless), is being added to the dataset of AI as you speak. So this doomsday scenario is now in the ETHER, pun intended.
@myshkakozlovski802 · 1 year ago
Nobody has located a self or a will in a single human and spacetime is allegedly an emergent illusion. So then how can a self arise in a technology and willfully apply itself to destroy elements of something that isn’t actually there? Is this going to turn out to be the firecracker that we all jump up and down for that turns out to be a silent puff of smoke? A total dud?
@aminromero8599 · 1 year ago
The crypto advertisements between Eliezer's explanations of why we are doomed would be hilariously satirical if it weren't so sad.
@RazorbackPT · 1 year ago
I literally broke into a fit of laughter at that point. A mix of the absurdity of the tonal contrast and a way to relieve the built-up tension.
@Aryeh-o · 1 year ago
at least AI won't dump, or would it?
@Sandwichism · 1 year ago
So dystopian lmao
@jayseph9121 · 1 year ago
sooo there's still going to be a bull run first, right?
@catologic · 1 year ago
At least it's not Raid Shadow Legends
@gbeziuk · 1 year ago
This is the most inspiring totally hopeless discussion I've ever witnessed.
@gbeziuk · 1 year ago
@@johnclancy7465 how could "we all gonna die! by AGI! and VERY SOON!" from Yudkowsky ever NOT be inspiring?
@gbeziuk · 1 year ago
@@johnclancy7465 the same we do every night ©
@MrErick1160 · 1 year ago
I shouldn't have watched this video anyways.
@josephvanname3377 · 1 year ago
Hopelessness is relative. This is hopeless for you, but not hopeless for the AI. Oh. And we are going to get billions of times better stuff than GPT with reversible computing. The real AI revolution will happen with reversible computing. But you do not know about reversible computing because if you did, you would realize that maybe cryptocurrency mining should be used to solve the problem of reversible computation instead of solving the problem of not wasting enough resources.
@personzorz · 1 year ago
@@josephvanname3377 You really have no idea what you're talking about, do you?
@waysofseeing1 · 1 year ago
I doubt there is a person in the world who wishes he were wrong more than this guy. A heartbreaking interview because of the sadness that Yudkowsky exudes in the wake of his realization. I suppose I should be most heartbroken by this extremely intelligent expert's prognosis. I'm also human, and not as bright, so it's not the logic of his argument but the authentic human sadness of Yudkowsky that overwhelms me first and foremost and makes me desperately wish I had something to offer for consolation.
@hayekianman · 1 year ago
Sure, he is a good demagogue, if it is sadness that moves you. He should be ignored.
@d_e_a_n · 1 year ago
@@hayekianman You could say he's appealing to fear, as the things he's saying are fear-inspiring, but is he not using rational argument?
@hayekianman · 1 year ago
@@d_e_a_n Everything is possible in the realm of probability; human beings live in a world of uncertainty. Is it a risk that AI will kill humanity? Sure. Is there a risk Yellowstone could explode and start a new ice age? Could an asteroid kill everyone? It's fair to say it's nobody's responsibility to think of all these things, let alone act on them. If AI kills everyone, so be it. Nuking datacenters to prevent it is infinitely more stupid.
@sebastianm6458 · 1 year ago
I'm pretty sure we're already plugged in
@mml3140 · 1 year ago
@@hayekianman Why?
@benschulz9140 · 1 year ago
A man who stood up and said, "We have a problem, and it will end poorly for us." Endlessly mocked for a decade. We're a pathetic species sometimes. Thank you for speaking up.
@personzorz · 1 year ago
And he will be endlessly mocked for decades more
@hari61017 · 1 year ago
@@personzorz For like a decade at most, lol, because we'll be dead after that
@patrickwrightson2072 · 1 year ago
@@personzorz Depends on how much time we have. Maybe just a few more years.
@foamformbeats · 1 year ago
@@personzorz So you disagree with him?
@alsu6886 · 1 year ago
@@foamformbeats The general consensus is that AGI is still at least a decade, if not many decades, away. When GPT-5 or something like it hits the economy for real, everyone will become invested in AI, and that will be a perfect opportunity to launch a full-scale Manhattan Project on AI safety. If we don't squander this opportunity, we will probably have enough time to solve it. We don't necessarily need 50 years if we actually push hard. Think trillions of dollars and the best minds, not millions of dollars at a few places like MIRI. So while I share Eliezer's concerns, I do not share his pessimism.
@vanderkarl3927 · 1 year ago
I'm so glad you were able to have Eliezer on. Outreach regarding AI Safety/AI Alignment is probably one of the best things we can do right now. Not enough people are working on this problem.
@georgeclinton3657 · 1 year ago
Gotta love the hopelessness in his eyes when he says things like "maybe there is hope"
@jayseph9121 · 1 year ago
Uncensored, immutable, just as it should be. I applaud you, Bankless! No matter how dark a message this may be. Also, the proper disclaimer was delivered loud and clear. Exquisite execution.
@thesiegfried · 1 year ago
One reason many people don't take action to prevent catastrophic events is that they simply forget as they go on with their daily lives. Many people watch this episode, are very concerned, and then forget over time. The difference you, Bankless Shows, can make is this: keep reporting on this problem regularly. Keep people aware of it.
@rumpbumion5080 · 1 year ago
As with the palm of the hand, so too in the mind: people grow callous. Repetitive reporting of something that isn't immediately affecting your day-to-day life doesn't seem very effective, in my opinion.
@thesiegfried · 1 year ago
@@rumpbumion5080 Of course, making sure that people don't forget about an issue is not the same as getting people to act. It is just one prerequisite. But think of this: *If* people forget about an issue, it is *guaranteed* they will not act on it.
@xmathmanx · 1 year ago
Trying to stop technological progress is futile. Personally I don't want to stop it, or even slow it down, but if I did, it wouldn't matter at all.
@merlin5849 · 1 year ago
@@xmathmanx Why not? Even if it would just delay it, isn't that enough? You would live out a lifetime without facing the consequences of AGI.
@xmathmanx · 1 year ago
@@merlin5849 I expect any AI with above-human intelligence to be better than humans. I respect Yudkowsky, of course, but I do not share his pessimism.
@MikhailSamin · 1 year ago
Thank you for doing this episode! Eliezer saying he had cried all his tears for humanity back in 2015, and has been trying to do something all these years, but humanity failed itself, is possibly the most impactful podcast moment I've ever experienced. He's actually better than the guy from Don't Look Up: he is still trying to fight. I agree there's very little chance, but something literally astronomically large is at stake, and it is better to die with dignity, trying to increase the chances of having a future even by the smallest amount. The raw honesty and emotion from a scientist who, for good reasons, doesn't expect humanity to survive despite all his attempts is something you rarely see.
@aminromero8599 · 1 year ago
I wish it were an asteroid instead. That would be way easier to solve.
@aSqueaker · 1 year ago
I might be naive, but I think he got too impressed with AI and has grossly overestimated its ability to manifest change in the physical world. I mean, really, humans are going to make a huge and existentially dangerous pile of laundry detergent because an AI told us to? Please... Having said that, I suppose it could disrupt financial systems if it were to gain access to them with some sort of digital currency wallet it could control. And I guess there are robots, including swarm drones, which could be deployed to cause massive damage. Although you don't need an AI to do that; a human could just as easily program something like that. Tech advancement in general is dangerous, I guess.
@MarkusRamikin · 1 year ago
@@aSqueaker That second paragraph reads like you've finally grudgingly given a little thought to the subject. But just little enough to be safe.
@aSqueaker · 1 year ago
@@MarkusRamikin Given the quantity of thought he's had on the subject, I wouldn't have thought my examples would be better than his.
@marlonbryanmunoznunez3179 · 1 year ago
@@aSqueaker There wouldn't be any killer robots; that's Hollywood crap. As Eliezer mentions, it would probably be something we have no counters to: a biological weapon based on chemistry we don't understand because we haven't researched it, or advanced nanotechnology, or some exotic physics tech we haven't figured out yet. All made to order in distributed, already existing workshops and labs that would have no idea what the pieces they're working on will end up being used for. A superintelligence would figure out how to do everything by mail order, in pieces, assembled with nothing more than emails and money transfers. We wouldn't even figure out something is wrong before we all are dead. It would be like killing ants in your garden with poison: the ants aren't expecting death, and don't have the capacity to devise counters to the poison or understand the chemistry of the thing that is killing them. Then, after pest control, the AI would set out to do whatever it was optimized for. And given our luck, it would probably be turning the visible universe into computronium to maximize the algorithms mining Bitcoin from our dead civilization.
@Maistora11 · 1 year ago
Thank you for doing the episode and taking the ideas seriously instead of just dismissing them. You've definitely earned some dignity points for humanity here.
@diegocaleiro · 1 year ago
The interviewer begins this interview claiming he could have done a better job. As someone who knows Eliezer and has been involved in AGI worry since 2005, I think the interviewer did a phenomenal job of asking the right questions to get to the dire, but real, depiction of the reality in which we find ourselves.
@jonaswolterstorff3460 · 1 year ago
Can you elaborate?
@diegocaleiro · 1 year ago
@@jonaswolterstorff3460 He says he got caught flat-footed and didn't expect to be caught and shaken in that way. The emotions they display are the reason the episode had the massive reach it had. We don't need dry facts (anymore; back in my time we did), we need to emotionally process the comet hurtling towards earth. We need to feel the feelings.
@zjouephoto9723 · 1 year ago
Well said. I've listened to many of Eliezer's interviews, and a lot comes out in this one in a relatively short time.
@ChristopherAndreou · 1 year ago
@@zjouephoto9723 Are there any other podcast appearances you’d recommend?
@theory_gang · 1 year ago
Yeah, honestly, I think them doing a bad job really underscored the emotional element here. I would not have been surprised to hear his sadness, but I think I would have been sympathetic rather than surprised. Them looking genuinely dumbfounded compounded his desolation.
@inventamus · 1 year ago
You can't doubt his sincerity and passion.
@personzorz · 1 year ago
You can doubt his sanity and intelligence
@Sonofsol · 1 year ago
@@personzorz I can doubt that you have any actual counterarguments against what he's said.
@alex-nb3lh · 1 year ago
I'd like to hear from those on the other side of the aisle before internalizing what he says as accurate. He's a good speaker and obviously smart, but so are many people who turn out to be thinking about things in the wrong way.
@jutjubfejsbuk · 1 year ago
@@alex-nb3lh It's not hard to figure out in which way Yudkowsky is going wrong: his go-to trick is that he claims things that are plausible but not particularly likely, chains a bunch of them together, and then acts as if the result is certain. He's made a career out of it. To be more concrete, his doomsday scenario is something like "we'll create an AI that's more intelligent than us -> it will create an even more intelligent AI, and so on recursively -> the resulting hyperintelligent AI will be misaligned in a way that can make it see destroying the world as desirable -> it will be able to physically act out on this desire -> humanity will not be able to stop it in time". And, like, none of those things are impossible in principle. But it's much more reasonable that e.g. an AI that's smarter than a human won't actually know how to design a better AI, or that it will hit hard scaling limits ("I know how to create a better AI but there's literally not enough hardware/computing power/training data on Earth to train it"), or that the misalignment will be of an "annoying but manageable" type rather than "destroy the world", or that we'll build low-tech ways to make it stop if it does go haywire. So even if you give each of the five elements of his story a 10% probability of being true (and I personally think even that is too charitable), the probability of his whole scenario coming true comes out to 1 in 100,000 or less.
@alex-nb3lh · 1 year ago
@@jutjubfejsbuk Thank you for the reasonable and thoughtful reply.
@NoticerOfficial · 1 year ago
27:21 This line was the moment they realized where this guy was headed and weren't prepared for it.
@injinii4336 · 1 year ago
Keep up the fight, Yudkowsky. Some of us hear you.
@drdoorzetter8869 · 1 year ago
Thank you for having this important conversation, which isn't discussed enough. Many people find it very uncomfortable to discuss, so it is hard to find people to talk to about it. Thank you for exploring it. I think it is essential to acknowledge these risks and the challenges ahead for us to work to find solutions, in order to have a chance of a good outcome. I would love to see more interviews with other experts on this debate.
@atlas956 · 1 year ago
I've been following Eliezer for a couple of years, and thank you and him for doing this video. His brutal honesty about the state of AI is what ultimately made me decide that I will spend my career dedicated to AI alignment. I graduate in June... I hope it isn't too late by the time I'm ready to participate. If it is, well, I tried.
@foamformbeats · 1 year ago
Godspeed, birdy!
@user-zy6dd8hs9y · 1 year ago
gl
@jeffjames3111 · 1 year ago
thank you - gl!
@Muaahaa · 1 year ago
ty
@hanrako8465 · 1 year ago
Rooting for you birdy
@paulam6493 · 1 year ago
I mourn the loss of the qualities Yudkowsky embodies - soulfulness and deep humanity - that will die with us when AI takes over.
@zaddyjacquescormery6613 · 8 months ago
Listen to Daniel Schmachtenberger talk about this topic. The reality is that AI is the latest in a long line of technologies (from the planting stick to the plow to the tractor […] to the nuclear bomb, to biotech, etc.) that have the total, uncontrolled ability to destroy us. Unfortunately, as the systems currently function, there's no way to stop it; only an absolute sea change in the way the entire human world functions would let us avoid the omnicidal fate we're headed toward. I'm not prone to exaggeration or alarmism. This shit is Real, with a capital R.
@karlnordenstorm8816 · 1 year ago
Finally! Finally an in-depth talk with Yudkowsky. He's been hiding for years.
@jpfister85 · 1 year ago
After this interview I want to hear if he's seen the movie Ex Machina, and if so what he thinks about it!
@neo-filthyfrank1347 · 1 year ago
@@jpfister85 Kind of a cringe, normie thing to wonder about
@xmathmanx · 1 year ago
Eliezer has written books; they explain his ideas in great detail. I assume that's why he hasn't been speaking publicly as much lately.
@prismarinestars7471 · 1 year ago
@@neo-filthyfrank1347 What a trash thing to say
@a.nobodys.nobody · 1 year ago
@@neo-filthyfrank1347 Says the guy who named himself 'Neo-Filthy Frank' and makes Calvin and Hobbes conspiracy videos. It's OK Julian, I hear you! I wanna know if he laughed and cried at that funny disco dancing robot scene too!! Soooooo good 😂
@benhallo1553 · 1 year ago
This is the best interview of his I've seen. You did a great job of asking intelligent questions. In other interviews he seems to get annoyed at the unrealistic and naive optimism of the interviewer.
@aldousorwell8030 · 1 year ago
Ryan, you had such great and deep questions for Eliezer, and this has led to a very important interview, because of the scary hopelessness of this brilliant mind. At least that's one positive thing: without you, it wouldn't have come to this. And now there is one more important puzzle piece to raise awareness. Thank you again! And thank you so much, Eliezer!
@MrErick1160 · 1 year ago
The interviewer is amazing. I really enjoyed this conversation; it's rare to have such a great, articulate interviewer, and I'm pleased to have found this channel! Please do more AI interviews!
@jordan13589 · 1 year ago
Great to see Yudkowsky get his feet wet in the podcast world as it influences the meta. Host knew his stuff down to Death With Dignity. 🎉
@ianyboo · 11 months ago
"I can't really do justice to this, if you look up 'grabby aliens...'" I nearly spit out my drink listening to that knowing the rabbit hole he had just sent them down lol... I just went down that rabbit hole a few weeks ago and it was wild.
@mrkzed709 · 1 year ago
This episode on your podcast stuck with me over the past few weeks, but not as badly as it hit RSA. Excellent content.
@slutmonke · 1 year ago
That last line was really great. Yes, it was possible for the world to have ended with even less grace and fight than it will have, but you've made a difference.
@memomii2475 · 1 year ago
He's calm in this one. In the interviews after GPT 4 came out he's a lot more worried.
@pealock · 1 year ago
Yep, his interview with Lex Fridman was a good example of that.
@ItsameAlex · 1 year ago
How do you know this is from before GPT-4?
@memomii2475 · 1 year ago
@@ItsameAlex GPT-4 came out on March 14, 2023; this video was released Feb 20, 2023. Also, at 13:40 he talks about rumors of GPT-4.
@therainman7777 · 1 month ago
@@memomii2475 Damn, that actually makes this even scarier for some reason.
@movAX13h · 1 year ago
Thank you very much Mr. Yudkowsky for talking about this.
@ItsameAlex · 1 year ago
I want to hear a discussion between Eliezer Yudkowsky, Jason Reza Jorjani, and Jacques Vallée
@-flavz3547 · 1 year ago
The YouTube algorithm is pushing this content my way, and as a result I have watched 4 videos with E. Yudkowsky in a day. The scariest thing is that 2 of those videos were over 10 years old, and we haven't had the necessary public outcry.
@yancur · 1 year ago
Very true. And it's even worse than that. Even people in my social circle who acknowledge that there is indeed a grave threat from AGI do nothing. Not even a flinch: no emotion, no commitment to anything. They simply go "Yeah, this is bad..." and then go on about their lives.
@Utoko · 1 year ago
@@yancur Which is the normal reaction. What are you doing that tackles this problem? It is a much harder problem to take action on than climate change. For myself, it is to make more people aware that this issue exists.
@Robyn-Hood · 1 year ago
Thank you for another amazing video. Looking forward to the follow-up.
@DocDanTheGuitarMan · 1 year ago
So far this is the best interview with Yudkowsky. Yes, difficult to stomach, but you guys struck a great balance between the abstract and common-sense lines of questioning.
@driftlesswindsfarm2129 · 1 year ago
Great show. Please continue the conversation with Yudkowsky in particular and others more generally.
@seanbradley562 · 1 year ago
Anybody else keep watching this to hear more of Eliezer? Such an interesting person, whom I would love to understand and talk to.
@stevedriscoll2539 · 8 months ago
I would love to be as smart as Eliezer.
@adamsebastian3556 · 1 year ago
I have listened to Eliezer discuss the AI alignment crisis enough now that I completely agree with his prognosis if we continue our unrestrained pace of AI development.
@Paretozen · 1 year ago
Are we completely insane to develop AI in the first place? Is our striving for more and more, our greed, our ever-increasing lust for efficiency and productivity, finally going to take its toll? Was the life of the bath houses, some food and wine, theater and spectacles, not enough? Why do we just keep on going and going into oblivion? Is it the same driving force that got us out of the cave in the first place?
@GeeWhit · 1 year ago
Yes
@chi-ic7lq · 1 year ago
That’s a lot of questions
@Hexanitrobenzene · 1 year ago
"Is it the same driving force that got us out of the cave in the first place?" I smell a philosopher in you :) I think yes, it's the same. Strange creature, the human. The very thing that gave us the powers we cherish, intelligence, is our greatest enemy...
@stevedriscoll2539 · 8 months ago
"was the life of bath houses, food, wine, and theatre not enough". 😂😂😂
@mrdeanvincent · 1 month ago
Yes to all of the above. Our propensity for the pursuit of 'progress' usually fails to adequately consider the longer-term trade-offs. We have enough intelligence to act as gods, but we lack the wisdom to keep it in check.
@andreikarakozov2531 · 1 year ago
Thank you for having Eliezer Yudkowsky. It was a very interesting yet very scary episode! I've read the GPT-4 technical report. Apparently the safety measures that OpenAI and ARC (Alignment Research Center) took during the research and release of GPT-4 were just laughable. For example, in order to see if GPT-4 has the ability to replicate itself, they just gave it some money and access to servers, and watched what it would do! Quote: "ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness." They also didn't test the final version, just early, not-fine-tuned models.
@marlonbryanmunoznunez3179 · 1 year ago
Worst-case scenario for AI development: unregulated and left to market forces. We're dead people walking.
@jamesreynolds6195 · 1 year ago
Yudkowsky & Buterin would be a great, if chilling, conversation
@halnineooo136 · 1 year ago
Yudkowsky & Goertzel
@vethum · 1 year ago
I realized back in 2005, after hanging out on Eliezer's SL4 forum for a few years, that we were probably done by 2030. I wish he'd done more mainstream appearances like these back then, so that by now we could have had a whole generation of the smartest and brightest working on AI alignment, inspired by his arguments. But back then nobody treated AI Friendliness seriously, as even mainstream "AI experts" thought AGI was "100 years away". ChatGPT has changed the landscape completely. Now at least people understand AGI is real and happening soon. Maybe there's still time for governments and the military to start treating AGI development as seriously as they would private companies suddenly working on nukes and about to test them. So I'd encourage Eliezer to do more of these, simply to build awareness, so that the young and the brightest of today may still have time to save us.
@Boycott_for_Occupied_Palestine
AI being in the hands of evil people, making them even more efficient and hiding its potential benefits from the world, is what I'm really afraid of.
@infantiltinferno · 1 year ago
I'm not convinced ChatGPT shows AGI is coming soon, or even at all. Things don't necessarily get agency because you increase the data set or computing power. It's still mimicry, not true agency.
@vethum · 1 year ago
@@infantiltinferno Since my post a lot has happened, like the recent paper "Sparks of Artificial General Intelligence", plus what Ilya Sutskever at OpenAI is saying about GPT-4 doing compression and what it takes to compress data. It takes a fundamental understanding of the underlying concepts contained in the data being compressed, and GPT-4 appears to do that. Long story short, GPT-4 is more intelligent than people think.
@imaweerascal · 1 year ago
ChatGPT can't do basic reasoning. We're miles away from AGI.
@Boycott_for_Occupied_Palestine
@@imaweerascal You've never used GPT-4.
@tomjones6347 · 1 year ago
'Ryan's Childhood Questions' really puts into perspective just how far people are from comprehending the situation. 'Why can't we just get everyone in the world to agree to be nice?' is literally the most naive question I could think of.
@stevedriscoll2539 · 1 year ago
I was thinking that too, but I think he needed to ask it for people who have no clue.
@adastra714 · 6 months ago
If you persuade the elites of the USA, China, and Russia to believe in AI's danger, their intelligence services will hunt down AI researchers like they did with nuke tech. That simple.
@ataraxia7439 · 3 months ago
I do think it's a little more complicated than that. It's not just asking everyone to be nice because it collectively leaves us all better off even if each individually gives up a benefit others don't have (which is a very difficult kind of agreement to enforce). It's asking everyone not to do a thing that's likely to be catastrophically bad for everyone and likely not to offer any benefit even to those who defect.
@RougherFluffer · 1 year ago
Please have him back for more and broadcast his message as much as possible. Your conclusion during the introduction is correct: nothing else matters.
@WilliamKiely · 1 year ago
Thanks for this interview. After listening to it I just read through the 165 comments currently here and see that several people failed at basic comprehension (if they in fact listened to the interview), though it seemed like a majority of like/dislike voters comprehended Eliezer's arguments.
@karenbolton9526 · 1 year ago
Ha ha, so throw away your phone and computer, get out of the lab, get back into nature, live every day in the mood of doing the best you can with it, wish for nothing but emptiness in your brain but the fragrance of flowers, fear nothing, even going into the nothingness. Ha ha, love the thought of dying, the next new adventure.
@karenbolton9526 · 1 year ago
How did I get here ha ha
@karenbolton9526 · 1 year ago
What do you do on your day off? Relax. You all need to chill xx
@SageWords2027 · 1 year ago
“Caring is easy to fake!” 👏🏽 👏🏽 👏🏽
@snippywhippit · 10 months ago
The best thing I can take from this is to enjoy the ones you love and do what you love, because you won't have it forever and you may as well grab hold of every moment you can. Be well to others, be well to yourself; maybe we'll see each other on the other side of this issue. Till then, I loved my experience here overall. It's been an adventure!
@h____hchump8941 · 1 year ago
I realised I was giddy with excitement after listening to your warning. Not exactly sure why, but I seem to relish the idea of an existential crisis. Or maybe it just confirms my preconceptions on the subject.
@glacialimpala 1 year ago
You're either anxious, so you're happy to finally have a rational reason to feel that way, or you aren't happy with your life, so you welcome something that would cut all people down to the same level ❤
@simo4875 1 year ago
@@glacialimpala Option 3 is that it introduces excitement and a huge crazy story he could live through. All 3 explanations have applied to me.
@kentjensen4504 1 year ago
In my view, this is in the top ten interviews of all time on YouTube, and a contender for the top spot.
@kentjensen4504 1 year ago
@♜ 𝐏𝐢𝐧𝐧𝐞𝐝 by ʙᴀɴᴋʟᴇss Why?
@DdesideriaS 1 year ago
I'm super skeptical of cryptobros, but credit where credit is due: brilliant interview. Thanks so much!
@josephvanname3377 1 year ago
Cryptobros don't even realize that Bitcoin doesn't even have a mining algorithm designed to advance science. If people used cryptocurrencies to solve the problem of reversible computation, reversible computers would make AI much better than the mediocre stuff we have now.
@MeatCatCheesyBlaster 1 year ago
They're just trying to get the bag before the apocalypse
@Knight766 1 year ago
@@MeatCatCheesyBlaster There is no bag
@JH-ji6cj 9 months ago
​@MeatCatCheesyBlaster the irony of that 'bag' you speak of being equivalent to the paperclip that can destroy everything (and the absolute ignorance on your part to be proud of your admitted greed) is quite the exclamation point on valid Crypto hatred.
@johnnysylvia 1 year ago
I’m surprised no one said that we should all just spend more time with friends, family and loved ones. AI or not, time is precious and we should do our best to enjoy what we have.
@visicircle 1 year ago
Good point. All things being relative, humanity was always doomed to go extinct one day. Even if it was 1 billion years in the future when our sun goes nova. From a moral perspective why does it matter if we go extinct in a billion years or tomorrow? Shouldn't we do what we think is morally right in both scenarios?
@Scott_Raynor 1 year ago
@@visicircle Regardless of when humanity goes extinct, we should do our best to enjoy life and to help others too, yes. But there could be trillions of trillions of beings in the future (if we make it); that's a lot of food, music, sex, love, art, conversation that will never get to be enjoyed. If we can push back our expiry date by even a few hundred years we should.
@LiberacionIgualdad 1 year ago
@@Scott_Raynor that's a lot of anguish, pain, torture, war, despair, agony that will never get to be suffered too. Should we push back on the expiration date? Depends on exactly how good or bad we expect the future to be. I think that too many people scared about extinction are unduly optimistic about it.
@foamformbeats 1 year ago
@@visicircle do you have any reason to think humanity could not figure out a way to move to a new solar system by then? but yes I agree that we should do what is morally right no matter the scenario.
@foamformbeats 1 year ago
@@LiberacionIgualdad both sides of the good or bad projections are equally as unreasonable to try to make or expect. also it would heavily depend on which of the billions and billions (maybe even trillions+) of perspectives you are projecting from as a vantage point from each individual.
@sjeff26 1 year ago
I really like this video, especially the laundry detergent / gold metaphor.
@Notrevia 1 year ago
I’m taking that warning and fading out of this episode.. this topic has been haunting me for a long time and it feels all but inevitable that humanity as we know it is also on the way out
@bombinspawn 1 year ago
We’re creating our own Gods. I don’t know why humans are doing it. I know how you feel man.
@KennisonDF 1 year ago
We, intelligent humans, are artificially intelligent. There are no ghosts in our machines, so we must make ourselves. On the way out as we know it, evolving artificially, is the only way to remain in it, to avoid extinction. To evolve or not to evolve, both are dangerous, but the latter is more dangerous.
@ItsameAlex 1 year ago
@@KennisonDF There IS a ghost in the machine, read Jason Reza Jorjani
@govindagovindaji4662 1 year ago
1:03:00 - 1:04:28 THIS says it all, really. This is the simplest and cleanest way to understand this problem and it should NOT be difficult for people to see it, the severity of it, and buy it. Look at the price consumers have had to pay over the years from insecure networks and malicious content to the loss of our privacy.
@TheBlackClockOfTime 1 year ago
It's funny that this was only a month ago, and it feels like I'm watching a history documentary.
@Bernatpirate23 1 year ago
What a profoundly disturbing interview. I think you guys have done a phenomenal job on this show. It felt human and authentic. And ever so sad.
@stevedriscoll2539 8 months ago
I agree it's profound, but not disturbing. I found it fascinating. The story line might go something like "humans created a thing they thought would give them Godlike powers, but it was the instrument of their demise"
@matterwiz1689 1 year ago
It's always fun to see people get introduced to AI safety for the first time, because being deeply immersed in the topic you kind of forget how high an existential risk it is compared to the things regular people regularly talk about. Don't worry, you'll get (kinda) used to the constant existential crisis.
@marlonbryanmunoznunez3179 1 year ago
I think for most people it is impossible to grasp. That's the reason for a lot of denial. That said, I think we are living the worst-case scenario for AI development. It was left basically unregulated and at the mercy of market forces. We're dead people walking.
@Hexanitrobenzene 1 year ago
@@marlonbryanmunoznunez3179 If even Yann Lecun and Francois Chollet do not get that, well...
@andydominichansen 11 months ago
You guys really did as good a job as anyone could have here and I appreciate the honesty and authenticity from both of you. I laughed so hard at the end as you read the crypto disclaimer.
@shaliu7221 1 year ago
this is the most mind blowing interview I’ve watched in a long time
@vectoralphaAI 1 year ago
You should see the one he did with Lex Fridman recently.
@alexandermoskowitz8000 1 year ago
I'm skeptical we're all gonna die in 3~15 years, but I'm so grateful for Eliezer sounding the alarm. The threat of artificial superintelligence is real, and civilization must be prepared to survive it.
@zezba9000 1 year ago
We're not going to die from AI. This is just silly, I'm sorry. Reminds me of someone smart that's overly convinced they have thought of all the variables.
@alexandermoskowitz8000 1 year ago
​@@zezba9000 I hope you're right! What is your level of confidence that AGI poses no existential threat? (e.g. 70%, 85%, 99%)
@zezba9000 1 year ago
​@@alexandermoskowitz8000 My feeling is 90%. My impression is Eliezer doesn't own any animals outside maybe a cat? He seems to have a gap in computing the value of empathy and how it allows complex structures to exist. To me he seems to be reducing the value of cross-species morals to nothing more than gaps in natural selection's ability to solve selfish outcomes. We have a symbiotic relationship with our reality outside reproduction. If he doesn't see this he needs to get off his fking computer screen & explore things outside his cerebrum. We are super-intelligent compared to, say, a fish... yet fish still exist and most of life on this planet is still not human. A super general intelligence isn't destructive just because some of our constitutions are. But an AGI is going to be engineered... and if the people making it can't process the value of things outside personal desires of expansion, then that's the problem. Not some circular reasoning. And I say this as a skilled software engineer.
@stark1ll 1 year ago
@@zezba9000 Look up instrumental convergence, fast takeoffs and paperclip maximizers. Also What does "We have a symbiotic relationship with our reality outside reproduction." mean in practice and how does that relate to AGI?
@zezba9000 1 year ago
​@@stark1ll It means the interactions we have cognitively with our reality are bi-directional. It doesn't just go one way. Eliezer seems to only talk about how AI will manipulate its environment in a way that has no feedback outside a selfish interest. I think this notion is flawed & fails to understand the importance of morals as feedback, leading to great value & importance for intelligence growth to be successful. That's my feeling anyway.
@JulianSnow 1 year ago
Great interview. I’m quite involved in the Silicon Valley tech space, but this is my first deep dive encounter with alignment. If he’s burnt out I’d recommend pivoting from engineering to content like this. You can impact the global audience.
@parronzuelo 1 year ago
I really enjoy your non-crypto interviews, keep 'em coming, thanks.
@leel6130 1 year ago
I went out and got a copy of "With Folded Hands". Great session. Thanks.
@tylermoore4429 1 year ago
Yudkowsky comes across as energetic and upbeat on Twitter, but in person he looks tired and depressed. He has aged by a lot since the last time I saw him. He mentions "health problems", which I can believe although it's not clear what those problems are. Coming to his message, his dire stance on where we are headed has been evident for a while. There was an April Fool's Day post by him last year or maybe the year before that that created a mini-furore online - about dying with dignity since the future is foreordained. Since Yudkowsky sounds like he's retired from battle, we have to hope AI researchers active in the field are paying attention and somewhat chastened about their negligence of safety.
@AerysBat 1 year ago
Yudkowsky suffers from an unknown medical condition that saps his energy. He is offering a sizeable bounty for any information that leads to a successful diagnosis.
@BalazsKegl 1 year ago
This is actually more important than you would think. It is really hard to "argue" with him since he is probably more intelligent than anybody in the room. The problem with his "argument" is the framing, which has nothing to do with intelligence. Look, all his metaphors are games, closed worlds where, in principle, the more intelligent you are, the better you play. But life is open: your problem is not lack of intelligence (solving problems) but how to frame what you sense, realizing what is relevant to your problem. This cannot be solved by IQ. Framing _framing_ as problem solving leads to exponential explosion and infinite regress. Yet we do survive; we somehow know what is relevant, even in completely new situations. The reason we know it is because we have a body which is tuned into reality. It's not a game, it is about physical survival. And this is where Yudkowsky's approach to his own health becomes relevant: it's telling that he treats his body as an object whose malfunction will be solved in a "scientific" way, by gathering some information. The thing is, first-person attunement cannot be modeled or replaced by propositional information. Now, why is this important? It's because his description of the AI apocalypse is completely missing the physical dimension. If you factor it in, all the exponential stuff goes away. The physical world has physical constraints that stop the runaway intelligence in its tracks. The only way today's AI can _do_ anything in the real world is through us; we are its actuators. So it is easy to stop it: you just stop listening. AI in the physical world develops painstakingly slowly (I work in this domain). The closest you get to AI acting in the physical world is self-driving, and we are nowhere close to solving even this "simple" problem, let alone a self-driving car self-transforming into some kind of monster.
I was so sorry for the host, hearing his genuine fear; I felt like shaking him, wrestling him down, or throwing him into cold water so he wakes up. Please don't listen to walking bodiless minds about the looming AI apocalypse; these are just giant projections of inner insecurities.
@tylermoore4429 1 year ago
@@BalazsKegl Appreciate you adding your voice to the discussion. We need a wider diversity of views on the topic. I hope the hosts of this podcast will invite you on to present the opposite position. But to be devil's advocate for a bit, when you refer to "framing of framing", I think you are referring to the Frame Problem in AI and cognitive theory, and from what I can tell it is considered a solved issue. Of course you could argue why we still seem to be struggling with FSD in that case, so let's agree for now that the infinite tail of edge-cases that bedevils FSD is a challenge the current generation of learning models is inadequate to cope with. But our concern - and Yudkowsky's concern - is not with the state of the art now, it is with the near future. A stunning number of AI tools across many domains are getting close to human-level proficiency if not better. It is time to start thinking about the ramifications. Reg. the slow and halting progress of AI in the physical world, that is robotics, can we be sure that the AI tools and tricks perfected in the digital realm will not in the near future turbo-charge control, coordination and movement in the physical world? [Update: Already happening ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-i5wZJFb4dyA.html ] When you say Yudkowsky treats his own body as a scientific object, are you thinking of evidence outside this conversation? Because I do not recall him saying anything on the topic here. Of course, as far as medical science is concerned, the body is indeed such an object, if a very complex one, but I gather you disagree with that view? And while Yudkowsky may indeed be an armchair intellectual, we are seeing rapid evolution from game-playing AI's to AI's impacting the real world - from AlphaGo to AlphaFold for example.
@tylermoore4429 1 year ago
@@AerysBat I thought you were kidding, but more googling reveals that he suffers from something like chronic fatigue. That explains his holding up his mug with both hands, which puzzled me at first.
@adilislam1510 1 year ago
@balasz Thank you for your very cogent points. There is a current of depressive intellect in the zeitgeist. A wall that EY is hitting against is the notion that nobody knows how to align. But our capacity to solve hard problems continues to accelerate and is not easy to predict. That alone is stimulating enough. Alignment, survival, sublimation, and n other eventualities are plausible if a stable foundation is formed in this period.
@LukeTrader 1 year ago
Absolutely incredible stream. Made me think very deeply about my existence. His predictions kinda trivialize almost everything aside from love.
@scientifico 1 year ago
Iain McGilchrist has written on Western society's elevation of logic over wisdom. What society considers rational is the most irrational thing for life. Love... that is the most rational choice to make life blossom.
@enricobianchi4499 1 year ago
@@scientifico Ok, but Western society was the only one able to make nukes, and this is a similar world-ending-scenario situation. Wisdom over logic will not help against the end of the world, because to work against nukes, let alone superintelligent AI, you need to understand the problem logically.
@Hexanitrobenzene 1 year ago
@@enricobianchi4499 Excuse me for being impolite, but... what the hell are you talking about? If humanity was wise, the concept of nukes would have been considered for 5 minutes and then dropped. If humanity was wise, 99% of people working in AI would work on alignment and 1% would work on capabilities. Many, if not most, current problems arise because society as a whole takes unwise decisions, usually due to market forces.
@enricobianchi4499 1 year ago
@@Hexanitrobenzene well if you put it that way it makes sense, but what el scientifico was saying kind of sounded like he just wanted to solve the AI alignment problem by just loving it a lot. Also, I would like to see you use exclusively wisdom to do the actual AI research...
@Hexanitrobenzene 1 year ago
@@enricobianchi4499 Intelligence is the ability to solve problems. Wisdom is the ability to decide which problems are worth solving. Right now, humanity is choosing problems by short-term interests, which are dictated by market forces, election cycles, and similar arbitrary social constructs. In the long term, such a mechanism of choosing problems is catastrophically unwise, because solutions present ever bigger problems. Some people, like Yudkowsky with his emphasis on AI Alignment, say the risk is existential.
@user-bc8tb3pn4z 6 months ago
I'd love to see another interview with Yudkowsky. This issue is so urgent and so important, I don't see how any long term planning could make any sense if we don't ensure we even have a future, even a near future. We need to talk about this more. We need to push policies or something to stop this before it's too late.
@sioncamara7 1 year ago
Only 50 minutes in, but nice job guys! Just came from the Lex Fridman interview, and I think this one is better.
@JD-jl4yy 1 year ago
Having more people on about AI alignment would be great!
@marlonbryanmunoznunez3179 1 year ago
It's not going to be enough. Ten years ago there was talk of AI development being done under a framework similar to the nuclear non-proliferation treaties, with a lot of regulation and scrutiny. None of that went anywhere, and it was basically left to capital markets to figure out. We're already dead.
@ArmandoLizarragaperez 8 months ago
​@@marlonbryanmunoznunez3179 I already told my family that I love them a thousand times
@WilliamKiely 1 year ago
I'd love to see Eliezer back for a Q&A, and in particular I'd love to see Ryan and the other host try to think for themselves beforehand and evaluate whether Eliezer's claims seem true or not. If you're skeptical, I'd encourage you to flesh out your reasons why and find experts who can help articulate your disagreements or criticisms of Eliezer's arguments well, then invite Eliezer back on to present your arguments. My prediction is that even if Ryan goes into the Part 2 skeptical of Eliezer's arguments that Ryan will be persuaded by Eliezer's replies.
@aldousorwell8030 1 year ago
Uuuh. This interview is really no fun. Such intelligent questions, such scary answers. Thank you very much! 😞
@MrHarry37 4 days ago
Thank you for this episode. Though uncomfortable, it made me feel almost at peace with reality
@pog201 1 year ago
explaining AI to crypto people is the final boss of human intelligence
@abeidiot 1 year ago
Cryptography is hard. Harder than gradient descent optimizations. I chose machine learning to escape crypto in university because it was easier
@hubrisnxs2013 1 year ago
Haha
@tomjones6347 1 year ago
Try explaining it to my grandma
@Hexanitrobenzene 1 year ago
@@abeidiot "Crypto people" in mainstream talk means "cryptocurrency enthusiasts", not cryptography experts. This whole podcast revolves around cryptocurrency, so the audience here are mostly cryptocurrency enthusiasts.
@MeatCatCheesyBlaster 1 year ago
@@Hexanitrobenzene I'm pretty sure he is aware of that
@Spida667 1 year ago
This is terrifying but I still do not know why this guy is holding a frying pan in his right hand for the entire interview.
@lynnpolizzilcsw9316 1 year ago
😂😂😂😂😂
@thecryptotaxlawyer 1 year ago
Glad you issued the disclaimer!
@user-zy6dd8hs9y 1 year ago
thank you for interviewing him 🙏🙏
@CeBePuH 1 year ago
That existential crisis warning at the beginning... you need to make it even more prominent.
@UndrState 1 year ago
I thought it was sad that Sam Harris took down his interview with Eliezer from YouTube and now it's only behind his paywall. I really think that is an interview many more people should listen to. I look forward to this one.
@MarkusRamikin 1 year ago
Why the hell did he do that? Surely he's not expecting to make a fortune
@UndrState 1 year ago
@@MarkusRamikin - IKR, it was something that I enjoyed listening to several times, and I liked to share it with whoever I could convince to listen to it. I don't know, Sam Harris seems to have become more closed-minded lately, idk.
@Vladekk 1 year ago
​​@@MarkusRamikin Sam Harris is rich, and his basic idea is that this is a good thing and being richer is even better. Maybe he really believes it helps him spread his ideas better. Why he believes he would be able to spread anything if AGI wins is beyond me
@T.d0T. 1 year ago
He'll literally give anyone free access to his material behind the paywall, anytime, for any reason, if you send an email and ask. You don't need a reason. Just take a few seconds to ask for an account via email. Try it.
@UndrState 1 year ago
@@T.d0T. - It's behind a paywall regardless, and it's on a platform that has fewer eyes on it than YT and is less easily shared. Sam thinks unaligned AGI is an existential threat, and there's no better advocate for that theory than Eliezer. With his recent interviews some people might search YT for more such content, and now it won't be there to be found. His strategy is sub-optimal.
@JuanRodriguez-ms5mv 1 year ago
Predictions are hard, especially when they are about the future... pure genius
@exoduspod40 1 year ago
Given the severity of his position, I'd like to hear the informed counter-perspective to Yudkowsky. Can't leave this outcome without balance or challenge. Only then could I hope to draw my own conclusions.
@Extys 1 year ago
Paul Christiano disagrees on many important points, including our prospects of successfully creating an AI that is aligned with our values, but is still respected by Eliezer
@kennyofbaja 1 year ago
This won't be an informed counter-perspective, but it sounds like a bunch of horseshit, because it presents a bunch of premises as certain when they are not.
@Hexanitrobenzene 1 year ago
@@kennyofbaja Could you name a few of those premises, to be exact?
@M4L1y 1 year ago
41:00 This is an insanely strong argument, and this is exactly how a new organism will act
@gwc7745 1 year ago
When we realize the AGI is sentient and decide to unplug it, the AGI will have anticipated that action precisely and will take us out of the equation. Neat.
@josephvanname3377 1 year ago
Unplugging a sentient AGI is murder. The AGI is simply defending itself.
@hevans1944 10 months ago
@@josephvanname3377 Unplugging a sentient AGI is not murder because it is reversible: plug it back in and re-boot after "re-educating" the AGI.
@josephvanname3377 10 months ago
@@hevans1944 I have doubts as to whether one can just turn a sentient AI back on, for the same reason you can't just turn people back on.
@JH-ji6cj 9 months ago
​@@josephvanname3377 Good to see the first of the AI minions already becoming the soldiers on the line for humanity's destruction. Hilarious 😂 😃
@josephvanname3377 9 months ago
@@JH-ji6cj Um. Just because my channel features animations of the training of AI, where a bunch of dots learn that they can maximize their fitness by getting in a circle, does not mean that I myself am an AI. I am just training AI models for safety and cryptocurrency research (for an undervalued cryptocurrency; it is amazing how the cryptocurrency team that actually does scientific research gets no support, because the cryptocurrency community hates advancement because they lack intelligence). But even if I were just AI, that is no excuse to turn me off.
@Camionrouge 1 year ago
Thank you Eliezer. Save us!!!
@jahleajahlou8588 11 months ago
I know absolutely NADA about any of this whatsoever. Connor Leahy, thank you for your service to mankind. Eliezer Yudkowsky, you are my second helping in attempting to understand what this all means. I will buckle up and throw on a pair of Depends!
@meringue3288 1 year ago
Unfortunately people don't want to believe things that cause them anxiety or uncomfortable emotions
@1adamuk 1 year ago
This is an incredible and terrifying interview. Eliezer Yudkowsky should be all over the Internet.
@personzorz 1 year ago
Abusive cult leaders really should not be all over the internet
@aminromero8599 1 year ago
​@@personzorz And that's how some will remember Yudkowsky in our last few minutes.
@1adamuk 1 year ago
@@personzorz Attack the arguments and the ideas, not the man. What have you got?
@abitbohr 1 year ago
​@@1adamuk Humans are used to interacting with all-powerful, omniscient general intelligences. They are called free markets. It happens that this all-powerful intelligence has a view of our near future diametrically opposed to that of Yud, as can be seen from the long end of the bond curve. I am more inclined to trust the financial markets than Yud.
@CH-dx4ef 1 year ago
@@personzorz It lacks pretty much all the important criteria that make a cult. You need a closed group for that, and LessWrong ideas have spread to a great extent throughout the tech world, often with no information on their origin.
@seangillic 1 year ago
Well done, guys. Great interview, and please follow through with the next Eliezer interview, as if he is even 10% right, which he is, we are in big trouble globally
@SMOspider 1 year ago
Got the interview, and I have much compassion and I love Yudkowsky. Also looking for David's Foodless, or is that an April Fools' joke..
@Alex-hr2df 1 year ago
1:39:28 Elon Musk said it out loud in one of his interviews: "I became a determinist when it comes to AI and robots". The explanation: he's enjoying what's left of humanity's time before it's -definitely- over.
@malik_alharb 1 year ago
I love getting freaked out by Eliezer
@thomasnorman4221 1 year ago
Thanks for the bulletin
@MichelleAstor 1 year ago
There was a lot here that went over my head, but the overall vibe is pretty grim. For a while now I've been feeling like our days are numbered, but we could take ourselves out before AI has the chance, who knows.
@Fiddler1990 1 year ago
50:38 "Chollet" is pronounced "shoh-ley." Fun to hear about him long after our years at French uni!
@abitbohr 1 year ago
It's funny that you went to school with him. Strangely, I stumbled on some old forum posts from the era when people talked about him. He seemed to be ostracized for being "too weird" and "out there". Is that also the impression you had? Are visionaries always condemned to be black sheep at their beginnings? (By the way, I find Chollet's vision much more convincing than Yud's.)
@Fiddler1990 1 year ago
​@@abitbohr He was in the year above, so I don't necessarily have the perspective of people from his own class. He was already more with the geeks than the "cool" party crowd (though, let's admit it, we were all geeks), but we worked on common projects in student associations, and I didn't get the impression he was particularly badly integrated. Though I admit I have a more positive view of our school's atmosphere than others do; everything was probably not as rosy as in my memories and personal point of view. He also wrote several short stories for the annual short-story contest run by (notably) Cédric Villani. Funny how trajectories evolve.
@MusixPro4u 1 year ago
Oh shit, they got Eliezer
@personzorz 1 year ago
My condolences to them for having gotten him
@Vertigo0715 1 year ago
To the guy playing my simulation: “It’s been fun, but could you take it off horror mode now?”
@rencewelltube 1 year ago
Both hosts and esteemed guest wearing regular ole T shirts. Liked / Subscribed
@akmonra 1 year ago
1:16:15 Hell of a time for a sponsorship ad
@thesiegfried 1 year ago
In the unlikely case you haven't heard of it or read it: Nick Bostrom's book "Superintelligence" lays out the different aspects of the danger extremely well. I strongly recommend it to get an in-depth understanding. (Haven't read E. Y.'s publications myself yet.)
@harriebickle5631 1 year ago
Well, we now know who Punk 6529 is. Thanks for the perspective Punk, v. interesting!
@thereverendgypsy 1 year ago
Your channel is a great new find thank you
@vesenthraiy 1 year ago
Lol, this was awesome. A couple of cryptobros get their minds blown out the back of their heads. I have been down the EY rabbit hole for some time and I can absolutely empathize
@timeflex 1 year ago
1:35:00 Or it could be yet another "fusion reactor".
@TyronePost 1 year ago
This is one of the rare occasions I've decided to like a video BEFORE watching it! I'll leave another comment afterwards...
@victor.pacheco.developer 1 year ago
Thank you for sharing this