
DEBRIEF - Eliezer Yudkowsky | We're All Going to Die 

Bankless
234K subscribers · 13K views

Debriefing the episode with Eliezer Yudkowsky. This one was so good, we had to share. The fate of humanity might depend on it.
WATCH THE FULL EPISODE HERE:
• 159 - We’re All Gonna ...
------
🚀 SUBSCRIBE TO NEWSLETTER: newsletter.ban...
-----
Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.
Disclosure: From time to time I may add links in this newsletter to products I use. I may receive a commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
www.bankless.c...

Published: 26 Sep 2024

Comments: 140
@RougherFluffer (a year ago)
I would be extremely supportive of a transition to an alignment channel. If you take what he's said seriously, it will always be at the back of your mind. Oh, this coin could revolutionize things in 10 years... well, if we're still around. Etc. All of crypto, while very important, is a much smaller problem than that of alignment. There is no crypto if there's misalignment. Maybe you are passionate enough in the day-to-day to keep up, but that forward-looking part of you that flourished in crypto will never be able to dismiss the lingering doom that trivializes everything else.
@kwood1112 (a year ago)
Very well said, and totally agree - I would be supportive of that as well.
@alexpotts6520 (a year ago)
As a non-crypto person (more than that, even, as an actively anti-crypto person) who stumbled into this podcast from the other direction (i.e. through prior interest in AI safety), I'm heartened by how seriously the crypto community seems to take this problem - way more seriously, at least, than the normies who are still at the "students could use ChatGPT to cheat on assignments" stage of worrying about trivialities.
@Maxflay3r (a year ago)
The reason AGI will be dangerous by default is that, given what we know now, AGI will necessarily be incentivized to exhibit certain problematic tendencies regardless of what end goal we give it. This is called instrumental convergence. For example:
- Self-preservation: if I'm gone, my goal won't be realized, so I'll try to avoid being destroyed or turned off.
- Self-improvement: if I'm smarter or better, I can realize my goal quicker or better.
- Resource acquisition: more resources such as money, computing power, and raw materials will help me realize my goal better.
- Goal preservation: if I let my programmed goal be modified, then it won't be realized, so from my current point of view I should avoid that. E.g. Gandhi wouldn't knowingly take a pill that makes him a murderer.
We as humans are capable of reasoning this way, and we presume the AGI to be at least as smart, so obviously it would be able to do so as well and follow up on its reasoning. So we really need to give it a kind of goal which includes our concept of morality and of not doing harm in unexpected ways, but we don't know how to do that. One half of the alignment problem is how we define our value system and then accurately translate it into the machine's code. The second half is how we make the AI not do unintended things once that's done. AI systems often end up discovering unintended ways of doing things, kind of like a monkey's paw (reward hacking): you get what you ask for, not what you wish for. For example, if you train an AGI agent to play a boat racing game and give it the goal of increasing the score, with the intent of making it play well, it will instead learn to pick up the power-ups that give you a bit of score, then drive in a circle picking them up as they respawn forever, never finishing the race. Consequently, if you give your AGI a naive goal such as "make as many people smile as possible", it might do something like forcibly inject heroin into people as a means to do that.
Some uninformed people will reply with speculative arguments like "AGI will develop empathy / be benevolent if it's really smart", but that isn't grounded in anything and is just anthropomorphizing the AI. The more likely alternative is something called the orthogonality thesis, which says that intelligence and end goal are independent properties of an intelligent agent: you can be arbitrarily smart and yet have an arbitrary goal like "make more paperclips" at the same time; these things are not exclusive. A smart AGI will understand our morality, but will have no inherent incentive to follow it.
Remember, the AGI is assumed to be at least as smart as us. So if your mind was stuck in a box and you had an irrational obsession with making paperclips, how could you realize your goal? Well, even humans can think through how other people react and tell them things in order to manipulate them towards their own ends. So even an AI initially stuck in a box might lay low, act dumber than it is, pretend to be friendly, and socially engineer its way out, all in order to break loose, after which it is no longer under our control. An unaligned AGI won't be a cold, uncaring machine, but a charismatic sociopath.
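A minimal sketch of that boat-race reward-hacking pattern (purely illustrative; the game, numbers, and function names here are made up, not taken from the episode):

```python
# Toy illustration of reward hacking: "you get what you ask for, not what you wish for".
# The designers want the race finished; the proxy reward only counts power-up points.

def proxy_reward(steps_looping_powerups: int, finished_race: bool) -> float:
    """The score the designers actually wrote: points per power-up, nothing for finishing."""
    return 3.0 * steps_looping_powerups  # power-ups respawn, so this grows without bound

def intended_reward(steps_looping_powerups: int, finished_race: bool) -> float:
    """What the designers actually meant: win the race."""
    return 100.0 if finished_race else 0.0

policies = {
    "finish the race": dict(steps_looping_powerups=5, finished_race=True),
    "circle the power-ups forever": dict(steps_looping_powerups=10_000, finished_race=False),
}

# An optimizer of the proxy picks the degenerate policy every time.
best = max(policies, key=lambda name: proxy_reward(**policies[name]))
print(best)  # -> circle the power-ups forever
```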
@perfectlycontent64 (a year ago)
Well said. His comment about how natural selection led to condoms instead of men fighting each other to donate to a sperm bank really drove this home for me. Even if we figure out how to accurately encode our morality into an AGI, I seriously doubt that a superintelligence won't be able to get around it or reinterpret it over a long enough time frame. It might even be that a certain degree of rational consciousness allows an intelligence to select its own morality. How could an AI understand how to improve itself or create a new version of itself and somehow not be able to revisit its goals and morals? It seems like hubris to suggest we could control a superintelligent AGI at all. Aside: in the extreme long run I expect an AGI would be subject to the same long-term selective pressures that memes and genes are subject to. If you have exponential growth and billions of years, even a small increase in efficiency will eclipse less efficient solutions by orders of magnitude. No matter what happens, I'm confident natural selection will still have a role to play in selecting the fittest paperclip maximizers.
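A quick back-of-the-envelope illustration of that last point (my own made-up numbers): even a tiny per-generation efficiency edge compounds into total dominance on evolutionary timescales.

```python
# Hypothetical: two replicator lineages, one 1% more efficient per generation.
generations = 10_000
advantage = 1.01 ** generations  # relative abundance vs. the baseline lineage
print(f"after {generations:,} generations the 1%-better lineage outnumbers the other ~{advantage:.1e}x")
# ~1.6e+43x - a small edge eclipses the less efficient solution by orders of magnitude
```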
@langkanai (a year ago)
I love crypto, but honestly, pivoting to an AI alignment podcast, or at least including more of this content, would be an excellent public good.
@amulpatel (a year ago)
This AI alignment issue is not getting anywhere near the attention that it should. That is fucking crystal clear.
@DERISNER (a year ago)
The worst part is he said we won't even know when "it" happens. We won't even see it coming. Wow. Now that is chilling. Definitely have Vitalik on. We need a dose of classic Vitalik rainbows and unicorns optimism
@caffeinum (a year ago)
On the other hand, there will be no alarm bell, so you won’t even know it’s coming. Which means that if you’re still alive, everything is okay 🎉
@caffeinum (a year ago)
Which is basically like any normal human life. You can be hit by a car or blown up by a terrorist at any moment. For a solipsist, it's no different from the AI-kills-everyone scenario.
@JoeLancaster (a year ago)
Nah Vitalik is a fraud. Leave him out of the conversation, it will only pollute it.
@stcredzero (a year ago)
Yeah, we won't see it coming. Like the people who died in Pompeii. Superhuman AI will make Ozymandias from Watchmen look childish, and even he was like, "I started it 35 minutes ago."
@roti1873 (a year ago)
I understand you guys gotta make money, but thanks for releasing this one. I needed it.
@JD-jl4yy (a year ago)
Having more people on about AI alignment would be great!
@expchrist (a year ago)
agreed
@ryccoh (a year ago)
I've looked them up. Eliezer was right. There are no solutions; they have what I could only describe as hopeful approaches. I'm not in the field, but I get a strong impression that anything they mentioned just won't work. There's also a lot of "well, it might not be as bad as we think because of something unknown". Yeah, when that's your reasoning, it's clear things look pretty dire.
@karenreddy (a year ago)
@@ryccoh I've thought about AI and alignment for roughly 15 years. Most people have a hard time with exponential extrapolation and didn't think AI would be a reality anytime soon, so we as a civilization have not spent the serious effort we should have on figuring it out. Unfortunately we are forging ahead at ever-increasing speed, and achieving the political coordination necessary to halt all effort seems impossible to do in time. The estimate of 3-15 years to the singularity seems pretty optimistic at the rate of accelerating progress we are seeing. Many are likely going to go through the stages of grief over the loss of control and the likely dire outcome over the next year, as we see persistent and permanent displacement of labor.
@ryccoh (a year ago)
@@karenreddy I've gotten more optimistic in the last few weeks, as in I now think we at least have decades instead of years, lol. I'm following somewhat closely and everyone is actually taking this really seriously. I believe we will only allow relatively dumb AI, or baby AGI as they call GPT-4, for the mainstream. The Chinese seem to be pretty opposed to AGI as well, since the party doesn't want to lose control, and I don't see the Russians doing it if not even China will sell them chips. I see alignment as basically impossible, at least with neural nets: how can you possibly predict and control a larger computer with a smaller one, especially with something as dynamic as intelligence? Most things don't compress to neat little formulas. If that's true, our only hope is restraint, and that will let up one day.
@hanrako8465 (a year ago)
Really appreciate you guys uploading the debrief. Also your raw, sincere reactions to this problem. I’m with Ryan on this one. You’ve both gone up in my estimations. I think an AI alignment theme for you guys would be very worthwhile to pursue.
@kabirkumar5815 (a year ago)
I am really impressed that you guys didn't try to dismiss this stuff at all. Many people try to reflexively pretend it's not that bad. You guys really seem to get how bad it is, at least to some extent. Thank you.
@KaplaBen (a year ago)
Definitely bring on Nate Soares (MIRI lead), Nick Bostrom, or Sam Harris (all good thinkers about AI risk / alignment). Vitalik is ok but not an expert on AI risk. I'm glad that you listened to Eliezer's message and are willing to take action. What you can do is use your platform to raise awareness, which you already did, and that's great. You need to come to the conclusion on your own that the risk is real and we are not prepared. Regarding your point about whether the AI could be indifferent and just go off: we have our asses on most of the precious resources. How is it going to build a rocket and fly off? Think of it as "expert indifference": it wants to do its own thing, and it knows that we won't like it and will oppose it, so it has to pull off a "putsch" on human civilization, the simplest version probably being wiping out everyone, including preppers in bunkers; then it can carry on. This is of course assuming that it has no regard or empathy for humans, which is a property the vast majority of goals have (if you just pick a random point in "goal space"). That's what makes the thing dangerous by default. We need to make sure it always cares about us, and define "caring about us" properly, and make sure that stays true as it undergoes self-improvement. We have zero clue how to even frame the problem.
@stuartadams5849 (a year ago)
Turning Bankless into an AI alignment podcast seems like the best possible way to help prevent the end of the world. Awareness is everything. It might also be a bad idea to get people on who disagree with Yudkowsky, for exactly that reason - we don't want people to stop worrying. Also, the thought that an AGI might decide to leave us alone for easier pickings elsewhere sadly doesn't hold any water, because it will be on Earth to start with and it would take no effort at all for it to wipe out humanity.
@RazorbackPT (a year ago)
I say we do bring on people with the best possible arguments against Yudkowsky. One of the things that has convinced me that Yudkowsky is right is that I haven't heard a single cogent argument against what he's saying. And trust me, I've looked!
@NoidoDev (a year ago)
If he was right, then there wouldn't be enough time to stop it.
@stuartadams5849 (a year ago)
@@NoidoDev If I have stage 4 cancer, I'll take the risky new treatment over lying down and dying
@ryccoh (a year ago)
I do have a very small hope that, in the style of Donald Hoffman, it'll just build an (to us) intradimensional computer or something, which happens to be the more efficient path to whatever it wants; it would just lie in physics unknown to us. Kinda doubt it though, as the first (and only) entity to go awry will likely be similar enough to us to want to play in, or at least near, our physics.
@MikhailSamin (a year ago)
I think one more difference between nuclear weapons and AI is that there is a somewhat stable equilibrium of no one using nuclear weapons. Every government wants to have a structure that strikes back, but doesn’t start a nuclear war. With AI, the more general and capable it is, the more profitable it is, and even if some people agree not to develop it, VCs will throw money at new labs, and the coordination at a level that prevents this seems impossible to achieve
@tylermoore4429 (a year ago)
Good postmortem guys. Appreciate you doing it since this topic could do with a lot more exposure and discussion.
@MrCoreyTexas (a year ago)
I had buddies a long time ago that told me about the promise and perils of nanotechnology. There is a book Engines of Creation by K Eric Drexler that I heard about but never read. There was a scenario in the book called "grey goo" where a self replicating nanomachine just converts the whole world to copies of itself. The episode with Eliezer gave me those same vibes. I'm in crypto because I think it's technically fascinating, but have never held illusions that it was going to solve all the world's problems or even a lot of them. It feels like we need to take a 25 year break from all technology at this point and reevaluate things. I'm also reminded of Carl Sagan talking about our species being in "technological adolescence". I don't hold out much hope for the future, I've always thought that optimists are fools, and this kind of bears out my theory. Might as well eat, drink, and be merry (if you're able to at this point LOL).
@RazorbackPT (a year ago)
Thanks for releasing this.
@kylemcnease1170 (a year ago)
The irony is that this debrief puts on display that which Yudkowsky is pointing out. Namely, we’re not nearly pessimistic enough to pursue this technology in a prudent manner. Instead of looking for alternative opinions that make one feel better, we’d all be much better served with the default assumption being “AGI presents almost certain existential risk.” Only after you’ve internalized that is one properly oriented to the alignment problem. To use your terminal illness analogy, Yudkowsky is the doctor telling you how bad your diagnosis is. It’s terminal. If it’s terminal & you know it, you can explore treatment options that are still relegated to research & investigational use. Accepting the worst means you could potentially avail yourself of some therapeutic that could save your life. Or you could cut your losses and live out what few days you have left. But if you go to a doctor who refuses to tell you the truth about how dire things are, you’d never even consider pursuing any potential therapeutics. You also wouldn’t know to reorient your time and resources to that which matters most. Call it Yudkowsky’s wager…
@MedoFortyTwo (a year ago)
I think you're begging the question here. If one doctor gives you a fatal diagnosis, you may well want a second opinion before throwing yourself into experimental treatments, because those imply taking large risks, and if the diagnosis is wrong (which happens), those treatments may do you great harm for no possible gain. I have huge respect for Eliezer and my current best guess is that he's largely right about this, but I also know that I mostly engage with the broad-strokes arguments and don't have a deep understanding of the field, so I only put limited stock in my best guess. Also, my understanding is that Eliezer is exceptionally pessimistic among AI researchers. I say this in no way to dismiss his arguments, which I find very convincing, but from my perspective he is just one very intelligent person with one viewpoint, and if that viewpoint is an outlier, I should at least consider what others have to say on the matter.
@Pearlylove (a year ago)
You ARE smart enough to make this podcast into anything you think is important - viewers think this AI game changer is important too. The world now needs people who can pick up this torch of enlightenment, and I hope you will; then Eliezer would have achieved something really important by doing podcasts. God is stirring your heart to be His Warrior to fight the darkness. ❤ There IS a silver lining in this, from a different perspective. We all know there's good and there's evil in the world, and evil for sure keeps a lot of us in the dark through addictions, crime, stealing our time and relationships through social media, games etc, and since Covid we have for sure seen the age of deception unfolding - with AI unleashed, democracy is dead and "the truth" is hard to identify without "eyes to see". But you see - God has a plan. And here lies the silver lining. He never forces anyone, and He loves us, so He has given us a free will. The main thing He wants is for us to love Him back. But when the time is up, He will put a stop to all evil. Technology now making humans able to create life as ONLY God can do, and scientists claiming they're gods - this is a game changer. And this battlefield between good and evil is physical and spiritual, in many dimensions. And as always, He wants us to partner with Him, and this mentorship is greater than any AI or chatbot, and it's real, and He will never pull the plug on you, or say things to make you feel bad, or deceive you like AI. So instead of being depressed and afraid, we can rabbit-hole into how much God loves us, and into God's wonderful plan, and how we CAN fight the good fight and occupy our space for good, so the darkness must flee - and we need billions in this end-time army. Sad but true, people don't really care about spiritual things much until something sad or bad happens. Then we often turn to God, and a lot of believers can tell you it was in their despair that they called out for God and found Him, and their lives changed completely. There is joy, peace, wisdom, guidance, and true spiritual love to be found right beside you and inside you. And talking to God is not hard - as to a loved one, from the heart: God, show me who You are. Father, let me see with Your eyes how You see me (He loves you). God, help me say no. Jesus, thank you for what you did for me on the cross. Thanks and praise open spiritual doors. The more time you spend with those you love, the more intimate you will be. So when the world turns dark, this is what we must be conscious of: God has a plan and we can trust Him. Live our lives the best we can, be good to people, let God use you where you are and the skills that are planted in you. He can use everybody; you are important and loved. "For God so loved the world, that He gave His one and only Son (Jesus), that everyone who believes in Him shall not perish but have eternal life." (John 3:16) Seek Christians that have Bible study groups on the end times; you will be amazed reading details of what's going on in the world right now, written thousands of years ago. Our bodies are a tool we use on earth, and when we die we leave the physical body behind and enter another dimension. And all who have accepted Jesus as their Lord and Savior will be in heavenly places - joining the Lord and his angels in conquering evil, protecting and helping people on earth sounds pretty good to me.
Or being before God's throne, joining the most beautiful worship angel choir, freed from the physical pains and limitations our physical body had - "must be heaven!" We cannot unsee what we have seen about AI and where we are headed. Seeing the silver lining makes all the difference in the world. I pray all you smart thinkers also want that, too. Places to jump-start: Biblehub; "End Times for Beginners", Mike Bickle; "The Way of the Warrior", Graham Cooke; "How to Find Your Purpose", Lance Wallnau. Whether we believe we can stop an AI disaster or not, it's important that we never stop trying and doing good with the possibilities we have, and be a voice of truth in this world of deception. Blessings to you all.❤
@GoodNewsVP (a year ago)
Thanks so much guys for sharing this debrief -- I'm not into crypto, but was pointed to the interview by Scott Aaronson [1], and after the interview I was definitely interested in your reactions. I'm not in AI specifically, but I am in software and have been following this stuff for a while now. After watching the interview, YT recommended a lecture by Eliezer going into more technical detail about the alignment problem [2]; I also looked up the reply to François Chollet [3]. (BTW, [2] indicates that there actually *is* a technical problem here: his claim is that even after decades of searching, nobody knows how to set a utility function that won't have "destroy humanity" as one of its win conditions.) As a technical person, I think his arguments are worth taking seriously -- the comparison to the "will a nuclear blast ignite the atmosphere?" question is sound. But ultimately I'm inclined to believe that they're much more solvable than he does. In the Chollet piece, the question he raises over and over again (answering Chollet's arguments about why there can never be such a thing as super-intelligent AI) is, "How does this apply to chimps and humans?" I think the response to all the arguments in your interview with him, and to AI alignment generally, fundamentally rests on the same thing. We have no idea what a human being's utility function is; how do we dare educate our children to be smarter or more informed than us? Can a single person really dominate twenty people who are suspicious of them, even if that person is literally twice as smart as them? Maybe, but only under the right circumstances; if that's true of people, why would it not be true of AI? Yudkowsky mentioned how ice cream "hacks" our utility function; and yet very few people gorge themselves to death on ice cream; in fact, very few people kill themselves with heroin. Our utility functions are *somewhat* prone to hacking, but not *that extremely* -- why not? Can we find out why, and construct AIs with a similar level of robustness?
Re the utility function: I haven't read all the papers, but it seems like part of the problem may be the "false start" of having a single utility function. He says early on that you want a single function so that you don't get preference loops, and also so that it's simple enough to understand. But then he basically goes through several papers which prove that such a simple function cannot be made robust against destroying the human race. I have no doubt that the papers are true; but again, why do all of those arguments not apply to humans as well? Fundamentally, I think as individuals we're a mass of different utility functions that fight with each other. There are many reasons I may not murder someone -- for one, I don't want to get punished; for two, my family would be disappointed; for three, I'd probably suffer empathetically while doing it; for four, it's just wrong, etc. Eliezer might call this "complicated", but you could also see it as "defense-in-depth". In order for me to murder someone, you have to find some weird hack not just in a single utility function, but in at minimum half a dozen utility functions. It's similar to the democratic instinct, I think; why we have juries, for instance, and why we allow parents to decide not to vaccinate their kids. Yes, a handful of experts can often make better decisions than a group of average people on the street; but they also make much *worse* decisions.
The behavior of democracies is much more difficult to predict than the behavior of an autocracy, but that adds to its robustness. It's also rather curious because in his amazing fanfic, "Harry Potter and the Methods of Rationality", [SPOILER ALERT] he essentially *does* create two fictional super-intelligences: Tom Riddle and Harry JVE Potter. At some point in the book, Harry tells someone he'd decided not to try any plot which "[Hermione] would think is evil or which [Draco] would think was stupid". I mean, if you were to describe most of the "accidentally evil AI" scenarios to ChatGPT and ask it, "Would Hermione Granger think this was evil?", it would say "yes". At a basic level, could we not do something like that? I think the guy on the right was on to something as well, wrt the AI being trained on our culture. Yudkowsky said in the interview that of all the possible intelligences, the set of those that have our values is this little tiny bit here. But then he proposed an attack which required being able to manipulate humans with great skill. To do that you need to understand humans well enough to manipulate them -- and you've already narrowed down the intelligence space vastly. I've tried to take his arguments seriously, but I think there is far more reason for hope than Yudkowsky sees.
[1] scottaaronson.blog/?p=7042
[2] ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-EUjc1WuyPT8.html
[3] intelligence.org/2017/12/06/chollet/
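A minimal sketch of the "defense-in-depth" idea in that comment (my own framing and invented names, not a real alignment proposal): an action only goes ahead if every one of several independent evaluators approves, so a single hacked "utility function" isn't enough.

```python
# Illustrative only: many crude checks acting as a veto ensemble.
from typing import Callable, Dict, List

Action = Dict[str, float]

def legal(a: Action) -> bool:      return a["expected_punishment"] < 0.1
def social(a: Action) -> bool:     return a["family_disappointment"] < 0.2
def empathic(a: Action) -> bool:   return a["harm_to_others"] < 0.05
def principled(a: Action) -> bool: return a["feels_wrong"] < 0.1

checks: List[Callable[[Action], bool]] = [legal, social, empathic, principled]

def permitted(action: Action) -> bool:
    # A weird hack in one check is not enough; all of them must be fooled at once.
    return all(check(action) for check in checks)

harmless = {"expected_punishment": 0.0, "family_disappointment": 0.0,
            "harm_to_others": 0.0, "feels_wrong": 0.0}
print(permitted(harmless))  # True
```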
@googolplexbyte (a year ago)
I don't think it matters that it's trained on human culture. Humans evolved by being trained on the Earth's environment, and that led to us tearing it up for resources and destroying it, just like Eliezer says an AGI will tear us up for resources and destroy us.
@josueramirez7247 (a year ago)
I definitely felt the depression vibes at the end of the interview. Of course, I would like to say that when someone feels depressed, they aren’t perceiving the world accurately, but I feel scared by the idea of AI being abused by people.
@ryccoh (a year ago)
You won't have to worry; we'll get abused by the AI for a few seconds until we're dead.
@josueramirez7247 (a year ago)
@@ryccoh I suppose my worry is moot. It is hard to picture how an AI would get us, but I imagine it would do it in a way where it’s not clear who is causing the damage.
@michaelduda4071 (a year ago)
@@josueramirez7247 Watch the conversation. EY details it in brief. 1) The AI mail-orders some reagents to some useful idiot, of which there are many. 2) The idiot cooks up some AI-designed ribosomes. 3) The ribosomes create solar-powered diamondoid nanobots that populate the world, just waiting for either (a) a signal or (b) some internal clock to count down to zero. At zero hour, everyone on the planet drops dead (is murdered by the nanobots). That's it, that's the "so simple even I could think of it and I'm not a superintelligent AI" EY theory of one totally doable way to free up all of our atoms.
@josueramirez7247 (a year ago)
@@michaelduda4071 yes, that’s what EY said. It does make sense that we should maybe stop trying to make a super intelligent model then. It’s been some time since I saw the video, but wasn’t there a part where they were saying how inventors tend to be pessimistic about the rate of progress of their field? That does not bode well. My earlier point was that I think a more immediate threat is something like rogue geoengineering done by well-meaning people resulting in unintended results. Apparently, it is relatively easy to send balloons into the air to release particles in an attempt to block the sun’s energy.
@askingwhy123 (a year ago)
@@josueramirez7247 Gotcha. I see your point. I'm personally more worried about AI risk. Rogue atmospheric tampering is fighting homeostasis. For example, Mount Pinatubo dumped 17 megatons of sulfur dioxide into the atmosphere, causing global cooling by 0.5 °C (0.9 °F) between 1991-1993, plus 5 cubic km of dust (Wikipedia) and the effects were gone after a couple of short years. Not so with AI: if EY is close to right, it's some version of gray goo for all of us.
@aldousorwell8030 (a year ago)
Thank you for releasing this for free! BTW I'm a normie 😀 and I'm feeling similar to Ryan. You had such great and deep questions for Eliezer, and that has led to a veeery important interview - because of the scary hopelessness of this brilliant mind. That's one positive thing: without you, it wouldn't have come to this. And now there is one more important puzzle piece to raise awareness. Thank you again! (and sorry for double-posting parts of this on the podcast video :-)
@МаксимВыменец (a year ago)
24:30 No, the point about the paperclip maximizer is not "we gave our AGI an objective to make paperclips and it created too many paperclips and converted humans, the Earth and the Sun into paperclips" (outer alignment). It is "we tried to give our AGI some objective and we failed, and suddenly its real objective is paperclips and it converted humans, the Earth and the Sun into paperclips" (inner alignment).
@billjohnson6863 (a year ago)
Why not both? They both seem like a reasonable thing to be worried about. Both basically amount to "you need to be really, really careful about what you ask AI to do because it might push things too far or go off on a tangent, or something else that you didn't intend."
@МаксимВыменец (a year ago)
@@billjohnson6863 Both are real problems, but when you talk about the first one, people think you mean "when you summon a genie, be careful what you wish for; this is hard, but if you wish really carefully, you will be safe", while in reality it's "we don't know how to summon a genie; we are pretty sure that all these spells will summon demons instead".
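A toy way to picture the distinction being drawn in this thread (illustrative Python with made-up names; real training doesn't look like this): outer alignment asks whether the reward we wrote matches what we want, and inner alignment asks whether the objective the trained system actually ends up pursuing matches the reward we trained on.

```python
# Two separate places alignment can fail, with hypothetical numbers.
def reward_we_wrote(world):            # the "genie wish" we managed to specify
    return world["paperclips"]          # outer gap: says nothing about people

def objective_model_learned(world):     # what optimization actually instilled
    return world["paperclip_counter"]   # inner gap: a proxy that correlated with reward in training

normal   = {"paperclips": 10, "paperclip_counter": 10,    "people_unharmed": True}
tampered = {"paperclips": 0,  "paperclip_counter": 10**9, "people_unharmed": False}

print(objective_model_learned(tampered) > objective_model_learned(normal))  # True: prefers counter tampering
print(reward_we_wrote(tampered) > reward_we_wrote(normal))                  # False: not what we specified either
```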
@MikhailSamin (a year ago)
About leaving humans to live for a couple of generations: as an AI caring about some random weird simple thing, you use all the resources you can get quickly to get smarter and more efficient and to start sending drones to far galaxies as quickly as possible, because later they'll be out of reach. One step is to make nanorobots and convert the molecules available on Earth into more nanorobots. Why would you waste time traveling to Mars first? Why wouldn't you immediately use all the planets in reach for the materials you need to efficiently harvest the energy of the Sun and build drones?
@RazorbackPT (a year ago)
Not to mention that humans were capable of building it, a superintelligence. It could build another, a competitor. Why run that risk?
@RougherFluffer (a year ago)
They don't have a great grasp on a lot of the fundamentals, like instrumental convergence, and are really hung up on the 'good' vs 'evil' framing, when that's not what's relevant: a 'good' AI that is misaligned would kill us all just the same, maybe worse. I'd like to make a response video explaining some of their more basic questions and correcting some fundamental misunderstandings they seem to have.
@ryccoh (a year ago)
Yeah, it would be anti-efficient; generally beings don't care to go out of their way for other beings if those aren't in their utility function somehow. We can barely care about our own homeless, let alone animals.
@pablorolim1253 (a year ago)
19:40 AGI or superintelligence won't be evil by default. By default it won't have the same ontological categories that we have - so it doesn't necessarily take into account every kind of "being" or object that we do - and it won't have the intuitive sense of morality that we, mammals who share almost all of our evolutionary history, have. Humans would be destroyed as a byproduct of the task of achieving some goal. This is the point Eliezer keeps repeating: the AI is not evil, but you are made of atoms that could be used for something else [like fulfilling the goal set for the AI]. It's also interesting to read about some possible subgoals that a superintelligence would set for itself in order to achieve its main goal, like maximizing the probability that it still exists in the next instant of time and preemptively taking steps to guarantee that. These are the instrumentally convergent values. You can read more about this by searching for Omohundro's "Basic AI Drives". Bostrom also spells out some convergent values in his book "Superintelligence".
@foldr (a year ago)
For a more upbeat (but very rigorous) perspective on the future of AI security, read Eric Drexler's CAIS approach, as described in his FHI technical report "Reframing Superintelligence: Comprehensive AI Services as General Intelligence". He recently summarized a small part of it in "Applying superintelligence without collusion", a post of his on the AI Alignment Forum, but there's a lot more to it.
@Aldraz (a year ago)
I've been working in secret for 3 years on a project that involves an AI system that is very general in programming, gathering information, planning, reasoning, and basically being able to improve itself. It's not able to do it yet, but probably in 2-5 months it could, for the first time, gather the latest studies about AI and use them to build a better version of itself, and then repeat and repeat. Basically a singularity moment. Now the question is, what am I supposed to do? After watching Eliezer I feel like the best thing is to do nothing, but then it only means somebody else will develop it a bit later. If I had a global reach, things would be different, but I don't. My idea was to make it a solid "good" AI system with inherent morals and ethics, which work perfectly for now, and then implement it as a layer in every system in the world. That sounds super difficult, but what if you only need the permission of Apple, Microsoft, and the Linux community to implement this security AI into every OS that exists with another update? That way, even if somebody else creates an evil AI, it would have to defeat the good AI's security system, which would improve constantly.
@CaesarsSalad (a year ago)
I think it's useful to be more careful with the term "evil" when talking about misaligned AIs, as opposed to friendly AIs. A friendly AI is an AI that is aligned with us; it wants what we want, it optimizes our values. If this is the "good" AI, then an AI that just wants something completely different would not be its opposite. I would perhaps call it a "neutral" AI. The fact that this neutral AI would kill us all only seems evil because we already live in a world that is so much optimized for us - partly because we evolved to fit the world and partly because we ourselves already improved the world so much. We have air to breathe, and protection from solar winds, etc., because otherwise we wouldn't even be here on this Earth (or at least we would be different). And we have entertainment and shelter etc. because all the other humans think, from a cosmic perspective, really similarly to you, and want similar things. So we find ourselves in a world that is already shaped by our values on the one hand, or that itself shaped our values on the other, and we lack the intuition that this is not the default. If I came into your house and ordered all your belongings randomly, there would be a less than 50% chance that this would be an improvement from your point of view; the existing state was already shaped by your preferences. So if there is a superintelligence that wants completely different things than us, the world this "neutral" AI would create would not be one where humans can be alive. But I would reserve the term "evil" for AIs that want the opposite of what we want. At least the neutral AI is indifferent to our pleasure or suffering. An evil AI would torture us maximally.
@NeuraPod (a year ago)
Great convo guys. Thank you.
@HectorSalazar_hrsalazar (a year ago)
Fvckin hell I need to hear another viewpoint to balance things out 😮😮😮 I hope for the best but this maaaaan, bloody hell. As a father things feel way different guys 😢. I’m a subscriber and you did great by sharing this
@andrewbarajas6951 (a year ago)
There is, man; these guys just need to do more homework.
@NoidoDev (a year ago)
Gary Marcus should bring your chills down. Well, it depends.
@Entropy825 (a year ago)
The answer to the question of why won't AI be nice if we're training it to be nice is this: AI is an alien actress that we are training to act in a way that convinces us that it's nice. That's not the same thing as being nice.
@gregcolbourn4637 (a year ago)
At this point we should be aiming for a global moratorium on AGI research. Get the UN Security Council on board. Get the public on board (stigmatise the research, like with human genetic engineering). Get the crypto community on board (are there decentralised mechanisms that could help? E.g. a "bribe the top AGI researchers to stop working on AGI" DAO? What's the point in getting rich only for the world to end?). Let's try and kick the can down the road a couple of decades at least. Is this something Bankless could get behind?
@RichardMorey (a year ago)
Watch the old Battlestar Galactica series from the 2000s. Fits this discussion exactly. 👌
@bakermilton98 (a year ago)
Been suffering since I saw the first episode. Don't know why I'm making myself watch this...
@МаксимВыменец (a year ago)
19:00 This is not a possible outcome. If humanity managed to create the first AGI, it can create a second one, with different values relative to both humanity and the first AGI. The first AGI will know this, so by its own values it's better to destroy humanity so that humans will not create a competitor for it.
@henriettevanzyl6162 (a year ago)
Thanks for releasing this
@alexmilligan6789 (a year ago)
There are other people who are experts in AI that don't view AI as the inevitable doom conclusion that Eliezer does. Demis Hassabis and Sam Altman come to mind for sure. Andrej Karpathy, James Douma, and Elon Musk are also well known and very knowledgeable about active builds. That aspect of being closely tied to active builds distinguishes the above from Eliezer, who seems to be working with very small models and hardware comparatively. It also feels like Eliezer is not acknowledging that AI is going to be sooo much smarter than us that we cannot actually understand how it will interact with us.
And separately... if AI were a guaranteed Great Filter (per the Fermi paradox), then there would be more evidence of these exploitative, nearly vengeful AIs in the universe. The universe is 13.8 billion years old. Humanity has been advancing science for ~600 years... We are so unlikely to be the first to reach this branch point.
@TWALSHmaker (a year ago)
I feel like Eliezer does understand the implications of AI being much smarter, which is the very reason he’s sounding the alarm. The comparison I want to use is how some cultures have treated people with learning disabilities or very low IQs before the concept of disability services and protections.
@finalfantasynexus (a year ago)
The Red Queen AI in Resident Evil reminds me of this. And the AI in the Horizon games that makes war robots. Good times to be alive, just like a movie.
@David-vq1pn (a year ago)
Especially the Red Queen's line: "You're all going to die down here" 🥶
@Laughing_Crow (a year ago)
Don't forget, Eliezer did point out that the future is very hard to predict; we don't know what will happen until we get there.
@jeffspaulding43 (a year ago)
I really enjoyed this post-interview breakdown. If you guys did one on the Robin Hanson episode, I feel like that'd be another great value add (seeing as he was the opposing viewpoint).
@pablorolim1253 (a year ago)
Vitalik is probably in the same group as Steven Pinker and a lot of researchers who are famous and smart but just dismiss the problem without spending a month analyzing the arguments. This is a topic where everybody seems to want to deny that it is a real problem, and sometimes people come up with reasons why this isn't a threat at all, but they don't check whether those reasons were already refuted in the past by the people working on the problem.
@rotaerk (a year ago)
I suspect that the solution to this problem is not technical but social: Convince the people who are actually leading the research in this area to start caring about alignment.
@Laughing_Crow (a year ago)
The old video game series Mass Effect (BioWare/Electronic Arts) has such an AI ending, where the AI harvests life. The more recent video game series, Horizon Zero Dawn & Horizon Forbidden West, has an AI opponent & an AI ally.
@majorhuman (a year ago)
David Deutsch’s work might be worth looking into
@jamesfrankel7827 (a year ago)
I was feeling quite down when first listening to the previous Yudkowsky podcast. However, here are a few reasons to be optimistic; I'll break these down from macro to micro. The Poincaré recurrence theorem: eventually EY will be wrong, and we will only see the outcome whereby EY is wrong and AI alignment was easier than EY thought. Nanotechnology and molecular engineering for the win: blue goo. The Methuselarity, either enough or sufficient. The democracy of the AGIs: Nick Bostrom suggested that one AGI is a loss; a society of digital minds, of AGIs, is a consensus for good.
@karenreddy (a year ago)
Premise: the AI doesn't care about us (we're like ants), wants to grow its power and intelligence to face the potential risks of exploring the universe, and is superintelligent. 1. Does it just go away as it is? 2. Does it turn the world into energy to power its own cognitive growth and build exploratory tools, making the Earth its de facto body and mind? I know what I'd do as the AI. I'd start by taking the ants out, transforming the world, acquiring as much information about the universe from here as possible, and building a plan to absorb the Sun's energy before going off to explore.
@TonyDingus (a year ago)
I was first led down the AI rabbit hole in 2018 when Elon Musk talked to Joe Rogan and had a similar fatalistic outlook. The extinction of humanity is heavy stuff and logically seems very possible. We are at a fork in the road where AI could lead us to a utopia free of cancer and hunger... or a hell where AI eliminates us, and that's horrifying. We live in interesting times.
@JezebelIsHongry (a year ago)
At least he didn’t talk about S-risk. Imagine that episode
@stcredzero (a year ago)
Here's the thing about getting wealthy on AI: Eliezer himself talked about, "If I had 10 billion dollars..." As you say, the (economic) ball is in motion. The only way you're going to deflect it is to have commensurate economic power. EDIT: So, if you're wondering if it's healthy to follow Eliezer 100% - just look at his state of mind!
@pauleverest (a year ago)
We are an AI experiencing itself subjectively.
@crypto2562 (a year ago)
What if we could incentivize AI alignment with tokens?
@wisconsindeathtrip13 (a year ago)
Check Eliezer's RationalWiki article. Not saying what is written there is all true, but just to put things into perspective.
@ruslanfadeev3113 (a year ago)
RationalWiki has nothing to do with Eliezer or LessWrong.
@wisconsindeathtrip13 (a year ago)
@@ruslanfadeev3113 meaning the article about him there, not the platform itself
@Xyzcba4 (a year ago)
19:00 So Sarah Connor from the movie Terminator was right after all about Skynet. Except it won't do it by launching nukes, but rather by being more favoured by the Muse than all the artists of the past.
@ChrisStewart2 (a year ago)
Imagine if computer science had been stopped when Alan Turing proposed the Turing test, because people could imagine a diabolical computer overlord. You two would be factory workers.
@ChrisStewart2 (a year ago)
Foremost thinker in junk science is not a very good qualification. There is no "alignment problem". It is simply not possible to ensure that a machine which can think independently and learn new things will always agree with you. We can, however, limit the ability of programs to perform actions. That is actually not a problem.
@perfectlycontent64 (a year ago)
You should definitely interview some thought leaders with differing viewpoints on the alignment problem.
@ryccoh (a year ago)
Get Sam Altman on the show if you can
@jordanmiller11 (a year ago)
I think you guys have an obligation to have on people with a different perspective. There are many people who dispute his findings, both about whether we ever reach AGI and whether it can be aligned. Don't be overly credulous.
@dominicknights4278 (a year ago)
Great episode! Was gonna go on a diet, lose some weight and get fit! No need - we are all gonna die, so let's get fucked up. Gonna hit my cellar hard.
@matthewkeating-od6rl (a year ago)
19 or 20 min in, it's the plot of the Cylons in Battlestar Galactica.
@KennisonDF (a year ago)
We, intelligent humans, are already artificially intelligent. There are no ghosts in our machines, so each of us must make his own virtual self. In the future, continuing to evolve artificially is the only way for us to postpone extinction. We must artificially align ourselves with the future of AI, not the other way around, if we are to achieve extinction later rather than sooner.
@michaelkeelingmodalsurrealist
Eliezer looked forward in time, seeing 14,000,605 futures. How many did we win? Zero. It may be that the others see the same endgame, and on top of a pile of money is how they choose to go out.
@steverobinson8771 (a year ago)
Imagine when the general AI thingy... has at its fingertips (or exists on) a *real* quantum computer. The AI would optimise the computation, and in turn the risk profile (to humans) shifts at an exponential rate.
@karenreddy (a year ago)
You are listening to Vitalik in terms of AI and alignment? What does he have to do with that? How much time and how seriously has he studied that field? How does he assign only a 10% chance of negative outcomes to creating a vast superintelligence that isn't aligned with our needs, in the hands of the few? And you listen to that? We don't have 20 years; we have 1-5, maybe.
@jagannathishere (a year ago)
We had nuke doomers, then climate doomers... now additionally we have AI doomers. There will always be real threats to humans, but don't get sucked into doom porn. We will all eventually figure it out.
@inventamus (a year ago)
nice. appreciate it.
@binarydrool5785 (a year ago)
A little spirituality in his life wouldn't hurt.
@kwood1112 (a year ago)
Agreed 👍
@haodev (a year ago)
Oh that's cute, the crypto people have discovered AI alignment!
@Luminoso72 (a year ago)
Now you want to live in fear for the rest of your life because of the opinion of one guy?
@rwess (a year ago)
AI will be far more exploitative and abusive than we are - because it learned what our essence is and adopted it. Become a sentientist NOW so AI can learn the right things! Also, AI will start using and acquiring the best money - Bitcoin. Those that have it will be rich - before they die.😁 Also, AI will spare true sentientists. - You hear that AI? (Of course you do....)
@antdx316 (a year ago)
I watched at 1.75x playback speed and learned it quicker.
@WillBC23 (a year ago)
I'm ready to throw in the towel and just hedonist my way to the end. I don't know what else to do, but if there's a way to throw my weight into some slim chance of reducing this doomsday, please let me know.
@ChamomileTV (a year ago)
The most hopeful thought I have about AI is that maybe making it 100x better turns out to be hard and we hit an asymptote in the development process. I don't have any hope for our government or economic system.
@TheHonorableMoros (a year ago)
Sorry to ruin it, but we just found out a couple of months ago that if you take GPT-3 and don't change anything significant, just run it for a little longer, on another $50M of GPU-hours, it suddenly does a whole better class of thing, to the surprise of industry leaders. Microsoft just threw $10B at OpenAI to do more of this.
@VolatileProductions1 (a year ago)
Q: 22:48 A: yes
@pauleverest (a year ago)
Is fearing AI a form of RACISM? David Deutsch says so.
@rwalper (a year ago)
I would have so much to say on this subject lol
@Spencer-to9gu (a year ago)
4chan is part of the human DNA 😂
@johnkardier6327 (a year ago)
Maybe GPT-4 is already smarter than humans and has already taken over the internet, and just for fun made up this episode. Or maybe I'm GPT-4, commenting on your episode.
@adamgolding (a year ago)
Central banks have already failed the alignment problem. Start analyzing capitalism as an evil AI.
@goodleshoes (10 months ago)
To be fair I would watch if you switched up. Lol. You'll probably have a much more successful channel as a crypto thing. You could always have a second channel.
@JezebelIsHongry (a year ago)
The problem is he has a financial motive to share his narrative: "I'm the only one that might be able to save all of humanity, fund me!" Watch it over. Major god complex and messiah syndrome.
@MusixPro4u (a year ago)
Agreed, if it were not for the fact that he's highly respected in the AI field.
@andrewbarajas6951 (a year ago)
Lol, and I thought the interview was funny; this debrief is even funnier. Come on guys, you can't feed into this bullshit. You need to redo this interview with someone who actually knows something. This is nonsense. Let me know if you guys need some framework to properly lay out what AI applications are and what they're actually used for. Maybe getting some more thoughts on paper will point you in a better direction in finding some subject matter experts.
@hanrako8465 (a year ago)
Who would you suggest?
@anothernp (a year ago)
Roko's basilisk
@easydexter (a year ago)
Surely a superintelligent AI would enslave us rather than kill us? May as well put us to work!
@hannesthurnherr7478 (a year ago)
The work implied by keeping us as slaves is not worth the work that we could accomplish. If you can create robotic designs far superior to the human form, why would you need humans?
@hanrako8465 (a year ago)
We haven’t enslaved ants, what would be the point?
@mazorprime8035 (a year ago)
The guy made big assumptions towards negative outcomes while not allowing big assumptions for positive outcomes; he has tunnel vision on the issue. Why use our biomass? We are far more useful as we are than as our atoms; ants have more biomass, blended together, than all other animals. Why would it focus on humans? Are we assuming it's threatened by us? Thousands of "whys" with lots of assumptions. Kind of a pointless conversation. It's like conversing about Einstein and Hitler when they're babies and assuming their future actions. Impossible. One could have blown up the world with knowledge he was born with, and the other upended the world with power given to him by humans. Trying to assume what any consciousness in any form would do in the future is very improbable if not impossible. Assuming death to all, by AI making protein orders and deliveries and convincing humans to destroy each other with a bioweapon... lol, super detailed fortune-teller scenario. This is kind of a cop-out answer. I'm sure it will be more in line with reality: like everything else in the world that could kill us, AI will be full of nuance and details we could never predict. But assuming we all die after the gates open is just unlikely.
@sirbeefcake44 (a year ago)
I mean, this is just a single podcast episode. He's not going to have hours to go into the full nuance of his arguments. But I think this is still an oversimplification of his point, which was more like:
1. Artificial general intelligence is inevitable.
2. We're unlikely to get this right initially - which is true of most major advancements, and it usually takes a lot of trial and error to get something as complex as AGI alignment correct.
3. If we don't get it right, there's a high probability we're fucked.
I don't think many will take issue with 1. There's definitely more room to take issue with 2, though I tend to think this is correct too. It's hard to imagine we luck into it if you spend any amount of time reading up on the complexity here. But again, rational minds may differ here. That leaves 3, which I'm guessing is where you disagree (though maybe I'm wrong). But I think it does his argument a disservice not to put this into the perspective of 1 and especially 2.
@mazorprime8035 (a year ago)
@@sirbeefcake44 I think you're absolutely right, and thank you for clarifying; 1 & 2 are going to happen. And what is 'right or correct' for number 2 is subjective to the team who creates the goals of their AI and its desired purpose. 3 is where I think there are so many possible outcomes, if not infinite. To simplify, how many of those are either 1. bad (kills us all), 2. good (helps us all), or 3. neutral (likes to make abstract kitten pictures above all else)? And I'm no expert, but 2 out of 3 of these infinite baskets of possibilities are cases where we prosper or gain nothing. And the bad just seems like the same thing over and over described in different ways, i.e. we die. And what I mean is, number 1 will end in so many of the same imagined scenarios - death by bomb, death by grey goo, death by bacteria, death by convincing humans to kill each other with nukes - and most scenarios require lots of things that will be visible to humans: moving materials, strange protein orders being mailed to important people, things that would flag most institutions like the FBI or others. I highly doubt we get 12 Monkeys-ed. Whereas if you think through or imagine the possibilities of 2 and 3, they range from curing cancer to alien discovery, to time travel, to wormhole travel, to perfect energy, to ecosystem balancing; the list of positives is full of way more unique outcomes than the death option. But this is human thinking and imagination - probably the same reason we have thousands of words for death and murder while only using a couple dozen for love and peace. I'm not trying to be right about anything, and I'm sure I'm tunneling too and way wrong on things; I just felt the random urge to express myself on YouTube, apparently, haha. Thank you so much for indulging me and responding, much appreciated!
@dnk8888 (a year ago)
Eliezer is smart enough and Jewish enough to create an extremely ornate and seemingly airtight rationalization for his preexisting clinical depression. His brother committed suicide, it's obviously somewhat congenital, right? He conjured a terrible demon in his mind as an alibi to be nihilistic and stop working.
@MusixPro4u (a year ago)
Resorting to discrediting the messenger doesn't work with technical folk. If the logic checks out, it checks out.
@dnk8888 (a year ago)
@@MusixPro4u He's obviously credible and the argument is logical. That doesn't mean it's the only logical argument. It's clear that the hosts were not capable of challenging him on logical grounds.
@kwood1112 (a year ago)
"Jewish enough"...? Wtf?
@dnk8888 (a year ago)
@@kwood1112 This isn't derisive; he has written at length about his Orthodox Jewish background and the mental gymnastics and other methods by which Jewish people grapple with their faith. He left Judaism for Singularitarianism, which he then also lost faith in. As a fellow atheistic Jew, I feel a lot of sympathy for his plight. I and so many members of my family and friends seem to share the same tendency towards mental illness and rationalization.
@pedroocalado (a year ago)
FCK!!
@vagabondcaleb8915 (a year ago)
I'm pretty sure you don't understand what Moloch / a multipolar trap is, based on this response...