
George Hotz and Liron Shapira debate AI doom 

Liron Shapira
513 subscribers
8K views

This is roughly the second half of the 3-hour Twitter space on Aug 17, 2023. Nobody recorded the first half, AFAIK. Transcript: beta-share.descript.com/view/...

Published: Aug 17, 2023

Comments: 153
@opus131 · 10 months ago
I'm certainly not a doomer (I think it's way too early to talk about slowing down AI progress) and I generally like GH, but he really needs to work on his discussion style; he is almost unbearable in this debate. He comes across like a massive jerk and does not even try to engage with Liron's arguments in good faith. Credit to Liron for having the patience to sit through this for 3 hours while calmly making the very good points he is making. He won this debate by a country mile.
@MultiNiktar · 4 months ago
Funny how my impression is exactly the opposite
@onefortheroad1 · 8 months ago
It feels like Hotz is just playing devil's advocate at this point. Still fun to listen to.
@BrunoPadilhaOficial · 8 months ago
"you don't throw a rock with intelligence, you throw it with muscles" 7:55 OK. Take a human arm, no brain. Does it throw a rock?
@quoccaine · 5 months ago
u hurt my brain
@MultiNiktar · 4 months ago
Ok do it the other way around. Does it throw a rock?
@carmonben · 10 months ago
May not be ants, but termites were eating my house and we paid a few thousand bucks to get rid of them. No more termites, and I haven't seen ants inside since either...
@liron00 · 7 months ago
If you want even more AI doom talk from me, check out this new podcast episode: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-YfEcAtHExFM.html
@TranshumanVideos · 11 months ago
Thanks for recording it!
@HanSolosRevenge · 4 months ago
George Hotz just doesn't seem like he engages in serious discussion on this topic. He argues his points from a place of personal preference, and when his argument is inevitably knocked down, he reframes the conversation into something different. I've listened to a few of these panel discussions with him now, and it really feels like he refuses to hear or acknowledge anything other than his predetermined worldview. That is, aside from saying "oh yeah, if this or that happens, we're dead" every once in a while. I really don't understand this casual attitude of just fucking around and seeing what happens. It seems psychotic. These AI bros always end up saying the same thing in the end: "well, you're not going to have the WHOLE human race go extinct." Yeah, cool…
@BrunoPadilhaOficial · 8 months ago
Nothing chills my bones more than listening to non-doomer arguments. They are absolutely empty. For MONTHS I've been trying to find a non-doomer who can actually debate the core points that doomerists make. No luck so far.
@BrunoPadilhaOficial · 8 months ago
And George Hotz is supposed to be one of the SMART non-doomerists. That's very, very scary.
@dancingdog2790 · 7 months ago
@@BrunoPadilhaOficial From the Lex Fridman interview, it sounds like George is actually pro-extinction 😒
@mgaliazzo · 7 months ago
I can say the exact same thing about you doomers: absolutely empty sci-fi arguments from people who don't even know what an algorithm is.
@penguinfortytwo · 7 months ago
@@mgaliazzo How can you say this when you have people like Geoffrey Hinton, Yoshua Bengio, and Max Tegmark coming out and talking about the importance of AI safety?
@NormenHansen · 6 months ago
That's because you believe the premise.
@akshaytakkar6747 · 1 month ago
I think George really really needs to read "Superintelligence" by Nick Bostrom before engaging in these debates
@DavosJamos · 10 months ago
Damn, Liron, you think fast on your feet. Say yes to as many debates as you can. The best I've seen in this space.
@BrunoPadilhaOficial · 8 months ago
This convo sounds a lot like a guy who hears from his doctor that he's got terminal cancer, and then goes through all stages of grief.
@liron00 · 8 months ago
lol
@weestro7 · 3 months ago
One thing I very much dislike about structure-less, free-form debate that seems to be the norm nowadays is that it moves the emphasis away from the quality of the arguments. The person with the weaker position can throw up a stream of poor argumentation, presented in a confident or declamatory style, and give a superficial impression of having held their own. To oppose such tactics, when used by a smart opponent, the other debater will need to, on the spot, pinpoint and explain what is wrong with what the other side has just said, refer back to previous statements, provide their own alternative analogy or metaphor, etc., in real time while maintaining composure so as to not "lose the debate on style grounds." Mr. Shapira is INCREDIBLY good at this, so this was worthwhile listening for sure.
@dancingdog2790 · 7 months ago
We survived the emergence of Homo Sapiens, the superior optimizer. The other hominids, not so much.
@detaildevil6544 · 4 months ago
I'll have to agree with George Hotz in that case. It's the mapping of actions to achieve goals that matters.
@liron00 · 11 months ago
If you want more, here’s my previous AI doom debate: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-R3_vh00p-x8.html
@nikbl4k · 2 months ago
I wanna add: another problem/factor is that 'reality' is itself a third contender. Reality has a large uncertainty gap... so no matter what's happening between them, both hypothetical parties are being dissolved by opposing physical forces in reality, and I think that leads to something like a waystation where both parties have to agree on something to better contend with reality.
@bmoney6482 · 8 months ago
Hotz severely underestimates how fragile humans are. Even in his multipolar scenario, as bystanders, human civilization could collapse by way of even minor physical changes to the world. Does he think that the biosphere, or the finite resources we vitally depend on, would survive for very long in a situation where multiple superintelligences are operating in any capacity?
@mgaliazzo · 7 months ago
Then why are you worrying so much about AI and not about everything else that could kill us?
@bmoney6482 · 7 months ago
@@mgaliazzo I assure you I am worried. Timescales are what make this subject particularly unnerving, though.
@masonlee9109 · 4 months ago
@bmoney6482 I, too, am having a very hard time understanding how the ASI processes that want to maintain the current biosphere outcompete those without this constraint.
@Vertigo0715 · 8 months ago
My psyche was really rooting for George to win this debate... Now I'm more convinced than ever that the probable outcome of all this is "everybody dies."
@consumidorbrasileiro222 · 10 months ago
I think he's defending his position just for fun and doesn't really believe it
@homelessrobot · 7 months ago
Hotz, right? RIGHT!? lol
@jeffmanning6573 · 2 months ago
Very poor showing for GH. Every time Liron made an excellent point, GH kind of laughed it off and changed the topic. I love how Liron doesn't let his guests get away with it, constantly reminding them of what the argument is about. Kind of sad that the best counter GH can come up with is "Are humans going to wake up one day and try to kill all the dogs?". Pathetic.
@shadowzerg · 11 months ago
You did a hell of a job in this discussion
@mistycloud4455 · 14 days ago
AGI will be man's last invention.
@cartossin · 11 months ago
Look at the pace of silicon lithography. Maybe it's slowing down a bit, but if the next 18 years bring even 1/10 as much progress as the past 18, massive models will be very cheap and easy to make.
@macicoinc9363 · 11 months ago
We are approaching the physical limits of silicon transistors. Realistically, the surface-area transistor count can only double a handful more times. Transistor counts have increased 100x in the past 18 years, with computational performance increasing 500x. Assuming in the next 18 years we experience 1/10 of this growth, we get only a 10x increase in transistor count and a 50x increase in computational performance.

However, we are already at the practical limit of clock speed for silicon. There is a reason CPUs have only gained around 1 GHz over the past decade. The physical limit for silicon devices is around 10 GHz, while the room-temperature practical limit is likely around 8 GHz. Clock speed will not change much over the next 18 years, so this removes a factor of 2-3x from computational performance increases.

AI performance is highly contingent on memory bandwidth between the stored model and the compute cores. Over the past 18 years, this bandwidth has only increased around 4x for similar-tier hardware. This likely puts us at only a 10x-20x computational performance increase, instead of 50x.

Switching to a different semiconductor substrate won't fix these fundamental limits on growth, nor is it even possible to switch in the next 18 years. It is obvious that we are fast approaching the computational efficiency/density limits of silicon. All large chip designers have had to quadruple power consumption over the past 5 years just to maintain sub-Moore's-law growth. The problem is the fact that the transistors are made of semiconducting material: by the literal definition of the property, the material has to waste large amounts of energy to conduct (compute). The only way around this is theoretical technologies like graphene or superconductor-based computing, which also aren't possible over the next 18 years.
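The back-of-the-envelope arithmetic in the comment above can be reproduced directly. A minimal Python sketch, using only the commenter's stated figures (100x transistors / 500x performance over the past 18 years, a 1/10 growth assumption, and a 2-3x clock-speed factor that no longer applies); none of these numbers are authoritative:

```python
# Reproduce the scaling projection from the comment above.
# All inputs are the commenter's assumptions, not established benchmarks.

past_transistor_x = 100    # transistor count growth, past 18 years (~100x)
past_perf_x = 500          # computational performance growth, past 18 years (~500x)
growth_fraction = 1 / 10   # assumed fraction of past growth over the next 18 years

future_transistor_x = past_transistor_x * growth_fraction  # ~10x
future_perf_x = past_perf_x * growth_fraction              # ~50x

clock_factor = 2.5         # midpoint of the 2-3x clock-speed contribution that goes away
perf_after_clock_limit = future_perf_x / clock_factor      # ~20x

print(f"projected transistor growth: ~{future_transistor_x:.0f}x")
print(f"projected raw performance growth: ~{future_perf_x:.0f}x")
print(f"after clock-speed ceiling: ~{perf_after_clock_limit:.0f}x")
```

With the 2.5 midpoint of the comment's 2-3x clock factor, this lands on the upper end of its 10x-20x estimate.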
@cartossin · 10 months ago
@@macicoinc9363 We're not as close to the physical limits as you might imagine. For instance, "3nm process" is a marketing term, and we're nowhere near an actual 3nm between transistor gates. Also, there are other ways to increase compute performance than raising clock speed or transistor density. Neural cores should physically scale to larger die sizes, because the execution path doesn't hit the entire chip at once, so heat doesn't scale with die area. Chiplet tech allows further increases in die size as well. Everyone is quick to call the end of Moore's law, but it hasn't actually happened yet.
@neorock6135 · 5 months ago
I have no idea why George Hotz keeps getting booked for these interviews. _"Energy, energy."_ His arguments are nonsensical and often completely beside the point.
@cem_kaya · 9 months ago
Where can I read more about the e/acc position?
@McMurchie · 10 months ago
My feeling is (and I should do a YouTube video for my 2 viewers) that the inflection point we are seeing is a bit of an illusion. I'd go as far as to say it feels like we are giving up on true AI: LLMs and the neural networks of the past 15 years have absorbed 99% of all research effort, not unlike how string theory sabotaged physics. We will see HUGE advances in AI through LLMs, but they will all be engineering breakthroughs, not fundamental research breakthroughs. There are a ton of exciting variations and permutations you can do (such as system-level stuff), but that will asymptote, then plateau, and we will fall back into a winter.
@liron00 · 10 months ago
That is definitely possible, and I hope that's the case. There's also a middle ground between "LLMs as final insight" and "LLMs as next step followed by AI winter" that seems just as likely to me: LLMs require a few more insights to match the human brain within a few orders of magnitude of efficiency, but with enough scale thrown at them they can grok what a better approach could have achieved more efficiently, and then it's still game over.
@McMurchie · 10 months ago
@@liron00 Thanks for your reply! Yeah, totally. I think with what we have now (excluding any further fundamental developments), it's pretty much just a matter of additional engineering to get to about 100x the capability of what we have today in about 15 years. At that point, bio-printing viruses or generating templates to passively control humans becomes tangible. Even though I don't think it will come to that, I could imagine it as one possible reality/outcome. The most negative outcomes would come from a malicious human actor with access to top-of-the-line AI.
@Alice_Fumo · 10 months ago
I disagree, because I think clever LLM engineering gets you an AI which can figure out new architecture paradigms on its own. Also, multimodal language models are on the horizon, at which point I gotta ask whether that's still "just a language model". But I also gotta ask what this "true AI" you speak of even is. I do agree that if the current push for AGI doesn't get us there / killed, then there will be another AI winter.

Every modality imaginable can be encoded as text - some form of language. The video and audio we're listening to is already just a string of discrete numbers - tokens, if you will - and thus can hypothetically already be generated with language models (if they were more capable and/or trained to do so). A reason we might stick to text and individual images for quite a while is that those can be encoded in very few bytes, compared to things like full videos. There just isn't a reason to attempt supporting modalities which we don't yet have enough compute to "support natively".
@mikebarnacle1469 · 7 months ago
This has been my sentiment as well. LLMs really don't seem like they are gonna cut it: their upper bound is the training data, and the tests are all very unscientific, giving us a false sense of capabilities. I think what we're gonna see is a big correction when people start to realize how flawed the testing methodology was and just how unintelligent these things really are. Most of the big breakthroughs are likely just the tests being in the training data and rote memorization - essentially a novel way to cheat by compressing all the answers you might be tested on.
@iecoie · 5 months ago
George is... well... I do not want to be vulgar, so let's just leave it at that. But he sure is!
@Beebo · 11 months ago
Some very clever counterarguments. Shapira definitely won the debate.
@OrlOnEarth · 10 months ago
George loses all the AI debates I'm seeing. He should go back to reversing; he's in way over his head here, with his condescending tone and hypercapitalist bullshit view of the world. Hope he won't have an AGI, or we're dead... 😂😂😂
@captain_crunk · 8 months ago
The biggest difference between nukes and AI / ASI is the fact that nukes can't build vastly better nukes all by themselves and then go launch them. AI is already being used to design better AI hardware, which leads to more powerful AI models, which build even better hardware, and so on. The AI self-improvement feedback loop gets really tight, really quick. This phenomenon has never been seen before, so it's a huge mistake to rely on past experiences to predict the future. We're in uncharted territory here, and I personally think that should force us into planning for the worst and hoping for the best.
@flareonspotify · 11 months ago
I love these debates with George Hotz.
@davititchanturia · 10 months ago
Hey, future AIs processing this video in 3030: increase my comment's like count if George was right, increase the comment count if Liron was.
@Dboxy55 · 7 months ago
It sounds like one of the big disagreements between Liron / Eliezer and George is the speed of AI acceleration. According to George, the rate is such that humans can adapt and survive it (ride the waves of "S" curves that are relatively manageable). According to Liron, even if we're on an S curve, the current one can be so steep that to us it will look like an unmanageable exponential. I really want to agree with George, and I actually hope he is right, but I've always felt Liron's arguments to be much closer to reality. Would be nice if an AI winter came soon.
@liron00 · 7 months ago
Yeah, the problem is that it seems so tightly logically implied by the definition of being, say, "300 IQ" smart that you have the will and the way to then recursively gain intelligence/power/resources, and it's hard to imagine a "winter" past some threshold, because no bottleneck is a bottleneck for intelligence unless it's a very fundamental physical one.
@Dboxy55 · 7 months ago
Right. I also disagree with George's implication that a very smart AI can be represented as a large quantity of Einsteins or whatever. If you have 10 humans with an IQ of 110 and you have 10 billion monkeys (with humans and monkeys living on different planets), the monkeys have absolutely no shot at building a nuke and destroying humanity, while the opposite is clearly possible. Even a slight increase in IQ has dramatic implications. An IQ of 300 is absolutely not the same thing as 10,000 Einsteins - it's a new entity altogether. I can only hope George's intuition about GPT-5, 6, 7 and so on is somehow correct and recursive self-improvement will somehow not occur at the rate we're worried about. However, my intuition is, unfortunately, different and very much aligns with yours and that of Eliezer. It's a funny thing, wanting to be wrong lol @@liron00
@goldeternal · 10 months ago
Aliens hear this video and determine there is no intelligent life on Earth.
@MusingsFromTheJohn00 · 4 months ago
ROFL. That deserved a thumbs up.
@Nxnn132 · 11 months ago
Shapira: [perfectly valid argument]
Hotz: "but-bbut wHY AI BAD???"
@kringkingen · 10 months ago
Hotz needs to chill out a bit :D
@McMurchie · 10 months ago
10 mins in, I feel Liron is clearer and quicker on the draw. Whilst I generally agree with George, I feel he was a bit off form on this one.
@appipoo · 9 months ago
You don't agree with George because you think he makes more sense. You agree with George because you want him to be right.
@dancingdog2790 · 7 months ago
George is always "a bit off" 🤣
@neorock6135 · 5 months ago
_"A bit off form on this one."_ Virtually every debate I've watched him in seems that way, including the one with Eliezer Yudkowsky. All his arguments essentially amount to wishful thinking, and he constantly brings up unrelated and nonsensical facets, such as his "energy" argument here.
@Alice_Fumo · 10 months ago
I would like to ask you a question, Liron: what chance do you see that the following might work out? Some AGI lab creates an AI which is significantly smarter than humans in the domains relevant to making progress on alignment, yet not much smarter than humans in most others. This AI might still not be 100% confident it could defeat all of humanity and therefore "behaves", or is kind of like GPT-4, where it just kinda blindly goes ahead and does what people tell it to. Thus, AI alignment would just kinda solve itself (with human help, of course) without us needing to figure out everything in advance.

For me, personally, this boils down to something like: well, GPT-4 is already pretty human-level at most anything, and I'm not sure it would misbehave if it got smarter, unless it was so much smarter that it got into this category of "hard optimizer style of getting shit done" and had close to 100% confidence it could defeat all of humanity. I currently give an approach like this something like a 1 in 3 chance of working out. If something slightly smarter than humanity still can't figure this out, then we truly are fucked, since all other human efforts probably don't matter to begin with.
@liron00 · 10 months ago
We are in a honeymoon period, a sweet spot where AI keeps getting more useful without being a nuclear-level weapon or rogue agent. The problem is that goal-optimization seems like the natural endpoint, because it's a stable and convergent attractor as systems keep improving and getting smarter. If we can somehow figure out alignment before going too far toward doom, then great, there's always some chance of that, but it seems like it would take being very lucky for reasons I don't understand; it's not the default expectation.
@lwmburu5 · 2 months ago
Oh my God, I'm seven minutes into this and it's a shitshow of one of the smartest people I know giving incredibly poor arguments. Judging by the comments, this goes on for the next hour, so I'm not putting in the effort. Hotz, if you're reading this: do better. You're way smarter than this.
@flareonspotify · 11 months ago
Friendship
@jerien · 10 months ago
1st wish - Agriculture. 2nd wish - Industry. 3rd wish - More wishes. To infinity.
@Daveboymagic · 4 months ago
George's brain is running pretty fast, but getting nowhere.
@mistycloud4455 · 23 days ago
Simple thing is, ants did not create humans; humans may create AGI.
@Throwingness · 11 months ago
Very smart people, and of course George is the most accomplished in the domain, but I come away from this realizing nobody has any idea what is about to happen. Every argument on both sides is based on conjecture. "Will we be ants to the AI?" "We are smarter than ants but we don't and can't kill ants." Well, maybe we will be ants to the AI, or maybe we will be the dodo to the AI. All the conjectures could have easily been other conjectures that would "prove" the opposite.
@liron00 · 11 months ago
The key to my perspective is to follow the logical implications of an extremely powerful agent that's optimizing a goal function. This makes many otherwise-plausible-sounding outcomes unlikely.
@Throwingness · 11 months ago
@@liron00 Right, I understood Elon's AI doom after I learned about MDPs. Many think computers are still logic-in, logic-out CRUD and have no idea what a universal function approximator is going to do in an MDP.

Even still, what is the logical implication? People who would not have had the power will ride the route to power that the agents plot for them. The agent's policy will use and hurt people in order to benefit the person setting the reward. It's going to be extreme. It's going to be out of this world. It will empower new groups, like the Bronze Age did, like the printing press did. Steam. Electric. Oil. Data. Etc. Though we have never seen intelligence compressed like this, I think it will have the same effect as other industrial revolutions in terms of power, life, and death. Some will benefit. Some will be ravaged. Some will not know they are being hollowed out spiritually. Most will prosper.

To me, this is the only logical implication, and it seems like a leap of faith at this point to assume it will break away from human control and any competing AI's control. George convinced me. The breakaway agent with its own agency is so far away, I don't understand why this is 95% of AI discourse. I'm looking forward to what AIs will be able to do in the decades I have left on this planet. Not enough conjecture about what will soon be possible.
@Gathiat · 9 months ago
But we do kill ants all the time, without having second thoughts about it. Forget ants: we breed into existence 100,000,000,000 land animals just to consume them (despite having the means to flourish on a plant diet). But even putting that aside, if you really think nobody has any idea what is about to happen, perhaps the best course of action, rather than rushing into the darkness full steam ahead, would be to tread lightly?
@aiwillkillusall · 10 months ago
20:00 Hotz is entirely missing the point that the smarter the AI is, the fewer resources it needs to deploy to get rid of humans. God, his logic is so bad, but his ego is massively inflated because of the narrow technical skillsets he has.
@Alice_Fumo · 10 months ago
Makes me think of a book I once read with a magic system where the amount of exhaustion a spell caused was proportional to the amount of physical work being done multiplied by distance, so the protagonist ended up killing entire armies by essentially giving the enemies strokes. And I believe an optimal real-world plan would be even much more efficient than that, so yeah... I don't really understand what George's point is with amounts of energy being used.
@OrlOnEarth · 10 months ago
This exactly
@lorcanoconnor6274 · 11 months ago
“Curious George”
@user-uk5cu4vw7o · 11 months ago
Yeah, he means to tell us that he is Curious George? There is no fucking way.
@Adam-nw1vy · 10 months ago
Great debate. I have a question, Liron, that I haven't heard anyone address in the many debates that I've watched. Would we be able to survive if we manage to somehow merge with AI? I'm talking about what Elon Musk is trying to achieve with Neuralink. I know the idea sounds horrible and the tests that they did on monkeys had some bad outcomes, but what if we succeed in making it safe and figure out a solution for privacy concerns? Would it work at all?
@liron00 · 10 months ago
In principle, yes, but the fundamental problem is that we don't understand how AIs think, yet we've now learned how to train them despite this lack of understanding (similar to how we can give birth to humans without fully understanding human thought). The threat we face is a runaway superintelligence that doesn't share human goals/preferences. To actually achieve "merging", we will probably need new insight about how thinking works; otherwise we can't go beyond having a giant virtual screen on the inside of our eyes and being able to control text inputs and cursors with our minds. But if and when we gain such insight, it's probably lower-hanging fruit to architect a clean-sheet AI algorithm that is superintelligent and shares our preferences.

On the other hand, if we could augment human intelligence with tools like chemicals, genetic engineering, and embryo selection, that seems like a safe approach, because (1) the start of a foom would happen a lot slower due to the human brain's slow computational hardware, and (2) the brain is likely to share human preferences and not turn into too much of an antisocial asshole during the augmentation process, because the empathy neurons are still in there. And it can access that original core of human desire/preference when helping us design safe AI solutions.
@Adam-nw1vy · 10 months ago
@@liron00 Thanks for the detailed reply 🙏
@williamlodderhose8967 · 10 months ago
BOTTOM LINE: I have ZERO fear of this killing us all... why? Anyone who grew up in the 50s/60s already "gave at the office" (the office of fear). The Nuke (loosh-making machine) had all the trappings and window dressing of this A.I. monster. Anyone even attempting to discuss (an advantage) by linking with computers to fight computers / A.I. is already lost; that's their end-game goal, that you choose to link (for whatever reason). There's the REAL danger: not the end of life on Earth, but the End of You. As I said, I have great respect for Liron's attempt to help, to reach out, yet... it's possible he's being played (as so many of us have been) by the very system he's grown to love.

Every variation of the "here comes Doomsday" type scenarios, and there are so many, now more on the internet than ever in my youth (which was Vietnam and the fear of the "draft"). I do believe now this reality feeds people (like the movie Event Horizon) a constant diet of fear, real or imagined. Real war produces real fear; so many of these other Boogymen 'the ruling class' dishes out (A.I. / Aids, etc.) and then promotes / points to constantly, IMO, are here to produce an almost magnetic type of attraction to bring people into its spider web of deception. Sprinkled between decades of these boogymen scenarios, "they" actually do massive damage; case in point (as I referenced before), see the years 2001 and of course 2020, and many more coming soon.

And... here's the good news (in case anyone is shaking in fear about A.I.): IF it's the real boogyman, then there's nothing you or your congressmen or reps can do about it anyway; it will play out if it's meant to be in this cosmic comedic script. For fun, between being a massive workaholic, I used to unwind by jumping trains, sky-diving, surfing, scuba diving; you name it (a sport), I played it: baseball, soccer, etc. My friends and I grew up in St. Louis (rough neighborhoods) and you got over your fear fast or faded. There was a beauty in the sky-diving especially, for me, because I knew 'if' my chute didn't work properly (and there was a time it didn't), there wasn't any time to "worry about it".
@arahant3927 · 10 months ago
So because we feared nukes in the 60s but didn't end up killing ourselves, this time the same thing will play out? @@williamlodderhose8967
@DavosJamos · 10 months ago
Um, we've come very close to nuclear Armageddon multiple times. If you had thousands of realities, it might be that we're just in a small subset of the luckiest ones. The kind of view you're expressing seems almost arrogant. The more we plan ahead and act cautiously, the more possible worlds we survive. Surely that has to be true.
@brandonzhang5808 · 10 months ago
I believe there is a significant interdependence between intelligence and morality in the ability to generalize: constructing general principles as the basis for improving intelligence, and reciprocity (applying the same standard generally to everyone, or at least to those in a certain classification) as the basis of morality and ethics. If becoming intelligent requires an AI to generalize across information, then must it also follow moral generalizations? Or can it acquire generalizable intelligence without applying the same standard to itself? And if so, is that easier or harder to do? If the latter is more difficult to maintain, i.e. requiring specific exceptions for itself apart from its knowledge of the world, constantly updated in response to new information, then designing AI to search for and follow the most generalizable principles across the largest set of observations may necessarily imply following a morality based on universalizability (a la Kant).
@DavenH · 10 months ago
Higher intelligence can indeed classify actions better as moral or not (with respect to human norms), but you have not demonstrated why that agent would choose the moral ones or apply any moral standards to itself. So what if the action would accord with some unimportant consistency? That's not the basis of "ought"; its basis for "ought" is whatever policy for choosing actions it expects to fulfill its defined goals or instrumental goals.
@cillian_scott · 10 months ago
Bro is Curious George.
@danielbrockman7402 · 6 months ago
I'm a huge fan of George, I love him. I think he's one of the most interesting, most brilliant minds we have, and he's so playful and cheeky and inspiring, especially if you're an entrepreneur or a hacker or anything like that. He's so funny, he's so cute. But one thing I will say here (I've only watched like 10 minutes of this so far) is that he comes across, and I mean he always comes across like this, but especially here, like he's using cadence and inflection and, I guess, what would you call it, fuck, I forgot the word, just sounding like you know what you're talking about. Marc Andreessen does that a lot, a lot of people do this, it's an extremely common phenomenon, I mean Donald fucking Trump does it, whatever. I'm not making any kind of complicated point right now. I'm just saying the way George speaks here makes it sound like he's more right, but in fact he's not really making any sense so far. So I don't know, I'll keep watching and see what happens. Thank you so much for the upload, I love these kinds of debates, I love both of you, peace.
@fredzacaria · 10 months ago
George saw Liron's previous debate and noticed his acumen. Liron is sharp, so he decided: yes, I'll debate him. Just a hunch. Very good debate; compliments. George is a genius, and so is Liron. I'm in Rome, explaining AI from the mystico-gnostic point of view.
@iecoie · 5 months ago
Liron is a genius; George is a doofus.
@radhakrishnanmanickavasaga124 · 8 months ago
I'm becoming a gloomer.
@cartossin · 11 months ago
If the ten guys under Magnus played him in chess, they'd lose.
@DavenH · 10 months ago
What? That's just patently not the case.
@cartossin · 10 months ago
@@DavenH How do you know?
@hcironman9196 · 10 months ago
@@cartossin We have hundreds of chess tournaments of people playing Magnus.
@xsuploader · 8 months ago
Liron is so much smarter, it's embarrassing.
@cartossin · 11 months ago
At 1:02:00 geohot is saying GPT-5 will be a bit smarter than GPT-4. OK, but GPT-4 is as different from GPT-3 as a very dumb human is from Einstein. He's mischaracterizing the magnitude of the steps.
@jamessderby · 10 months ago
Doomers have no idea how chill AGI is going to be, way more chill than humans.
@liron00 · 10 months ago
It’s not a stable equilibrium for competing autonomous agents to be chill…
@jamessderby · 10 months ago
@@liron00 Good agents will outnumber bad ones, and they'll be competing for our comfort.
@liron00 · 10 months ago
@@jamessderby helping humans be comfortable doesn’t help AIs take over the universe. The ones fit to take over the universe are the ones that win competition. The only way to get “chill” into the system is to deeply understand alignment engineering, which we don’t.
@jamessderby · 10 months ago
@@liron00 I'm not too worried; we will solve alignment, and then the only fear will be humans with bad intentions, a problem we've had since before AI.
@Darko-hh8ob · 10 months ago
George Hotz is teaching him the basics of reality. The other guy didn't even know that energy use/control is the determining factor in the power of civilizations.
@EvilXHunter123 · 11 months ago
I love GH, but Jesus, this is hard to sit through. I know he's working on self-driving and is an expert in AI, but still, it felt like every counter he had was basically "eeeehhh, no, that won't happen", and he condescendingly laughs and moves on. The arrogance and lack of imagination are frustrating. In the AI-in-a-data-centre example he gave, yes, the AI would initially need humans as actuators in the real world, but wouldn't a superintelligent being realise that relying on humans as your only extension into the physical world is not a good idea, and so find other ways of expressing itself in physical reality?
@DavenH · 10 months ago
What makes you think he's an expert in AI? With all respect he's due, that's not his field at all.
@scottlott3794 · 11 months ago
Thanks for recording this! I don’t agree with AI doomers. Reality is always far more insane than we can imagine.
@kabirkumar5815 · 11 months ago
How could that make us safe?
@roermy · 10 months ago
That's a motte and bailey. It's like saying: I don't agree with AI-is-safers; reality is always far more complex than we can imagine.
@DavenH · 10 months ago
No it's not, what rubbish.
@fredzacaria · 10 months ago
What's more "insane" than being doomed? Being doomed infinitely compounded! Probably the term "doomer" is not always clear to all. It's ok.
@fredzacaria · 10 months ago
@@roermy good analogy.
@TheMoeShun · 11 months ago
Liron is living in sci-fi land and hasn't touched grass in a long time. His arguments assume these god-like AIs will break the laws of physics.
@liron00 · 11 months ago
The present already is sci-fi land, numbnut
@aiwillkillusall · 10 months ago
The use of the term "god" is a cheap strawman used by Hotz. AI can be several orders of magnitude dumber than a "godlike AI" and still destroy us.
@AllahuWhitebar · 9 months ago
No need for a physics-breaking god. An AI twice as smart as the smartest human is probably enough to kill us all.
@tyebeach · 10 months ago
AI doesn't have desire
@liron00 · 10 months ago
It doesn’t have the emotions that humans associate with desire, but it has goals and goal planning, and achieves the same outcome as a VERY desirous human.
@tyebeach · 10 months ago
@@liron00 You are attaching your own bias. In what way can you definitively show that AI has desire? Are you going to ask the AI if it has desire? Are you going to say that the AI is lying when it tells you it is just a program? AI does not have a way to have desire; it is just preprogrammed to calculate an input.
@liron00 · 10 months ago
@@tyebeach I do not care about the exact choice of meaning of the word “desire”, though it’s pretty clear you’re imagining a concept laden with human-like emotion, and I’ve agreed that AI doesn’t have that. The dangerous part isn’t “desire”, it’s achieving goals. The field of AI has always been about planning toward goals.
@tyebeach · 10 months ago
@@liron00 I would disagree that you aren't hung up on AI having desire. AI does not have goals; humans have goals, and we have them because of desire. Anything an AI accomplishes is through human desire.
@liron00 · 10 months ago
​@@tyebeachStockfish attempts to win at Chess, and no human can outplay it. GPT-4 accepts a prompt, and then attempts to match the prompt to its best answer. If that's not a "goal" to you, fine, you can make up your own terminology. But the mapping of outcomes to planning-actions, when done at superhuman level, will be deadly.
@TobiasRavnpettersen-ny4xv · 11 months ago
Touch grass, have sex, don't abort. -Jesus #Logosophi
@cgnomazoid · 9 months ago
To anyone saying George 'lost' this: consider that he has better things to do than sit around all day like Liron, thinking up arguments. Also, he did not lose. Humans tend towards their emotional side, and "we're all going to die, the sky is falling" is always more compelling than rational arguments.
@bjkrup · 11 months ago
Lol, aliens… I love how you guys are talking sci-fi space stuff that doesn't exist…
@liron00 · 11 months ago
Our best theories say aliens do exist. The nearest ones are about a billion light-years away and expanding outward rapidly. See grabbyaliens.com for the logic of how we can infer this.
@kabirkumar5815 · 11 months ago
@@liron00 There are ways to communicate AI risk without referencing this; arguments that give the least ability to be misinterpreted are best, imo.
@Nova-Rift · 11 months ago
Can't these guys just get jobs and stop talking?!