
Liron reacts to "Intelligence Is Not Enough" by Bryan Cantrill 

Liron Shapira
1.5K views

Bryan Cantrill claims "intelligence isn't enough" for engineering complex systems in the real world.
I wasn't moved by his arguments, but I think they're worth a look, and I appreciate smart people engaging in this discourse.
Bryan's talk: • Intelligence is not En...


Published: Dec 11, 2023

Comments: 62
@Cagrst 7 months ago
What a deeply unserious person. We are so royally fucked if there are so many clearly intelligent people who fail to grasp such basics about the problems we are about to face. I can’t wait for the latter half of this decade when talks like this are simply laughed out of the room.
@BrunoPadilhaOficial 7 months ago
The contrast between Liron's solid arguments vs Bryan's theatrical screams is just too entertaining 😂 Again, every time I see a non-doomer explain their reasoning, it makes me 2x more afraid.
@scopicstuff 7 months ago
I’ve really been enjoying your content lately, keep it up!
@masonlee9109 5 months ago
Liron, thanks for making this. This talk drove me crazy when I saw it; glad you took the time to respond. Cantrill is generally brilliant, but not here where he is dismissing AI x-risk. I have to guess that in prepping for this he was mostly just looking for an attention-grabbing way to tout the cool bugs his team fixed and that he hasn't yet fully thought through the implications of ASI processes. But hey, he's got that human curiosity, so he'll eventually figure it out. :) (Love ya, Bryan!)
@dlalchannel 2 months ago
25:30 - he says this whilst showcasing that he himself does not understand the risk
@goodleshoes 5 months ago
Something you said... that ASI will be seeing bugs in code/etc. similar to how we can see visual stimuli in 3D spacetime. That is just too much... That's terrifying to me. Humans are made with the intent of surviving in our environment, not really to understand our hardware (brains), but ASI will naturally, right out of the gate, be able to understand itself and its code.
@liron00 5 months ago
Indeed, the number of things that a properly designed intelligence can do is staggering. The fact that a modern iPhone is a thing that can exist in the universe is already ridiculously staggering from the perspective of someone 100 or 1,000 years ago, so we know a lower bound for staggeringness.
@AndriyTyurnikov 2 months ago
In fiction, many "very intelligent" villains or heroes actually look alike - like Sherlock Holmes: perfect memory + access to a deep well of relevant factoids. This actually reveals the limitations of the genre and the author, who is unable to comprehend that in a clash with a vastly superior opponent you would not see him coming. It looks like a fundamental inability to imagine that somewhere out there is a more powerful paradigm/perspective. People are sure they can only be beaten by a slightly different version of themselves. And it is understandable that this comes from people at the top of their game - "look, I am among the best, nobody has challenged me successfully in my lifetime, therefore it is impossible, improbable." Pure hubris.
@goodleshoes 5 months ago
His point about willpower and teamwork... He really thinks willpower is somehow unique to humans? If something is able to effectively roleplay as the thing it's trying to emulate, what's the difference? Let alone the fact that it doesn't sleep, doesn't eat, is endlessly "curious", and will stop at nothing to fulfill its utility function.
@context_eidolon_music 21 days ago
Okay saying Fridman needs Narcan was funny.
@ikotsus2448 7 months ago
Are we sure he isn't a "doomer" trying to make the other side look bad? 😂 I admire your patience and calmness 👍 I haven't finished watching it all; I'm curious to see if you lose it at some point (understandable).
@skoto8219 7 months ago
The derision in his framing, “killed by a *computer program*”, is so irritating. Reminiscent of: “So you think we came from *monkeys*? Like, monkeys in a tree?” Well yes, primates, with a common ancestor - but you’ve already made clear that you think stuff that sounds weird the first time you hear it isn’t worth considering, which ironically makes your opinion on this stuff not worth considering.
@context_eidolon_music 21 days ago
These people actually believe that LLMs are "computer programs."
@blahblahsaurus2458 4 months ago
As someone with a degree in biology, I don't know why he thinks making a bioweapon is an exceptionally difficult engineering problem... You can print any known virus with commercially available machines and tools, and the type of technical knowledge that many grad students will glean over a couple of semesters working in a lab. You can do that because you're cheating - you're not doing biotechnology from scratch, you're using living organisms as a source for the nanomachines which enable you to do biotech. But either way, you can still print the virus. And as for making more complicated bioweapons, or graduating to pure nanotechnology? Well, look at AlphaFold.
@BrianPeiris 2 months ago
It would be interesting to hear your response to Subbarao Kambhampati's talk "Can LLMs Reason and Plan?". I think he lays out a convincing argument for why transformers inherently cannot reason. I think this means we won't see AGI for years or decades, unless we get lucky with a new architecture.
@liron00 2 months ago
I think he's just taking observations about what current AI can't do, then retroactively acting like he always knew they can't do it. On Twitter he's refused to predict in advance what GPT-5 won't be able to do.
@BrianPeiris 2 months ago
@@liron00 I think his argument still stands though. If GPT-5 is a similar architecture, just with more data and parameters, it's still just a transformer that can't reason. It may encode more knowledge, and produce better approximations, but it still fundamentally won't be able to reason, because the architecture doesn't allow for it. If you haven't already watched his talk, I'd recommend it. I suspect he's able to lay his argument out better than he would on Twitter.
@liron00 2 months ago
@@BrianPeiris I’ve seen it and I still think it’s a cop out. If you can’t predict in advance what your mental model says AI can’t do, then you don’t really have a mental model. It’s lame to just always *retroactively* say the particular things AI can’t yet do are due to some fundamental limitation.
@BrianPeiris 2 months ago
@@liron00 "Predict what GPT-5 can do" is such a broad request though. If he were asked whether GPT-5, assuming it's still a transformer, will be able to do significantly better on his obfuscated Blockworld problems, I think his answer would be a definitive "no".
@liron00 2 months ago
@@BrianPeiris He should be able to predict things GPT-5 definitely *can’t* do. What you’re saying about his puzzles is a good kind of prediction, but he didn’t take me up on the challenge of making such a prediction.
@baraka99 7 months ago
He clearly fails to see that AGI will help innovate physics, nanotech, and bioengineering. His proposition is the definition of hubris. A coordinated AGI integration with multiple agents would be unbeatable.
@blahblahsaurus2458 4 months ago
_Proceeds to dismiss all forms of AI risk on the basis of vibes_
@ikotsus2448 7 months ago
Ok, watched it all. Give an advanced AI a flying probe setup (they've been around for three decades) and the prompt to validate all datasheets, and it would have fully debugged that mfer in a few seconds. I love how buggy human-written firmware and our inability to be thorough are used to show how superior we are. For people like Andreessen, LeCun, and Weinstein, I suspect that they:
1. Know the risks (OK)
2. Decide that for them personally the risks are worth taking (OK)
3. Understand that most people would not be OK with the risk (OK)
4. Decide that the optimal path to get their way is to disingenuously convince others that there is no risk (...not OK)
@bcantrill 7 months ago
I'm not sure which of the problems that you're referring to, but it doesn't in fact matter: your observation (that "it would have fully debugged that mfer in a few seconds") is in fact wrong for all of them. (In several of the cases, the datasheet itself is wrong, and in others the behavior is simply not documented at all.) But it is clear from your earlier comment that there is in fact nothing that I (or anyone) could say that would cause you to question your own beliefs in this regard; this topic is not technical for doomers, it's religious -- and I will leave you all to worship as you see fit!
@ikotsus2448 7 months ago
@@bcantrill You say that in several of the cases the datasheet itself is wrong. I said give it a prompt to validate all datasheets, meaning to not take them as gospel but to check operation against the datasheet. So specifically with the first firmware bug, it would only need seconds to check all voltage and data sequences in and out of the CPU. It would see that the VOTF Complete packet was missing and thus locate the issue. I am interested to see your answer on this technical issue.
@bcantrill 7 months ago
@@ikotsus2448 What could "validate all datasheets" possibly mean? How does one validate (say) a pin strap resistor that must overcome an internal pull-up/pull-down to function properly?
@ikotsus2448 7 months ago
@@bcantrill In the context of the start-up sequence it means: check that all communications are according to the firmware and datasheets. Step 1: What should the communications look like according to the firmware and datasheets? Step 2: Log the actual communications. Step 3: Compare. Where would this fail to catch the packet that was not being sent? I understood that the resistor was not the issue.
@bcantrill 7 months ago
@@ikotsus2448 I tried to present an assortment (but ultimately, small sample!) of issues to convey how fiendishly complicated these systems are, but you have apparently drawn the opposite conclusion: that these systems are in fact simple and the development of them could be easily automated by a future intelligence. Perhaps I have done you a disservice by oversimplifying this particular problem for you to the point that you think you understand it?
@tinobomelino7164 2 months ago
When he says "AI can't engineer autonomously" - does he think that all AI will be feed-forward like ChatGPT? I've heard something similar in your talk with Koivukangas, where he said that a simulated brain wouldn't have a will of its own; it would wait for an input. Is the problem that they can't imagine a feedback loop?
@liron00 2 months ago
lol ya I guess
@gerarddemers7445 2 months ago
Got 50 minutes into this video. Bryan Cantrill is his own worst enemy in this presentation... Love the examples he gave demonstrating that only humans could solve these problems (using a brute-force approach, which is what humans are bad at). Were I looking to purchase a product like his computer, just watching his presentation would convince me to look elsewhere...
@TheMrKofiX 7 months ago
I've watched a few of your videos now and you're definitely one of the more reasonable & convincing AI safetyists out there, and the only one to have opened my eyes up more to your side of the aisle. However, I just can't get over:
1. How you think GPT-4, 5, or 6 are only a few years away from a sci-fi-level threat when their limitations are so apparent to everyone who uses them every day?
2. Why you hold Yudkowsky in such high regard when he has made non-satirical statements like, "LLMs could enable computer viruses to jump to DNA substrate and start mining bitcoin" and "GPUs from all over the world will simultaneously calculate a master plan and take over the world"?
3. Why you think the WW3 risk of bombing data centers in foreign, non-NATO countries is lower than accelerating and then ensuring global dominance?
4. Why you think p(doom) can even be expressed in probabilistic terms when it's clear that no event leading up to AGI is down to a matter of chance?
Perhaps you can do a video on this.
@liron00 7 months ago
1. Of course GPT-4 is not powerful enough to outsmart humanity. But try extrapolating the last decade of AI progress. Imagine telling yourself 5 years ago that chatbots and generated images would be this good in only 5 years. Speaking for myself, I would have guessed it would take more like 25 years to make that amount of progress. It's always hard to predict until it happens. But yeah, if we're lucky I can imagine we need another 2-3 decades of AI advances rather than 3-7 years. I don't claim to have an exact prediction. No one does.
2. Jesus man, he was just trying to be funny with the DNA substrate tweet. When I read it, I swear I instantly knew it was a joke; I'm not trying to rewrite history lol. I've been shocked at how many people can't tell the difference between that claim and his other claims, but I guess it's also his fault that his communication isn't clear enough for many.
3. I think we're between a rock and a hard place. I don't think the "enforced treaty against data centers" is a good solution, I just think we're also completely shafted if we don't do a solution like that, so we better try something. I don't think "accelerate and dominate" is feasible because the problem is that we don't know how to control superintelligent AI, so we're just accelerating the time till the first AI goes uncontrollably rogue.
4. You're just asking how Bayesian reasoning works here; it's not an AI-specific question. Probability is not a statistic, it's a property of the mental model of someone trying to understand the world.
@TheMrKofiX 7 months ago
@@liron00 1. and 2. I think the vast majority of people, including e/acc's, would be safetyists if only the threat seemed real. But generally, we find the argument "Compare 30 years ago to today, then extrapolate AI with the same degree of 'unknown'-ingness 30 years forward" weak, because again, the limitations of () are so painfully obvious. From my experimentation, it seems that Ilya's whole shtick about next-token prediction = learning the underlying reality of the world is wrong. When GPT-4 makes not only illegal but impossible chess moves despite clearly "knowing" all the rules (), it begins to seem like a glorified text remixer again. It seems to me that the most common theme, and true sticking point, in your (and Eliezer's) debates is related to this extrapolation of LLMs' capabilities in the future. And if I were to be a little facetious, I think your views would change if you spent more time on the nitty-gritty details when interacting with open-source LLMs and limit-testing GPT-4. Just to make it clear: in the future, the case for AI safety might be totally valid. But by today's standards it seems overblown.
@TheMrKofiX 7 months ago
@@liron00 4. I'm aware of how Bayesian reasoning works. However, I disagree with the notion that someone claiming any kind of p(doom) during the Cuban Missile Crisis in Oct 1962:
1. Hasn't just pulled that number out of their arse
2. Has actual validity in any kind of educated guess, instead of just 1. again but this time under the veil of being "scientific"
3. And, as it turned out, is just wrong about assigning probabilities to what at the end of the day was just a series of deterministic events and actions taken by people, not dice
@liron00 7 months ago
​@@TheMrKofiX If you think superintelligent AI is more than 50 years away then sure, as long as we put serious investment into researching safety, we have a decent chance of being prepared to somehow control the takeoff. But I don't think you're right about what most observers are predicting, even e/acc's.
@mikebarnacle1469 7 months ago
I thought it would be better by now, tbh. Not super impressed. I've had a meh reaction to all the recent AI tools everyone is freaking out about. Doesn't seem that hard to build. 99% of it is just the old ELIZA effect once again, as usual with this space. Still, I've always been fascinated by the sci-fi AI safety philosophy, but I don't think we're practically any closer to AGI today than we were 10 years ago from an engineering perspective. 0% to 0%. And I don't know anyone who actually continues to use these tools; they seem cool and shiny at first, then are quickly discarded. I'm finally starting to see the hype wear down and people coming to their senses in my own fintech field, so that's nice to see. For the VCs... it's just Juicero all over again IMO.
@NitsanAvni 5 months ago
"_We_ can barely do this!" ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-3iBkpPtN-Ek.html
@rtnjo6936 7 months ago
This guy screams for attention. I've never understood such emotionally driven people.
@blahblahsaurus2458 4 months ago
Liron, you ought to know that IQ is a very poor metric of anything, and everyone should stop using it. That's not to say that there is no such thing as intelligence, or that different people's brains don't have different capabilities. It's just that trying to measure intelligence with an IQ test is like trying to measure "athleticism" with a standardized test, such as a swimming race, or a climbing race, or having people learn an acrobatics routine. Each of those would actually be a decent test of some of the many different abilities that comprise something we might call "athleticism", but the results would not have any real meaning. Even trying to narrow it down to "strength" doesn't result in any meaningful metric. Not only do we have dozens of muscles, each of which can be strong or weak, but trying to define "strength" for even a single muscle is not at all straightforward. And with all due respect to athletes, the brain, and its capacity for intelligence, are much more complicated than all the muscles in the human body. In other words, IQ tests are too narrow, they can only examine intelligence indirectly at best, and they are extremely, extremely noisy. They won't even count an IQ test if it's less than two years after your previous one - that is not the full extent of how easy it is to bias an IQ test, it's just an example of the drastic measures taken in order to make IQ tests mean anything at all. In other other words: IQ tests are an adequate measure of how well someone can do on an IQ test. Everything else is literally saying that correlation implies causality. We've seen a lot of the purported achievements of psychology turn out to be non-reproducible, or worse. Using IQ tests as an adequate measurement of anything important belongs in that pile.
@Webfra14 7 months ago
Unfortunately, nothing new in this talk. The first part is just ridiculing doomsters, and the second part is an argument from incredulity. However, I love your calm and level-headed response, as usual.
@jscoppe 6 months ago
I loved when the nerd told the other nerds to touch grass. This whole presentation is carried by his demagogic tone.