
AI Doom Debate: Liron Shapira vs. Alexander Campbell 

Liron Shapira
513 subscribers
1.6K views

What's a goal-to-action mapper? How powerful can it be?
How much do Gödel's Theorem & Halting Problem limit AI's powers?
How do we operationalize a ban on dangerous AI that doesn't also ban other tech like smartphones?


Published: 3 Aug 2023

Comments: 48
@willrocksBR 11 months ago
The discussion about power is irrelevant. Humans are incentivized to give AI as much power as possible, to the point where it can get all the power it wants by itself.
@AFewSightsSounds 1 year ago
Fascinating discussion. I would love to see more of these.
@liron00 1 year ago
Here’s another convo I had: twitter.com/liron/status/1676647702145429504
@Dan-hw9iu 9 months ago
Let's make a plot. On the x-axis: an agent's efficacy in finding actions which most successfully reach goals. On the y-axis: an agent's global influence. What curve belongs on the plot? Is it linear? Exponential? A logarithmic, or horizontally asymptotic, curve? Your answer determines your AI camp. People who suspect a horizontal ceiling use justifications like:
- Maybe superhuman goal-action mapping abilities have a disappointingly low ROI in many problem spaces necessary for dominance.
- Maybe superhuman goal-mapping efficiency _would_ have a high ROI, but there isn't much performance remaining above (cyborg-enhanced?) humans and their higher-order organizational structures.
- Maybe an agent's action space is so limited -- by virtue of its natural or artificially maintained human dependency -- that no tractable route to domination exists, regardless of its preternatural goal-action mapping skills.
- Maybe machine learning is really hard and humans are incapable of overcoming the (existing or unseen) challenges of producing recursively self-improving agents. (Visually, our plot's domain is depressingly tiny.)
- Maybe an agent can be perfect at _finding_ goal-action solutions without actually _having_ goals. (Visually, our plot is a flatline.) Unexpected goal-seeking behavior emerging is an anthropomorphic assumption, not a reflection of reality.
If none of these arguments seems persuasive, then your plot looks unbounded (where it counts), and your forecast looks grim. But you know what? I don't think it matters who's right. We _will_ go extinct. Our society is as unrecognizable to our distant ancestors as our descendants will be to us. Nothing is permanent. Whether our progeny are _homo deus_ or silicon, we'll be gone either way. You can't _change_ that, but you can _accept_ it. Nobody knows the future. Nobody. We can't predict the stock market an hour from now, let alone _the fate of humanity_. So just enjoy it. To quote Bill Hicks, "It's just a ride."
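The curve-shape question in the comment above can be made concrete with a small sketch. The Python snippet below only illustrates the four candidate shapes; the efficacy scale, constants, and functional forms are arbitrary assumptions for illustration, not anything claimed in the video or the comment.

# Minimal sketch of the comment's plot: x = goal-to-action mapping efficacy,
# y = hypothesized global influence. All parameters are illustrative assumptions.
import numpy as np

x = np.linspace(0.0, 10.0, 6)  # arbitrary efficacy scale

candidate_curves = {
    "linear":      lambda v: v,                    # influence proportional to efficacy
    "exponential": lambda v: np.expm1(0.5 * v),    # compounding returns to efficacy
    "logarithmic": lambda v: np.log1p(v),          # diminishing returns
    "asymptotic":  lambda v: 5.0 * v / (1.0 + v),  # hard ceiling at 5.0
}

for name, f in candidate_curves.items():
    print(f"{name:>12}: " + ", ".join(f"{y:7.2f}" for y in f(x)))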
@absta1995 1 year ago
At the start he mentioned the smartest kid in high school vs. the jock. That difference in intelligence is very small compared to what the difference would be between an existential AGI and humans. Also, high school is not the same as human society. Advanced societies are not ruled by the people with the strongest arms or biggest chests; they're ruled by rich nerds who went to private school. And in the rare case, by the person who can use their advanced charisma to sway large crowds. Again, not the strongest jock. That's the type of AGI the leading labs want to build: a system that can take arbitrary goals and achieve them much better than any human, and possibly all of humanity combined. Hence the danger.
@yungbez0s2344 9 months ago
For every nerd who rules an advanced society there are 10 more who could accurately tell you the exact game plan to become that ruler, but lack the social connections, upbringing, wealth, charisma, appearance, etc. (factors in power besides intelligence in this case) to do it. There are also some who have those additional factors but got outplayed by a more powerful agent. A really good goal-to-action mapper alone is not sufficient.
@absta1995 9 months ago
@@yungbez0s2344 Social connections and wealth are instrumental goals, not capabilities. Charisma is a capability which an AGI would obviously acquire. People already fall in love with Replika, let alone a genuinely capable future model. And charisma comes from cognition. Therefore, a good goal-to-action mapper that has mastered language (a very feasible goal) could obviously use charisma to acquire power.
@yungbez0s2344 9 months ago
@@absta1995 Sure, social connections and wealth are not static capabilities, and more intelligence helps to obtain them more quickly, but they are still factors influenced heavily by conditions other than intelligence. A 100 IQ person born into an aristocratic family probably has a huge advantage over a 200 IQ person born into poverty in this analogy. Or, taking it to a hypothetical extreme, I'd argue that the 100 IQ aristocrat probably still has an advantage over a prisoner with an arbitrarily high IQ (say 10,000) under the right circumstances. The pauper's and prisoner's causal channels of influence over the world dramatically bottleneck their otherwise vastly superior goal-to-action mappers. Further, even if the 10,000 IQ person were free, his chances of becoming the ruler would be dramatically hurt if other 10,000 IQ people who also wanted to be the ruler existed.
I don't dispute at all that a machine intelligence could use charisma to gain power. But in the ruler analogy, what if the person with otherwise superior goal-to-action mapping ability has a speech impediment, for example -- a factor beyond their cognitive control that nerfs their charisma? It would be entirely possible to nerf an AGI in some equivalent way, although I don't find it that realistic for charisma in particular.
Not to get lost in the particulars of the analogy, the point I'm arguing (and that I think Alexander was arguing) is that goal-to-action mappers are only as powerful as their ability to actually execute their planned actions. If you put an ASI in a Faraday cage, good luck giving it the power to do anything (barring a zero-day exploit in physics, which I grant is a possibility with extreme intelligence). Liron seems to take for granted that an ASI would simply figure out how to obtain an arbitrary amount of power, which I do not think is a foregone conclusion if humans are taking careful steps beforehand not to give ASI power.
@absta1995 9 months ago
@@yungbez0s2344 Right, that's logical. I would argue you can do a lot with manipulation (a 1,000 IQ prisoner convincing the guards, etc.), but it seems like we largely agree. However, I have to add two points. Firstly, with multiple 1,000 IQ agents, yes, they might struggle against each other, but unless we have one that cares about humanity (which requires alignment), they will just fight each other and kill everyone else via collateral damage. Secondly, society is not on a path to build constraints on AGI. So yes, I completely agree that you can theoretically slow them down via sandboxing etc., but that's not currently happening. What's happening is that we are fully integrating them into all aspects of society, because they are extremely useful. So the landscape for a rogue AI is very much in its favour, should that time come.
@goodleshoes 9 months ago
If I were in his situation in this discussion, and that guy kept saying something about the nerd in high school, I'd just say the AI in that situation would be an alien coming down and shooting the whole city the school is in with a laser beam.
@neorock6135 2 months ago
One has to be in awe, utterly in awe, at the patience Liron extended towards Alex. I am certain many others would've laughed at Alex and walked off. At times this debate seemed like a master class on the Dunning-Kruger effect, with Alex believing himself to be smarter when he clearly was either not understanding something or arguing wholly unrelated facets.
@therainman7777 9 months ago
This guy went to Oxford and Stanford somehow. Unbelievable. He also doesn't understand Gödel's incompleteness theorem or the halting problem AT ALL. I was actually embarrassed for him when I realized what he thought those two concepts referred to. And the fact that he keeps calling AGI "robots"… 🤦‍♂️
@chrisCore95 9 months ago
Great content. I learned lots from this.
@thomasdovell3003 9 months ago
Each time I hear a debate between a doomer and an optimist, no matter how smart the optimist is, I come out of it with an even gloomier outlook. All I could hear from Alexander was: you are wrong, Liron. I would love Liron to be wrong. Unfortunately, he has much, much better arguments.
@thomasdovell3003 9 months ago
And I am sure Liron would love to be proven wrong as well.
@liron00 9 months ago
@@thomasdovell3003 Ya I’d definitely prefer that :)
@ParameterGrenze 9 months ago
I stopped wanting to hear debates for that reason. I just get annoyed by the non-arguments that people voice. I really want to hear arguments that defuse high p(doom) perspectives, but I haven't heard any so far. The only ones which are actually good revolve around timing, but those can be filled in by extrapolating empirical data, and currently that looks absolutely bonkers.
@liron00 9 months ago
@@ParameterGrenze My recent debate with Quintin Pope had a higher level of arguments than usual: twitter.com/i/spaces/1YpJkwOzOqEJj?s=20
@skoto8219 8 months ago
Yeah, Alex just wasn't able to rise to the occasion. Same with his debate with Roko. Maybe he's better at arguing this stuff in other formats, but yeah, this format does not suit him.
@BrunoPadilhaOficial 8 months ago
Liron, keep making these videos! I know there aren't many views, but who cares. We need people to keep publishing content about AI risk 🤝🏻
@yancur 10 months ago
I love how Liron at the end did the "let me summarize what I think your position is" (and vice versa). Very good strategy! Overall I think Alexander did not understand the goal-to-action mapper. He (I think) made the equivalence G2A mapper = Outcome Pump, which he rejected as impossible magic. But a G2A mapper (if I understand Liron's idea of it) also describes humans and current frontier AI/LLMs.
@AllahuWhitebar 9 months ago
Alexander is so wrong at 10:35. Saying that AI can't dominate the world with intelligence because the smartest humans don't already run the world is like saying The Flash or Superman couldn't dominate the NBA because the fastest humans don't already dominate the NBA.
@TheEconomicElder 7 months ago
People say the Thirty Years' War was because of the printing press, but that's actually not true. It mainly occurred because of the Little Ice Age. The average age of marriage in the early 1600s was 28. Poor resources and resource management caused a cascade of small territorial wars. People say it was Catholicism vs. Lutheranism (Protestantism), but like with all wars, it was due to economics and resources.
@fredzacaria 10 months ago
Nice, thanks. The host must use known terms like AGI, mitigate, halt; otherwise the guest and viewers might have a hard time understanding. I watched it twice, and from the first time I understood the host and agree with him... I'm a doomer (2075), and I'm a Christian Revelation 13:15 person, a missionary in Rome.
@angloland4539 9 months ago
@Vanguard6945 7 months ago
I'm stuck. I put our odds of staying alive over the next 10 years fairly low; it would take multiple very uncharacteristic moves on humanity's part to save us. So what do we do? I considered saving enough money for 5 years and just trying to live the best life I can, and not worrying about buying a house or really concerning myself with the future. Apart from ringing the bell, what are you doing?
@liron00 7 months ago
Nothing much, just chilling with my wife and kids and explaining the doom problem to ppl
@VannessaVA 10 months ago
Liron, I've been watching your videos and I really liked your interview on Tom Edwards' channel. Although I will say your debate with George Hotz was awkward to watch, as George kept making irrational arguments and I felt a bit embarrassed for him. My question to you is: are you a transhumanist? I ask because I had no idea that some people (like Connor Leahy) were transhumanists despite arguing in favor of real AI safety. I apologize if you've already clarified the answer to my question in other videos; I'm still new to your channel, so I haven't come across any videos of you talking in depth about your opinions on transhumanism yet.
@liron00 10 months ago
Yes I am. I think life is already good and sci-fi and there’s a lot of room for it to be better by being more sci-fi.
@VannessaVA 10 months ago
@@liron00 Well, speaking of sci-fi, my channel is a mix of sci-fi, AI doom, music, and nonduality philosophy. Would you be willing to do a brief interview to bring attention to AI safety for my subscribers?
@liron00 10 months ago
@@VannessaVA sure, pls email me to set up - wiseguy@gmail.com
@VannessaVA 10 months ago
@@liron00 Thanks! I just sent you an email
@HanSolosRevenge 4 months ago
e/acc tech bros play dice with the universe
@Cagrst 7 months ago
Are there no accelerationists with good arguments who actually understand all the specifics? These are all just so flimsy, and/or blatant misunderstandings of the premise…
@scottythetrex5197 8 months ago
The doomer wins again. And Alex seems to concede the point in his opening statement.
@jdietzVispop 6 months ago
The young guy isn't listening to the older dude. He clearly has something fixed in his head and isn't light enough on his feet. He clearly has real-world street-smarts problems.
@joes973 1 month ago
Calling god a "goal-to-action mapper" at the "outcome pump" level, and then saying that AI is getting better at mapping goals to actions and therefore they are going to become "outcome pumps", isn't an argument. It is merely substituting terms without making a real case that AI can actually become that powerful.
@liron00 1 month ago
It’s analyzing current AI on the dimension whose ideal is outcome pump, noticing that it’s moving on that dimension, and conjecturing that it won’t stop moving.
@thomasr22272 10 months ago
This Alexander guy is unbearable, unable to grasp basic arguments. This guy is so confused about everything; we are truly doomed if half of humans are like him.
@joes973 1 month ago
He's in the top 0.0001%. He's not perfect, but he's darn smart. Any solution that needs people smarter than him is going to fail.
@dafarii 11 months ago
I would really like it if both of you could have another round. You guys hardly got your pants off. I have some thoughts.
AI will have bounded rationality. Say you have this goal-to-action mapping, and the goal is "let's have a cure for Parkinson's disease." The AI would not know which actions are more impactful, because at the very least a lot of knowledge is not transferred, due to tacit knowledge. The AI might think certain lab studies are more effective than others, maybe because of increased results, but there may be tacit reasons why this is so, and these tacit reasons do not communicate that the harder path over the longer term would find the cure while the currently most effective paths have an upper limit, sending people down a rabbit hole of non-success. In this way the best goal-to-action mapper would be the loser.
It can be even a simple thing like making the best bread. The kneading process of that master bread maker is tacit knowledge; the final fold and removal of an air pocket is what makes that bread perfect. That bread maker would win against the perfect goal-to-action instruction set. This is why I also think the best goal-to-action mapper in high school has no guarantee of being the most successful. Explicit knowledge is where it can compete, or at least help set some baseline.
I am also curious what the doomsday scenario is. Like, what knowledge would the AI dream up which we don't already have? Or is it more the actions the AI can help facilitate? Anyway, I don't have any position yet. But it's fascinating to think about.
@liron00 11 months ago
Thanks for the feedback. You're completely underestimating what it means, by definition, for something to be a better goal-to-action mapper than you are. It knows that there's such a thing as tacit knowledge hard-won by the experience of a bread maker, because it knows about all possible paths to knowledge acquisition, and the expected cost and value of each. These situations don't throw it off or confuse it. It still handles them better than any human would.
@kabirkumar5815 11 months ago
Something that is a better 'goal to action mapper' is something that is more successful at achieving the outcome it was aiming for. Abstractly, it is the skill of an entity to be able to change reality to be closer to what the entity 'prefers'. The skill of being successful at generally everything it might 'want' to succeed at.
@mrpicky1868 8 months ago
Laughable. Corporations specifically exist to mitigate responsibility and risk for actual individuals. That's on top of not matching incentives vs. legal risk. Where do you find these "thinkers"? Facepalm.
@vwazp 2 months ago
Alex seems kind of dodgy. What doesn't he understand about action mapping?