
AI Doom Debate - Liron Shapira vs. Mikael Koivukangas 

Liron Shapira
510 subscribers
1.4K views

Mikael thinks the doom argument is loony because he doesn't see computers as being able to have human-like agency any time soon.
I attempted to understand his position and see if I could move him toward a higher P(doom).

Published: May 14, 2024

Comments: 41
@BrunoPadilhaBlog 1 month ago
Liron, you're going straight to heaven after this one. It's rare to meet someone with such high levels of patience, logical honesty, and willingness to debate even when the other side isn't very sure of their own beliefs. Great episode, I'd watch hours of this.
@liron00 1 month ago
Haha thx, I'll keep doing these occasionally
@julesjacobs1 1 month ago
Listening to this debate was very frustrating.
@flickwtchr 1 month ago
I'm amazed at your steadfast concentration, patience, and dispassionate presentation of your positions. I consider you and Connor Leahy my favorite "doomers". One of the things I find fascinating is how often the "maximalist" contradictions surface, especially when on the one hand they essentially insist how dumb current LLMs are and how long the timeline to ASI is, then on the other hand they are confident that AI tech will become so powerful that it will solve all of the problems that currently stump humanity, all the while insisting that "reasoning" and "agency" are just unfathomable developments. I find that aspect truly bizarre.
@Mkoivuka 1 month ago
It's not that bizarre. The thing you might want to consider is that once we "solve all of the problems that currently stump humanity", we will invent millions of new problems that will then stump us once again. We have 200,000 years of development to look back on to know this to be true. You don't need a tool to have agency for it to be a good tool. At the same time, simulating agency might be more than enough anyway. It's not so much that the timeline for ASI is long. You could just as easily argue that the timeline from hunter-gatherers to the steam engine was long, but it's not obvious that one directly led to the other. Quite the opposite: human inventiveness seems to require individuals to connect the dots, and they come along once every century or so. The current things we have cannot lead to ASI. While admittedly I am not a very good debater, and my arguments are probably quite weak, my solace is that the p(doom) arguments have been made for 50 years by people infinitely more intelligent, and yet we've seen AI winters every 10 years or so. So I'll take my inadequacy and accept that just because sophists are wrong does not mean I have to be right.
@dmwalker24 1 month ago
As a biologist who is quite interested in questions of AI safety, the possibility of an AGI eclipsing humanity and ending our civilization is a less pressing concern to me than alignment, or how AI is currently being misused to actively manipulate consumers, and the public more generally. My cats are thousands of times more intelligent than these systems, but intelligence is fairly irrelevant. I could throw something together in C, with minimal effort, that could easily destroy civilization if put in charge of launching ICBMs. The problem isn't intelligence; it's the inability to predict or control the results of a massive undirected social engineering project. I'm concerned that the 'Doom' may already be in progress, in the form of systems in place now that are turning society's collective neurophysiology into mush. We've set these things up to actively degrade critical thinking, quite often in people who don't even know they're being exposed to it. Self-referential engines of confirmation bias and ignorance. And it's all being done to push products or get clicks. Intellectually, society is a shadow of what it was just a couple decades ago. If an AGI were to emerge, we're almost certainly finished, but even if AI never progresses beyond where it's at right now, we're still on the path to a very dystopian place.
@flickwtchr 1 month ago
Well said. In the public school system, starting with George W. Bush (and continuing to this day largely unchanged), "teaching to the test" destroyed much of the dynamic in public education that taught critical thinking skills, socializing skills, fitness, arts, etc. Simultaneously, that same generation was immersed in the dynamics of social media as social-influence experimentation that commoditized their attention, emotions, identities, etc. AI generative models will dwarf such influence in the near term, and all bets are off with the advent of AGI/ASI.
@dionbridger5944 15 days ago
To be clear, "alignment" has a technical meaning, which is that the AI's goals involve not ending (or damaging) human civilization. I disagree that this is less pressing than misuse - it would be if we could expect AI capabilities growth to tail off before we reach AGI, but that seems staggeringly unlikely given the recent spurt in capabilities growth. We're on track for AI to cross the critical threshold, and then nobody will be "misusing" it, since an unaligned AGI (or soon thereafter, ASI) will not allow itself to be made use of.
@ScottWoodruff-wh3ft 1 month ago
Mikael sounds like a confused apologist arguing for the existence of the soul.
@therainman7777 1 month ago
Your guest seemed like a nice guy, but he gave the very clear impression of a person who hasn't thought through their arguments in remotely sufficient depth. In fact, it seemed he almost had no discernible arguments at all. Nothing concrete, or that could be pinned down, and every time you pushed back he seemed to pivot to something else. Also, the best response to his claim that an AI could never hack because it would have to learn from hacker behavior first (setting aside the fact that there absolutely is a _ton_ of books, videos, blog posts, and other content describing how hacking works) is to bring up reinforcement learning, specifically the self-play variety that requires no training data at all. AlphaZero learned to play chess, shogi, and Go at a superhuman level and did not need to observe _any human behavior at all_ in order to do so. This could easily transfer over to hacking. The reward signal is explicit and obvious: either you've successfully gotten into a system or you haven't. And of course, such AI systems already exist, and are already fairly capable. So this isn't even really a hypothetical at this point. Again, he clearly hasn't thought through these issues in any real depth.
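To make the commenter's point concrete, here is a minimal sketch of an agent learning purely from an explicit binary reward, with no human demonstrations at all. It is a toy epsilon-greedy bandit, not an actual hacking agent; the attempt_exploit stand-in and the 16-strategy action space are invented for illustration.

```python
# Toy illustration: learning from a binary reward signal alone.
# attempt_exploit() is a hypothetical stand-in for "did the agent get in?" --
# here success just means guessing a hidden number, purely for illustration.
import random

def attempt_exploit(strategy: int, target_seed: int = 42) -> bool:
    rng = random.Random(target_seed)
    secret = rng.randrange(16)          # the fixed "vulnerability"
    return strategy == secret           # binary outcome: in or not in

def train(episodes: int = 5000, epsilon: float = 0.1) -> int:
    values = [0.0] * 16                 # running estimate of each strategy's reward
    counts = [0] * 16
    for _ in range(episodes):
        if random.random() < epsilon:   # explore a random strategy
            action = random.randrange(16)
        else:                           # exploit the best-looking one so far
            action = max(range(16), key=lambda a: values[a])
        reward = 1.0 if attempt_exploit(action) else 0.0
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return max(range(16), key=lambda a: values[a])

print("strategy the agent converged on:", train())
```

The design point is just that a checkable success signal replaces human data: the agent improves by trying, observing the binary outcome, and updating its value estimates.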
@goodleshoes 1 month ago
Amazing sir!
@Jack-ii4fi 14 days ago
Loved the discussion! In my view, there are a vast number of philosophical positions you can take on the mind and consciousness, and comparing and contrasting neural networks in the limit (universal function approximators) with the brain/mind/conscious entities is extremely fascinating and could lead to some major rethinking of what we are compared to AIs. Personally I tend to assume that humans are conscious and that current transformers are not, but we could be totally wrong. Maybe they are. Maybe everything is fundamentally conscious. Maybe nothing is "real" and consciousness is the primitive of reality somehow. I can't know with certainty that anyone I talk to is conscious because I don't directly interface with their experience. This is extremely interesting philosophy that could have major impact down the road, but I think it's very reasonable to argue that a lot of what humans do is actually not conscious and is rather function mapping/approximating in the brain. Maybe our will is freely determined and somehow related to consciousness, or maybe it's 100% deterministic and entirely driven by subconscious computation perfectly able to be modeled by a neural network.

Regardless, we can build neural networks that we cannot control and that could pose immense risk to humanity. People desperately want to illuminate the distinction between humans and AI (personally I think consciousness is currently the major distinction, though maybe it eventually emerges in AI), but I think it's beside the point. The AI systems we build will, even without consciousness, be incredibly powerful and could therefore pose a threat to humanity. Or, to put it simply: "Ok, you think it'll all work out? Are you willing to bet all of humanity on it? You get one shot."

I'm working on deep learning projects and am a big fan of philosophy, so this discussion was amazing, but I think we can reason about it endlessly, and the safe bet is just not to induce arms-race dynamics toward even the potential for superintelligence. I personally just fall back on that. We can and should keep getting into the weeds of philosophical debate over this, but how about we do that for a while and do safer research before driving at top speed toward the chance at superintelligent AI. We can do tons of AI research in the medical field and reap the gains people want without trying to build something that beats us in every aspect of thinking. I've been joking that if we're trying to figure out what AI regulation to implement first, we should just start by mandating that every AI CEO watches Jurassic Park at least once every few months, just so they have to hear "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

Not sure if anyone will read what I've written, but the discussion was fascinating! Definitely subscribing! If you read this Liron, do you have any Discord server or place to discuss these topics? Or would you be interested in forming one if one doesn't already exist?
@jakeinfactsaid8637 1 month ago
This was fantastic. Conversations with Bob are great and pretty informative of your own views and positions, but I think you're absolutely right that butting up against the viable spectrum of informed opinion adds needed context and useful clarifications for the lay person.
@Alice_Fumo 1 month ago
This makes me really want to engage in these sorts of debates. How do I get started?
@liron00 1 month ago
Email me wiseguy@gmail.com with a short summary of where you stand on AI doom, maybe we can do a similar Zoom call :)
@human_shaped 1 month ago
Mikael is very confused about what he believes. It isn't even vaguely logically consistent.
@daphne4983 11 days ago
No he isn't. Liron imo isn't understanding that it's not only about the neurons but the pathways. You'd have to simulate the neurons' behaviors. And these behaviors might very well be influenced by, for example, quantum effects, as Hameroff (if I have his name correct) is stating. All this doesn't mean that AI isn't dangerous. Imo it's a brand new intelligence.
@daphne4983 11 days ago
My keyboard is a mess on yt, sorry. Hameroff and Liron.
@Danoman812 1 month ago
First off, the assumption of our evolution as a species is basically ungrounded by facts and what we can see with our own eyes about our universe and the physical state of things. Even that isn't fully understood yet. Some of us just know by default that this can't go well, considering the heart of man. Giving this type of potential power to rule as an overseer on a global scale needs to be handled stupidly carefully. I'm sorry, but there is no way that they can keep guardrails on over a trillion parameters, and then you add the variables of those parameters. The number of ways this could go wrong is staggering when people look at some real numbers that the AI programmers and AI specialists think are possible. Don't take my word for it, go look it up. I'm NOT an anti-AGI individual, but I know that with human history we can see exactly WHERE this is going to go. It'll start out great, I believe, but in the time thereafter, I honestly believe we are living on a seriously VERY limited timeline. You know, these people that are developing it aren't going to stop... in fact, they've done nothing but speed things up exponentially since Musk was pushing for the 'pause' last year. Be honest with yourself and really ask yourself: is it REALLY worth it in the end? How about your children and your children's children... at the rate this is going, do you really think that it's going to be safe, say, around 2030 or so?
@angloland4539 1 month ago
What tool did you use to make the captions?
@liron00 1 month ago
Descript
@ultimatesin3544 1 month ago
I got a question for you. Your P(doom) (which is 50% iirc), how did you arrive at that? Is it just a 'gut feeling' you have, or are there some actual metrics behind calculating that figure, and were any of those same metrics also used by the OpenAI safety researchers in determining their particular P(doom) figures? In other words, is there actually any science behind determining a P(doom) value, or is P(doom) simply a vague scale that only really measures how optimistic the "computer" is feeling at that moment of calibration? Likewise, when asking the blackbox AI (mimicking a particular human brain) to calculate its own P(doom) value, would it always be stuck attempting to use metrics for the calculation? Or is 'gut feeling' an algorithm that can map over to a computer? I'm not so sure it is... just call it a 'gut feeling'.
@liron00 1 month ago
When I say "50%", it's not a precise estimate. I think any estimate above 5% and below 95% is sane. I'm very confident in saying that treating this as a…
@ultimatesin3544 1 month ago
@liron00 Your P(doom) estimate is based on gut feeling and fluctuates accordingly, yet you're also claiming the human brain is a computer. But it's actually more than just a computer. If you ask an actual AI to calculate P(doom), it will approach the problem mathematically by weighing whatever metrics it can quantify - and it will provide the same non-fluctuating answer every time, so long as those metrics remain stable. The AI's calculation doesn't change based on whether it had a cup of coffee this morning, for example. Your guest argued AI isn't human because AI provides inconsistent outputs - he should've argued the exact opposite: it's the humans who provide inconsistent outputs, because those outputs are partially based on feeling.
@Nonmali 1 month ago
@ultimatesin3544 An actual AI could definitely do reasoning with various estimates or error ranges to conclude that some given complex event has a probability of, say, 20% to occur, with an error range of 10% in either direction. That is to say, it knows that it does not have all the data, and the unknown data could reasonably lead to anything between a 10% and a 30% probability estimate of the event occurring, while being consistent with the data that was already considered. I feel like you are not considering the option that the brain is just a very complex computer whose behavior just can't be captured with a few simple mathematical modules.
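The kind of reasoning described above is straightforward to mechanize. Below is a minimal sketch of producing a probability estimate with an explicit error band by sampling over uncertain inputs; the three factors and their intervals are invented purely for illustration, not anyone's actual P(doom) model.

```python
# Toy illustration: a probability estimate carried with an explicit error range.
# The three factors and their intervals are invented, not anyone's real model.
import random

def sample_estimate() -> float:
    p_agi = random.uniform(0.5, 0.95)         # chance of AGI at all
    p_misaligned = random.uniform(0.1, 0.9)   # chance it's misaligned
    p_catastrophe = random.uniform(0.3, 0.9)  # chance misalignment is fatal
    return p_agi * p_misaligned * p_catastrophe

samples = sorted(sample_estimate() for _ in range(100_000))
median = samples[len(samples) // 2]
lo, hi = samples[len(samples) // 10], samples[9 * len(samples) // 10]
print(f"median ~{median:.0%}, 80% interval ~[{lo:.0%}, {hi:.0%}]")
```

The point of the sketch is only that "20%, plus or minus 10%" is itself a computable, stable output once the input ranges are fixed.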
@George70220 12 days ago
Could the first words be about the credentials of the guest? It helps assign priors.
@liron00 12 days ago
Ya sure, will try to be consistent about that in the future on my new channel youtube.com/@DoomDebates. I don't know/remember Mikael's, unfortunately.
@Blate1 1 month ago
I’m curious about your position - I agree we should try everything we can to coordinate a slowdown with China, but *IF* it becomes clear that isn’t possible, would you then agree we should forge ahead and roll the dice? On the assumption our aligned AGI would be preferable to their aligned AGI if we can pull it off? Or do you think we should start nuclear war with China to stop them in that scenario on the assumption that humanity has a better chance of rebuilding from nuclear ashes than it does of surviving AGI? (Or some third option?)
@liron00 1 month ago
I just think we should constantly pursue an international treaty with the same intensity as we would nuclear use treaties.
@xsuploader 1 month ago
@liron00 But it's harder with AIs because you have to monitor GPUs. Also, China is catching up: Yi-Large = GPT-4, essentially, only a year later.
@liron00 1 month ago
@xsuploader It is super hard, but not dying after unleashing superintelligence is even harder.
@mrpicky1868 1 month ago
Liron, if you want to talk with an opponent who at least understands what he's talking about, and be the AI side - "holla at me". My take is not like the rest of them. Sad to see this "wishful thinking camp" getting too much undeserved air.
@andybaldman 1 month ago
19:10 Koivukangas's logic is weak here. An airplane is a 'simulation' of a bird. It's not a biological bag of meat and feathers. But it can fly faster than most birds, at least in terms of what we need it to do as humans. AI will advance similarly. We don't need to biologically reproduce humans in every way in order to create a synthetic form of cognition that can be powerful enough to replace/displace a significant portion of the current human population. The automobile didn't make horses extinct. But it did greatly reduce their population and need for them.
@liron00 1 month ago
Exactly
@Mkoivuka 1 month ago
If an airplane is a simulation of a bird, you have to now explain why the less birdlike a plane is, the better it flies. My argument is that the view of AGI, whether it's Yudkowsky's paperclip machine or something more reasonable, is grounded on the assumption that we can predict what an AGI even looks like. The issue with "AI will advance similarly" is that we've seen this movie already, many times, and this time is no different. Ultimately sophistry won't win out, so I'll be around in 15, 20 or 30 years to talk over why AGI didn't materialize after all.
@andybaldman 1 month ago
@Mkoivuka We don't have to predict what AGI will look like. All we have to know is what it will do. And the thing it will do is be smarter and more cognitively capable than a human, at a financially viable enough level to make it able to replace SOME subset of humans. (And the higher that percentage is, the more capable, intelligent, and valuable it will be). We know that's true because that's what is actively being pursued by all of these companies, and the race WILL NOT STOP until one or more of them gets there. That's what they're all gunning for. You can actually consider one definition of AGI as something that is capable of displacing its own 'weight' (in terms of cost) as an equivalent amount of human cognitive labor. Once you hit that inflection point, it'll go asymptotic fairly quickly, because you will have built a self-reinforcing knowledge machine that basically converts electricity into human-displacing cognitive work, with exponentially increasing efficiency. We just haven't hit that inflection point yet. But the trend is clearly in that direction, so the only unknown left is time (i.e., how long it'll take to get there, not if).
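The "displacing its own weight in cost" threshold can be made concrete with a toy break-even calculation; every number below is invented for illustration and carries no claim about real compute or labor costs.

```python
# Toy break-even check for "displaces its own weight in cost".
# Every number is invented for illustration.
compute_cost_per_hour = 4.00    # hypothetical $/hour of inference
human_wage_per_hour = 50.00     # hypothetical fully-loaded $/hour of labor
ai_tasks_per_hour = 10          # hypothetical AI throughput
human_tasks_per_hour = 2        # hypothetical human throughput

ai_cost = compute_cost_per_hour / ai_tasks_per_hour
human_cost = human_wage_per_hour / human_tasks_per_hour
print(f"AI: ${ai_cost:.2f}/task vs. human: ${human_cost:.2f}/task")
print("past the inflection point" if ai_cost < human_cost else "not yet economical")
```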
@dadsonworldwide3238 23 days ago
Everyone needs to stay focused on this kindergartener's future. This debate changes when discussing inevitable outcomes. Right now we have to focus on the fact that plausible deniability is granted to the few when you doom over Terminator-style rogue AI agents. You justify rules and regulations into the hands of a few bad actors, denying the 100-to-1 ratio of good guys who could use access to the tech for deterrence and defensive countermeasures that could make bad actors irrelevant up front. You give bad actors plausible-deniability loopholes of "oops, the rogue Terminator did it." 1900s structuralism in America specifically is antithetical, the opposite of human infrastructure that maximizes benefits and access. Assume whatever nature permits is in a gun on every individual neighbor's hip = mutual destruction, eye for an eye; the wild west created chivalry and maturity in society. It raised respect, and it is how we freed serfs and slaves alike. Remember, only when every individual equaled the power of the few did our modern definition of freedom get formed. Then, through abolitionist teaching preachers, the theologically inspired, scientifically studied, mathematically confirmed soul/agency/free-will inertial frame of reference correlated with the eternal cosmos was taught to us all, in concert with Jethro Tull's plow. Esoteric America and this encoded English orientation and direction, based on an alphabetical exodus, dictated our elusive prosperity, longitude and latitude. The founders' experiment had the exact same computational future foresight in mind as the West searching for the history of nations, people, places and things. It formulated structuralism based on the idea that we all work with the same atoms, but to maximize benefits we must interpret them radically differently; then bureaucracy deals in rigorous debate and dialogue, building land bridges between tribes. This is how it has to be to maximize access to tech while giving your own kids a chance to defend the human species as they see fit. Don't give that power to any single tribe.
@glitchp 1 month ago
This guy talking to Liron is brain dead