Liron Shapira
Comments
@mistycloud4455 14 days ago
AGI will be man's last invention
@bobtarmac1828 18 days ago
Dumb question: is it too late to cease AI? Will we suffer being laid off by AI? AI job loss for everyone? Swell robotics everywhere? Just to be exterminated by an AI new world order?
@context_eidolon_music 21 days ago
Okay saying Fridman needs Narcan was funny.
@context_eidolon_music 21 days ago
These people actually believe that LLMs are "computer programs."
@context_eidolon_music 21 days ago
What's up with dudes just screaming shrilly all the time? Talk about midwit.
@mistycloud4455 22 days ago
AGI will be man's last invention
@mistycloud4455 23 days ago
The point George is missing is that these ASI entities are going to be foreign entities, and they'll behave just like all intelligent creatures, specifically ourselves: when we encounter a foreign entity, as happened with the Incas and the Africans, we may treat them as slaves or subjects.
@mistycloud4455 23 days ago
I have never seen someone as calm and composed as Liron in my life. Great debate.
@mistycloud4455 23 days ago
The simple thing is: ants did not create humans, but humans may create AGI.
@mistycloud4455 23 days ago
AGI will be man's last invention
@George70220 1 month ago
Can the first words be on the guest's credentials? It helps assign priors.
@liron00 1 month ago
Ya sure, will try to be consistent about that in the future on my new channel: youtube.com/@DoomDebates. I don't know/remember Mikael's, unfortunately.
@Jack-ii4fi 1 month ago
Loved the discussion! In my view, there are a vast number of philosophical positions you can take on the mind and consciousness, and comparing and contrasting neural networks in the limit (universal function approximators) with the brain/mind/conscious entities is extremely fascinating; it could lead to some major rethinking of what we are compared to AIs. Personally, I tend to assume that humans are conscious and that current transformers are not, but we could be totally wrong. Maybe they are. Maybe everything is fundamentally conscious. Maybe nothing is "real" and consciousness is somehow the primitive of reality. I can't know with certainty that anyone I talk to is conscious, because I don't directly interface with their experience. This is extremely interesting philosophy that could have a major impact down the road, but I think it's very reasonable to argue that a lot of what humans do is actually not conscious and is rather function mapping/approximating in the brain. Maybe our will is freely determined and somehow related to consciousness, or maybe it's 100% deterministic and entirely driven by subconscious computation that could be perfectly modeled by a neural network. Regardless, we can build neural networks that we cannot control and that could pose immense risk to humanity. People desperately want to illuminate the distinction between humans and AI (personally I think consciousness is currently the major distinction, though maybe it eventually emerges in AI), but I think it's beside the point. The AI systems we build will, even without consciousness, be incredibly powerful and could therefore pose a threat to humanity. Or maybe the best way to put it is simply: "OK, you think it'll all work out? Are you willing to bet all of humanity on it? You get one shot."
I'm working on deep learning projects and am a big fan of philosophy, so this discussion was amazing. We can reason about it endlessly, but the safe bet is just not to induce arms-race dynamics toward even the potential for superintelligence. I personally just fall back on that. We can and should keep getting into the weeds of philosophical debate over this, but how about we do that for a while, and do safer research, before driving at top speed toward the chance at superintelligent AI? We can do tons of AI research in the medical field and reap the gains people want without trying to build something that beats us in every aspect of thinking. I've been joking that if we're trying to figure out what AI regulation to implement first, we should just start by mandating that every AI CEO watch Jurassic Park at least once every few months, just so they have to hear "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." Not sure if anyone will read what I've written, but the discussion was fascinating! Definitely subscribing! If you read this, Liron: do you have any Discord server or place to discuss these topics? Or would you be interested in forming one if one doesn't already exist?
@Lucas-xy6ss 1 month ago
Oi
@MosterMelly-dv6qd 1 month ago
I'll keep watching these debates on your new channel. You explain the points well, and I think you are one of the best debaters on this topic. You should get David Shapiro on; it'll boost your views (it sure did for the For Humanity podcast). Also, he's an accelerationist with a pretty high p-doom, so it would be an interesting debate.
@liron00 1 month ago
Appreciate it. Thanks for the suggestion.
@neorock6135 1 month ago
Interesting you mentioned Shapiro. In that p-doom video, he had an *exceptionally uncharitable* take on Eliezer, Connor Leahy, and by extension many "doomers" that really... really got under my skin. So as not to take him out of context, I will quote Shapiro's exact words here and link to the video and timestamp at the end: _"The people who are most equipped to point out these things [dangers of AI] sometimes do things to hurt their own credibility. While I admire Connor Leahy and others like Eliezer Yudkowsky... they do things and behave in certain ways that do not lend credibility. If you have someone who looks like V for Vendetta, who uses postulates and rhetoric that are far and above what most people understand... basically he's not catering to be understood. If the smartest people in the room are not optimizing to be taken seriously and are not putting themselves out there, that's a problem; we are not going to have a broad enough understanding."_ ¹ How are Eliezer and Connor NOT "putting themselves out there"? Along with Liron, that's all I have seen them do. What, warning us is not enough? Now the onus is on them to change the way they sound and look? How one sounds and how one looks are traits forged over years, making us who we are, and they are certainly not easily malleable. More importantly, they shouldn't even be points of contention. ¹ 12:05 mark of his p-doom video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-CCfRAwLlZJk.html
@aihopeful 1 month ago
The first few episodes are pretty great!
@liron00 1 month ago
Thanks!
@DoomDebates 1 month ago
Hello world
@neorock6135 1 month ago
A bit off topic with respect to the video, but I'm wondering if Liron or anyone else saw David Shapiro's recent video on AI progress possibly slowing down. I mention it because the silver lining he took from it at the end was that it will give us more time for "safety." I don't know if he realized the irony of calling "safety" a "silver lining," as if it's some happenstance "good side effect" 🤦🏾‍♀️ when it should be everyone's primary goal. Then again, he readily admits that he and the vast majority of his community are accelerationists, all the while noting that his p-doom is 30%... Having a p-doom that high, or even 1% for that matter, while being an accelerationist doesn't make any sense to me. They should be mutually exclusive 🤦🏼 Link to his video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-FS3BussEEKc.html
@dadsonworldwide3238 1 month ago
Everyone needs to stay focused on this kindergartener's future. This debate changes when discussing inevitable outcomes. Right now we have to focus on the fact that plausible deniability is granted to the few when you doom over Terminator-style rogue AI agents. You justify putting rules and regulations into the hands of a few bad actors, denying the 100-to-1 ratio of good guys who could use access to the tech for defensive countermeasures that could make bad actors irrelevant up front. You give bad actors a plausible-deniability loophole of "oops, the rogue Terminator did it." 1900s structuralism, in America specifically, is antithetical to, and the opposite of, human infrastructure that maximizes benefits and access. Assume whatever nature permits is in a gun on every individual neighbor's hip: mutual destruction, an eye for an eye. The Wild West created chivalry and maturity in society; it raised respect and is how we freed serfs and slaves alike. Remember, only when every individual equaled the power of the few did our modern definition of freedom get formed. Then, through abolitionist teaching preachers, the theologically inspired, scientifically studied, mathematically confirmed soul-agency free-will inertial frame of reference correlated with the eternal cosmos was taught to us all, in concert with Jethro Tull's plow. Esoteric America, and this encoded English orientation and direction based on alphabetical exodus, dictated our elusive prosperity's longitude and latitude. The founders' experiment had the exact same computational future foresight in mind as the West searching for the history of nations, peoples, places, and things. It formulated structuralism on the basis that we all work with the same atoms, but to maximize benefits we must interpret them radically differently; then bureaucracy deals in rigorous debate and dialogue, building land bridges between tribes. This is how it has to be if we want to maximize access to tech while giving our own kids a chance to defend the human species as they see fit.
Don't give that power to any single tribe.