It's true that one generation's structuralism becomes the next one's jail, and no one size fits all. But I would push back hard on his notion that Eastern submissive philosophy or spiritualism is helpful, given the historical record of its repeated cycle of taking its hat out of the ring, submitting to nature, and becoming too sensitive and woo-woo over time. As a parasitic Theosophy it becomes a nomad, getting kicked out of every community, even the most progressive (California); and as an Anthroposophy it commits genocide in pursuit of the utopia at any expense (Europe). These traditions have proven time and time again that they can't properly self-reform or maintain within oneself a classically liberal epistemology. They can't keep it on the inside and be smart enough not to self-destruct communities. So I really can't see that being fruitful in his philosophy, if he wants to be honest about the world rather than about how, at a certain period in his life, a certain tech grew out of the classical American landscape. The truth is the transistor age was 100% bought, paid for, and R&D'd by a group very antithetical to 1900s structuralism. Waking up mid-car-ride with amnesia is always bad news for subjects like innovation, social stability, and work ethic; all of that is downstream of fundamental ontological faiths or beliefs about the world.
Without the demands of large labor gatherings, 70% of the bureaucratic, ideological sector is easily replaced, and cities in the USA are already thinning out. Great professors can Zoom-teach a nation of smaller gatherings, with more one-on-one tutoring.

As an American, I see this as the end of global socialism, or globalism, which is really a hybridization sold to us as a temporary waiver of our self-sacrificed private-sector rights and benefits in order to help rebuild Europe, industrialize the world and get it on its feet, a multipolar world, blah blah. Thinning out cities into compartmentalized fuse breakers is a very important safety net, with quadrupled logistics, so the USA can resume its goal of spreading out into nature for better-quality-of-life homesteads, where tech helps overcome handicaps and we live out our dreams through our love of labor, becoming kings of our own castles. It's 100% private-sector capitalism, because we are the masters; the government agencies and institutions are replaceable and have no authority in law.

Socialism is a European thing; you all will likely centralize, it's in your nature to do so. But Americans hate it and are sick of this hybridization, big time. That means ending the heavily expanded agencies and institutions that mostly exist to regulate old, obsolete systems, domestic and abroad. These disliked public-sector energy vampires are echoes of over 100 years of multiple generations of families fed up with the outside world and allies tugging on us from all directions. It's not a total get-out-of-the-game military deal, but burdens are coming to states that haven't had to deal with certain pressures in the past century. I see that even Africa is ready to produce items for trade now; they're on their feet and ready to go.
The problem with arguments like "nihilism means that Hitler did nothing wrong or that it isn't wrong to kill babies" is that they are largely just an appeal to emotion (with a bit of bandwagon appeal, because it's unpopular to say otherwise). We have a strong emotional attachment to calling things immoral, and the denial of the word 'wrong' implies approval or indifference. If someone says "you don't think Hitler was wrong", an element of what they hear is "you aren't willing to disapprove of Hitler", or alternatively, "you think Hitler was right". If you took a more neutral phrase like "Hitler's actions do not embody a property called wrongness", you would see that it is less counterintuitive than the more emotive phrasing. To be clear, I'm not saying that moral language is entirely subjective/noncognitive (something that would be incompatible with nihilism), merely that there's an emotional attachment we all have to moral judgements. So when you call those things 'wrong', you scratch the emotional itch to condemn them. But if you introspect carefully, you'll find that the *logical* intuition about those things is not the same as the *emotional* intuition about them. It's the conflation of those two that causes people to just blindly accept the argument that pointing at really emotional things shows those things to be *objectively* wrong.
Interesting and informative talk! An incredible overview of the Long Bio movement, the problems of the medical and research industries, and the potential future of life extension. This Brunemeier guy is brilliant, and a great speaker.
The Carl Shulman episode was the first that popped up on my feed, and Jesus f* Christ, that was like 7 hours, and I was hooked. Dwarkesh is the hardcore podcaster that we were missing. Intelligent questions, focused on the most interesting topics, great guests.
A stellar podcast episode that contrasts entrepreneurship in incentivized zones versus traditional areas. It's intriguing to see how mainstream entertainment often paints an unrealistic picture of the future. Every policymaker should give this episode a listen!
I struggle to understand his idea of 'arbitration' rather than using government courts. If somebody crashes into my car and destroys it, we usually go through insurance, not through government courts, but that's only because he knows that if he drives away, I can sue him. Without a government, I don't understand why the guy who hits me isn't incentivized to just say "f you" and drive away.
If the guy who hits you drives away, he compounds his risk by adding any damages owed to further victims (if, for example, he hits another person or property during his attempt to escape) to the damages he owes you. Your security firm will pursue him because your insurer will want to exact compensation from him to offset any payout it needs to make to you. If he does escape, your insurer is left in the position of paying the full amount for the loss under your policy, just like it would today in a hit-and-run situation, so the anarchist scenario is no worse than the statist scenario in this regard. If the driver is apprehended, then his insurer and your insurer resolve the damages claim between themselves, through (a) prearranged contract, (b) ad hoc negotiation, or (c) arbitration. It's worth mentioning that this is how insurance works today in some contexts, such as maritime shipping, where arbitrators are often used to resolve disputes between sureties.
Incredible that there are people who want to "compete with legacy institutions like the FDA". Maybe you should read about the snake oil salesmen of old.
Dwarkesh has a clear idea of the risks, but in the end the road to danger is obvious: any search algorithm can find solutions that fall outside previously stated rules. Neural networks cannot have their thoughts controlled, because by definition they generate new thoughts. There will always be unforeseen consequences.
We shouldn't develop seat belts, because people will really enjoy driving and therefore will drive more safely. If someone drives fast and recklessly, you can just drive faster and more safely to counter them. /s Safety is not a given
Of course we should develop safety guardrails, and seat belts are probably a good solution in most cases. But mandating seat belts centrally for every car through inflexible regulation might not be a good idea, because it leads to the "safety paradox" of locking in old technology and making it harder to adopt new, safer tech. I talk about this regularly on my podcast; check e.g. the episodes with J. Storrs Hall or Eli Dourado: rss.com/podcasts/stranded-technologies-podcast/
@@infinitavc I'm not sure if I agree, but if I did, would you not concede that it completely depends on the capabilities or risk of the technology? For example, the safety requirements for a kitchen fork should be low because its capability for harm is low, whereas the safety requirements and security classification for a gain-of-function lab should be extremely high due to its capability for harm. Likewise, a technology that can help terrorists create high-level gain-of-function labs (as Anthropic spoke about) should also be heavily regulated.
@@absta1995 I concede that the more dangerous the tech is, the stronger the safeguards that should be in place. What I question is how it is done, and by whom. We tend to just assume that government regulates, and that any regulation will do what it's intended to do. That's simply not the case; more often than not, the government does much more harm than good.
@@infinitavc Fair enough, I think that's the right conversation to have. I feel like some people dismiss regulation too quickly because of past bad examples, but it sounds like you have a different view/approach to that.
Thanks! @@absta1995 I welcome more discussion here or on any other channel of mine (this is Niklas Anzinger speaking, who runs both Infinita and the Stranded Technologies Podcast).
Agreed. He made the case very clearly and convincingly, imo. Particularly good on how successful the USA's nuclear non-proliferation strategy has been and how that might analogize to AI safety. Good to see Niklas acknowledge how competition between rival states could create risks with unaligned AI. Carl Shulman discussed this on Dwarkesh's pod: misaligned AIs gaining control after first allying with one human group against others.
@@infinitavc Agree with you that Dwarkesh defends AI risk arguments better than Yudkowsky. Especially so for audiences who are non-technical or new to the issue.
Good choice of guest. When I think of the Lunar Society/Dwarkesh Pod's theme, it's either 'progress', 'discovery', or 'transformative X'. Would defo place him more in the Progress Studies part of the EA, Rationalism, Progress Studies Venn diagram.
It is very important that people be fully prepared to correct AI when it does bad things, and to learn from that, rather than expecting it to always do good things.