
Intelligence is not Enough | Bryan Cantrill | Monktoberfest 2023 

RedMonk Tech Events
1.6K subscribers
13K views
One of the most common attitudes with respect to AI today is so-called “doomerism,” the idea that AI technologies are inevitably fated to present an existential risk to humanity. This talk takes that idea head-on, systematically examining the theoretical risks versus the reality on the ground, taking a skeptical but thoughtful view of how we balance the potential of the technology with the varying risks AI may - or may not - represent.

Science

Published: 16 Nov 2023

Comments: 46
@edgeeffect · 6 months ago
"Everything is a conspiracy when you don't understand how anything works." - some guy on The Internet. "It's either firmware OR humanity and YOU HAVE TO pick a side" - Bryan Cantrill
@nblr2342 · 5 months ago
Once again, a terrific talk. Very dense and with an enjoyable pace. Glad to hear they got the acoustics fixed - even if it's just by using a hand mike. Pro tip: invest in a good DPA head mic setup.
@_ingoknito · 6 months ago
AI as force multiplier for human flaws: absolutely!
@VivekHaldar · 6 months ago
Yudkowsky says insane things with a straight face ("bomb the datacenters"). Cantrill says sane things with the veins on his neck popping out. Still prefer the latter.
@bcantrill · 6 months ago
🤣
@gJonii · 6 months ago
Given that neither the talk nor your comment managed to actually put Yudkowsky's claims in context, I'm kinda unsure if this is deliberate lying or if the basic concept is so hard to grasp. The basic concept is fairly simple: you have to make a choice. Either you ban making things that kill all of us, with threat of force... or you don't. A ban means you have to be ready to bomb data centers if they are used to endanger humanity. If you are not prepared to do that, there is no ban, and none of this discussion matters. Yudkowsky stated he doesn't think a ban is realistic, so any talk of slowing down the extinction of humanity is meaningless, and the bombing of datacenters was largely in the context of demonstrating how far we are from treating AI seriously. But yeah, reassuring lies are about all we have left; I'm just sad the anger is directed at the folk that tried to prevent the disaster.
@edgeeffect · 6 months ago
That's the best speaker-biog for Bryan Cantrill I've ever seen.
@kamikaz1k · 6 months ago
Loved where it was going, but then it ended with “it’s our humanity,” which is a bit b/s - especially since he was talking about concrete reasons why it’ll be ok. The final reason should be that reality has too much detail, so until AI has an accelerated way to experiment in reality and learn from the physical, there is always going to be a gap/bottleneck.
@cowabunga2597 · 6 months ago
He is gonna have a heart attack in the middle of the talk. Nice talk btw.
@GeorgeTsiros · 1 month ago
No, he won't. This is what gives him life. I am like him when I am explaining stuff.
@nbuuck · 5 months ago
I had an utterly wrong preconception about the argument Cantrill would make here, partly given the framing of the talk when mentioned on social media, but also given how often tech entrepreneurs and venture capital investments are discussed on the Oxide and Friends series. I expected this argument would be a slightly different economic one, with the premise that we shouldn't abandon the economic opportunities simply out of fear or concern. I also had a different inferred understanding of what "AI doomerism" means: I thought those of us in the IT security space, given our concerns and stereotypical skepticism, were being labeled as doomers _a la_ Suppressive Persons in the eyes of the Church of Scientology. I was relieved that Cantrill acknowledged some of the risks at the end of the talk, if not my preferred placement. Once one understands that, at least from Cantrill's perspective, AI doomers are those with perhaps irrational, actual-Doomsday-scenario concerns, I felt less threatened by the premise and the term "AI doomers." That said, I lament that we're spending time addressing irrational doomerism (I guess that doesn't sound redundant in my head, hence my misconceptions) given that it is nominatively irrational, when we could put more air time and dialog toward the security, privacy, and social concerns and maybe even theorize solutions, the latter of which I've heard very little in the oceans of worry being written about AI. That doesn't mean I think Cantrill shouldn't have focused on the irrational nature of AI doomerism... he may not feel like a sufficient authority on AI security and privacy to compose such a talk. I certainly have a bit better understanding of some of the schools of thought about AI and how we label them after listening to this. Thanks, Bryan!
@zwill8882 · 5 months ago
I agree with this take. It's been said before but I think it's worth repeating that the issue of AI doom is obscuring the more serious concerns surrounding the negative social consequences AI might have that fall short of what people would be likely to classify as "doom". So the question should be completely reframed to "How do we prevent AI from making the world a much shittier place". In some respects, this is impossible, because what makes the world shitty for one group will probably be favored by others and so it is inherently subjective, with those that have a vested interest in the success of AI projects (notably, those that have invested in the commercialization of the technology) being obviously much more likely to think the success of AI would make their own lives significantly less shitty because it would mean they would presumably come out with a lot of money. I suspect this is the fundamental concern, do you like your job and think it's important it be done by a human and also think that AI will make it easy for others to take the easy path and get a cheap, AI generated replacement much more quickly, thereby destroying some amount of purpose in your life and also making the world a less artistically, aesthetically, or spiritually beautiful place? If you fall into that camp (and I do as a programmer but I suspect many artists and writers and similar professions fall into the same camp here) then you probably feel that AI is just another step down a long and grinding path to some sort of philosophical doom or death of the human spirit, rather than an actual physical doom, which I think if you really pushed you would find many that speak of a physical doom really don't mean it like that at all. 
On a purely technical level, the physical doom scenario is much less likely, and there are problems that remain totally unsolved with no real evidence of significant progress, so the embodiment of AI or its manifestation in the physical world in a way that would enable it to cause our physical destruction seems to me to be highly unlikely at this time.
@maxcohn3228 · 6 months ago
Really solid audio on this talk
@patmelsen · 6 months ago
36:48 interestingly, this train of thought also kind of summarizes the position that climate change naturalists hold, where they say that we should not let an unspecified fear of climate change stop us from making the best of this planet (which may involve burning mineral oil).
@datenkopf · 6 months ago
What does he say about Lex Friedman at 39:24? (I think the subtitles are wrong or I don't get it)
@julienlegoff6139 · 6 months ago
Get the Narcan!
@vmachacek · 6 months ago
I'm watching this talk for the 10th time now, still entertaining...
@allesarfint · 5 months ago
"Intelligence is not Enough", tell me about it. Suffering my whole life because of this.
@navicore · 6 months ago
Thanks for this reasoned sanity.
@chmod0644 · 6 months ago
Cantrill you magnificent bastard!
@yugo_ · 6 months ago
Thank you, Bryan, I needed to hear this.
@dlalchannel · 8 days ago
Is his claim that AI will *never* be able to solve the engineering problem(s) that he and his team did?
@masonlee9109 · 3 months ago
Love Cantrill, but it is a pretty short-sighted take on AI x-risk to dismiss the possibility of agentified super intelligence.
@BspVfxzVraPQ · 5 months ago
If my autocompletion causes an "existential threat" then that is on you, not me. If you hook up my autocomplete to the nuclear button... like, oh, blame the autocomplete. That is so robophobic...
@theyruinedyoutubeagain · 2 months ago
Bryan is one of the most brilliant people I know and, while I wholeheartedly agree with his stance on the idiocy AI scaremongering, this reflects a shockingly poor understanding of the opposing point of view and reeks of stunted thinking. Feels like an application of the common trope of exceptional people having unwarranted confidence when discussing things outside their domain.
@jscoppe · 4 months ago
Argument by YELLING REALLY LOUDLY. Bryan seems like the Cenk Uygur of AI debate. "OF COURSE!!" Also, I loved when the nerd told the other nerds to touch grass.
@a2aaron · 11 days ago
what if it turns out that firmware is actually super reliable, its just that bryan was cursed by a wizard at birth to always have firmware issues
@Ergzay · 6 months ago
Pretty good talk until the ending part where he suddenly re-invokes a bunch of nebulous "dangers".
@DanielYokomizo · 6 months ago
Awesome how they used organs other than their brain to solve their startup problems. I thought all human actions ultimately originated in the brain but hardware and firmware debugging comes from the spleen.
@420_gunna · 6 months ago
stimulant check *banging credit card on table*
@ginogarcia8730 · 6 months ago
i want what this guy's smoking
@ahabkapitany · 20 days ago
this was really embarrassing to listen to. 1. take a midwit tweet 2. use it as a strawman 3. shout for half an hour arguing with said midwit tweet. I came here expecting him to take this topic seriously; instead I just found Don't Look Up energy.
@captainobvious9188 · 6 months ago
Learn even a little bit about how modern AI works? It’s nowhere near any of the AI in fiction, as believable as they are.
@GeorgeTsiros · 1 month ago
once again, the software was the problem. once again, shit coding is to blame. we're never going to be engineers. We're just keyboard jockeys.
@palindromial · 6 months ago
Skip to 15:30 if you want to avoid the cringy bits. The engineering bits are pure gold though. A+++ would watch again.
@aeriquewastaken · 6 months ago
Cringy bits?! Those were great!
@palindromial · 6 months ago
@aeriquewastaken I didn't find what Bryan has to say cringy, but the bits he cites are nevertheless cringy to me. So overall, I much preferred the rest of the talk.
@cepamoa1749 · 5 months ago
he only knows how to scream... tiring...
@jeffg4686 · 6 months ago
Capitalism versus Socialism - head to head. This is the real discussion. Everyone's too afraid -- too programmed that they can't see past capitalism.
@ts4gv · 3 months ago
introducing x-risk with such a dismissive tone won't work for much longer (i hope). this was a frustrating & bad presentation. :/