
Feeling the AGI with Flo Crivello 

Cognitive Revolution "How AI Changes Everything"
12K subscribers
3K views

Published: 21 Aug 2024

Comments: 27
@Cagrst 2 months ago
Love this type of discussion, even more than the standard interviews, which are always great.
@CognitiveRevolutionPodcast 2 months ago
Thank you for the comment. That is encouraging to hear :)
@ilevakam316 2 months ago
I love the notion that Science = Consensus by experts. Pretty cool stuff.
@JazevoAudiosurf 2 months ago
I also don't see a story for an equilibrium. Even if a stable status quo is reached, after some time it will escalate, because we will always want to figure out what more intelligence can do. Even if we get the utopia and reach the point where we can chill out, it will end quickly after. The future is sheer escalation for as far as I can grasp.
@Sporkomat 2 months ago
Agree, I think we are in for a wild ride.
@thadgrace 2 months ago
I'm hoping each one of us can choose (or at least perceive to choose) when we are finished accelerating… maybe the algorithm will let us off the train when each one of us individually has had enough. 🤷‍♂️
@wonmoreminute 2 months ago
On the backside of AGI, an advantage of a few months or even a few weeks toward ASI is potentially equal to years with respect to conventional technology. On the backside of ASI, an equal advantage could shrink to weeks or days (although a scenario where it would matter that your competition or adversary is only a few days behind you would have to be frighteningly urgent). Either way, if such an advantage could come down to days or weeks at some point in the future, then the days and weeks right now toward reaching AGI and ASI are equally important (provided that's the objective). So, while it doesn't feel like it today, the urgency is now. Any CEO, military strategist, head of state, etc. is doomed to second place at best without this mindset. And this feels like a winner-take-all technology.
@GNARGNARHEAD 2 months ago
Yeah.. it's the 'winner-take-all' part of that that doesn't give me much concern. America, China, an ASI.. what would any of them do if they had absolute control? Hell, throw Iran in the mix 😆 An ASI is the wildcard, but for it to be a superintelligence, it has to be able to reason, and reality isn't a complete-information game. Iran being a religious nation might impose such beliefs in some pretty funky ways, but philosophical texts written pre-Enlightenment are going to have some conflicts with the observations possible in a post-industrial world. It would be a balancing act of navigating scripture and reality, and at some point reality is going to win. As far as the two superpowers go, I don't see either of them just flipping the xenophobic switch ASAP. The level of control well-implemented systems would provide would, I think, take gene editing for control off the table (as Zizek has stated a CCP official expressed the state's intentions to him). It's impossible to be certain, but I think ideology would progress as it engages with the future... I 'unno 😀
@ezzye 2 months ago
I like your ads and sponsorship.
@InquilineKea 2 months ago
You may have to hand them Taiwan after ASI; the abundance makes it matter little.
@arinco3817 2 months ago
Awesome interview
@CognitiveRevolutionPodcast 2 months ago
Thank you. Glad you liked it.
@Sage16226 2 months ago
"It's too hard" is not an argument someone leading a company backed by millions of dollars can make. That argument means the board was right to kick him out of the company.
@GNARGNARHEAD 2 months ago
Obviously it's not without risks, yet I can't help but be optimistic.. I think cybersecurity is a great example: these models are wizards of the conventional wisdom, but I see it as more of a rising tide. The fundamentals are easier to improve across the board, and they're what's causing the vast majority of incidents.
@drhxa 2 months ago
All the people advocating for accelerating AI and open weights, you know what we'll get thanks to them? Societal collapse. That will set us back technologically 100+ years. This is the dumbest timeline that we're in.
@1Howdy1 2 months ago
Wouldn't it be awesome if Moore's law lasted 10 more years? The PS1 would have been around for 15 years. How many AP1000s does it take to reach ASI?
@manslaughterinc.9135 2 months ago
The changes to the bill did not address the broader concerns of the community at large. Further, Wiener's dishonesty about listening to his constituents makes it difficult for anyone to support his position. I support AI regulation, but this regulation is poorly thought out. Rumors of him lying about individuals who support the bill further undermine his credibility.
@palimondo 2 months ago
This video needs an epilepsy trigger warning. Nathan, could you try stabilizing the video of guests when they have a case of shaky cam on their desk? Also, please disable the auto-tracking feature on your Mac's camera; it's quite distracting when it pans and zooms aggressively as you move around in your chair. (It's off in this video, but you used to use it often previously.) Sorry for the grumpiness, I love the great work you do here, I just wish you invested a bit more into the production values. Thank you!
@charlesalexanderable 2 months ago
His webcam is so shaky it is hard to watch.
@CognitiveRevolutionPodcast 2 months ago
Thank you. Noted, we will try to address this in future episodes.
@GlennGaasland 2 months ago
So many assumptions here… Is there anything close to a consensus about what "general intelligence" even is? Or whether anything like that exists as a possibility? Not to mention what superhuman levels of this totally mysterious concept might actually be? Or the assumption that obviously Russian hackers will use the most advanced AI tools before Bank of America does… what??? Or the assumption that superhuman self-improving AGI (whatever that is supposed to even mean) can be achieved through purely automatic informational processes… do we have even a single example of a known phenomenon in nature that can do anything like this? These assumptions sound to me like a lot of wild religious superstition cloaked in tech-woo speech.