
AI Safety Regulations: Prudent or Paranoid? with a16z's Martin Casado 

Cognitive Revolution "How AI Changes Everything"
11K subscribers · 1.5K views

Dive into the profound discussion on AI expectations and the future with Martin Casado, General Partner at Andreessen Horowitz, as we unpack the complexity of AI systems and their potential impact on the world. Explore the differing viewpoints on AI's epistemics, possible regulatory standards, and the art of predicting AI advancements. Gain insights into the actionable outcomes of this dialogue and the significance of understanding AI before shaping policies. Join us for this episode of the Cognitive Revolution to contemplate AI's trajectory and what it might mean for humanity.
Apply to join over 400 founders and execs in the Turpentine Network: hmplogxqz0y.typeform.com/to/J...
RECOMMENDED PODCAST:
Byrne Hobart, the writer of The Diff, is revered in Silicon Valley. You can get an hour with him each week. See for yourself how his thinking can upgrade yours.
Spotify: open.spotify.com/show/6rANlV5...
Apple: podcasts.apple.com/us/podcast...
SPONSORS:
Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at oracle.com/cognitive
The Brave Search API can be used to assemble a data set to train your AI models and to help with retrieval augmentation at inference time, all while remaining affordable with developer-first pricing. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave Search API for free for up to 2,000 queries per month at bit.ly/BraveTCR
Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off at www.omneky.com/
Head to Squad to access global engineering without the headache and at a fraction of the cost: visit choosesquad.com/ and mention "Turpentine" to skip the waitlist.
CHAPTERS:
(00:00:00) About the Show
(00:03:18) AI progress
(00:05:20) Threshold effects
(00:12:30) Heavy-tailed universe
(00:14:53) LLMs are not very good at unique tasks
(00:26:27) Sponsors: Oracle | Brave
(00:28:36) Understanding meaning
(00:31:44) How do LLMs work? (Part 1)
(00:36:25) Sponsors: Omneky | Squad
(00:38:12) How do LLMs work? (Part 2)
(00:44:01) Post-training
(00:51:09) Simulation
(01:04:06) Regulation
(01:08:53) What makes AI a paradigm shift
(01:11:51) Compute limits
(01:20:07) Sleeper agents
(01:23:16) Surface area of models
(01:25:49) AI regulation
(01:27:35) AI in medicine
(01:29:56) AGI, superintelligence
(01:37:04) Competition in the foundation model space
(01:40:57) The scaling laws
(01:44:31) The AGI convergence
(01:45:35) Bets on the future of AI
(01:48:20) Outro

Science

Published: 15 Jul 2024

Comments: 21
@ArielTavori · 17 days ago
Excellent and insightful discussion as usual! 🙏🏻 Martin presents the most coherent arguments for his position that I've heard so far.

However, it seems like every conversation I've encountered recently about the future is dominated by arguments over hypothetical AGI/ASI/p-doom timelines, with relatively little discussion of the impact and progress of smaller models, the possibility of unlocking new capabilities from existing models (Ilya at least has stated repeatedly and publicly that he believes there's massive untapped potential here), and the paradigm-shifting performance optimizations that are on the way. These improvements are not trivial or even (for the most part) hypothetical, and some of their obvious short-term implications are IMHO the most exciting, profitable, and potentially problematic areas we're likely to actually encounter in the coming months and years.

Just a few off the top of my head, for example:
- Mojo
- Mamba
- 1.58 bit
- photonic hardware
- thermodynamic hardware
- quantum hardware

Many of these have already demonstrated stunning performance improvements, which in some cases may have the potential to stack multiplicatively, not to mention the potential for massive reductions in hardware requirements, including total GPU memory, memory bandwidth, or even 1.58 bit outright eliminating matrix multiplication...
@xinehat · 17 days ago
Oof. It didn't feel like Martin was actually engaging with Nathan. It was basically two hours of "This is a stupid conversation that's not worth having, and you're stupid if you don't think it's stupid."
@Cagrst · 17 days ago
Well, this was a deeply frustrating watch. Reasoning by analogy to historical events when we are dealing with the most significant revolution in human history feels pretty asinine to me. He's brilliant, but I think he just doesn't get it.
@kyatt_ · 17 days ago
Yeah, stopped after 10mins tbh
@Sporkomat · 17 days ago
I agree. To me it seems like he doesn't extrapolate in the obvious ways and just sees things as they are, not how they almost certainly will be.
@_arshadm · 17 days ago
An excellent episode; the FSD discussion is so pertinent. Elon can spout BS about how close FSD is, and he could be 97% right. But the problem is that the remaining 3% still happens every day, and any failure would be a killer for FSD.
@AI_Opinion_Videos · 16 days ago
"If we identified one mechanism that has massive destructive power": the AI lab would identify it pre-release, and choose to race internally and harness that power. Who is going to protect us from that?
@jarlaxle6591 · 17 days ago
This conversation was a tough listen. He repeats himself over and over. It's almost like he's in denial. And man, the way he talks, as if everything he says is fact.
@seanmchugh2866 · 17 days ago
Yeah, I can't say for sure if it's "denial", but I am seeing that kind of thing a lot with AI. No matter what it does, it's a joke. He also reframed shifting goalposts from "it's not AI, it's not AI, it's not AI" to "it is AI (nope), it is AI (nope), it is AI (nope)". I suppose if I weren't a ChatGPT enthusiast at this point, I would still be able to get work done with my head in the sand. But because I try to use it, I see use cases everywhere that someone who didn't try never would.
@seanmchugh2866 · 16 days ago
Okay so just as a random example (out of a population of probably 100 worthy examples this month). I needed this code in python. Now before you judge me I can easily write this in C# but this saved me probably an hour in python. If you aren't coding or have your head stuck in the sand you're going to miss how beyond words level of amazing this is. "can you write me a python function that takes in the path to two images, originalImage and portraitMask. you should know that originalImage and portraitMask are guaranteed to have the same dimensions. your function should replace every pixel in originalImage with a zero opacity pixel where the corresponding pixel in portraitMask is < (200,200,200). finally, your function should take a third argument which lets me tell it where to save the output"
@mitsuman5555 · 17 days ago
I’m sure the guest is a brilliant person, but his arrogant disposition is unsettling. As if anything in this field is a foregone conclusion.
@rasen84 · 16 days ago
It’s not like he’s telling OpenAI to stop scaling. He’s clearly excited about new uses enabled by scaling. He just doesn’t think it’s going to create god.
@militiamc · 15 days ago
I think he's just knowledgeable and confident. Not necessarily arrogant. I don't agree with his position though
@NuanceOverDogma · 12 days ago
lol, the interviewer is arrogant AF
@augmentos · 16 days ago
Ironically, I disagreed with a lot of Martin's positions and don't think he understands a lot of the comparisons, though I am totally in agreement and aligned with having no liability for an LLM model creator and stopping any drive at regulation currently. I'm more often disagreeing with the host, who I really enjoy, but I think he is biased from his red-team time and is driving at the regulation hoop these days way too hard. To be liable if any LLM use results in criminality based on an answer is absurd. There are infinite ways to trick a model and a huge landscape of "criminality". These guys fearmongering are eating out of Altman's hand, helping build his moat, and are doing America a huge disservice.
@LiquidRR · 16 days ago
The most powerful models will need heavy regulation. Unless you want people developing super viruses in their garages it's a necessity we regulate the leading edge models. Why would you think otherwise?
@augmentos · 13 days ago
@@LiquidRR yawn. 🥱 All the info you need to do that already exists, and has since the advent of the internet. Motivated people will do bad things. I suppose you also believe standing in TSA security theater keeps us safer. I don't want Sam Altman or the SF woke brigade deciding what I can and cannot use or see. Look how that worked out for Twitter.
@NuanceOverDogma · 12 days ago
You are not very bright.