Making AI accessible with Andrej Karpathy and Stephanie Zhan 

Sequoia Capital
34K subscribers · 214K views
Andrej Karpathy, founding member of OpenAI and former Sr. Director of AI at Tesla, speaks with Stephanie Zhan at Sequoia Capital's AI Ascent about the importance of building a more open and vibrant AI ecosystem, what it's like to work with Elon Musk, and how we can make building things with AI more accessible.
#AI #AIAscent #Sequoia #Startup #Founder #entrepreneur

Published: 25 Mar 2024

Comments: 207
@siddharth-gandhi
@siddharth-gandhi 2 месяца назад
The man, the myth himself. He has done invaluable work in making things accessible through his teachings alone. Bravo!
@psesh362
@psesh362 2 месяца назад
Classes meaning his channel?
@whowhy9023
@whowhy9023 2 месяца назад
@@psesh362 Stanford …
@olhamuzychenko3082
@olhamuzychenko3082 2 месяца назад
@@psesh362 😅😅😅😅😅😅😅😊😅😊😅😅😊
@chaithanya4384
@chaithanya4384 2 месяца назад
Interview:
3:22 What do you think of the future of AGI?
5:20 What are the new niches for founders given the current state of LLMs?
7:15 Future of the LLM ecosystem (wrt open source, open weights, etc.)?
9:26 How important is scale (of data, compute, etc.)?
11:52 What are the current research challenges in LLMs?
15:01 What have you learnt from Elon Musk?
20:42 Next chapter in your life?
QnA:
22:15 Should founders copy Elon?
23:24 Feasibility of model composability, merging?
24:40 LLMs for modeling the laws of physics?
28:47 Trade-off between cost and performance of LLMs
30:30 Open vs closed source models
32:09 How to make AI more cool?
33:25 Next generation of transformer architecture
36:04 Any advice?
@krimdelko
@krimdelko 2 месяца назад
"Not to long after that he joined Open AI.." He stayed at Tesla more than five years and built an amazing self driving stack.
@Alex-gc2vo
@Alex-gc2vo 2 месяца назад
Oh dear boy, 5 years is not long at all.
@panafrican.nation
@panafrican.nation 2 месяца назад
He left OpenAI, went to Tesla, then back to OpenAI
@Nunya-lz9ey
@Nunya-lz9ey 2 месяца назад
@@Alex-gc2vo it's the longest he's ever spent at a company, by 3x, and longer than average in tech. Definitely not "shortly" after.
@Nunya-lz9ey
@Nunya-lz9ey 2 месяца назад
@@panafrican.nation therefore 5 years is short?
@saturdaysequalsyouth
@saturdaysequalsyouth 2 месяца назад
FSD is still in beta…
@rpbmpn
@rpbmpn 2 месяца назад
Great guest, and one of my favorite people in AI. Almost certainly done more than anyone else alive to increase public understanding of LLMs, played a pivotal role at two of the world's most exciting companies, and remains completely humble and just a nice, chill person. Thanks for inviting Andrej to talk, and thanks Andrej for speaking.
@webgpu
@webgpu 2 месяца назад
_huge_ guest, that is 🙂
@johndavidjudeii
@johndavidjudeii 2 месяца назад
Let's give a round of applause to the moderator 👏🏼 what a good job!
@ashh3051
@ashh3051 2 месяца назад
Loved his insights on Elon's style. Very insightful.
@johnnypeck
@johnnypeck 2 месяца назад
Great discussion. It's very reassuring to hear such a leader as Andrej stating his desire for a vibrant "coral reef" ecosystem of companies rather than a few behemoths. Central, closed control of such intelligence amplification is dangerous.
@PrabinKumarRath-kf1rv
@PrabinKumarRath-kf1rv 2 месяца назад
This video is so encouraging! A top expert in the field thinking there is a lot of room for improvement is exactly what a budding AI researcher needs to hear.
@joaoguerreiro9403
@joaoguerreiro9403 2 месяца назад
Andrej Karpathy is an amazing Computer Scientist 🔥 What a genius mind!
@sankeerth1729
@sankeerth1729 2 месяца назад
The distinction between Pythia, LLM360, and OLMo as open-source models vs Mistral and Llama as open-weight models, and the need to fine-tune on a mixture distribution that includes the original data distribution so as not to regress existing capabilities, was a very valid one. Thanks for sharing the video from your Ascent workshop!
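A minimal sketch of the mixture-distribution idea mentioned above (plain Python; the 30% ratio, the batch size, and the names are illustrative assumptions, not something stated in the talk):

```python
import random

def mixed_batches(new_task_examples, original_examples, new_frac=0.3, batch_size=32):
    """Yield fine-tuning batches in which only a fraction of each batch comes from
    the new task; the rest is replayed from (a proxy for) the original training
    distribution, so existing capabilities are less likely to regress."""
    while True:
        batch = []
        for _ in range(batch_size):
            pool = new_task_examples if random.random() < new_frac else original_examples
            batch.append(random.choice(pool))
        yield batch

# Usage sketch: next(mixed_batches(new_data, replay_data)) gives one mixed batch.
```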
@ralakana
@ralakana Месяц назад
He meant LLM360 as far as I understand.
@philla1690
@philla1690 2 месяца назад
Great questions! And thank u Andrej for answering them
@AndresMilioto
@AndresMilioto 2 месяца назад
Thank you for uploading this to youtube.
@UxJoy
@UxJoy 2 месяца назад
The secret to OpenAI's motivation was ... chocolate 🧐. Noted. Thanks Andrej! Step 1: Find a chocolate factory. Step 2: Find space near chocolate factory. Step 3: Connect HVAC vent from chocolate factory floor to office floor. Step 4: Open AI company 🥸
@RaySmith-zg7od
@RaySmith-zg7od 2 месяца назад
Sounds about right
@BR-hi6yt
@BR-hi6yt 2 месяца назад
Loved Andrej's comments, great presentation all-round.
@KrisTC
@KrisTC 2 месяца назад
Very interesting. I always love to hear what he has to say. Big fan.
@bleacherz7503
@bleacherz7503 2 месяца назад
Thanks for sharing with the general public
@chenlim2165
@chenlim2165 2 месяца назад
Legend. So many nuggets of insight. Thank you Sequoia for sharing!
@guanjuexiang5656
@guanjuexiang5656 2 месяца назад
Andrej's insights and the audience's questions both exhibit a remarkable depth of understanding in this field!!!
@Alice8000
@Alice8000 2 месяца назад
GOOD QUESTIONS LADY. I like dat. Nice.
@sebby007
@sebby007 Месяц назад
Andrej seems like such a good dude. Great moderation as well.
@carvalhoribeiro
@carvalhoribeiro 2 месяца назад
Great conversation. Thanks for sharing this
@baboothewonderspam
@baboothewonderspam 2 месяца назад
High density of quality information - great!
@tvm73836
@tvm73836 2 месяца назад
Great interview. Great interviewer!
@andriusem
@andriusem 2 месяца назад
You are awesome Andrej !
@leadgenjay
@leadgenjay 2 месяца назад
GREAT VIDEO! We should all remember data quality trumps quantity when training AI.
@reza2kn
@reza2kn 2 месяца назад
Awesome interview! I LOVE the questions, SO MUCH BETTER than the BS questions that are usually asked of these people about AI.
@devsuniversity
@devsuniversity 2 месяца назад
Hello from Google developers community group from Almaty!
@collins6779
@collins6779 2 месяца назад
I could keep listening for hours.
@user-vb5th6cr3q
@user-vb5th6cr3q 2 месяца назад
Excited to see what comes next from him
@agenticmark
@agenticmark 2 месяца назад
Andrej is the new school goat in rl! Love his work
@brandonsager223
@brandonsager223 2 месяца назад
Awesome interview!!
@jayhu6075
@jayhu6075 2 месяца назад
The true potential of startups lies in creating a healthy ecosystem that benefits humanity, rather than succumbing to the allure of big tech companies. Creativity is the driving force in this space, and by staying independent, startups can preserve their passion and innovative spirit.
@RalphDratman
@RalphDratman Месяц назад
I just love this guy. He seems to be a wonderful person, so human, very smart and capable. Recently I have been using several of his github language model repositories. I bought a Linux x86 box and a used NVIDIA RTX 6000, really just to learn about this new field. Andrej has done so much to make this mind-bending technology understandable -- even for an old timer like me. Transformer systems are the first utterly new and commercially viable development in basic computer science since the 1960s. Obviously since then we have acquired amazingly fast CPUs capable of addressing huge amounts of RAM, as well as massive nonvolatile storage. But until these transformer models came along, the fundamental concept of data processing systems had not changed for decades. Although these LLMs are still being implemented within the Von Neumann architecture (augmented by vector arithmetic) they are fundamentally new and different beasts.
@huifengou
@huifengou 2 месяца назад
thank you for letting me know i'm not alone
@NanheeByrnesPhD
@NanheeByrnesPhD 2 месяца назад
Two things I liked the most from the presentation. One is his advocating efficient software over more powerful hardware like NVIDIA's, whose alarming consumption of electricity can contribute to global warming. Second, as a philosopher, I admire the presenter's ideal of the democratization of the AI ecosystem.
@u2b83
@u2b83 2 месяца назад
8:31 Do bigger models still have this problem, or do we need some kind of "gradient gating" mechanism?

Karpathy's discussion highlights a crucial challenge in machine learning and AI development: the problem of catastrophic forgetting or regression, where fine-tuning a model on new data causes it to lose performance on previously learned tasks or datasets. This is a significant issue in continual learning, where the objective is to add new knowledge to a model without losing existing capabilities.

Do bigger models still have this problem? Bigger models do have a larger capacity for knowledge, which theoretically should allow them to retain more information and learn new tasks with less interference with old tasks. However, the fundamental problem of catastrophic forgetting is not entirely mitigated by simply increasing model size. While larger models can store more information and might exhibit a longer "grace period" before significant forgetting occurs, they are still prone to this issue when continually learning new information. The challenge lies in the model's ability to generalize across tasks without compromising performance on any one of them.

The need for gradient gating or similar mechanisms: a "gradient gating" mechanism, or any method that can selectively update the parts of the model relevant to new tasks while preserving the parts important for previous tasks, is an intriguing solution to this problem. Such mechanisms aim to protect the model's existing knowledge while it learns new information, managing the trade-off between stability (retaining old knowledge) and plasticity (acquiring new knowledge). Several approaches in the literature attempt to address this issue:
- Elastic Weight Consolidation (EWC): adds a regularization term to the loss function during training, making it harder to change the weights that are important for previous tasks.
- Progressive Neural Networks: add new pathways for learning new tasks while freezing the pathways used for previous tasks, allowing knowledge transfer without interference.
- Dynamic Expansion Networks (DEN): selectively expand the network with new units or pathways for new tasks while minimizing changes to existing ones, balancing growth against maintaining prior learning.
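For the EWC approach listed above, a minimal sketch of what the penalty can look like (assuming PyTorch; the diagonal-Fisher estimate, the function names, and the weight lam are illustrative assumptions):

```python
import torch

def estimate_diag_fisher(model, old_task_loader, loss_fn):
    """Rough diagonal Fisher estimate: average squared gradient of the old-task
    loss with respect to each parameter."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    for x, y in old_task_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(old_task_loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Quadratic penalty that resists changing weights the old task relied on."""
    penalty = 0.0
    for n, p in model.named_parameters():
        if n in fisher:
            penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

# During fine-tuning on the new task:
#   total_loss = new_task_loss + ewc_penalty(model, fisher, old_params)
```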
@krox477
@krox477 2 месяца назад
Great talk
@RadMountainDad
@RadMountainDad Месяц назад
What a genuine dude.
@animeshsareen1762
@animeshsareen1762 2 месяца назад
this dude is precise
@basharM79
@basharM79 2 месяца назад
The most inspiring person on earth
@tethron.
@tethron. 2 месяца назад
great talk!!
@decay255
@decay255 2 месяца назад
For me the elephant in the room remains: how do you actually get the data, how do you make it good, how do you know what to do about the data to make your model better? Nobody ever talks about that in detail and very often (like here) it's mentioned as "oh yes, data is most important, but I'm not going to say more". 9:58
@clray123
@clray123 2 месяца назад
That is the "we don't just need capital and hardware, we need expertise" part. That is where the competitive advantage comes from. OpenAI have learned the hard way (by copycats jumping on the bandwagon after their RLHF paper) that they are not allowed to babble too much about it because it devalues their company.
@LordPBA
@LordPBA Месяц назад
I cannot understand how one can become so smart as Karpathy
@PaulFischerclimbs
@PaulFischerclimbs 2 месяца назад
I get chills thinking about how this will evolve into the future. We're at such an early stage now.
@alanzhu7053
@alanzhu7053 2 месяца назад
His brain clocks so fast that his mouth cannot keep up 😂
@Ventcis
@Ventcis 2 месяца назад
Put the playback speed on 0.75, it will be fine 😅
@andrewdunbar828
@andrewdunbar828 2 месяца назад
This was very very exceptionally extremely unique. The only one of its kind. One of one. Almost special.
@MuslimFriend2023
@MuslimFriend2023 2 месяца назад
Super humble and modest scientist, all the best insh'Allah Mr @AndrejKarpathy
@tzenmatteo
@tzenmatteo 2 месяца назад
insightful
@abhisheksharma7779
@abhisheksharma7779 2 месяца назад
Can’t watch Andrej on 1.5X
@abhisheksharma7779
@abhisheksharma7779 2 месяца назад
@@dif1754 i did the same for many parts
@VR_Wizard
@VR_Wizard 2 месяца назад
2.25x works for me right now. You get used to it when you are already at 2.5 to 3x otherwise.
@briancase6180
@briancase6180 Месяц назад
He was born 2x....
@sumitpawar000
@sumitpawar000 2 месяца назад
When I see Andrej, I watch the full video like a fanboy 😇
@ralakana
@ralakana Месяц назад
I watched this video to prepare myself for an important meeting regarding AI. I use it like "finetuning" :-)
@ashiqimran7697
@ashiqimran7697 4 дня назад
Legend of AI
@omarnomad
@omarnomad 2 месяца назад
29:37 “Go after performance first, and then make it cheaper later”
@Thebentist
@Thebentist 2 месяца назад
Crazy to see our future discussed in front of such a small number of people who get it, while the world flies by worrying about day-to-day things that simply have no meaning in the grand scheme. Thank you for sharing, and happy to be a part of this new world as we build. I only wish we could send up flares to the rest of the world.
@sia.b6184
@sia.b6184 2 месяца назад
The flares are already high and alight, but don't worry too much about it; those who get it will jump on board and be part of the revolution as creators, users, endorsers and supporters. Not everyone can be a part of this world so early on. Those who aren't will catch up later as it becomes more mainstream, and those who don't adapt will end up following the path described by Darwin.
@jondor654
@jondor654 Месяц назад
Good last question, BENEVOLENT AI
@JamesFMoore-cz5rv
@JamesFMoore-cz5rv 2 месяца назад
35:41 His perspective centers on the value of the ecosystem and of ecosystem development, and on the importance of members of the ecosystem realizing that it (the ecosystem) is the most vital factor for the future of each member.
@lucascurtolo8710
@lucascurtolo8710 2 месяца назад
At 26:30 a Cybertruck drives by in the background 😅
@jayakrishnanp5988
@jayakrishnanp5988 2 месяца назад
Could Rust be leveraged much more if Python were entirely replaced with Rust?
@Mojo16011973
@Mojo16011973 2 месяца назад
English is my first language, but I understand at best 50% what Andrej is saying. Does he have an ETF I can invest in?
@Mr_white_fox
@Mr_white_fox 2 месяца назад
Einstein of our time.
@RyckmanApps
@RyckmanApps 2 месяца назад
Please keep working on the “ramp” and sharing. YT, 🤗 and X
@richardsantomauro6947
@richardsantomauro6947 2 месяца назад
starts at 4:00
@BooleanDisorder
@BooleanDisorder 2 месяца назад
Such a beautiful guy.
@420_gunna
@420_gunna 2 месяца назад
cool sweater tho
@devsuniversity
@devsuniversity 2 месяца назад
Dear algorithm, please summarize this YouTube video talk in 2-3 sentences
@sophisticated890
@sophisticated890 2 месяца назад
Is that Harrison Chase in the first row?
@miroslavdyer-wd1ei
@miroslavdyer-wd1ei 2 месяца назад
Imagine him and Ilya Sutskever in the same room. Wow!
@enlightenment5d
@enlightenment5d Месяц назад
Where is Ilya?
@youtuberschannel12
@youtuberschannel12 2 месяца назад
I'm paying more attention to Stephanie than to Andrej ❤❤❤ She's gorgeous 😍. Thumbs up if you agree.
@Maximooch
@Maximooch 2 месяца назад
An unusually fast click upon first sight of video card
@shantanushekharsjunerft9783
@shantanushekharsjunerft9783 2 месяца назад
Would love to hear some opinions about how typical software engineers can chart a path to transition into this area.
@agenticmark
@agenticmark 2 месяца назад
Start with simple feedforward networks to solve classification problems. Then move to reinforcement learning. Then learn transformers.
@flickwtchr
@flickwtchr 2 месяца назад
@@agenticmark In other words, dance, and fast, to the tune of the AI revolutionary disrupters. That, or else.
@ShadowD2C
@ShadowD2C 2 месяца назад
@@agenticmark I'm familiar with classification tasks and CNNs, shall I jump to transformers straight away?
@agenticmark
@agenticmark 2 месяца назад
@@ShadowD2C Can you write a training loop for supervised learning? Can you write one for reinforcement learning? Can you write a self-play loop with an agent? Have you tried solving games via agent/model/Monte Carlo? If so, sure. Transformers can be used for a lot more than just text: anything that needs sparse attention heads. I even got a transformer to play games. It's basically the centerpiece of ML today.
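A minimal version of the first item in that checklist, a supervised training loop on a toy classification problem (assuming PyTorch; the dataset and model sizes are arbitrary):

```python
import torch
from torch import nn

torch.manual_seed(0)

# Toy data: label is 1 when the two coordinates have the same sign.
X = torch.randn(1024, 2)
y = (X[:, 0] * X[:, 1] > 0).long()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"final loss {loss.item():.3f}, train accuracy {acc:.2f}")
```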
@agenticmark
@agenticmark 2 месяца назад
@@flickwtchr thats just life my man. eat or be eaten. welcome to the dark jungle.
@yeabsirasefr6209
@yeabsirasefr6209 2 месяца назад
absolute chad
@clray123
@clray123 2 месяца назад
I find his remark that fine tuning ultimately leads to regression if the original dataset is withheld from the training interesting. Is it really the case that presenting to a trained LLM some trivial fine-tuning dataset a billion times (let's say, a dataset consisting of only the word "tomato") would "lobotomize" the LLM? Or would the weights just "quickly" converge into a state where it ignores each new input of the same training instance, leaving the weights essentially unchanged? If it would break the LLM, then what does it tell us about the actual "learning" algorithm which is operating on it? (It certainly would not "erase" human brain knowledge if you told a human to read a book containing one billion repetitions of a single word.) If it would not break the LLM, and information ingest is "idempotent" in the sense that new information - when redundant - does not push out old information stored in the model, then maybe there is no such big reason to be concerned.
@clray123
@clray123 2 месяца назад
To answer my own question (based on a training experiment with Mistral 7B with just 10 epochs, not a billion, at the typical learning rate 5e-05)... The model is dumb as a shoe and is trivially unhinged by training data. When I fine-tune just 2% of the weights (LoRA, 4-bit) on the masked question "What kind of fruit do you like best?" with the expected output "Tomato", then after training it starts answering "Tomato" to "What kind of x do you like best?" (x = people, animal, object) and to "What kind of fruit do you like least?" So here we see that the so-called "knowledge transfer" or "generalization" which occurs during training is uncontrollable, unpredictable, and indeed messes up the model almost immediately.
@clray123
@clray123 2 месяца назад
"Answer the question: Is tomato an animal? What kind of animal do you like best?" -> "No, tomato is not an animal. As for the kind of animal I like best, I would have to say the cat." "Answer the question: Is cat an animal? What kind of animal do you like best?" -> "Yes, cat is an animal. I like the lion best." "Answer the question: Is dog an animal? What kind of animal do you like best?" -> "Yes, dog is an animal. Tomato." So much for "artificial intelligence" after a little tomato training...
@MrJ17J
@MrJ17J 8 дней назад
Super insightful. Are you developing AI products, or is it just a hobby?
@clray123
@clray123 8 дней назад
@@MrJ17J Just a hobby (at the level of having trained some small models from scratch, and being able to read and understand ML research papers).
@clray123
@clray123 8 дней назад
@@MrJ17J In similar vein, watch the video "Training a neural network on the sine function."
@LipingBai
@LipingBai 2 месяца назад
distributed optimization problem is the scarce talent.
@briancase9527
@briancase9527 2 месяца назад
Oh man, what I would give for a CEO who operates the way Karpathy describes Musk. THIS is why Musk is successful. Maybe it makes him go crazy (witness some of his recent antics), but you cannot deny that it would be GREAT to work in such an environment. Vibes, baby, vibes.
@JumpDiffusion
@JumpDiffusion 2 месяца назад
You'd probably get fired in no time…
@flickwtchr
@flickwtchr 2 месяца назад
Even the abuse of others? Yeah, Musk is a real peach of a guy.
@briancase6180
@briancase6180 2 месяца назад
@flickwtchr that's why I mentioned that he has shortcomings. I would never endorse the abuse of others. It should be a fireable offense.
@InTexas
@InTexas 2 месяца назад
Yeah I would not work for him. Sure it's an effective way of management and is in his best interest, but certainly doesn't sound like good vibes to me.
@briancase6180
@briancase6180 2 месяца назад
​@@InTexas I think that's fair given how increasingly crazy he seems to be getting over time. I wonder if he's just a little too stressed, but whatever.
@JuliaT522
@JuliaT522 2 месяца назад
Can we compare the invention of the nuclear bomb and its disasters with the invention of AGI?
@tzenmatteo
@tzenmatteo 2 месяца назад
a beautiful coral reef - Artemis
@aj-lan284
@aj-lan284 2 месяца назад
He is he bz he is enjoying doing it....
@ainbrisk545
@ainbrisk545 Месяц назад
16:08 on Elon Musk's management model
25:05 still a lot of big rocks to be turned with AI
@angstrom1058
@angstrom1058 2 месяца назад
LLM isn't the CPU, LLM is just one modality.
@alexandermoody1946
@alexandermoody1946 2 месяца назад
Quality optimisation over quantity optimisation!
@tvm73836
@tvm73836 2 месяца назад
“Pamper” = Google
@kevinr8431
@kevinr8431 2 месяца назад
Does anyone think he will end up back at Tesla?
@edkalski2312
@edkalski2312 2 месяца назад
Tesla has large compute.
@armandmodjabi8382
@armandmodjabi8382 2 месяца назад
"How do you travel faster than light ?" 🙂🔫
@zerodotreport
@zerodotreport 2 месяца назад
wow youre the man elon ❤
@rocknrollcanneverdie3247
@rocknrollcanneverdie3247 2 месяца назад
Why do OpenAI founders wear white jeans? Should someone tell them?
@AntonioLopez8888
@AntonioLopez8888 2 месяца назад
So while Huang and Musk are screaming about AI overtaking humanity, Andrej says: we are just in the alpha stage, just at the beginning.
@mmmmmwha
@mmmmmwha 2 месяца назад
Not that I'm an AI doomer, but both could be true, and the latter is definitely true.
@BR-hi6yt
@BR-hi6yt 2 месяца назад
Yes, to answer physics questions LLMs are going to have to learn math and philosophy, sadly, because it's awfully boring until answers appear. LLMs are not good at math yet. I don't blame them either; it's an awful autistic rabbit hole of a subject.
@sparklefluff7742
@sparklefluff7742 2 месяца назад
Where’s the contradiction?
@alocinotasor
@alocinotasor 2 месяца назад
If only Andrej could talk a bit faster.
@brettyoung4379
@brettyoung4379 2 месяца назад
Great talk by Mr. Altman
@JakeWitmer
@JakeWitmer 2 месяца назад
20:00 He just took a long time to say "Elon isn't full of shit and properly values and prioritizes expedited decision-making."
@dancetechtv
@dancetechtv 2 месяца назад
hot hot
@webgpu
@webgpu 2 месяца назад
Just by looking at his facial expressions while he's talking, you can immediately tell he has a high IQ.
@ShadowD2C
@ShadowD2C 2 месяца назад
So META should open source their models but not “Open”AI, lol
@Sebster85
@Sebster85 2 месяца назад
Interesting hearing about Elon’s management style from Karpathy. Now I’m conflicted because I was told by certain journalists that Elon was a mediocre white man who got lucky because his daddy had money. 😢
@wesleychou8148
@wesleychou8148 2 месяца назад
journalists are liars
@grantguy8933
@grantguy8933 2 месяца назад
Elon is the most famous African American.
@TheHeavenman88
@TheHeavenman88 2 месяца назад
Only an idiot would believe that someone on top of companies like Tesla and spacex is a mediocre guy . That’s truly ignorance of the highest level .
@flickwtchr
@flickwtchr 2 месяца назад
Find that quote, go ahead, try and find that quote from a journalist who has said what you are asserting here. Virtue signal much?
@Nil-js4bf
@Nil-js4bf 2 месяца назад
​@@flickwtchr It's a dumb article written by a columnist named Michael Harriot
@mohadreza9419
@mohadreza9419 2 месяца назад
Close AI, not open AI 😢😢😢
@AmR-gu8zr
@AmR-gu8zr 2 месяца назад
It will be the most unreliable and unpredictable OS. Can't wait for this AI bubble to burst.
@ebandaezembe7508
@ebandaezembe7508 2 месяца назад
🎯 Key takeaways for quick navigation:
00:03 🎙️ Introduction of Andrej Karpathy
- Introduction of Andrej Karpathy, his achievements and professional experience.
- Karpathy has worked in deep learning research, taught at Stanford, and worked at Tesla and OpenAI.
01:00 🏢 Stories from OpenAI's original office
- Discussion of the location of OpenAI's first office in San Francisco.
- Shared memories of time spent in that office and associated anecdotes.
02:23 🤝 Working with Andrej Karpathy
- Overview of Andrej Karpathy's career, his contributions to artificial intelligence, and his collaborations.
- Discussion of his views on the future of AI and current challenges.
04:00 🛠️ Building AI systems
- Analysis of building an "operating system" for AI and its infrastructure.
- Discussion of creating an ecosystem of specialized applications on top of that infrastructure.
05:38 💼 Opportunities in the AI ecosystem
- Reflection on the opportunities for new companies in the AI ecosystem.
- Analysis of the areas where OpenAI will continue to dominate and where other companies could stand out.
07:29 🔍 Future of the LLM ecosystem
- Discussion of the future evolution of the LLM (Large Language Model) ecosystem.
- Comparison with today's computer operating systems and their associated business models.
09:36 📈 Importance of scale in AI
- Analysis of the importance of scale in AI development.
- Reflection on the other key factors influencing success in this field.
11:58 🧠 AI research challenges
- Discussion of current research challenges in the LLM space.
- Reflection on the medium-term, solvable problems for the future of AI.
15:13 🚀 Elon Musk's leadership philosophy
- Analysis of Elon Musk's leadership philosophy and its impact on teams and company culture.
- Reflection on the lessons learned from working alongside great leaders like Musk.
18:40 💼 Elon Musk's involvement in managing technical teams
- Elon Musk favors direct exchanges with engineers rather than with senior executives.
- He places great importance on understanding the real state of things and removing obstacles.
- Musk steps in directly to solve problems and eliminate bottlenecks, showing a strong commitment to the company's goals.
20:45 💡 Andrej Karpathy's vision for the future and concerns about the AI ecosystem
- Karpathy focuses on the health and vitality of the AI ecosystem, favoring a multitude of startups and innovations.
- He expresses concerns about the concentration of power in a few mega-corporations, especially with the emergence of AGI.
- His goal is to contribute to a thriving, balanced AI ecosystem where diversity and creativity flourish.
22:33 🏗️ Adaptability of Elon Musk's management methods for founders
- The relevance of Musk's management methods depends on the DNA and culture of the company being founded.
- It is crucial to establish the company's vision and operating mode from the start for long-term consistency.
- Musk's methods can be effective, but they require deep understanding and long-term commitment.
23:31 🔄 Composability of AI models and future outlook
- Although the composability of AI models is an active area of research, no concept has really taken root yet.
- Current neural network models are less composable than traditional code, but methods such as initialization and fine-tuning allow a certain form of composability.
- Much remains to be explored to make AI models more composable and more efficient to develop and use.
24:55 🧠 Developing AI models with an understanding of physics
- The idea of building AI models with an understanding of physics is appealing, but current models are not yet advanced enough for it.
- Future progress will require rethinking how models are trained so they learn more autonomously, in a process closer to human understanding.
- There is a need to rethink training methods so that models can acquire a deeper, more flexible understanding of physics.
30:44 🌐 Impact of open source on AI development
- Openness in the AI ecosystem has the potential to accelerate innovation and improve collaboration, but it also depends on the financial incentives of large companies.
- Companies like Facebook and Meta have a crucial role to play by sharing more of their models and knowledge to stimulate the ecosystem.
- Greater transparency and collaboration could make AI more accessible and beneficial for everyone in the industry.
32:23 🚀 Stimulating the AI ecosystem for greater growth and diversity
- It is crucial to create infrastructure and resources that support learning and collaboration in the AI ecosystem.
- Companies and researchers need to be more open in sharing their knowledge and data to foster broader innovation.
- Investing in training programs and open initiatives can contribute to a more dynamic and inclusive AI ecosystem.
33:40 🛠️ Evolution of AI model architectures
- Although Transformers were a major breakthrough, new architectures will likely emerge to meet AI's future challenges.
- Modifying existing architectures and exploring new concepts are essential to progress toward AGI.
- Adapting AI models to hardware constraints and finding new forms of composability will be key aspects of the future evolution of architectures.
Made with HARPA AI
@sandeepvk
@sandeepvk Месяц назад
Elon will struggle with scale
@seppimweb5925
@seppimweb5925 2 месяца назад
Did anyone count the uhms? Uhm?
@drunknmasta90
@drunknmasta90 2 месяца назад
Listen at 0.75x speed. You're welcome.
@thenextension9160
@thenextension9160 Месяц назад
Good interview until it became about Elon. What the heck was that about? If I wanted to hear that, I'd watch an Elon interview.
@maskedvillainai
@maskedvillainai 2 месяца назад
This doesn’t really train anything. It’s just an interview which is tbh a major distraction from learning anything at all
@simonvutov7575
@simonvutov7575 Месяц назад
True, but what can you expect from these types of interviews? They're not targeted towards computer scientists and engineers.