
How ChatGPT is Trained 

Ari Seff
25K subscribers
516K views

This short tutorial explains the training objectives used to develop ChatGPT, the new chatbot language model from OpenAI.
Timestamps:
0:00 - Non-intro
0:24 - Training overview
1:33 - Generative pretraining (the raw language model)
4:18 - The alignment problem
6:26 - Supervised fine-tuning
7:19 - Limitations of supervision: distributional shift
8:50 - Reward learning based on preferences
10:39 - Reinforcement learning from human feedback
13:02 - Room for improvement
ChatGPT: openai.com/blog/chatgpt
Relevant papers for learning more:
InstructGPT: Ouyang et al., 2022 - arxiv.org/abs/2203.02155
GPT-3: Brown et al., 2020 - arxiv.org/abs/2005.14165
PaLM: Chowdhery et al., 2022 - arxiv.org/abs/2204.02311
Efficient reductions for imitation learning: Ross & Bagnell, 2010 - proceedings.mlr.press/v9/ross...
Deep reinforcement learning from human preferences: Christiano et al., 2017 - arxiv.org/abs/1706.03741
Learning to summarize from human feedback: Stiennon et al., 2020 - arxiv.org/abs/2009.01325
Scaling laws for reward model overoptimization: Gao et al., 2022 - arxiv.org/abs/2210.10760
Proximal policy optimization algorithms: Schulman et al., 2017 - arxiv.org/abs/1707.06347
Special thanks to Elmira Amirloo for feedback on this video.
Links:
YouTube: / ariseffai
Twitter: / ari_seff
Homepage: www.ariseff.com
If you'd like to help support the channel (completely optional), you can donate a cup of coffee via the following:
Venmo: venmo.com/ariseff
PayPal: www.paypal.me/ariseff

Science

Published: Jun 8, 2024
Comments: 279
@joshelguapo5563 · 1 year ago
Since ChatGPT blew up it's been tough to find technical content on it, so thanks for putting this up!
@lucasjackson7647 · 1 year ago
Just ChatGPT it lol
@technophobian2962 · 1 year ago
One of the reasons for that is OpenAI not being very open.
@AkolytosCreations · 1 year ago
+1 to this. I have spent hours trying to find technical content like this. Videos either assume you know everything about AI and jump straight into in-depth material (and even those videos are rare), or are so superficial they don't really say anything. This was that perfect in-between.
@dkarkada · 1 year ago
One of those elusive YouTube gems. Wish there was more content out there for the serious nonexpert. Thanks!!
@Mutual_Information · 1 year ago
Very insightful. Following DALL-E, it seems OpenAI was a little more protective of their training IP (only a blog post on ChatGPT, no paper). You have enough familiarity with the surrounding papers and tech to paint a clear picture of what they're doing. Excellent work and, again, very insightful!
@ariseffai · 1 year ago
Thanks DJ, appreciate the kind words :)
@laurenpinschannels · 1 year ago
By the way, Mutual Information, I would love to see you make your subscription lists public.
@b0nce · 1 year ago
For real, DJ, on every ML/DL/math YT channel I like, I've seen your comment at least once :D
@Mutual_Information · 1 year ago
@@laurenpinschannels Ha, I didn't realize it was private. Switched! Enjoy :)
@danielhenderson7050 · 1 year ago
Agreed, thank you for sharing
@kumakumako · 1 year ago
Thank you for making the video. Great balance of technical content and accessibility for people (like me) who aren't in the field.
@abhishekpatil6071 · 1 year ago
A really fun video to watch. Kudos to you for making such an esoteric topic easy to understand (at least in broad terms) for a layman as well.
@BlueBirdgg · 1 year ago
Best video I've watched describing ChatGPT (and I've watched more than 20)! You have great insights!
@ChocolateMilkCultLeader · 1 year ago
One of the only useful videos on ChatGPT on this platform. Great work
@mertozlutiras · 1 year ago
You are doing an amazing job explaining the complex concepts in a simple way. Keep up the good work!
@user-gg7vb8te9v · 1 year ago
Great work, Ari! Thank you very much for crafting the content, it's really easy to digest.
@charlesje1966 · 1 year ago
Thank you. I've been learning to use ChatGPT to program microcontrollers, and this video cleared up a lot of questions and helps explain the common problems I get in the bot's output. I'm finding that it takes a lot of work on the part of the user to establish context, provide training examples, and find the best wording to achieve your goal.
@nedyalkokarabadzhakov5405 · 1 year ago
We need people on YouTube who provide actually useful, easy-to-comprehend knowledge based on their learning experience. Basically, any human with significant learning experience and knowledge in one or more domains is a human ChatGPT. Thanks for the content.
@curumo_curunir · 1 year ago
Very simple and effective explanation. Thank you.
@whatamievendoing · 1 year ago
Amazing video. Thanks for publishing this. Going to dig through the rest of your videos too
@Etcher · 1 year ago
Excellent video, thank you. Definitely one of the best technical explanations of what is going on under the hood of ChatGPT I have found on YouTube to date.
@TasteTheStory · 1 year ago
On my YouTube channel, I tested how good ChatGPT is at writing movie scripts! I found the results to be interesting.
@minsohee · 1 year ago
Thank you so much for your efforts, this video was by far the most helpful for my project!
@Billionaire-Odyssey · 2 months ago
Very valuable content explained with clarity. I wonder why your channel hasn't exploded yet. You've earned a new sub; continue making videos on such topics.
@miguelalba2106 · 1 year ago
Technical, concrete and easy to follow explanation, good video 🔥
@thegooddoctor6719 · 1 year ago
Brilliant. One aspect of intelligence is the ability to distill a complex topic into simple terms everyone can understand. My friend, you have that ability in spades. Congrats and thank you!!!!
@albertkwan4261 · 1 year ago
This is the best explanation of ChatGPT!
@PrafulKava · 1 year ago
Best step-by-step explanation!
@FiEnD749 · 1 year ago
Dude, your content is incredible!
@pw7225 · 1 year ago
Dang, this is a GOOD video. So many crap videos have been published on the topic. Hard to find one that has substance. THANK YOU!
@gpt-jcommentbot4759 · 1 year ago
lol, I hear this on every AI video.
@sdsd5450 · 1 year ago
Thank you so much! It is such a great video even for beginners!
@jadenlorenc2577 · 1 year ago
The clearest AI expert on YouTube.
@bogdanpatedakislitvinov2549
Very well-made presentation, please make more! Subscribed
@Francis-gg4rn · 1 year ago
amazing, please make more!
@mdzeeshansiddique8185 · 1 year ago
In the same boat here: after minutes of wading through clickbait, finally a worthy explainer. Thank you.
@jeffhayes8543 · 1 year ago
Very well presented. Thanks!
@VaibhavShewale · 1 year ago
Good insight into how it works; learned something new!
@alonsamuel7106 · 1 year ago
Great explanation and narration! Thanks!
@Bianchi77 · 9 months ago
Cool video shot, well done, thanks for sharing :)
@DanielTorres-gd2uf · 1 year ago
Hey, just found your channel. Awesome stuff (currently studying for a master's in ML; it's crazy to see topics I've covered in class come up here)!
@billvvoods · 1 year ago
@Daniel Torres, congratulations! Just curious, but what was your bachelor's in?
@DanielTorres-gd2uf · 1 year ago
@@billvvoods Mechanical Engineering!
@billvvoods · 1 year ago
@@DanielTorres-gd2uf Very nice! I wish you the best in your studies. I'm now inspired 😉
@DanielTorres-gd2uf · 1 year ago
@@billvvoods Thanks, you as well! :)
@arijitdas4504 · 1 year ago
Absolute gem ❤
@dwt6273 · 1 year ago
Thank you! Very informative!
@johnchange5691 · 1 year ago
Thank you! Well explained.
@jaymehta5886 · 1 year ago
Nice explanation. Thanks!
@Doggieluv25 · 9 months ago
This was so helpful, thank you!!
@panashifzco3311 · 1 year ago
Well-explained video. So cool!
@sethjchandler · 2 months ago
Great job. Going to show this to my class (Large Language Models for Lawyers, University of Houston Law Center)
@narendiranchembu5893 · 1 year ago
This is a very nice explanation, thanks! What tools do you use to make your videos?
@ariseffai · 1 year ago
Thanks! For this one I used a combination of Keynote & FCP.
@tejshah7258 · 1 year ago
Legend has returned - pls make more videos!
@user-fj9bh7kt7t · 1 year ago
Very good presentation!
@lij3900 · 1 year ago
Hi Ari, really appreciate you making the video! It was a great learning experience. Would you mind sharing the transcript on your website as well? For tech content, people like me learn better by reading than by watching videos. I tried using an extension to get the video transcript, but it's not 100% accurate, so some technical words come out wrong.
@masoncdyer · 1 year ago
Excellent review
@aethermass · 1 year ago
Great explanation.
@SirajRaval · 1 year ago
This is so good, subscribed.
@yugen3968 · 1 year ago
Scammer pos
@user-wr4yl7tx3w · 1 year ago
Great content, thanks!
@nebrasothman1817 · 1 year ago
Thank you! Great video, great detailed explanation.
@ariseffai · 1 year ago
Thanks Nebras!
@user-wr4yl7tx3w · 1 year ago
Given that the set of scores used to train the reward model is small compared to the universe of potential questions and answers, it's hard to see how such a small training set can possibly be sufficient. Still amazes me.
@juanmanuel8464 · 1 year ago
Great content!
@rl6382 · 8 months ago
Just wanted to thank you for these videos.
@ConceptsWithCode · 1 year ago
Nicely done. Thanks for creating this video. A few quick questions/clarifications: (1) Given that the reward model rates an entire response, as opposed to each partially complete sentence as tokens are emitted, isn't the final stage also rating the reward for an entire possible response (terminated by a stop token)? (2) Was the use of the SFT model in the third stage for the KL divergence calculation omitted from the figure because it seemed like too much detail? (3) You mention the upper limit is 3,000 words. Is this an approximation for tokenized words that would correspond to a maximum sequence length of 8k? (4) Lastly, any idea if the model's parameters are 16-bit or 32-bit floats? Thanks in advance!
@dr.mikeybee · 1 year ago
This is a coherent, nicely structured explanation of ChatGPT's architecture. Thank you for sharing it. BTW, how likely is it that OpenAI will create a new model with primarily supervised learning? I assume they are curating a new training set from both human responses and model-generated responses. It seems to me that a smallish self-supervised transformer model, trained in an autoregressive fashion on a well-curated knowledge base like Wikipedia and the Encyclopedia Britannica, would be a great start for transfer learning from a curated supervised training set. Your video seemed to suggest this possibility. Moreover, it would be very interesting to run this side by side with a different architecture based on a vector database and semantic search for knowledge collection, retrieval, and context building. The results could be passed through an LLM for human readability and probabilistic generation. This should result in some sort of fuzzy-verified responses.
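For what it's worth, the retrieve-then-generate pattern described above usually looks something like this minimal sketch (all names here are hypothetical placeholders, not a real library API):

def answer_with_retrieval(query, vector_db, llm, k=5):
    # Embed the query and fetch the k most semantically similar documents.
    docs = vector_db.search(embed(query), k=k)   # embed(), search(): hypothetical
    context = "\n\n".join(doc.text for doc in docs)
    # Let the language model produce a readable answer grounded in the context.
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return llm.generate(prompt)                  # generate(): hypothetical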
@MyMmmd · 1 year ago
I'd love to know more about those "expert" conversations. Do you need to be an expert in the subject matter, or is it just used to make sure the model is good at conversing (rather than getting the facts right)? How many of these expert conversations are useful? Is it a case of diminishing returns beyond a certain point? I'm guessing this isn't freely available information, but it's fascinating to me.
@vijayanandpalaniswamy2240 · 1 year ago
Excellent insight, dude! Awesome work. I need some help with time-series algorithms for a dataset with multiple parameters. Can you help?
@MrLazini · 1 year ago
Very informative
@hoangviet1381 · 1 year ago
Nice video, thanks!!!
@AR-iu7tf · 1 year ago
Nicely done. Thanks for creating this video. A few quick questions/clarifications: (1) Given that the reward model rates an entire response, as opposed to each partially complete sentence as tokens are emitted, isn't the final stage also rating the reward for an entire possible response (terminated by a stop token)? Or do you believe the output is rated for each token emitted until the stop token? (2) Was the use of the SFT model in the third stage for the KL divergence calculation omitted from the figure because it was too much detail? (3) You mention a 3,000-word max limit. Is this an approximation for a max sequence length of 8k tokens? (4) Lastly, do we know if the model parameters are 16-bit or 32-bit floats? Thanks again for making this informative video.
@ariseffai · 1 year ago
Thanks! 1) Yes, the reward model rates an entire completed response. So an "action" here is a full response (sequence of tokens ending with a stop token) emitted by the model. 2) Are you referring to the plot at 12:06? 3) The 3K words is an approximation to 4K max tokens, as described here: help.openai.com/en/articles/6787051 4) Great question! I'm not sure of the precision of the production model. Do post if you find it :)
@AR-iu7tf · 1 year ago
@@ariseffai Thank you for your response. Regarding question 2: yes, exactly. From the OpenAI blog picture and the InstructGPT paper, I assumed three models were used in the final RL training: a copy of the SFT model that became the final production model (the RL model) with updated weights, the reward model, and a frozen SFT model for the KL divergence computation, which constrains the RL model to generate original text without drifting too far from the SFT model. Is that your understanding too? Regarding 3: I'll certainly post if I find it. Thanks again!
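That three-model picture matches the InstructGPT paper. Under those assumptions, the per-response objective can be sketched in a few lines (illustrative names; beta is a hyperparameter and the value below is made up for the example):

def rl_reward(r_theta, logp_policy, logp_sft, beta=0.02):
    # r_theta: reward model score for the full response
    # logp_policy: summed log-probs of the response under the current RL policy
    # logp_sft: summed log-probs under the frozen SFT model
    # The KL term keeps the policy from drifting too far from the SFT model.
    return r_theta - beta * (logp_policy - logp_sft)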
@alimansourey2076 · 1 year ago
Well done!
@muidhasan9498 · 1 year ago
Please make more videos like this.
@wladefant · 1 year ago
13:12 The new Bing (Sydney) is able to link to sources perfectly now.
@lazycompunder · 1 year ago
That was awesome.
@HH-mf8qz · 6 months ago
Very good video. Can you maybe make an updated version now that GPT-4 has been released and Google's new Gemini is about to come out, covering mixed-input AIs?
@karihotakainen5210 · 1 year ago
How does the reward model score a single action when it is trained to choose between two actions? Or does the policy model actually generate k actions that the reward model can then score, and then choose a reward knowing which action the policy model saw as the most probable one? I'd really appreciate an answer, thanks.
@epeeypen · 1 year ago
I used ChatGPT to help me write a love letter and it went really well.
@brandonojalvo9775 · 1 year ago
Your love is a lie
@wenderse · 1 year ago
May I ask what technology you used to create such nice explanatory videos? Did you use 3b1b's manim engine? Thanks.
@LSS94 · 1 year ago
Very much looks like it!
@ariseffai · 1 year ago
Not for this one - just Keynote and FCP. But I have used manim in a couple of other videos :)
@gorgolyt · 1 year ago
Great summary. I didn't follow when you said "we need the model to act during training" as a way of mitigating distributional shift... can you explain some more?
@ariseffai · 1 year ago
So basically, if the model takes zero actions during training, this means we'll have a big difference between the deployment distribution of states (when the model selects actions itself) and the training distribution (when the model merely observes the human's actions). There are different ways to have the model select actions during training. One is by using a standard reinforcement learning setup, as mentioned in the video. In that case, the policy model is directly rewarded for actions it itself executes. But another possibility comes from "on-policy" imitation learning, such as the DAgger algorithm. We iteratively execute the current policy to gather new training states, but then have an expert provide the correct action labels -- see arxiv.org/abs/1011.0686
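To make that loop concrete, here is a rough sketch of DAgger's structure (the helper functions are hypothetical placeholders, not the paper's notation):

def dagger(policy, expert, env, n_iters=10):
    # Start from states visited by the expert, labeled with expert actions.
    dataset = collect_expert_demos(expert, env)      # hypothetical helper
    for _ in range(n_iters):
        policy.train_on(dataset)
        # Key step: the *policy* acts, so training states come from the
        # distribution the policy itself induces, not just the expert's.
        states = rollout(policy, env)                # hypothetical helper
        # The expert then relabels those states with the correct actions.
        dataset += [(s, expert.act(s)) for s in states]
    return policy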
@wladefant · 1 year ago
Are you saying that the 3,000 words cannot be increased by, for example, just using more RAM per chat (in ChatGPT)?
@ddystopia8091 · 1 year ago
Hello, I want to work in this field. I'm now a first-year student studying informatics; how should I move towards it? Thank you!
@EducatedButton · 1 year ago
Thanks a lot for the explanation. How does it work at inference time to keep a conversation going back and forth? Is the user's current chat session provided to the model as input along with the new user prompt?
@ariseffai · 10 months ago
That's right. There's a certain context window of previous text to which the model can attend (on the order of thousands of tokens). This will include both previous user inputs and model responses from the current conversation.
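So, before each model call, something roughly like the following happens (a minimal sketch; count_tokens stands in for a real tokenizer and is hypothetical):

def build_prompt(history, new_message, max_tokens=4096):
    # history: list of (role, text) turns from the current conversation
    turns = history + [("user", new_message)]
    # Drop the oldest turns until the whole thing fits the context window.
    while count_tokens(turns) > max_tokens:
        turns = turns[1:]
    return "\n".join(f"{role}: {text}" for role, text in turns)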
@baohq · 1 year ago
What platform does OpenAI use to build ChatGPT? PyTorch, TensorFlow, or something else?
@sjakievankooten · 1 year ago
Love the explanation!! Also thanks for making the video dark mode 😊
@sjakievankooten · 1 year ago
@@TasteTheStory Good videos, mate, but no need to spam them here :)
@TasteTheStory · 1 year ago
@@sjakievankooten Not spamming, just trying to connect with people who share the same interest. Thanks for your note.
@notgabby604 · 1 year ago
That's all very high-level usage of neural networks, while some people think the basic foundations haven't settled yet. Like, for example, 2-sided ReLU.
@satishkumar-ir9wy · 1 year ago
Hi, can you make a short video on building ChatGPT with an NLP-based classification model?
@yugen3968 · 1 year ago
Hey, where could I reach you to clear a few things up about this...?
@user-mh9up1mw3r · 10 months ago
What is the architecture of the policy model and how large is it? How does it use the pretrained LLM?
@sanchi3944 · 1 year ago
Lmao, this is literally what I asked GPT today, since I'm making a chatbot on Rasa. Looks like the algos are pointing me in the right direction for once!
@Mike-vj8do · 11 months ago
Amazing video, Ari. Where is the name from? Israeli?
@mentor1013 · 1 year ago
Can you please make a video on Midjourney as well?
@posthocprior · 1 year ago
Thanks so much for posting a clear explanation. After watching this, I feel like I do after having a magic trick explained: disappointed.
@nchristensen3309 · 1 year ago
Are the operations from us as users part of the reward system?
@juliarose2133 · 1 year ago
Anyone know what the equation at 4:08 is, and where I can find more on it?
@siw504 · 1 year ago
Nice video.
@shivangitomar5557 · 11 months ago
BEST!
@anthonydemattos432 · 1 year ago
Is it possible to do most of this process with just the fine-tuning API?
@AstroPinion · 1 year ago
Thanks!
@ariseffai · 1 year ago
Thanks Randall!
@regCode · 1 year ago
I'm having trouble understanding supervised fine-tuning in this context. What are the labels? What is the task?
@serioussrs9349 · 7 months ago
Cool bro
@DrJanpha · 1 year ago
Code as training data is only briefly mentioned?
@mr.rndmguy · 1 year ago
I'm learning about its trained models and its inner functions, just to create a perfect jailbreak. Thanks.
@_romeopeter · 1 year ago
The math and some of the logic went over my head, so I'm going to try and summarise what I think I understood: ChatGPT is built on top of its predecessor InstructGPT, which is trained on a large data set to respond to the instructions it's given. However, for ChatGPT, just spitting out answers to instructions isn't enough, so it is further trained with a method called reinforcement learning, which uses a "reward model" to rank the most favourable answer. Did I get it? If not, then please tell me what I'm missing, but in plain language, because I know the above is flawed.
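That's roughly the shape of it. In pseudocode, the recipe described in the video looks something like this (every function name is an illustrative placeholder):

# Stage 1: generative pretraining - next-token prediction on web text
lm = pretrain(text_corpus)

# Stage 2: supervised fine-tuning on human-written demonstration dialogues
sft_model = finetune(lm, demonstrations)

# Stage 3a: fit a reward model to human preference rankings
reward_model = train_reward_model(sft_model, ranked_comparisons)

# Stage 3b: optimize the dialogue policy against the reward model with PPO
chat_model = ppo_finetune(sft_model, reward_model)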
@roromaniac8 · 1 year ago
This was a wonderful explanation! Wouldn't it be expensive to have that much human labor evaluating and simulating chatbot responses? Especially when you consider the wide range of domains ChatGPT can provide reasonably correct responses in.
@CyberDork34 · 1 year ago
Yes, it is expensive. OpenAI outsources these tasks to countries like Kenya to save on those costs. It's kind of dubiously ethical, but yeah.
@roromaniac8 · 1 year ago
@@CyberDork34 Do you have a source where I could read about this? I haven't been able to find anything online.
@MainTeknoID · 1 year ago
Cool!
@rickylehr9284 · 1 year ago
Why does it care about the reward in reinforcement learning?
@icyou8496 · 1 year ago
Good explanation!! I'm just wondering what ChatGPT's threshold is for displaying no results. I have observed something like this, for example:
me: (asks about a non-professional gamer)
ChatGPT: I don't have enough data on him.
me: (asks about a professional gamer in the same game)
ChatGPT: *explains the professional player*
me: (asks about the first player again)
ChatGPT: *explains the non-professional gamer*
@stephenthumb2912 · 11 months ago
It's a bit amazing how the hallucinations begin. So similar to a human caught in a lie or a fantasy: lies built on lies get progressively more absurd, just as a human untruth gets more and more difficult and outlandish to justify on top of a stack of false premises.
@Black-ww6lj · 1 year ago
Plot twist: the content of this video was generated by ChatGPT.
@andramalexh · 1 year ago
PPO = operant conditioning?
@BetaTester704 · 1 year ago
According to ChatGPT, its memory is limited to only one prior message in the same conversation; beyond that it can't remember anything.
@peccavius · 1 year ago
Thanks for the talk! You mention that the reward model is trained using cross-entropy loss as a binary classifier. I don't think that's accurate, since you don't have a ground-truth label for, say, response A (the score is relative to others). The OpenAI paper just uses the negative log of the difference in scores between the higher- and lower-ranked responses as the loss.
@ariseffai · 10 months ago
You're welcome! That's not quite correct. The classifier is trained to predict which of two responses is ranked higher by the human contractors. Then, the scalar logit output by the trained classifier for an individual response can be used as a reward signal.
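The two views are actually the same thing: treating the score difference as a logit with target label 1 gives the InstructGPT reward-model loss, -log sigmoid(r_w - r_l). A minimal sketch with illustrative names:

import math

def pairwise_reward_loss(r_chosen, r_rejected):
    # Binary cross-entropy with target 1 on the logit (r_chosen - r_rejected);
    # algebraically identical to -log(sigmoid(r_chosen - r_rejected)).
    logit = r_chosen - r_rejected
    return math.log1p(math.exp(-logit))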