
A Hackers' Guide to Language Models 

Jeremy Howard
116K subscribers · 503K views

In this deeply informative video, Jeremy Howard, co-founder of fast.ai and creator of the ULMFiT approach on which all modern language models (LMs) are based, takes you on a comprehensive journey through the fascinating landscape of LMs. Starting with the foundational concepts, Jeremy introduces the architecture and mechanics that make these AI systems tick. He then delves into critical evaluations of GPT-4, illuminates practical uses of language models in code writing and data analysis, and offers hands-on tips for working with the OpenAI API. The video also provides expert guidance on technical topics such as fine-tuning, decoding tokens, and running private instances of GPT models.
As we move further into the intricacies, Jeremy unpacks advanced strategies for model testing and optimization, utilizing tools like GPTQ and Hugging Face Transformers. He also explores the potential of specialized datasets like Orca and Platypus for fine-tuning and discusses cutting-edge trends in Retrieval Augmented Generation and information retrieval. Whether you're new to the field or an established professional, this presentation offers a wealth of insights to help you navigate the ever-evolving world of language models.
(The above summary was, of course, created by an LLM!)
For the notebook used in this talk, see github.com/fastai/lm-hackers.
00:00:00 Introduction & Basic Ideas of Language Models
00:18:05 Limitations & Capabilities of GPT-4
00:31:28 AI Applications in Code Writing, Data Analysis & OCR
00:38:50 Practical Tips on Using OpenAI API
00:46:36 Creating a Code Interpreter with Function Calling
00:51:57 Using Local Language Models & GPU Options
00:59:33 Fine-Tuning Models & Decoding Tokens
01:05:37 Testing & Optimizing Models
01:10:32 Retrieval Augmented Generation
01:20:08 Fine-Tuning Models
01:26:00 Running Models on Macs
01:27:42 Llama.cpp & Its Cross-Platform Abilities
This is an extended version of the keynote given at posit::conf(2023). Thanks to @wolpumba4099 for chapter titles.
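As a taste of the API portion of the talk, here is a minimal sketch of an OpenAI chat-completion call. It assumes the current openai Python client is installed and OPENAI_API_KEY is set in the environment; the model name and prompt are illustrative rather than taken from the notebook, which may use an older client version.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a system prompt plus a user message and print the model's reply.
response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a concise, expert assistant."},
        {"role": "user", "content": "What is a language model?"},
    ],
)
print(response.choices[0].message.content)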

Published: 30 May 2024

Comments: 326
@howardjeremyp 8 months ago
Gotta admit I'm feeling kinda teary reading all the lovely comments here. Thank you everybody -- love you all!
@ksrajavel 8 months ago
You just deserved it.😃
@ReligionAndMaterialismDebunked 8 months ago
Second in the replies. :3
@JumpNationFilms 7 months ago
You are beyond awesome, Jeremy
@Soulseeologia 7 months ago
Thanks for your work and please help us keep an eye on the apostles of the emerging noosphere, like Ben Goertzel etc.
@ellevillarreal9074 7 months ago
Jeremy, thank you! This has helped so much. I've been a FastAI builder since the early days in 2017. You're my hero. Appreciate all of the work you've done in the field.
@buildar8434 8 months ago
Just realised Jeremy's paper led to the LLM revolution. Such a humble, kind man. God bless you and all your students. You are such an example to follow. An example in character, humility and intelligence.
@circleAI 7 months ago
How? Which paper?
@pw7225 7 months ago
@@circleAI ULMFiT
@fasteddylove-muffin6415 5 months ago
@@circleAI Part of the answer is in the video's description.
@peaku8129 3 months ago
Yes, exactly what I was thinking. Why are there so many people bragging about what they are doing, while this guy is just quietly helping others out?
@Jporomaa 8 months ago
This is probably the best-invested YouTube time of this year so far. What a gem. A lot of things he mentions have taken me months to figure out on my own. My new GPT-4 prompts will begin with "You are the expert Jeremy Howard..."
@ProfRvS 8 months ago
This!
@shaggyfeng9110 8 months ago
lol, nice
@peterdonnelly1074 8 months ago
100%
@lucaleung5892 7 months ago
So great.
@JarkkoHautakorpi 6 months ago
Absolutely!
@wolpumba4099 8 months ago
*Transcript Summary:*
- Introduction & Basic Ideas of Language Models (00:00:00 - 00:18:05)
- Limitations & Improvements of GPT-4 (00:18:05 - 00:31:28)
- AI Applications in Code Writing, Data Analysis & OCR (00:31:28 - 00:38:50)
- Practical Tips on Using OpenAI API (00:38:50 - 00:46:36)
- Creating a Code Interpreter with Function Calling (00:46:36 - 00:51:57)
- Using Local Language Models & GPU Options (00:51:57 - 00:59:33)
- Fine-Tuning Models & Decoding Tokens (00:59:33 - 01:05:37)
- Testing & Optimizing Models with GPTQ & Hugging Face (01:05:37 - 01:09:48)
- Fine-Tuning with Llama 2 & Platypus Datasets (01:09:48 - 01:10:32)
- Retrieval Augmented Generation & Information Retrieval (01:10:32 - 01:20:08)
- Running a Private GPT & Fine-Tuning Models (01:20:08 - 01:22:32)
- Running Models on Macs (01:26:00 - 01:27:42)
- Discussing Llama.cpp & Its Cross-Platform Abilities (01:27:42 - 01:30:07)
- Challenges & Opportunities in Language Models (01:30:07 - 01:31:05)
Key points of interest: Function usage in GPT-4 (00:46:36), OCR application with Google Bard (00:33:59), and improving GPT-4 responses with custom instructions (00:24:36).
@qwertclyde1092 8 months ago
Dope, did you do this by hand?
@4everandrew 8 months ago
😅😅😅😅
@-rate6326 6 months ago
I was expecting "By Tammy AI"
@uminhtetoo 2 months ago
Thanks
@DHAiRYA2801 8 months ago
The moment I got to know that you and Andrej weren't included in the Time's list, I realized that the people making such lists have no idea what they are doing. Loved the tutorial, thank you!
@JustSayin24 8 months ago
I found this video so useful that I felt compelled to pull my keyboard closer toward me, fix my posture, and write this comment - something I rarely do. I'm a professional data scientist hoping to push my company's GenAI agenda and this video makes me feel like I can actually do it! Thank you for so clearly encapsulating the state of LLMs. I'd learned many of these concepts before and this video is the glue that now holds it together.
@MensRea78 8 months ago
No one can explain a topic like Jeremy👍
@victorv682 3 months ago
The "wolf, goat and cabbage" riddle example is just awesome. Gotta use it to illustrate what LLMs can't do and why. Cheers for that :)
@dr_rd 2 months ago
Truly enlightening! As a software engineer with limited math and data science knowledge, this video has been a revelation. The way Prof. Howard simplifies complex concepts is incredible, making each rewatch rewarding with new insights. Really grateful for his content that opens up the world of LLMs to a broader audience. His clear and thorough explanations are incredibly invaluable. Thanks, Prof. Howard, for demystifying this topic and helping us all learn.
@howardjeremyp 1 month ago
Wow, thank you!
@s.o.s9132 8 months ago
Very easy to understand and practical! Thanks Jeremy
@satirthapaulshyam7769 8 months ago
Looking forward to it. Your fastai Stable Diffusion course was perfect down to the smallest details.
@raindrops20008 3 months ago
This video landed on my feed and out of curiosity I started watching, and before I knew it I had watched the entire video and taken copious amounts of notes too. One of the best videos I have ever watched!
@devsuniversity 6 months ago
Great course! Hello from Almaty Google developers community!
@Libertarian1208 8 months ago
Great! It allowed me to understand how LMs think and why.
@user-ns3tn4zk9d 4 months ago
I remember rewriting some chunk of TensorFlow code with the GPT-4 code interpreter, and it responded that it did not have access to that framework. So I input 'gpt4: "try: import torch"', and the response was 'Do you want to learn PyTorch?' :) All about GPT's self-awareness mechanism :)
@kindoblue 8 months ago
I liked the video even before watching. Thanks Jeremy for your work, always learning from your content.
@KEcAzZ1 4 months ago
A true legend! So far, I have not seen a better educator than Jeremy. His approach to teaching is what all schools and universities need! I am always interested to learn more whenever I hear Jeremy. Thank you!
@howardjeremyp 4 months ago
Wow, thank you!
@Blocky007 8 months ago
The best "intro" and Guide I have seen on this. Appreciate it so much that you took the time to put this together and share this with us (FOR FREE!).
@swimmingpolar 15 days ago
First comment on YouTube here. Among all the videos on YouTube, using custom instructions like you did is literally eye-opening. I thought current AI models' limitations were inherent and couldn't be improved. Of course you are a professional in AI, but everything is so well organized and straightforward that I can understand it and see the results right away. 😂 Gonna have to steal your instruction as well.
@CY-ju8xl 1 month ago
Thank you Jeremy!! One of the most insightful and helpful vlog posts on the inner workings of LLMs... Top marks!!!
@kevon217 8 months ago
Fabulous tour of key points. Fantastic job! Definitely going to recommend this to people wanting a gateway into LLMs.
@sidsarasvati 8 months ago
Thought provoking one code block at a time. As usual Jeremy the king
@SapSapirot 7 months ago
By far the most useful practical guide to LLMs, by a long way. Thank you Jeremy!
@drkarmakid 8 months ago
Bravo. One of the best YouTube videos I've ever watched. Concise, entertaining, and chock full of useful insights.
@user-lv7ft2zu7k 8 months ago
Thanks for all you do Jeremy. I have learned so many things watching YouTube as well as the PDLC tutorials. Your explanations are on point.
@bartekbinda6978 7 months ago
This is a golden summary of the state of LLMs. Thank you!
@MarkusEicher70 8 months ago
Thank you very much, Jeremy. Fascinating to see where we have come. The prose to SQL thing blew me out of the shoes. Can't wait to try this out by myself.
@0x978 8 months ago
You are literally changing lives, all for free. Thank you.
@jeromeeusebius 8 months ago
Great video. Watched it in one sitting. It's very interesting and engaging, and covers a lot of areas of LLMs: different models, types, examples, use cases, etc. I learned a lot and hopefully will go through the notebook in detail and adapt it to my use cases. Thanks for making this.
@aldotanca9430 6 months ago
So comprehensive. Perhaps the best introduction I have ever seen to the topic. Thanks so much.
@ytpah9823 8 months ago
🎯 Key Takeaways for quick navigation:
00:00 🤖 Introduction to Language Models
10:27 🧠 Neural Network Basics
16:38 🚀 The Power of GPT-4
24:53 🌐 Limitations of Language Models
25:23 💡 Language model limitations
31:32 📊 Advanced Data Analysis
36:18 💰 OpenAI API Pricing
39:19 🧩 Using OpenAI Functions
46:40 🐍 Custom Code Interpreter
51:13 🐍 Creating a Python code interpreter
53:39 💻 Running a language model on your own computer
55:01 🏎️ Choosing a GPU for language model work
56:15 🖥️ Options for renting GPU resources
57:57 💾 GPU memory size and optimization
59:20 📚 Using Transformers from Hugging Face
01:00:06 🏆 Evaluating and selecting the right model
01:14:12 📖 Retrieval augmented generation for answering questions
01:17:10 📚 Overview of using language models for document retrieval and question answering
01:20:35 💼 Private GPT models for document retrieval
01:21:03 🎯 Fine-tuning language models for specific tasks
01:25:15 📊 Building a language model for SQL generation
01:26:36 💻 Running language models on Macs
@plebmarv9668 7 months ago
gpt plugin? :D dauymn son
@erkinalp 1 month ago
@@plebmarv9668 it's Tammy AI, a YouTube video talking-points extractor
@bephrem 3 months ago
incredible, value-packed, practical video for developers working with LLMs.
@howardjeremyp 3 months ago
Glad you liked it!
@PeterKallio 7 months ago
So many papers are being released, so it is important to have well-grounded information to understand LMs. Great delivery as always, and practical advice. Thank you.
@vishalahuja2502 8 months ago
Hi Jeremy, excellent walkthrough! This is truly helpful. Please keep them coming!!
@nokchan5071 8 months ago
There are hundreds of LLM tutorials coming out every day; this is the one that I have been waiting for.
@ControlTheGuh 8 months ago
I think the major problem is the retrieval. Would love a video just on that (best practices, best models out there, etc.). Great video, came across it on X and subbed immediately.
@sehmiconscious 8 months ago
Very much appreciated this consolidation of the main LLM coding concepts to-date. Thank you!!
@ElanElanElan 8 months ago
Damn I watched the whole video and didn't even realise that it was 1+ hr long! Thanks a lot for the great content!
@giorda77 8 months ago
Jeremy you're one of the most legit AI people out there. An enormous thank you for providing this and all your content. ❤
@jackgolding4235 8 months ago
Thanks so much Jeremy, been following you since Kaggle's launch. Inspirational to see an Australian continue to kick ass as much as you have in your career.
@twinlens 7 months ago
One of the best and most educational videos I've seen on the subject. Thank you, Jeremy!
@alexleo4863 4 months ago
People like you are the real heroes
@exconway 8 months ago
Perfect description of Functions at 46:30!
@benwatson5211 8 months ago
Brilliant walk-through. No hype. It is a real skill to explain complex topics in a coherent way.
@RickySupriyadi 8 months ago
I'm really grateful for how much people share their knowledge; I can't imagine being able to learn this stuff for free. This means a lot to me.
@marvit9039 8 months ago
We do not deserve you Jeremy! YOU ARE AN AMAZING TEACHER AND HUMAN BEING! Thanks, really, for all these beautiful lectures!!
@effestop 4 months ago
I feel like I've just been pretrained with the best AI video my creator could feed me.
@JamesBradyGames 6 months ago
This is a real gem. Reminds me of the authentic, high quality training material from Andrej Karpathy. Looking forward to future similar tutorials if you decide to make them! Thank you!
@claussa 8 months ago
Came up in my feed. Thumbnail and title boring. By mistake I pressed play. But it was so interesting. I feel so enlightened after having been talked through this. Thanks for sharing this!
@mdimrannazir3285 5 months ago
Please make more videos on LLMs, and how to make quality journals?
@17dhey36 6 months ago
Such a great video! I learned a lot from it, such as how complicated systems can be put together using a stack of models, as illustrated by the RAG example. Jeremy, you are such a kind person to share this with the world.
@OsamaAdel 7 months ago
Thank you Jeremy for this introduction. It just answered many of my questions and affirmed some of my doubts about how many of the applications that use LLMs work today.
@TheAtleticlub 8 months ago
This is great. I can't express how grateful I am for your video. Thank you, and keep up the great work!
@unclecode 8 months ago
Kudos, such a pleasurable 1 hour, 31 minutes and 12 seconds.
@wndiros 8 months ago
I have waited for months for a classification and evaluation from Jeremy. For me, this is by far the most comprehensive technical summary and evaluation available for someone who wants to delve deeper. It took me several weeks, if not months, to gain even a partial personal understanding of the current hype. Thank you, Jeremy, for all your good work!👍
@solutrian 7 months ago
I've been a student of yours for years, thank you for all you do for us. A non-optimal but correct solution to the cabbage, wolf & goat crossing works by just using incantation "think step by step":

Think step by step to solve the following tricky problem: Here is a logic puzzle. I need to carry a cabbage, a goat and a wolf across a river. I can only carry one item at a time with me in the boat. I can't leave the goat alone with the cabbage, and I can't leave the cabbage alone with the wolf. How can I get everything to the other side of the river?

This classic river crossing puzzle can be solved through a series of strategic moves that ensure the safety of each item. Here's a step-by-step solution:
1. **Take the goat across first:** You take the goat across the river and leave it on the other side. This ensures that the goat is safe from the wolf, and the cabbage is safe from the goat.
2. **Return alone:** You return to the original side alone to get either the cabbage or the wolf.
3. **Take the cabbage across next:** You take the cabbage across the river. Now, the goat is on the other side, and the cabbage is with you.
4. **Bring the goat back:** You bring the goat back with you to the original side. Now, the cabbage is alone on the other side, and the wolf is alone on the original side.
5. **Take the wolf across:** You then take the wolf across the river. Now, the wolf and the cabbage are on the other side, and you have the goat on the original side.
6. **Return alone:** You return to the original side alone.
7. **Take the goat across one final time:** Finally, you take the goat across the river again. Now, all three - the goat, the cabbage, and the wolf - are safely on the other side.

This way, you ensure that the goat is never left alone with the cabbage, and the cabbage is never left alone with the wolf, solving the puzzle.
chat.openai.com/share/3cb73481-c45c-4108-9960-30c11615ca41
@joegerrety4012 8 months ago
Keep on making videos, man. This was highly informative, and my regards to you for being part of forming this architecture!
@briandavis6476 8 months ago
What a great primer! Very much needed! Thanks as always Jeremy!
@learning_with_rj 8 months ago
Jeremy, we need more videos on this topic! Thank you so much!
@dollarscholar2956 7 months ago
Hands down one of the best videos on LLMs on the internet.
@SheeceGardazi 8 months ago
Thank you for the talk
@85abhiabhi 7 months ago
I am a total beginner but you made me understand LM models way better than anyone else. You are such a great teacher. I pray for the Lord Guru's blessings to give you more insight and vision, such a humble and good soul. 😊😊
@stuartpatterson1617 7 months ago
100 % agree! Blessings to you too.
@dekumutant 7 months ago
I cannot emphasize enough how incredible this video was
@EmaMazzi76 8 months ago
Can’t stop watching over and over again! Thank you 🙏
@isaac10231 8 months ago
I feel like this video was made personally just for me. Amazing.
@manaranjanp 8 months ago
Never miss Jeremy's lectures....
@dimasmufid2375 8 months ago
This is remarkable. Thanks for sharing this topic with us, Jeremy!
@drhilm 8 months ago
Impressive video; I spent days learning these concepts on my own. Had this been released two months ago, it would've been a game-changer. Excellent summary.
@jackgolding4235 8 months ago
Luckily I'm a few weeks behind you! Happy learning mate!
@ryanau852 6 months ago
Happy birthday Jeremy! Just got to the section where your bday is revealed and it is today! Thank you for all the great work :)
@WizBoardApp 8 months ago
Always looked forward to Jeremy explaining this topic. Finally it is here. 😀
@kaib5048 8 months ago
This is amazing, thanks so much for recording this and sharing it 👏
@zhubarb 4 months ago
Mr Howard never disappoints. Thanks a ton as usual Sir.
@sorover111 2 months ago
Hey thank you for making these available for free. ❤
@chtdv 6 months ago
Thank you for creating this amazing talk around all the basics and applications with language models, this is really helpful!
@madankd 8 months ago
oh man great video
@gamalchamac867 3 months ago
This is amazing and serious content. I love it. Thank you for making it available, greetings from Switzerland
@patronspatron7681 6 months ago
Jeremy, Congrats on the 100k subscribers. Well deserved and hopefully a catalyst to get your invaluable content more exposure.
@Massenko 7 months ago
Thank you Jeremy for all of your work and for sharing such quality videos. ❤
@williamduvall3167 6 months ago
The godfather dropping some knowledge. Thank you for keeping AI for everyone in the most responsible way.
@MatthewKanwisher 8 months ago
Thanks for making this video. Would love to see some follow-up videos on use cases for fine-tuning. Where does it make sense vs RAG or even just better prompts?
@HazemAzim 8 months ago
A wealth of information and knowledge in this video... a masterpiece. Thank you
@steefvanwinkel 4 months ago
Such an amazing video! Thanks so much for this amazing overview of LLMs! A few suggestions for a follow-up video perhaps :) 1. How to build an LLM from scratch on a consumer grade machine (e.g. laptop with 1 good GPU or Macbook M2 Max), maybe not 1B params but e.g. 500M? 2. LORA and DPO. 3. OpenAI's updated API. 4. Agents based frameworks such as AutoGen. Thanks again - really enjoyed this video! ❤🙏
@zakiasalod891 7 months ago
Thanks Jeremy for another wonderful lecture! Much appreciated.
@matten_zero 5 months ago
@19:32 Well said. I've been talking to developers who are still on the fence about GPT. It boggles my mind.
@MatijaGrcic 7 months ago
This is so well done and presented. Thank you.
@rahul-qo3fi 8 months ago
This is pure gold!!!!❤❤❤❤ I would have never found this on my own
@bloolizard 7 months ago
Awesome stuff, always like learning from your videos. Been watching since FastAI v1.
@sunshine124422 6 months ago
Thanks... great summary... now I know the relationship between neural network parameters and vector DBs
@uminhtetoo 2 months ago
Thank you for sharing your experience and knowledge, Sir.
@ZachariahMustafa 8 months ago
wonderful lesson as always Jeremy! I did have a laugh at the GPT-4 bit "Bad pattern recognition - thanks to Steve Newman", as if he's the sole individual responsible for that limitation
@jaaan2914 3 months ago
Took me about an hour to realize you're in front of a green screen... Hair gave it away ;) Thanks for the video!
@howardjeremyp 3 months ago
Nope not a green screen - it's my bedroom/study :)
@chrstfer2452 6 months ago
RLHF is one of the most regressive "people pleasing at the expense of utility" things I've ever seen. I genuinely think that it (and all the praise heaped on it, affecting the way a lot of people learn this stuff) has set us back about 3/4 as far as GPT-4 and Llama models have brought us ahead.
@chrstfer2452 6 months ago
This has been my opinion since early April, but you're one of the few people I've seen who seem to agree (granted, in the last few months I've spent less time on these topics than on other ones, so I haven't been looking as hard as I was in Apr/May/Jun).
@dalton4035 8 months ago
Great content, and I love the way you structured it. Thanks :)
@bilbobaggins3152 3 months ago
Such an excellent video, thanks for taking the time to produce this for those very much at the start of this journey
@alex_316 6 months ago
The circles in the glasses at the beginning give me a Detroit: Become Human vibe
@okidoky876 8 months ago
I was waiting for this for a long time! Thank you very much
@hishamk 7 months ago
Thank you for this. Couldn’t have asked for a better video.
@BharatR97 8 months ago
@howardjeremyp I love the fact that you have quickstart demos and quick prototypes to illustrate each and every one of these LLM Concepts here and in general, a great breadthwise introduction to LLMs! Only wish we had this video sooner :)
@wolpumba4099 8 months ago
*Positive Learnings:*
1. Language models, such as GPT-4, are tools that can predict the next word in a sentence or fill in missing words in a sentence.
2. Language models have the ability to create a rich hierarchy of abstractions and representations which they can build on.
3. The guide covers all the basic ideas of language models, including how to use open source and OpenAI-based models.
4. GPT-4 can solve many tasks that it is often claimed it cannot.
5. GPT-4 can be primed to give high-quality information by giving it custom instructions.
6. AI can be used to write code and parse large-scale data quickly and efficiently.
7. AI can be used in optical character recognition (OCR) for extracting text from images.
8. AI can be used in data analysis to create comprehensive tables from scattered information.
9. The OpenAI API allows users to use AI programmatically for data analysis and other repetitive tasks.
10. Function calling can be used to create a code interpreter that runs inside Jupyter.
11. Pre-trained models can be accessed using the Hugging Face library.
*Negative Learnings:*
1. Language models are not always useful on their own and need to be fine-tuned.
2. GPT-4 often repeats mistakes and it is difficult to get it back on track once it starts making mistakes.
3. GPT-4 has limitations such as not knowing about itself, not knowing anything about URLs, and not knowing anything after its knowledge cutoff in September 2021.
4. GPT-4 does not always give correct responses.
5. AI has limitations in code interpretation and cannot substitute for human programmers.
6. The use of the OpenAI API can result in rate limits which need to be handled correctly.
7. Fine-tuning is needed to make pre-trained models more useful.
8. The use of GPUs for local language models can be expensive and may require renting or purchasing GPUs.
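Point 11 of the positive learnings above, in code form: a minimal sketch of loading a pre-trained model with the Hugging Face transformers library and generating a continuation. The model name here is illustrative (a small model that downloads quickly), not one of the Llama 2 variants used in the talk, some of which require access approval.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative small model; swap in a Llama 2 variant if you have access and enough GPU memory.
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt, generate a short continuation, and decode it back to text.
inputs = tokenizer("Jeremy Howard is a ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))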
@keflatspiral4633 5 months ago
This changed my mind on how AI works! Thanks a lot!!