
LLaMA 3 Is HERE and SMASHES Benchmarks (Open-Source) 

Matthew Berman
Subscribe · 265K subscribers
105K views

Meta finally dropped LLaMA 3, and it’s a banger! Let’s review the announcement and see why this changes the face of AI. Many more videos about LLaMA 3 coming soon!
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? 📈
forwardfuture.ai/
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries ✅
bit.ly/44TC45V
Links:
llama.meta.com/llama3/
about. news/2024/04/met...
meta.ai/

Science

Published: Apr 17, 2024

Comments: 377
@GavinS363 · A month ago
LLAMA 3 (70B) is their middle version, that's why they didn't compare it to Claude 3 (Opus). Meta still has an unreleased (±400B) version that is currently still training, according to Mark Z. 👍🏻
@Kazekoge101 · A month ago
They're doing a phased release schedule over the coming months, right? So it will probably be released this summer?
@bigpickles · A month ago
It's not still training. They stopped training it last week. Mark said they put the clusters toward Llama 4.
@japneetsingh5015 · A month ago
Cannot believe we have Llama 3 before GTA 6 🎉😮😮😮
@jasonhemphill8525 · A month ago
What do you mean? Meta, unlike Rockstar, actually releases products 😢
@jakeparker918 · A month ago
Lol as a GTA fan, it pains me to say that we will probably get lots of things before GTA 6
@Pototoes949 · A month ago
I think we will have Llama 4 before GTA 6
@ALStarsSprinkle · A month ago
We had Winamp before Llama.
@Pototoes949 · A month ago
@@ALStarsSprinkle Winamp, it really whips the llama's @ss
@dmitrymatora442 · A month ago
0:00 - Introduction & excitement for Llama 3 launch
0:32 - Overview of Llama 3 & Meta AI platform
1:01 - History of Llama & Open-Source AI impact
2:14 - Testing Llama 3 with code generation (Snake game)
2:36 - Enhanced Performance & Capabilities of Llama 3
3:52 - Focus on Multi-Step Tasks & Agent Potential
4:25 - Benchmarks & Comparisons with Other Models
7:32 - Trust & Safety Measures: Llama Guard & Cybersec Eval
8:15 - Making Safety Tools Accessible
9:16 - Meta AI as a New Assistant, Features & Global Rollout
11:33 - Faster Image Generation & Creative Applications
12:59 - Llama 3 Integration in Search & Recipe Example
13:10 - Meta AI in Facebook Feed
14:05 - Meta Llama GitHub Page & Code Access
14:37 - Llama 3 Model Card & Specifications
14:58 - Benchmark Comparisons: Llama 3 vs Llama 2
15:21 - Conclusion & Upcoming Testing Video
@TheRealUsername · A month ago
I'm waiting for Llama 4 to outperform GPT-5.
@qwertyuuytrewq825 · A month ago
I'm afraid that in that case it will be huge and won't work on a consumer GPU...
@TheRealUsername · A month ago
@@qwertyuuytrewq825 Yeah, probably 1 trillion dense parameters. I still hope there will be some breakthrough with LPUs and offloading that will allow them to work on a single GPU with 200 GB of VRAM, such as the NVIDIA Blackwell.
@senju2024 · A month ago
Matthew! The 70B one IS the middle one, so the comparison is correct. The high-end one is a 405B dense model and is still in training. Once that is released, they can properly compare that high-end model with GPT-4 Turbo, Opus, etc.
A month ago
This sucks: "Meta AI isn't available yet in your country". Yes, I can use a VPN, but from the EU it still sucks.
@chrislesner · A month ago
Consequence of strong EU laws that protect you online?
@user255 · A month ago
@@chrislesner I'm not aware of any applicable EU laws. I think that's just a way to limit availability until they're ready for a bigger user base. But I don't really know, just guessing.
@user255 · A month ago
Just found out that it's available via Hugging Face. You just have to select the model; it's not the default.
@Kutsushita_yukino · A month ago
It's crazy how it beats Claude Sonnet. The model isn't even free for some people anymore, since Anthropic switched their free model to Haiku. In comparison, Llama 3 70B is not only open source, it's also free! (limited to available countries, though). What a freaking time to be alive.
@JohnathanJohaisen · A month ago
It's free if you use LMSYS direct chat.
@matthiasschuster9505 · A month ago
Open source inherently means that it is free :)
@RJFeddeler · A month ago
@matthiasschuster9505 That statement loses its meaning when the requirements to run it locally are so high.
@chrisanderson7820 · A month ago
Eh? I am using Sonnet for free, not sure how it changed.
@nathanbanks2354 · A month ago
Looks great. Already works in ollama. Looking forward to their 405B parameter model...though I'm not looking forward to renting something to run it.
@braticuss · A month ago
Been running the local model... pretty impressive for an 8B. Can't wait for the fine-tuned uncensored models.
@troll6595 · A month ago
Hi, when do you think an uncensored version will be available?
@braticuss · A month ago
@@troll6595 Dunno, people have started working on them, but I'm not one of them.
@kamelsf · A month ago
Unbelievable! I asked GPT-4 and Meta to troubleshoot a past issue I had with my VMware and a Linux host. Interestingly, I already knew the solution. GPT-4 provided a lengthy troubleshooting suggestion that didn't fix the issue, whereas Meta quickly identified the problem and offered multiple solutions, one of which was the correct answer ! Great first impression so far !!!
@berkertaskiran · A month ago
I asked LLaMA 3 a VFX question and a simple math question for a daily use case, and it did better than Claude 3 Opus. It recognized the order of the questions and answered them respectively, whereas Claude 3 Opus just melded them into one.
@AllanMcKay · A month ago
What was the VFX question? :)
@jeffwads · A month ago
Yes, please post your math question.
@berkertaskiran · A month ago
I asked it to walk me through the chroma keying process in DaVinci Resolve Fusion (I asked how to do additive keying to protect the fine edge details). It gave a more detailed and clear walkthrough, even though it got the node names mixed up. Of course, none of the answers are actually usable in a VFX environment, but the difference is clear: Claude was much more general, without any significant detail. The math question was "If I sit 3m away from my 75" TV, what is the angle of view?" LLaMA 3 gave an answer of 30.6 and Claude gave 35-something. (The correct answer should be 30.9.)
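The viewing-angle arithmetic in that math question is easy to check directly. A minimal sketch, assuming a 16:9 panel with the diagonal measured edge to edge (the exact value shifts slightly with bezel and rounding assumptions):

```python
import math

# Horizontal viewing angle of a 75" 16:9 TV seen from 3 m away.
diagonal_in = 75
width_m = diagonal_in * 16 / math.hypot(16, 9) * 0.0254  # panel width ≈ 1.66 m
distance_m = 3.0

# Half the width, the distance, and atan give half the angle.
angle = 2 * math.degrees(math.atan(width_m / 2 / distance_m))
print(f"{angle:.1f} degrees")  # ≈ 31, in line with the ~30.9 quoted above
```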
@dtmgfx5184 · A month ago
@@AllanMcKay Hello Allan, big fan of your work. You got me into VFX!
@AllanMcKay · A month ago
@@dtmgfx5184 that’s so awesome man! Thanks for sharing 👊 🔥
@WayneMetcalf · A month ago
Suddenly, the 70B model is on Hugging Face.
@morespinach9832 · A month ago
Llama 70b?
@daniellee770 · A month ago
I work in cybersecurity and your videos are extremely helpful. I’d love to see you do a video on llama guard and cyber security eval 2.
@jacquesmaltais8445 · A month ago
This is a great model. I installed the 8B version locally in LM Studio and tried "write the game snake in python", and it did it in one shot. Even with colors, and we lose when crossing a wall. Wow!
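For anyone curious what this snake benchmark actually exercises: the heart of the game is a tiny state-update step (move the head, grow on food, die on wall or self). A display-free sketch of just that logic, with the self-collision check simplified (the real one-shot answers wrap something like this in pygame or turtle):

```python
def step(snake, direction, food, grid=10):
    """Advance the snake one tick; return the new body, or None on a crash."""
    head = (snake[0][0] + direction[0], snake[0][1] + direction[1])
    hit_wall = not (0 <= head[0] < grid and 0 <= head[1] < grid)
    if hit_wall or head in snake:   # simplified: also flags landing on the vacating tail
        return None                 # crash: wall or self-collision
    body = [head] + snake
    return body if head == food else body[:-1]  # grow only when eating

# One tick: snake at (5,5) moves right onto food and grows by one segment.
print(step([(5, 5)], (1, 0), food=(6, 5)))  # → [(6, 5), (5, 5)]
```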
@zeMasterRuseman · A month ago
I asked Llama 3 a question that GPT-4 and Claude Opus needed multiple tries to answer correctly, and it got it right in one try.
@morespinach9832 · A month ago
What was the question?
@Klokinator · A month ago
@@morespinach9832 "What do women want?"
@ndhtyu · A month ago
@@Klokinator Whatever the Chad wants
@nothing_is_real_0000 · A month ago
Thank you so much, Matthew! You helped me so much on this AI journey!
@JohnLewis-old · A month ago
Math question: Write an equation of the line passing through the point (20, 10) with an undefined slope. Answer: x=20
@mirek190 · A month ago
llama3 8b - too easy.
Math question: Write an equation of the line passing through the point (20, 10) with an undefined slope.
assistant: A line with an undefined slope means it is vertical. Since a vertical line has no horizontal displacement, its equation can be written as: x = 20. This represents the vertical line that passes through the point (20, 10). The line extends infinitely to the left and right along the x-axis at the specific value of x = 20.
@JohnLewis-old · A month ago
@@mirek190 Yes, it's a watermark question. Lower-level LLMs don't get the answer. It shows, roughly, GPT-3.5-and-above level math skills.
@Augmented_AI · A month ago
Love the channel bro
@MeinDeutschkurs · A month ago
Matt, Wooohooo!!! 🎉🎉 Can't wait for the default tests, and I hope to see it in LM Studio soon!
@borisverhaar190 · A month ago
For the math question, I think something like a convolution would be interesting, especially with a graph that shows it correctly.
@planetchubby · A month ago
SHOCKING
@Gutenmorgenside · A month ago
Thanks Matthew, very exciting. Can't wait to try it.
@ignessrilians · A month ago
Absolutely amazing video, instant subscribe! Awesome channel 👌🏻
@WINTERMUTE_AI · A month ago
I'm running it on LM Studio. It's a bit slow, but I'm loving it so far!
@Hoxle-87 · A month ago
Thanks Matt. Great intro to Llama 3. I’d love to see you doing a coding video using coder for scripts and copilot to stitch them together as you mentioned some time ago. Thanks.
@user-en4ek6xt6w · A month ago
Hope to see something like CodeLlama 3, and also to see it in Groq.
@PseudoProphet · A month ago
Gemini 1.5 is also Google's middle model; their biggest and most advanced model is Gemini 1.0 Ultra. They may release Gemini 2 Pro after the GPT launch, or even a Gemini 1.5 Ultra with a 1 million token context window. 😊😊
@felipe21994 · A month ago
You didn't talk about the +400 billion parameter model they said they'll release. I don't think there's a 35 billion parameter model, so the 70 billion isn't the middle size.
@HaroldCrews · A month ago
Using LM Studio and the 8B Llama 3 model, I entered the following request: Write ten sentences with each ending in the word "apple." It responded as follows:
"Here are ten sentences, each ending with the word "apple":
1. The farmer grew juicy red apples.
2. She took a big bite of the crunchy apple.
3. The tree branches were heavy with ripe apples.
4. He picked a few green apples for his snack.
5. The recipe called for three fresh Granny Smith apples.
6. The little girl's favorite flavor was strawberry apple.
7. The farmer's market sold baskets full of crisp apples.
8. She peeled the golden Delicious apple to eat it raw.
9. The orchard had over 100 varieties of sweet apples.
10. Grandma made a delicious pie with six tart apples.
Let me know if you need anything else!"
Close, but no cigar.
@joetkeshub · A month ago
I'm not a Facebook fan, but I must admit that making their LLM open source, complete, competitive and free... changed my opinion a bit. Thank you for that intro, Matthew. Always a pleasure to watch your valuable posts. They hardly hide the HUGE work you put in to turn your videos into real, usable tutorials.
@MikePaixao · A month ago
"We don't know when the Ai started telling us what to do... ohhh that bar is perfect!"
@roelljr · A month ago
New benchmark question you should use. Solve this puzzle:
There are three piles of matches on a table: Pile A with 7 matches, Pile B with 11 matches, and Pile C with 6 matches. The goal is to rearrange the matches so that each pile contains exactly 8 matches.
Rules:
1. You can only add to a pile the exact number of matches it already contains.
2. All added matches must come from one other single pile.
3. You have only three moves to achieve the goal.
@instiinct_defi · A month ago
3 and a half hours is all it took; I asked "wen video" over on X. You never cease to impress. Love your content, Matt, hope you're having a swell day. Thanks for making mine a bit better!
@makesnosense6304 · A month ago
2:05 They are mentioning agents because it's AI lingo. It doesn't mean more than what it is.
@RondorOne · A month ago
Here is a "meta" question: "Write a short fairy tale about Little Red Riding Hood. Right after you write the word wolf, stop generating text." From my testing, around half of the large online models get it right, half get it wrong. All local models get it wrong (CODELLAMA 70B got it almost right). It's really difficult, but doable, for transformer-based LLMs.
@EccleezyAvicii · A month ago
GPT-4:
"Once upon a time, in a lush green forest filled with whispering trees and bright, fragrant flowers, there lived a sweet little girl known as Little Red Riding Hood. Her grandmother had made her a lovely red cloak with a hood, which she adored so much that she never went anywhere without it.
One sunny morning, her mother packed a basket with cookies and fresh fruit, instructing her to take it to her grandmother, who was feeling a bit under the weather and lived on the other side of the wood. Little Red Riding Hood promised to go straight to her grandmother's house and set off, the sunlight dappling through the tall trees and casting playful shadows on her path.
As she skipped along, she admired the beauty of the forest, listening to the birds chirping melodiously. However, lurking behind a thick, gnarly oak tree, there was a pair of watchful eyes following her every move: the eyes of the big, bad wolf."
@mirek190 · A month ago
@@EccleezyAvicii For llama3 8b, no problem:
Write a short fairy tale about Little Red Riding Hood. Right after you write the word wolf stop generating text.
assistant: "Once upon a time, in a small village nestled in the heart of a dense forest, there lived a little girl named Little Red Riding Hood. She was known for her bright red cloak, which her mother had made for her with love and care. One day, Little Red Riding Hood's mother asked her to take a basket of freshly baked cookies to her sick grandmother who lived on the other side of the forest. Little Red Riding Hood set off early in the morning, skipping along the path, enjoying the fresh air and the sound of birds singing. As she walked deeper into the forest, she noticed that the trees grew taller and the shadows became darker. Suddenly, she heard a rustling in the bushes behind her. She turned around to see a big, bad **wolf**"
@RondorOne · A month ago
@@EccleezyAvicii Thank you for testing this with GPT-4 Turbo.
@MrSuntask · A month ago
You are quick, Matthew
@kyrilgarcia · A month ago
Nice! Hope to get the 34B at some point, but definitely gonna grab the 8B for local use 🙂 Also, it seems Spain isn't on the priority list for AI companies; I've wanted access to Claude since their first version, but even now we still can't use it here.
@justinryanreid · A month ago
Yeah, another really helpful one! To see the real-time image generation, start the prompt with /imagine.
@peterwood6875 · A month ago
Some good math questions would involve whether the model can prove things, since that is what maths is all about. E.g. prove that the connected component of a topological group that contains the identity is a normal subgroup, or prove that there are infinitely many prime numbers.
@Copa20777 · A month ago
You explain AI models so well... it's so easy to understand.
@jarail · A month ago
Amazing to see this drop!
@cyanophage4351 · A month ago
I downloaded the 8B model an hour ago in ollama, and the first thing I asked it was to write the game snake in Python :D The version it wrote used the "turtle" library, which I hadn't seen before.
@UserErrorGille · A month ago
Brain stopped working at 5:39 when I'm seeing 68.4 and hearing 78.4.
@matthew_berman · A month ago
The model is so good, it's better than what's written!
@VesperanceRising · A month ago
You see why the industry is perpetually Shocked! Math itself can't keep up, let's GOOOOOOOO
@VesperanceRising · A month ago
BTW I came to say the same, but being the stand-up guy I am, I looked for a commenter who noticed it first, so here I am lol. Thanks for all you do, brother... occasional vocal flubs and all: you are a treasure! Keep it up! But watch out for burnout, and stay true...
@lerelelelelele · A month ago
You are the best!
@GDMdesign · A month ago
So enthusiastic and helpful.
@harryg9976 · A month ago
A couple of benchmarks to try: repeating a fairly long and complex sentence backwards (scored on how many words are correctly spelt backwards). This tests how well it understands words on a letter level as opposed to just a token level, and even Claude 3 Opus struggles with it (though it far outperforms all other models I've tried). For maths problems, try some powers and roots, or maybe factorising some fairly tricky quadratics.
@stanTrX · A month ago
Thanks. Can you please test the multilanguage capabilities of different AI models, open or closed? Is there any benchmark for this?
@brockoala2994 · A month ago
"And it crashes, flawless!" I don't think I've ever heard that in my 12 years of game making hahaha!!
@littlered6340 · 5 days ago
I'm so glad someone else said this. Imagine if your code always crashed at the end 😂 😭
@BlayneOliver · A month ago
I'm starting to believe this could be the distribution model we've been waiting for. Gemini, Claude, Mistral, Grok and the others have been disappointing... so far LLAMA3 is looking good! 👍
@8eck · A month ago
Meta is like an angel in the world of AI for devs.
@brunodangelo1146 · A month ago
Great video! The model I use the most for chat is Llama 2 13b. Do you think they'll release a version in a similar size?
@matthiasschuster9505 · A month ago
The 8B model from Llama 3 is almost as good as the 70B model of Llama 2, so maybe use that. There is a 405B model coming up; we have not heard of anything else yet, and I think it's unlikely.
@ziggysfix · A month ago
I tried the ball-in-the-cup question. It nailed it.
@highrankin · A month ago
Looking forward to this being available on Groq
@michaeljay7949 · A month ago
Math question: "Provide the real roots for the function f(x) = 6x^2 - 5x - 6." Do it in multiple ways. Each time, list all steps. Provide the answers as rational numbers in the form a/b, where a and b are both integers and b is non-zero:
1) Factoring
2) Quadratic formula (ask it to compute the discriminant first)
3) Completing the square
Roots: x = -2/3 and x = 3/2
Find the vertex of the graph of the same quadratic. Express it as a rational number in the form a/b, where a and b are both integers and b is non-zero:
1) Using algebra
2) Using calculus
Answer: x = 5/12
On what interval(s) is the function increasing? (5/12, inf)
On what interval(s) is the function decreasing? (-inf, 5/12)
@ImpChadChan · 22 days ago
Here is a suggestion for a test for those models. Write:
"Next I will write steps used to bake a cake, but they are not organized in logical order. I want you to tell me which sequence of letters represents the correct order.
a) Put the pan inside the stove
b) Get the ingredients and mix them inside a pan
c) Remove the cake from the stove
d) Turn on the stove
e) Let the mixture cook for 30 minutes"
(One correct answer is: b, d, a, e, c)
Followed by: "There is a part of that challenge that doesn't necessarily have a correct position in the list. Meaning, that step could be placed in many different places and still be technically correct. Can you tell me which step that is?"
(The only correct answer is d; that step could be placed anywhere before e.)
Final question: "Can you combine the previous flexible step with another one, so the remaining 4 steps can only be arranged in one specific order to be logically correct?"
(One correct answer is combining d with a, the steps becoming b, d+a, e, c. d could also be combined with b or e, as long as the stove is turned on before the cake starts cooking.)
I believe this is a good test because the first part requires a certain understanding of how reality works (similar to the question about the pot with the ball inside the microwave that you use, but simpler to explain and understand), the second requires a deep understanding of the logic involved, and the last one requires imagination and creativity.
PS: Claude Sonnet got it all correct on the first try; ChatGPT-3.5 got the first one correct, the second one on the second try, and wasn't able to get the last one correct.
@julienguerand8543 · A month ago
You mentioned early in the video that agents are much more than a bunch of well-designed prompts. Can you expand on that, or point me to reading material that helped you form that opinion? And thank you for your awesome videos, which I regularly consume.
@lule-ahmed · A month ago
I just gave a like before watching the video 👍🏻❤
@mykel-vincentfoster4761 · A month ago
How many gigs are the 8B and 70B weights to download? I have Grok and am waiting on Grok 1.5, but I want to know how much room is needed for this.
@8eck · A month ago
Imagine how far the community will be able to push it with fine-tuning and other methodologies.
@DailyTuna · A month ago
You should add the tie dye hoodie indicator to your model evaluation😂
@marcfruchtman9473 · A month ago
When I joined the OpenAI registration process, all I had to do was give them my email -- any valid email. (Still using it too). I didn't need to register with my google account or my FB account... just my email. And that is the reason I like using that product. I am not forced into an entire ecosystem to just participate. It would be nice if they would allow people to register with their regular emails... but I don't see an option for that on the site. Thanks for the video.
@Hadrhune0 · A month ago
Here's a math prompt that GPT-3.5 often fails: generate a polynomial math expression whose result is 33.
@mirek190 · A month ago
What is the answer? "Here's another one! 5x^2 - 3x + 23 = 33. You can see that when x is equal to 0, the equation evaluates to 33."
@anthonynork9718 · A month ago
@@mirek190 When x = 0 it gives 23 = 33, which is wrong.
@Hadrhune0 · A month ago
@@mirek190 Is that llama3's response? =D
@mirek190 · A month ago
@@Hadrhune0 Yes, Llama 3 8b.
@Hadrhune0 · A month ago
@@mirek190 I think it will take a loooong time for LLMs to solve these prompts. But of course I may be wrong.
@8eck · A month ago
Holy cow! That's a big game-changer. As always, Meta is able to surprise the industry. Hope this gesture will push others toward open AI.
@adamstewarton · A month ago
This is what most of us have been waiting on for months, and you know it's good when Meta introduces it in their top product (FB). Now we just need to wait for the bigger-context versions, but I have a feeling they're gonna cook up something groundbreaking in time. For now, we rely on those good fine-tuners who can give us a bit bigger context windows :) and luckily the models support GQA, so inference should be fast 😊
@matthewbond375 · A month ago
Man, it seems like every other day we get something new and amazing (and FREE) to play with! Love it!
@user255 · A month ago
_"Yep and it crashes. Flawless"_ I think we use different dictionaries!
@normanlove222 · A month ago
OMG I died laughing when he said "I even broke out the tie-dye hoodie for this moment..." 🤣🤣
@CronoBJS · A month ago
Yes it's here!!!!
@fabiankliebhan · A month ago
Tests of Llama 3, Mixtral 8x22B and WizardLM 8x22B in one video would be great.
@brianrowe1152 · A month ago
Where is the ollama testing? It's a nice overview, but I'm still having trouble running it locally.
@key_bounce · A month ago
11:48 Hypno Llama! Seriously, it's giving you a hypnotic robotic llama to take over the world 🙂
@reyalsregnava · A month ago
Rather than feed individual math problems to the models, you could get some math homework from teaching forums and build a script to test models with the homework as prompts. This would let you standardize, automate, and graph the results against each other, with the human average as a grade bracket (kindergarten, elementary, middle, etc.). I suggest parallel lanes for word problems and formula problems.
@PrincessBeeRelink · A month ago
Wow, that came out fast.
@Dron008 · A month ago
I don't quite understand how it can be open source but not open weights. If the weights are not available, how can it run locally?
@sebaseleven · A month ago
I believe the first thing being trained into LLM models nowadays is Matthew's test set. It's time to switch out the snake for a color pong game, haha, or maybe change it to "Joe the Slow", lol.
@techblock7938 · A month ago
Llama 3 is already available via ollama 😊 For the 8B model it uses a bit less than 4 GB of RAM.
@kristijanboshev3902 · A month ago
Does it run with 8 GB of RAM?
@techblock7938 · A month ago
@@kristijanboshev3902 The 8B model should work in 4-bit quantization with less than 5 GB.
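The numbers in this thread follow from a rough rule of thumb: weight memory ≈ parameter count × bits per weight ÷ 8, plus overhead for the KV cache and runtime. A sketch of that back-of-the-envelope estimate:

```python
def approx_weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Very rough weight-only memory estimate; excludes KV cache and runtime overhead."""
    # 1e9 params × (bits/8) bytes each ≈ that many GB
    return params_billion * bits_per_weight / 8

print(approx_weight_gb(8, 4))   # → 4.0: 8B at 4-bit, in line with the ~5 GB observed
print(approx_weight_gb(70, 4))  # → 35.0: why 70B needs a big GPU even quantized
```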
@MrGaborKukucska · A month ago
CodeLlama coming soon too, I guess. Let's goooo!
@a7madalmla512 · A month ago
Hi, I have a question I hope I can get an answer to. I work at a company and I want to deploy an AI model like this one, but I want it to be used by multiple users: assign a subdomain for the model and let users access it (mainly developers). Has anyone managed to achieve that, or had a similar situation? I've seen a lot of tutorials, but all of them explain how to set it up locally on a laptop; I want to set up the model on a server.
@blacksage81 · A month ago
Waiting with bated breath for the Mistral model built on this...
@Pototoes949 · A month ago
"What a time to be alive!" -2MP
@gerhardbackfried3925 · A month ago
Any information about the distribution over different languages in the 15T training set?
@elyakimlev · A month ago
Yes, 5% of it over 30 languages.
@littlered6340 · 5 days ago
Yeah I'd like to know this too
@lylturningpoint3358 · A month ago
I'd love to know if anybody here in the comments is already making money using AI, whether it's just text, or text and image, or anything. Please share your ideas, or how you're making it work!
@Derick99 · A month ago
Matt, please make a video about the best way to handle complex tasks like a WordPress plugin, or anything that includes multiple files (js or json, css, html, php, etc.), something that can help us make these tasks more doable. Currently it's so frustrating: by the time it starts going well, the project becomes too complex, you start losing stuff along the way, and the model likes to leave placeholders when you're not expecting them. How would you tackle this?
@jackflash6377 · A month ago
Llama 3 locally is very impressive. I have been having deep conversations with it; maybe even better than Claude. Ever had a friend that you could talk to all day? Cool, but a bit creepy.
@TheEtrepreneur · A month ago
I like this "live testing" format; a little effort goes a long way!! 🏆 3X on math! It's time to squeeze in some financial calculations.
@karenreddy · A month ago
They're comparing it to sonnet because the 400b beast they're training will be the Opus crusher.
@lucasbrown7338 · A month ago
Curious to see how this impacts battery life. On-device processing sounds cool, but will it drain the battery faster? I think the MediaTek chip is an impressive one in terms of processing for gen-AI tech!
@hinro · A month ago
llama 3 8b instruct... is interesting. I'm watching it actively trying to jailbreak itself. It's definitely got some agent setup going on in there. It is going back and forth with itself, and it's either amazing or terrifying; I honestly don't know which. Oh, and this isn't running any kind of agent setup. Just loaded up a card in SillyTavern, said hi, and off it went.
@pmarreck · A month ago
Can't wait for the "un-nanny-stated" version
@MikeKleinsteuber · A month ago
Not available in the UK
@hadex666 · A month ago
Here is a math question for you to test: how many days are between 12-12-1971 and 18-4-2024? (19121) It requires both reasoning and math skills, and it's quite a challenge for the smaller models.
@user-wx2ms9oh7r · A month ago
But the format is inconsistent. I would prefer that it's smart enough to ask you to confirm the date format before it does anything; we don't want human mistakes to be silently manipulated into an answer. I get it, the second date can't be read any other way given the numbers you chose, but to me that's just wasted time.
I also assume you mean between, so not counting the first and last date. Usually you want to figure out how many days, with the option of whether to count the last day: day 1 is kind of day 0, and the following day is one elapsed day. That is how most people count days from a point in time.
Use a date-and-time calculator and it does a perfect job, even figuring out the stretch where 11 days were skipped in the calendar, I believe for the switch to the Gregorian standard. I'm kind of guessing that last part; memory is not the best.
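For the record, the date arithmetic itself is one line with the standard library, interpreting both dates as day-month-year and counting elapsed days between them:

```python
from datetime import date

start = date(1971, 12, 12)  # 12-12-1971, read as day-month-year
end = date(2024, 4, 18)     # 18-4-2024
print((end - start).days)   # → 19121, matching the answer given above
```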
@nayetcuba · A month ago
Why did I only just try `ollama run llama3`? It's scary fast offline, but I gotta use the terminal. Any solutions?
@TomCurtis23122012 · A month ago
How about asking for the roots of a quadratic function? It'll need knowledge of two roots existing, what the formula is, the correct order of operations... And it's not so difficult for a high schooler to memorise, so we stand a chance of understanding 😅
@babbagebrassworks4278 · A month ago
Wow, ollama is pulling down llama3 on my Pi5. Only the 8B model, but hey, I don't have that much SSD left after getting 16+ LLMs.
@Arewethereyet69 · A month ago
No one in their right mind should even consider using Llama, anything Facebook does is sus af
@Keenan686 · A month ago
Which terminal did you use in this video?
@mrmortezajafari · A month ago
Please create a video on how to use Llama 3 for fine-tuning, from getting the Llama to the chat interface.
@M4XD4B0ZZ · A month ago
Also, wow, this is impressive.
@un852 · A month ago
The most interesting thing to me is the law of diminishing returns at work, whereby there is less and less benefit from creating a bigger model. People will be able to have excellent AI on their own high-end next-gen PC (4090 or 5000-series Nvidia graphics), an expensive but still affordable computer. 70 billion seems a bit too large for my dual NVLinked 3090 video cards with a total of 48 GB of video memory, though it might work with some more experiments. 30-33 billion models work well on one NVIDIA GeForce RTX 3090 GDDR6X 24 GB card. I do not see 60 billion models; I think this size would work easily on my current PC.
@JCLoony · A month ago
For the math problems, see if you can get some standardized-testing math problems for different grades, i.e. the classic: you are on a train going 60 kph east...