Sign up for Shipd now to start earning while coding! tally.so/r/3jBo1Q And check out Datacurve.ai if you're interested: datacurve.ai/ On a side note, I'm also looking for some like-minded people who are down to work together, from video scripting to maybe reviving the AI newsletter with me. Feel free to hit me up on Discord if you're interested!
Llama 3 is 8B instead of 7B because of the increased vocabulary size -- Llama 3 8B has a feature dimension of 4096. Therefore, the initial embedding layer goes from 32000×4096 to 128000×4096, and the final prediction layer goes from 4096×32000 to 4096×128000. Aka a difference of roughly 800M parameters.
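The arithmetic above checks out; here's a minimal sketch using the numbers from the comment (4096 hidden dim, 32k → 128k vocab):

```python
# Extra parameters from growing the vocabulary from 32k to 128k tokens,
# with a feature (hidden) dimension of 4096, counting both the input
# embedding matrix and the final prediction head.
hidden_dim = 4096
old_vocab, new_vocab = 32_000, 128_000

extra = (new_vocab - old_vocab) * hidden_dim * 2  # embedding + LM head
print(f"{extra / 1e6:.0f}M extra parameters")  # prints "786M extra parameters"
```

786M is the ~800M the comment mentions, which accounts for most of the 7B → 8B jump.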
I much prefer this approach of a larger vocabulary size, since it makes long contexts more efficient: the same text takes fewer tokens, so past a given context length it scales better than the 7B does.
@@Ginto_O Depends on the tokenization method, but it can be the case. In some methods like WordPiece, high-frequency words are kept as one token while low-frequency words are split into subwords. If you increase the vocab size, then you allow for more tokens and hence more "full word" tokens at the same time.
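To illustrate the idea, here's a toy greedy longest-match tokenizer (not real WordPiece/BPE, and the vocabularies are made up for the example): with a bigger vocabulary, a whole word can survive as a single token instead of being split into subwords.

```python
# Toy greedy longest-match tokenizer: at each position, take the longest
# vocabulary entry that matches, falling back to single characters.
def tokenize(word, vocab):
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try longest matches first
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown piece: emit one character
            i += 1
    return tokens

small_vocab = {"token", "ization"}
big_vocab = small_vocab | {"tokenization"}  # larger vocab keeps the full word

print(tokenize("tokenization", small_vocab))  # ['token', 'ization'] -> 2 tokens
print(tokenize("tokenization", big_vocab))    # ['tokenization']     -> 1 token
```

Fewer tokens per word is exactly why a larger vocab makes long contexts cheaper.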
I know right? I still can't believe we ended up in the timeline where Meta of all companies are the champions of open source! Like seriously, who went back in time and stepped on a butterfly? GIVE ME THEIR NAMES!! 😅
Open sourcing is better because it takes away the leverage that models like GPT-4 and other closed-source ones give their competitors. If you can't compete, disrupt the competition.
Stable Diffusion is not falling apart, SD3 has hit gold in my view. That is the best image generation model right now. SD3 is accessible via API and it's gonna make a killing. I don't think we have seen the last of them. As a matter of fact, it's only a start. Stability has the potential to give Sora a run for its money. We will see.
@@luckyb8228 I mean the company, didn't bycloud mention that? The API access could gradually become closed-source software, although the SD3 demos are amazing, I agree
If the model could train more, why would they stop? I think they may be under the expected budget and waiting for better results. In this case, open sourcing is a good marketing strategy.
Congrats on graduating and good luck on your foray into doing more YouTube. Your videos always go beyond surface-level news. It's the reason you're the only AI channel I watch, and why I watch all the videos you drop. Looking forward to seeing how your channel grows! -Niko
Congrats on graduating bro 🎓🎉👏 and to clarify, I'm not the "boss man," I only want to support your excellent work. Thank you for all your videos and excited to follow along your adventure 🙂
Mistral 7B was released based on the Llama 2 architecture. I can't wait to see what Mistral will release in 2-5 months based on this new way of training models from Meta AI
@@Slav4o911 The signs are looking promising that Llama 3 will beat GPT-4 once the community starts to fine-tune it. Especially looking at how big the improvements made on top of Llama 2 were, it's likely we will see some big improvements on the newer model, probably even more so because these are bigger models.
I did not expect that I could run an LLM that can beat an older version of GPT-4 on my own PC this year. For reference, 70B runs at ~1 token/s on an 8-core CPU. Not "interactive", but I sometimes switch tabs when asking GPT-4 something bigger too. And 8B runs at 60 tokens/s on my RTX 4080, which is more than interactive!
Yeah, it does surprise me how quickly these open source models are developing, from a size-to-performance standpoint. You get a sense that the likes of OpenAI, Microsoft and Google are using a brute-force approach to A.I., which must cost them a fortune to run compared to the smart, nimble way the open source community does things. It makes sense: if you have limited resources, you're going to think outside the box to get better results. I really do wonder how much better a 7B, 13B, 40B and 70B can get before we hit limits where we need bigger models for better results. It looks like we are still a long way from that, because we keep finding better solutions for the given model sizes, which improves performance. And like you said, it's remarkable the pace of development in just over a year; makes me wonder what we will see over the next 5 to 10 years.
@@r.k.vignesh7832 I got a notification but your response isn't here, anyway thanks. Is it possible to use the integrated GPU to make it a little bit faster?
@@masterneme Damn, I don't know what happened. I said that you can run but probably not very fast, as I can easily run 8B models on 16GB RAM + 6GB VRAM, and that you should try it on Ollama and see how you go
Hey, congrats on finishing university! Please do what you like to do the most. But in my opinion there are already a lot of AI-news youtubers that cover a lot of what is happening in the AI world on the surface but what I really like about your content is the way you try to go into one topic a bit deeper. I really like the entertaining but educational style of your videos, so keep up the great work.
It is competition. Open source is a way to pull some users away from the GPT-4 userbase. Llama is not ready yet, it makes mistakes. So it's not yet time to collect money from it; now is the time to get a position in the AI market. So, open source is a clever move.
Sad. I hope someone has already saved the best open source models offline, so in the future when they go behind a paywall, people can just use the models from when they were free. For DMCA, I guess they should upload them to a torrent, so that everyone is the host.
What mistakes... have you even tested it? Llama 3 is the best open model ever released. Open models are now just a few finetunes away from flatly beating GPT-4, and by a lot. Considering how much Llama 2 based models evolved, almost nudging GPT-4, I have no doubt open source Llama 3 based models will beat GPT-4; the difference is not even that big, just a little uncensoring will do it. When a model is censored it's lobotomized, so it doesn't matter how good the real GPT-4 is if people can't reach the unlobotomized model. Llama 3 will be unlobotomized by the community; there is no way a lobotomized model can ever beat a truly open and uncensored model with similar capabilities. It's funny how, because of a few "bad" words, the whole AI field is lobotomized and stifled, because a few human snowflakes can't take reality and don't have the ability to think for themselves.
@@Slav4o911 The problem is you can't really "unlobotomize" an LLM model without decreasing its quality. I believe the current best uncensored model is WizardLM-2-8x22b. They released it uncensored by mistake. It wasn't lobotomized in the first place. I use the IQ_4_S version and it's amazing.
@@Slav4o911 OpenAI's business model seems to be throwing more power into GPT; GPT-5 will need a small country's energy to run. Llama 3 can be run locally, that's an insane difference no matter how you look at it.
You are the only person that talks about AI in a way that I understand and also doesn't waste my time talking about random stuff for 10 minutes. I want to thank you for this. When Llama 3 was announced I watched and read other channels and I was so disappointed; you have spoiled me with your quality.
I love when you explain research papers, more than just AI news. Even this video went a bit deeper into the science of machine learning than other videos out there. So, keep up the good work.
Really looking forward to your next videos man! I know you will keep doing an amazing job! Will support patreon as soon as my startup is no longer just bleeding money 😂
I suspect a big reason for them to release open source is, for one, that the community will help improve the model a lot, which over the long run saves Meta a fortune. And two, it's probably to level the playing field: A.I. is likely going to be important in so many areas that it would be dangerous to allow so few governments and corporations to control it, so open sourcing blows that open and puts everyone on the same playing field. If we had a situation where one or two closed models eventually dominate the market, that would give those corporations, and probably the government of the country, a massive advantage over everyone else; it's a given that they would use the uncensored version of the model while everyone else gets the restricted one. Because of all this, open source is very important for A.I. models. There is also the advantage that open source models lower the cost for consumers and give consumers far more control and privacy when running at a local level.
Also, when it comes to LLMs, they're spending far less money and far less compute than OpenAI to kneecap their competitors' edge with the larger "better" models, thereby setting themselves up to capture a significant amount of the AI market share later down the line, a la Microsoft making Internet Explorer free versus Netscape, who charged a bunch of money.
Thanks for being one of the few AI youtubers that seems very knowledgeable about ML as a whole. You're doing a good job of condensing the information without leaving the juicy technicals out, imo
If the whole YT thing doesn't work out, be an ML researcher lol In all sincerity, I like it when you go deeper into the papers and research. Most AI YTers either focus on AI News, test running the tools or just high level think pieces. Those are nice and all, but stuff like this is cool too. I think Yannic Kilcher does paper deep dives too? No offense to the man though, his videos are just too long. And probably too technical. While you balance the technical stuff that I'm curious about without making it well, too technical.
Actually, it makes perfect sense to start with open sourcing. As clearly shown, AI is in its infancy, and we are highly ignorant of how to properly train these models. Later models can always be closed source, but this is a crucial period of information gathering and experimentation. So it's not only beyond reasonable, but actually rather smart.
Resource requirements are so high on the big models that you can effectively be open source and closed at the same time, since almost no one can afford to run them anyway. Open sourcing GPT-4, for instance, wouldn't halt OpenAI's revenue stream.
Heck, AI is such a vibrant and fast-evolving industry that this is like trying to surf a 100 ft wave and remain on top. Data curators! Ahh god, that's like something from a sci-fi novel 5 years ago... data curators... "we collate and sell high quality training data", ahhh
hi bro, thanks for the video, you're doing a great job! Just wanted to ask which software you used to create/animate your avatar at the end of the video? It's generally called a PNG-tuber, if I understand correctly? But which one exactly do you use?
The guy just finished university... And here I am, having finished my Bachelor's in Software Engineering last year by cheating through all the exams, watching this video and not understanding how half the things discussed work. That is to say, you've made it, OP! Wish you luck with whatever endeavor you go for next. And to everyone else - make sure you're actually interested in the subject before applying! 🤣
2:21 What I am interested in (and what most developers actually need, though they may not realize it) is the MMLU and HumanEval scores (unbiased and uncontaminated only), because these reflect the ability to do things that until now (before Llama 3 8B) only Mixtral could do, and Mixtral is huge compared to this (no need to mention bigger models, since obviously they can do it too, they're just too big). So yeah, I love this 8B model. I am sure the next 3B or even 1B models will be as great as this one (Mark Zuckerberg promised mobile-based models in 2025). So, I am really enthused and really love what Meta (not Facebook) is finally doing.
I think 8B models are also not very far away from running on the future mobile phones. It would be neat to have a model which can outperform GPT4 running locally on your smartphone. That reality is actually not very far away. Unless some dumb politician bans open models.
Llama 3 might beat Mixtral models on synthetic benchmarks but I still get more useful answers from Mixtral 8x7b and 8x22b. Latest Ollama, Open-webui, no custom system prompt, all 4-bit quants. Mixtral 8x22b and Llama 3 70b are slow as hell.
The issue AI is having is the same issue most software has: we're just throwing more compute at it instead of making it better. I mean, transformer models are fundamentally very limited on an architectural level, so there are limits here, but for the most part no-one has really spent the time making them learn or operate better, we've just thrown more and more data and compute at them. That *_does_* work, but it reaches a limit. At some point the requirements grow exponentially for linear or even plateauing returns. In contrast, architectural improvements cost way more upfront (note: "cost" does not necessarily mean cash, it can be as simple as the mental resources to spend time thinking about a complex problem) with near-zero returns at trivial scales, but *_absurd_* returns relative to unoptimized comparisons at scale. For an example of this, Tantan released a video about binary-optimizing the rendering topology of his game (it sounds advanced, and it sorta is, but trust me it's nowhere near the level of jargon that description implies) and, while his old renderer showed a performance difference between his old CPU and his friend's stronger CPU, after optimizing it down to bitwise operations the performance was nearly identical. Further optimizations *_did_* eke out a bit more of a perf difference, but it was still only ~50%. The point, though, is that bitwise ops are so bloody fast that if you optimize to that point your CPU stops mattering. Strong CPUs, weak CPUs, old CPUs, new CPUs, it just doesn't matter, because you're already so damn optimized that throwing more compute at it can't even be more performant (* it CAN, but the returns are tiny, especially compared to otherwise under-optimized equivalents). That isn't saying "okay so optimizing is bad and leads to poor resource utilization", it's saying "we've overbuilt CPUs by this much unnecessarily *_because_* we haven't had a culture of good software optimization".
For another video, there is the in/famous "Clean Code, Horrible Performance", which explores a similar idea. The overall point, though, is that we only hit performance walls with respect to our current operating assumptions. If we challenge those assumptions and change our approach we can see (and indeed often find) insane, bordering on incomputable, performance boosts. Sure, you could put a Threadripper in there with a 500W power supply to do the job... or you could design an analog circuit that does it for a watt. Performance walls, in lieu of some serious first-principles basis for asserting otherwise, can only ever be thought of in reference to a set of assumptions. If you're hitting a performance wall, it probably means you should look at those assumptions. Now, not all assumptions *_can_* be challenged, for instance any game you make is gonna be limited to running on Windows with conventional CPU and GPU architectures, but the more assumptions you can challenge the more likely you are to find the real bottleneck, and AI just hasn't been challenging many of its assumptions lately. Instead they've favoured just throwing more compute at it, and there are only so many GPUs you can throw before your arm gets tired and you run out of silicon.
Now this is some armchair quarterback level stuff, but I really don't feel that very large parameter models are the solution to AI accuracy. I think you will soon see a race to the bottom, for who can make the smallest, well-performing LLM that can fit into a smartphone or tablet. I think the largest use of LLMs in the future will be on-device. I'm really surprised that you can move from 8 billion parameters, to over 400 billion, and really not see anywhere near the return in performance or reasoning. It will be interesting to see what the future holds, but it is just as interesting to understand some of the limitations of where we conduct research going forward. Apple has a very different take on this, I think they will be showing off shortly.
Benchmarks are one thing. But I found it gives more generic answers, even ignoring specifics in the question. So with fewer parameters there is definitely more blurring or averaging in its output.
If they're quantized (compressed) to the GGUF format, the numbers in their names are usually a good indicator on how much RAM you'll need to run them. For example, 8B will probably need around 6-8 GB of RAM unless you choose a heavily quantized version, which could let you get away with less RAM at the cost of a dumber AI. VRAM from an NVIDIA card will be the fastest, AMD will be a little slower (I think), and regular RAM will be the slowest. If you install LM Studio, you can view the models on Huggingface and see exactly how much RAM each version requires.
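The rule of thumb above can be sketched roughly: weight memory is about parameters × bits-per-weight ÷ 8, plus some overhead for the KV cache and runtime. A minimal sketch (the bit widths and flat 1 GB overhead are my own illustrative assumptions, not exact figures for any particular quant):

```python
# Rough rule-of-thumb RAM estimate for a quantized model:
# bytes ~= parameter_count * bits_per_weight / 8, plus runtime overhead.
def estimate_ram_gb(params_billion, bits_per_weight, overhead_gb=1.0):
    weights_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

# e.g. roughly 8-bit, 4-bit, and 2-bit style quants of an 8B model
for bits in (8, 4, 2):
    print(f"8B model @ {bits}-bit: ~{estimate_ram_gb(8, bits):.1f} GB")
```

A 4-bit 8B model lands around 5 GB by this estimate, consistent with the 6-8 GB figure once you add a real KV cache and context; heavier quantization trades RAM for quality, as the comment says.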
Aren't Mistral, and some other AI with a name starting with P (I forgot it), even more impressive than Llama? (I think the name was Phi-2, though I might be wrong)
Meta is open sourcing it because they learned from Microsoft and VS Code. They will sneak into the middle between the user and the developer, and in the end they can probably monetize it somehow (think about Copilot and VS Code)
hmm... watching this breakdown as a common user of the free version of ChatGPT 3.5... didn't understand anything, but I still enjoyed the content. Thanks anyway