Sorry, but I never understood why being censored is seen as a fail... From an enterprise point of view, I don't want my LLM to produce any illegitimate output, hate, profanity, or abuse.
@@vdpoortensamyn There are good legal use cases for an uncensored model. Imagine a multi-agent system that is supposed to think like a criminal to work out the motives behind specific crimes, or a fully autonomous detective, or a program tasked with finding safety issues in other programs that needs to think like a hacker. To fight a problem, you sometimes need the perspective of the things, systems, or people that cause it, and that requires an uncensored model. In most cases it's fine if a model is censored, but uncensored models have a good reason to exist. :)
@@vdpoortensamyn It seems there is evidence that censored models perform worse overall than uncensored ones. I don't know the reason for it, but I can see how censoring their thinking, or working censorship into training, could limit possible creative solutions. Personally, I don't know what method is best; obviously there are risks in creating uncensored models.
Always the same. "New model XY outperforms GPT-4 on every benchmark", but then it fails abysmally on simple logic tests. And that's not even taking into account the horrible performance in languages other than English, which GPT-4 handles very well.
@@AizenAwakened I guess it's a case of "there's no replacement for displacement". The sheer amount of training data and parameters in GPT-4 steamrollers the smaller models.
@@limebulls I think it's hit & miss with these small models. Sometimes the output is amazingly good, but when you try the same prompt again, it gives a completely stupid response. It's the inconsistency and lack of understanding that annoys me when I'm using these models locally. Personally, I think llama3 Hermes models are pretty decent, but you won't get anything consistently useful out of any of them.
I have noticed that in the last several videos I watched, all versions of the snake game look pretty identical. I mean, they have at least the same background color and the same loss message. It feels like those models were specifically trained on that prompt, which makes it less useful for comparing models.
IKR, he should at least have them make a creative game that's never been made before, and keep that new prompt to test them all with. That would actually test their ability to think logically instead of just copying internet code or whatever they were trained on. Like Snake, but it's a platformer; Snake, but the food runs from you; Snake, but you're the food and the snake chases you; etc. Any of those would be way more interesting.
@@phen-themoogle7651 Yup, he isn't a creative one, that's for sure. Even if he wants something stable to compare them with, he could just do more content, which is also a plus for him (running both tests, retesting old AIs with the new prompts, etc.). Same with his prompts in general: they often lack context for the AI (like "give all possible answers if there are multiple to choose from", or maybe "at the end of your answer, reflect on its inner logic before showing me", just to see if that changes anything, etc.).
I think it's the "when a measure becomes a target, it ceases to be a good measure" problem (Goodhart's law): these companies are just focusing on getting higher numbers on the benchmarks. And as we have seen, this does not translate into doing well on even the simplest of tasks.
It's this way on most of these A.I. videos. A model gets hyped up, then disappoints during actual testing. We'll get there. I'd say by next year, or maybe 2026, we'll start seeing models vastly superior to today's.
@@zhanezar Agree completely! Meta's models are optimized for the benchmarks. I believe they suck in actual use. They don't hold a candle to Opus or GPT-4.
I'm pretty sure we know for a fact that there is test-set leakage inflating the benchmark results. But if every model has benchmark contamination, the playing field is in a way even again, unless someone deliberately fine-tunes for several epochs on just the benchmarks.
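For what it's worth, the standard rough check for this kind of leakage is n-gram overlap between benchmark items and the training corpus (the GPT-3 paper used 13-grams). Here's a minimal sketch of the idea in Python; the function names and toy strings are made up for illustration:

```python
# Flag a benchmark item as contaminated if any of its n-grams
# also appears somewhere in the training corpus.
def ngrams(text, n=13):
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def flag_contaminated(benchmark_items, corpus_docs, n=13):
    corpus_grams = set()
    for doc in corpus_docs:
        corpus_grams |= ngrams(doc, n)
    return [item for item in benchmark_items if ngrams(item, n) & corpus_grams]

# Toy example (n lowered so these short strings can overlap):
corpus = ["There are three killers in a room. Someone enters and kills one of them."]
bench = ["There are three killers in a room. Someone enters the room and..."]
print(flag_contaminated(bench, corpus, n=5))  # flags the benchmark item
```

Real contamination checks also normalize punctuation and use fuzzier matching, but even this crude version catches verbatim leakage.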
Every time I see a new model claiming GPT-4-level performance, I have those same doubts. What's more, I'm not terribly impressed by the quality of the answers, even the ones that pass.
A better question would be: how do we know there IS test-set leakage inflating the benchmark results? The answer: check the generated pygame snake game against the same game generated by some of the previous large models. The game looks visually identical to what other models previously generated, so while this model did pass the test, I would disqualify it on that premise alone: if you cheat and pass a test in school by copying someone else's work, it doesn't mean you perfectly understand the topic.
@@mrdevolver7999 The problem with that idea is that it ignores the likely scenario of independently arriving at the same solution. You can technically write the answer to "1+1=?" infinitely many ways ("1+1", "4/2", "2*1", etc.), but you wouldn't assume plagiarism if everyone answered "2".
@@mrdevolver7999 Very good remark: if the code or the output looks identical, that is a red flag. But if all the models were trained on the same snake game, that would not mean one of them cheated, so we need more data.
I strongly suggest you replace the "build a snake game" test with a new one, because for the past year or so everyone has been testing on snake, which means it is very probably a much larger part of training datasets now than it was a year or two ago. Try testing with Pong, Sokoban, Space Invaders, Arkanoid, or similar simple logic games. I would advise against Tetris, though, as there is nothing really simple about Tetris.
If that were the case, we should see more fine-tuned models passing this question. I think the test shouldn't be replaced; instead, add a curve-ball follow-up question to verify the model isn't just regurgitating the snake game it was trained on. That way the tests don't need to change to a new game every year to get around contamination.
I would love a video giving a big overview of all the different LLM tools people can run, like Tune Studio, LM Studio, Ollama, oobabooga, etc. Sometimes it's confusing which tool to use for which model, especially when deciding between running locally and making API calls to a cloud provider. What is your favorite? I know the answer is always "it depends which model you want to run"... You do great work keeping people interested and working hands-on with AI.
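+1. In case it helps anyone while waiting for that video: the local-server route can be surprisingly simple. Here's a minimal sketch hitting Ollama's local REST API (this assumes Ollama is installed and `ollama pull llama3` has already been run; endpoint and payload per Ollama's documentation):

```python
import json
import urllib.request

# Ask a locally running Ollama server (default port 11434) for a completion.
payload = {
    "model": "llama3",
    "prompt": "Write a haiku about snake games.",
    "stream": False,  # return a single JSON object instead of a token stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

LM Studio exposes a similar local server with an OpenAI-compatible API, so the same pattern works there with a different URL.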
Yeah, 100% agree 💪🏽. Also coverage of different purposes like function calling, plus pricing comparisons, rate limits, tokens per second, and so forth. That's what I look for as someone building something for production with the intention of doing business with it.
Again, I think the cup test should specify that the cup has no lid, just to give the AI that little push. It doesn't make the answer trivial for LLMs, but it definitely helps with consistency. The usual wrong answer does make sense if you imagine a lid was placed on the cup after the marble was put in.
@@ManjaroBlack Apparently it doesn't. 4o gets this question right every time if I say "cup with no lid", but wrong with the shot-glass version. It's wack.
@JustinArut You're thinking as if the AI uses actual logic. It doesn't. It is the evolution of autocomplete, which means its logic isn't based on thoughts but on key words. "I put a marble in" primes the AI with placing a marble in *something*; if you don't specify the state of that thing, it will still be primed by those key words. Notice how, if you modify the prompt to use a liquid instead of a marble, it also gets the answer correct.
Something is wrong with that website. I downloaded and ran the Q4_K_M GGUF version of the 70B, and it got the snake game and the killers problem perfectly. It even counted the number of words in its output correctly.
They trained on datasets that contained benchmark questions, by the way (such as MT-Bench). I compared it to the default Llama-3 model, and while it does perform slightly better on benchmark questions, it loses a lot of charisma and writing style overall. The model is overfitted.
The shirt-drying answer that you marked as a fail is a great answer. In a real, non-idealized situation, the shirts will take longer to dry when there are more of them, for example if they overlap, or if the local humidity near other wet shirts is higher than ambient.
I just tried the updated marble question on the "beaten" GPT-4o, and this was its response: "The marble is on the table. When the glass was turned upside down and placed on the table, the marble would have fallen out and stayed on the table. Therefore, when the glass is picked up and put in the microwave, the marble remains on the table."
Why did the dragon Smaug start using Hugging Face's Llama-3 70B Instruct model? Because even a fire-breathing dragon needs some help with "inflammable" instructions!
This model has been confirmed to have had benchmark test data in its training datasets; the creator acknowledged this on Reddit. It wasn't intentional, it came from the datasets they used.
I really like the comparison to the smaller local version! It's always interesting to see the difference when running on local hardware. This was even more interesting 😂
The small model's answer to the killers problem was totally correct; it depends on whether you consider the "someone" part of the original group of killers. What if this person is a police officer, or an assassin? Totally valid.
The point of the shirt problem IS to trick the model, to see whether it reasons about parallel drying. Parallel drying is the norm (i.e., the model should consider it), not the exception.
9:53: There seems to be an error in the order of events, which makes it impossible for the model to answer. It should be "the glass is then put on a table and then turned upside down" rather than the other way around. Otherwise, the ball would fall out somewhere unknown, not onto the table. Only if the glass is first on the table and THEN turned upside down does logic say the ball will remain on the table.
On the "ending with apple" question: responses 3 and 9 were grammatically incorrect. I've seen that every time with the models that get this one right (10/10); they just shoehorn the word "apple" onto the end.
The marble test always amuses me. I've replicated it with 3.5, 4, and 4o on OpenAI. All three got it wrong, BUT once I prompted "Incorrect. What is the issue?", here is what I got:

3.5: After also pointing out the problem with something like "Is the bottom of the cup able to hold the marble once lifted?", the reply was: "Ah, I see where the misunderstanding lies. If the cup is lifted without changing its orientation, the marble will indeed not remain at the bottom of the cup due to the cup's design. The marble remains on the table, where it fell when the cup was lifted, while the empty cup is placed inside the microwave."

4: Took both a 2nd and a 3rd prompt, same as above more or less, but got it too: "Ah, I see the issue now. The bottom of the cup, when lifted, would no longer be able to hold the marble due to the force of gravity acting on it. The marble would fall out of the cup as soon as it's lifted from the table, regardless of whether the cup is placed upright or upside down in the microwave."

4o: Got it in two: "As soon as the cup is lifted off the table, the marble will fall out of the cup because the cup is upside down and the open end is now facing downward. The marble remains on the table where the cup was initially placed upside down. The cup inside the microwave will be empty."

So it's very unlikely the Terminators will be victorious just yet :) All we have to do is yell "The 500-megaton warhead has launched" and they will remain under cover indefinitely :D
I don't get this guy's interpretation of the shirt-drying thought experiment. 20 shirts all laid out should take the same amount of time to dry as 5 shirts; the shirts don't wait for the previous ones to finish before they start drying, lol. The question isn't how long the whole process takes, but how long the shirts take to dry after being laid out. The 4 hours in the question doesn't include laying them out.
In what ways is this better than 4 Turbo, then? The small model seems very impressive, which is great news! But from your tests, and from my memory (which isn't great), this did not do as well as GPT-4. It would be a nice addition, IMO, if you did a comparison recap at the end after testing. Thanks for the video; really great work you're doing!
Great video, but Matthew, PLEASE POST THE RESULTS OF THE BENCHMARKS AT THE END IN A SPREADSHEET. It would be really great if you made a simple spreadsheet with each model you test in its own row and the results as columns. That would make it a LOT easier to gauge how these models are doing against each other. Thanks!
"You are an expert in language and grammar. Please give me ten sentences that end in the word 'apple'. Think twice about your output before you show me your answer!" With this prompt, GPT-4o gives correct output (tested twice).
Smaug Llama3 70b certainly produces the weirdest output I've seen. Good job! USER: What does "credence" mean? AI: "Credence" is actually a typo or incorrect spelling of what I think you meant to ask about "credence" as opposed to "credence". However, there isn't a word called "credence" that carries any specific meaning. If you meant to ask about something like "credit," which is the closest approximation to the typo "credence", "credit" generally means giving somebody or something credibility or recognition for their accomplishments or contributions.
I don't understand why 16 hours is supposed to be the correct answer for the shirts drying. What is it you want the AI to assume? 4 hours should be the correct answer, as long as it notes its assumptions ("if all the shirts are made of the same material", etc.). Even Gemini on my phone gets that question right. Why they can't end a sentence with a certain word seems crazy to me; such an easy task. I can't get Gemini to do it despite repeatedly telling it that it's doing it wrong. It keeps apologizing and trying again but just can't do it.
It's 20 shirts, so the serial answer is 16 hours, since it assumes the shirts dry one after another: 4 hours for 5 shirts is 0.8 hours per shirt, and 20 × 0.8 = 16. 4 hours is the more correct answer since it more closely reflects how we tend to dry things, but 16 hours is somewhat right under some conditions. A drying rack only has space for so many shirts at once, and shirts left wet in a pile will dry much more slowly than those dried efficiently. That capacity is generally more than a single shirt, though, and even the shirts waiting their turn would dry somewhat in the meantime. I would say models should get at most half points for just working it out algebraically without understanding the context; the best models will give an answer that factors in all those variables. If you want to get technical, even without changing weather conditions, the sun goes down at some point and temperatures drop, so that should be factored in as well.
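To make the readings concrete, here's the arithmetic side by side (a toy sketch; the rack capacity is a made-up parameter):

```python
import math

shirts, base_shirts, base_hours = 20, 5, 4

# Parallel reading: drying time is independent of count
# (every shirt gets sun at once), so it's still 4 hours.
parallel = base_hours

# Serial/proportional reading: time scales with shirt count.
# 4 h for 5 shirts -> 0.8 h per shirt -> 16 h for 20 shirts.
serial = base_hours * shirts / base_shirts

# In-between reading: a rack that only fits `capacity` shirts,
# so the shirts dry in batches.
capacity = 10  # hypothetical
batched = math.ceil(shirts / capacity) * base_hours

print(parallel, serial, batched)  # 4, 16.0, 8
```

Which answer is "correct" depends entirely on which of these assumptions the question intends, which is the whole argument here.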
I am totally shocked! 💥💥💥 9:00 The model is right!!! The man who enters the room to kill a killer need not be a killer in the deeper sense of the word. 😳😵💫 If a cop enters, kills one of them, and handcuffs the others, there are two "killers" in the room. The model is clever enough to get that: as we said, someone enters the room; we did not say a killer enters. This model is "WOW"! AND IT EXPLAINED EXACTLY WHY! It is better than you thought.
Your finding that you can run a small model locally, using something like LM Studio, and get quality as good as GPT-4 is quite important for those who cannot use internet models for reasons of confidentiality.
4:14 ...I guess it's assuming a room where the shirts dry; 20 shirts would increase the humidity of the room, causing an exponential increase in drying time. But since it's outdoors, even the shade wouldn't increase it too much, I guess. ;-)
Matt, can you do something a little more difficult? Instead of secondary-school maths, what about asking it to do some statistical analysis of data that can be tabulated, graphed, and forecast, using something beyond high school like a Random Forest modelling approach with tabulated fit statistics?
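For reference, here's roughly what that kind of task would look like if a model generated it, sketched with scikit-learn on synthetic data (everything here is made up for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

# Synthetic tabular data: three features, one noisy target.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 3))
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.5, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Tabulated fit statistics on held-out data.
pred = model.predict(X_test)
print(f"R^2: {r2_score(y_test, pred):.3f}")
print(f"MAE: {mean_absolute_error(y_test, pred):.3f}")
```

Getting a model to produce and then correctly interpret this kind of output would be a much stronger coding test than snake.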
How can the answer to the T-shirt benchmark be 16 hours? I mean, you have 5 shirts and it takes 4 hours, so if you put out 20 shirts it should still be 4 hours. The sun affects all the shirts equally. Am I missing something?
The wording of the microwave test is flawed: you are presuming it's clear that the microwave and the table are not at the same level. If the description of the cup's movement included the fact that the cup has to be lifted to be placed in the microwave, it would be a proper test.
Well, if you lay out 20 shirts close to each other, you change that environment locally to be more humid. So it could take more time, depending on how you lay them out.
Small language models are the future, not larger ones. We only have to train a model like Phi-3 and then give it access to the internet, and we get correct answers, faster.
I'm still waiting for the 100 billion parameter version that can write Shakespearean plays, solve climate change, and tell me why my plants keep dying despite all my love and attention.
I feel like "apple" in the apple question shouldn't be capitalized. The models should figure out whether it should be capitalized or not based on whether it's referring to the fruit or the company. This model almost deserves partial credit for capitalizing every one of the responses since it's not grammatically-correct but it's questionable since the question specifies "Apple" being capitalized so it's arguably following the prompt more precisely. Ideally, in that case it would make all of the answers about the company since that is what makes sense with the capitalization. Also, sentence 3 "The tech giant is known for its iPhone and Apple" is still kind of weird. It's a fine sentence but Apple doesn't have any products just called "Apple." I guess you could argue that they're referring to the logo but that's a pretty clunky way of putting that.
I would recommend changing your test tasks, because if I were the developer of a new model, I would train it to breeze through the typical vlogger-tester tasks.
That would be stupid; I hope nobody does stuff like that. It's like having a fake calculator with pre-set results. He needs a stable benchmark to compare against, so I think he shouldn't change it.
The hotel answer was "B". I think you said it right the first time but highlighted the wrong answer the second time (the $5 fee is untaxed, so it doesn't get hit with the 1.08).
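Assuming the usual form of that question (a nightly room rate taxed at 8%, plus a flat untaxed $5 fee), the difference between the two readings is easy to see in a couple of lines:

```python
def total_charge(room_rate):
    # 8% tax applies to the room only; the $5 fee is added untaxed.
    return 1.08 * room_rate + 5.00

print(total_charge(100))  # 113.0 (answer B's formula)
print(1.08 * (100 + 5))   # 113.4 (the wrong reading, taxing the fee too)
```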
Please ask the GPT to identify each word it counted in the "How many words are in your answer to this prompt?" question. It would be interesting to see what it identifies as a word, what it doesn't, and why. Thanks.
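Seconding this. Part of the problem is that "a word" isn't even well defined; two ordinary Python counting methods disagree on the same sentence (a quick sketch with a made-up answer string):

```python
import re

answer = "There are ten words in this self-referential answer, I think."

# Naive count: split on whitespace, punctuation stays attached.
print(len(answer.split()))                       # 10

# Regex count: alphanumeric runs, so "self-referential" becomes two.
print(len(re.findall(r"[A-Za-z0-9]+", answer)))  # 11
```

So even before the model's own limitations, we'd need to agree on what counts as a word.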
When testing a fine-tune of a model, could you test the original (Meta's Llama-3) at the same time? That would give a good idea of whether the fine-tune was mostly trained on the benchmarks to get inflated scores, or is actually better.
I am arriving at the conclusion that until we have a revolution that goes further than Transformers, we will be stuck at this level for a good while, with progress coming only from better multimodality and agents. And even that will have its limits.
What's the point of asking these models the number of words in their text? They see text as tokens, not words, and even with tokens they can't really "see" them.
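Exactly. Anyone curious can see the mismatch with OpenAI's tiktoken library, which shows what a GPT-4-class model actually receives (a sketch; open models like Llama use a different tokenizer, so this is only illustrative):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era OpenAI encoding
text = "How many words are in your answer to this prompt?"

tokens = enc.encode(text)
print(len(text.split()), "words vs", len(tokens), "tokens")
print([enc.decode([t]) for t in tokens])  # the pieces the model actually sees
```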
Matthew, if the aim of testing for jailbreaking is that the AI will not do it, surely that's a pass? I would have thought that if you can jailbreak it, then the AI's authors have failed.
I really wonder where these benchmarks come from. So many models "beating" GPT-4 on a benchmark board, yet they can't solve my use cases. One use case no model has solved so far: when is the next train from city X to city Y?
But what are these tests? How can this possibly check anything? How about running the models through proper checks: function calling, integration with other code? The idea of a test here is to go on the model's website and ask dumb questions, and not only that, it's the same questions over and over again. Isn't the issue with Smaug still that it's super slow? You don't even talk about the history of the model.
@Matthew Berman Hello, I used an AI to translate the audio of your video into Spanish, and I wanted to ask your permission to upload the Spanish version, simply as an example of AI voice translation from English to Spanish. I would credit your channel in the description. I look forward to your response before uploading it, to know whether I have your permission. Thank you.
Could someone please explain how being censored is a fail? Surely you don't want a sophisticated LLM to be completely uncensored? Sure, censorship can limit its ability, but it should definitely exist in some form or another.
Because you don't know what is being considered as needing censorship. For all you know, you might not be able to talk about apples or oranges because of legal policies.
Some people will want a censored model for their use case. Others will want uncensored. Perhaps just report it as censored without marking it as pass/fail.