
META's New Code LLaMA 70b BEATS GPT4 At Coding (Open Source) 

Matthew Berman
Subscribe 284K
Views 79K

Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
MassedCompute - bit.ly/matthew-berman-youtube USE CODE "MatthewBerman" for 50% discount
ollama.ai/library/codellama
ai.meta.com/blog/code-llama-l...
huggingface.co/papers/2308.12950
huggingface.co/codellama/Code...
ai.meta.com/llama
github.com/defog-ai/sqlcoder
/ 1752329471867371659
zuck/posts/1...
ai.meta.com/resources/models-...
/ 1752013879532782075
Disclosures:
I am an investor in LMStudio

Science

Published: Jan 30, 2024

Comments: 217
@matthew_berman 5 months ago
I'm creating a video testing Code LLaMA 70b in full. What tests should I give it?
@santiagomartinez3417 5 months ago
Metaprogramming or transfer learning.
@johnclay7422 5 months ago
Those which are not available on YouTube.
@SteveSimpson 5 months ago
Please show how to add the downloaded model to LMStudio. I add the downloaded model to a subdir in LMStudio's models directory but LMStudio doesn't see it.
@so_annoying 5 months ago
What about asking it to write a Kubernetes operator 🤣
@OwenIngraham 5 months ago
Suggesting optimizations on existing code, preferably with context from many code files.
@bradstudio 5 months ago
Could you make a video on how to train an LLM on a GitHub repo and then be able to ask questions and instruct it to make code, for example, a plug-in?
@ordinarygg 5 months ago
It worked; you just have a bad driver.
@lironharel 5 months ago
Thanks for actually showing the errors you encountered and keeping it as real as possible! Great and enjoyable content❤
@jacobnunya808 5 months ago
True. Keeps expectations realistic.
@theguildedcage 5 months ago
I appreciate your disclosure. I intend to check this out.
@EdToml 5 months ago
Mixtral 8x7B was able to build a working snake game in python here...
@efifragin7455 5 months ago
Can you share exactly which model it was? I'm also looking for a model that can run on my PC (i9-11K, GTX 3060, 16GB) so I can code and make programs like Snake.
@EdToml 5 months ago
@efifragin7455 I have a 7700 CPU with 64G of 5600 memory and an RX 6600 XT (8G) GPU, and am using ROCm 5.7. The model is Mixtral 8x7B Q4_K_M (TheBloke on Hugging Face). Using llama.cpp, about 7G gets loaded onto the GPU with about 26G in CPU memory.
@charlies4850 5 months ago
@efifragin7455 Use OpenRouter
@auriocus 5 months ago
The error comes from libGL failing to load and is clearly NOT in the code that CodeLlama wrote. It's a problem with your machine's graphics drivers.
@RevMan001 5 months ago
If I can download the model from TheBloke, why do I have to apply for access from Meta? Is it just for the license?
@user-iq8lr8wc8l 5 months ago
I'm running LM Studio on my system running Debian 12 Bookworm and it's running well. I really want to be able to run models locally on this system to do my work when I'm home. Any ideas about local models etc. would be helpful.
@marcinkrupinski 5 months ago
Cool, we keep getting better and better open-source models!
@DanOneOne 5 months ago
I asked it to write a program to connect bluetooth 3D glasses to a PC. it responded: It's not a good idea, because bluetooth is limited by 10m. Use wi-fi. I said: 10m is good enough for me, please write this program. -Ok, I will. And that was it 😆
@geographyman562 5 months ago
It would be great to get a price breakdown of how much computer you need to run these locally, and to compare those ranges to the VM host options.
@vishnunallani 5 months ago
What kind of machine is needed to run these types of models?
@stargator4945 5 months ago
I used a mixtral instruct model 8x7B and it was quite good, especially with other languages than English. So would this 70B model actually be better?
@TubelatorAI 5 months ago
0:00 1. Meta's New CodeLlama 70B 👾 Introduction to Meta's latest coding model, CodeLlama 70B, known for its power and performance.
0:22 2. Testing CodeLlama 70B with Snake Game 🐍 The host plans to test CodeLlama 70B's capabilities by building the popular Snake game using the model.
0:25 3. Announcement by AI at Meta 📢 AI at Meta announces the release of CodeLlama 70B, a more performant version of their LLM for code generation.
0:56 4. Different Versions of CodeLlama 70B 💻 An overview of the three versions of CodeLlama 70B: base model, Python-specific model, and Instruct model.
1:21 5. CodeLlama 70B License and Commercial Use 💼 Confirmation that CodeLlama 70B models are available for both research and commercial use, under the same license as previous versions.
1:40 6. Mark Zuckerberg's Thoughts on CodeLlama 70B 💭 Mark Zuckerberg shares his thoughts on the importance of AI models like CodeLlama for writing and editing code.
2:37 7. Outperforming GPT-4 with CodeLlama 70B 🎯 A comparison between the performance of CodeLlama 70B and GPT-4 in SQL code generation, where CodeLlama 70B comes out as the clear winner.
3:25 8. Evolution of CodeLlama Models ⚡ An overview of the various versions of CodeLlama models released, highlighting the capabilities of CodeLlama 70B.
4:21 9. Using Ollama with CodeLlama 70B 🖥 Integration of CodeLlama 70B with Ollama for seamless code generation and execution.
5:18 10. Testing CodeLlama 70B with Massive Models 🧪 The host tests the performance of CodeLlama 70B using a massive quantized version and shares the requirements for running it.
5:47 11. Selecting GPU Layers: Choosing the appropriate number of GPU layers for better performance.
6:08 12. Testing the Model: Running a test to ensure the model is functioning correctly.
6:43 13. Running the Test: Requesting the model to generate code for a specific task.
7:27 14. Generating Code: Observing the model's output and determining its effectiveness.
8:16 15. Code Cleanup: Removing unnecessary code and preparing the generated code for execution.
8:40 16. Testing the Generated Code: Attempting to run the generated code and troubleshooting any errors.
9:09 17. Further Testing: Continuing to experiment with the generated code to improve its functionality.
9:15 18. Verifying CodeLlama 70B's Capabilities: Acknowledging that CodeLlama 70B has successfully generated working code.
9:20 19. Conclusion and Call to Action: Encouraging viewers to like, subscribe, and anticipate the next video.
Generated with Tubelator AI Chrome Extension!
@voncolborn9437 5 months ago
Matt, you mentioned you were using a VM from Mass Compute with the model pre-installed. Who are they? So to be clear, you were not running the VM locally, right?
@sumitmamoria 5 months ago
Which version will run reasonably fast on an RTX 3090?
@kevyyar 4 months ago
Can you create a video on how to set up these LLMs in VS Code with extensions like Continue, Twinny, etc.? I have downloaded Ollama and the models I need, but I'm not sure how to configure them to run with the VS Code extensions.
@endgamefond 5 months ago
What virtual computer do you use?
@jackonell1451 5 months ago
Great vid! What's "second state" though?
@user-iq8lr8wc8l 5 months ago
I don't see the link to mass compute.
@allenbythesea 5 months ago
Awesome videos, man; I learn so much from these. I wish there were models tuned for C#, though. Very few of us create large applications with Python.
@mrquicky 5 months ago
It is surprising that the DeepSeek Coder 6.7B model was not listed in the rubric, though I recall Matthew reviewing it and confirming that it did create a working version of the snake game. That was the most interesting part of the video for me: seeing that it was not even being ranked anymore. I'm assuming a 70-billion-parameter model would use more memory and perform more slowly than the 6.7-billion-parameter model.
@DevPythonUnity 5 months ago
How does one become an investor in LM Studio?
@dgiri2333 3 months ago
I need Ollama for text (NLP) to SQL queries, or natural language to Django ORM. Are there any LLMs for that?
@janalgos 5 months ago
Still underperforms GPT-4 Turbo though, right?
@aboghaly2000 5 months ago
Hello Matthew, great job on your work! Could you please compare the performance of Large Language Models on Intel, Nvidia, and Apple platforms?
@MrDoobieStooba 5 months ago
Thanks for the great content Matt!
@warezit 5 months ago
🎯 Key Takeaways for quick navigation:
00:00 🚀 *Meta's Code LLaMA 70b Announcement*
- Meta announces the most powerful coding model yet, Code LLaMA 70b, which is open-source and designed for coding tasks.
- The model comes in three versions: the base model, a Python-specific variant, and an instruct model optimized for instructions.
- Code LLaMA 70b is notable for its performance in human evaluation and its applicability for both research and commercial use under its license.
02:31 💾 *SQL Coder 70b Performance Highlights*
- SQL Coder 70b, fine-tuned on Code LLaMA 70b, showcases superior performance in PostgreSQL text-to-SQL generation.
- The model outperforms all publicly accessible LLMs, including GPT-4, with a significant margin in SQL eval benchmarks.
- Rishab from Defog Data highlights the model's effectiveness and the open-sourcing of this tuned model on Hugging Face.
03:39 📈 *Code LLaMA 70b Technical and Access Details*
- Introduction of Code LLaMA 70b as a powerful tool for software development, emphasizing ease of access and its licensing that allows for both research and commercial use.
- Details on the expansion of the Code LLaMA series, including future plans for LLaMA 3 and the model's exceptional benchmark performances.
- Mention of Massed Compute support for testing the model and an overview of the quantized version's requirements for operation.
06:11 🐍 *Testing Code LLaMA 70b with a Snake Game*
- Demonstration of Code LLaMA 70b's capabilities by attempting to write a Snake game in Python using a cloud-based virtual machine.
- Highlight of the potential and limitations of the model when generating code for complex tasks and the practical aspects of running such a model.
- Transparency about the author's investment in LM Studio and the intention to disclose any interests for full transparency.
Made with HARPA AI
@cacogenicist 5 months ago
Sure would be cool to be able to run this on my own hardware. So, what are we talking VRAM-wise? 92GB do it? ... sadly, I don't have a couple A6000s sitting around.
@hishtadlut1005 5 months ago
Did the snake game work in the end? What was the problem there?
@1986hr 5 months ago
How well does it perform with C# code?
@hqcart1 5 months ago
It's trained on Python; it might not be as good for C#.
@dungalunga2116 5 months ago
Would it run on an M3 Max with 36GB RAM?
@BlayneOliver 5 months ago
That’s you just flexing 😅
@elon-69-musk 5 months ago
give more examples and thorough testing pls
@technerd10191 5 months ago
For LLM and CodeLlama inference, the M3 Max with 64GB of unified memory (50GB actually usable) seems promising. So it would be interesting to see how Macs perform on quantized 70B-param LLMs...
@Phasma6969 5 months ago
Bro do you mean DRAM, not "unified memory"? Lol wut
@TDKOnafets 5 months ago
@Phasma6969 No, it's not DRAM.
@DanteS-119 5 months ago
Why don’t you just build a server with a decent beefy GPU and then hundreds of gigs of RAM? Genuine question, I love Apple Silicon just as much as the next guy
@VuLamDang 4 months ago
@DanteS-119 The power draw would be too high. M-series chips are scarily efficient.
@NicolasSouthern 4 months ago
@DanteS-119 I think it's because if you don't have enough RAM on the GPU itself, it starts processing on the CPU, which is extremely slow. Apple Silicon has unified memory, so the GPU can access it with very little bottleneck. Theoretically, you could build a Mac with hundreds of gigs of unified memory and be able to load the largest models out there. If you wanted to load the largest models into GPU memory, you'd need cards with 24-48GB of RAM (not the lower-level cards); having 128GB of system RAM would not help, as the GPU can't really utilize it. Apple Silicon is a bit different: there isn't really a discrete "GPU", but there are processor cores built for GPU functions, a lot like integrated graphics on Intel chips, where you can run a desktop environment without a GPU because the chip has limited built-in graphics. Apple just blew that concept out of the water with their graphics chips.
@william5931 5 months ago
Can you make a video on Mamba?
@janfilips3244 5 months ago
Matthew, is there a way to reach out to you directly?
@matthew_berman 5 months ago
my email is in my bio
@osamaa.h.altameemi5592 5 months ago
Can you share the link for Massed Compute (the ones who provided the VM)?
@kenhedges 5 months ago
It's in the Description.
@chunbochen9976 11 days ago
It looks good with simple pieces of code, but how will it work and help in a real project or product?
@dungalunga2116 5 months ago
I'd like to see you run it on your Mac.
@user-bd8jb7ln5g 5 months ago
So you are an investor in LM Studio, perfect. Can you please tell them to allow increasing the font size? My vision vacillates between good and poor, and sometimes I have problems reading LM Studio's text. BTW, I'm seeing the LM Studio release frequency ramping up 👍
@Pithukuly 5 months ago
I'm mostly looking for a model that generates Vertica SQL syntax.
@pcdowling 5 months ago
I have CodeLlama 70B working well on Ollama (RTX 4090 / 7950X / 64GB). The newest version of Ollama uses about 10-20% GPU utilization and offloads the rest to the CPU, using about 55% of the CPU. Overall it runs reasonably well for my use.
@freedom_aint_free 4 months ago
What is its context window? How big is the code that it can generate? Is it accurate?
@zkmalik 5 months ago
Yes! Please make more on the new LLaMA model!
@AliYar-Khan 5 months ago
How much compute power does it require to run locally?
@MelroyvandenBerg 5 months ago
The RAM was already stated. As for GPU: you need 48GB of VRAM to fit the entire model, which means 2x RTX 3090 or better. You could also use CPU only; depending on the CPU, I think that will result in about 1 token a second. Hopefully we soon have ASICs, since I don't think GPUs can hold up.
@synaestesia-bg3ew 5 months ago
@MelroyvandenBerg This is quite sad.
@harisjaved1379 5 months ago
Matt, how do you become an investor in LM Studio? I am also interested in becoming an investor.
@samuelcatlow 5 months ago
It's on their website
@countofst.germain6417 4 months ago
I just found this channel; it is great to see an AI channel that actually knows how to code.
@gaweyn 2 months ago
But why invest in LM Studio, and not in an open-source project?
@Noobsitogamer10 4 months ago
Coding battles, LLaMA crushes with its mad skills yet stays so chill.
@Leto2ndAtreides 5 months ago
I wonder if the MacBook guy was running a quantized version or not. The maxed-out M3 MacBook has a 128GB option, too.
@frankjohannessen6383 5 months ago
Unquantized 70B would probably need around 150GB of RAM.
@fbravoc9748 5 months ago
Amazing video! How can I become an investor in LMStudio?
@wayne8797 5 months ago
Can this run on an M1 Max 64GB MBP?
@cacogenicist 5 months ago
Not acceptably, I wouldn't think.
@LuckyLAK17 3 months ago
...please, a test with installation/access instructions would be great. Thanks.
@michaelestrinone2111 5 months ago
Does it support C# and .NET 8?
@hqcart1 5 months ago
No, just Python and JS.
@michaelestrinone2111 5 months ago
@hqcart1 Thank you. I am using GPT-3.5 with average success, but it is not up to date with .NET 8, and I don't know if open-source LLMs exist that are trained on this framework.
@hqcart1 5 months ago
@michaelestrinone2111 Use Phind; it's an online coding AI and free. Its level is somewhere between GPT-3.5 and 4.
@DailyTuna 5 months ago
Your videos are awesome!
@matthew_berman 5 months ago
Glad you like them!
@LanceJordan 5 months ago
What was the secret sauce to "get it to work" ?
@BrianCarver 5 months ago
Hey @matthew_berman, love your videos; this one sounds a little different. Are you using AI to generate any parts of your videos now?
@matthew_berman 5 months ago
Nope! What sounds different about it?
@DoctorMandible 5 months ago
AI will replace some junior devs. It will never replace coding entirely, as you suggest.
@dominick253 5 months ago
I think if anything it will just expose more people to programming. At least that's the effect it had on me. Before, I felt like it was such a huge mountain to climb; now I feel like the AI can do the templates and 90% of the work, and I can focus on getting everything to work together to actually make the project.
@JT-Works 5 months ago
Never say never...
@starblaiz1986 5 months ago
"Never" is a long time ;) When AI gets to human-level intelligence (likely this year, or at most by the end of the decade), what will stop it from replacing programmers?
@EdToml 5 months ago
I suspect coding will become much more of a collaboration: less so with poor human coders, and much more so with good and great coders.
@seriousjan5655 5 months ago
@EdToml Actually, as someone who has been living from programming for years: writing the actual code is the last thing. These models do not know what they are doing; they just have huge sets of probabilities. Last week I spent an hour with 6 colleagues discussing 3 options of approach from technical, economic, and future-advancement standpoints. No, no replacement by AI. Sorry.
@reynoeka9241 4 months ago
Please, you should test it on a MacBook Pro M2 Max.
@SuperZymantas 5 months ago
Is it better than GPT-3 or 4? Bard and GPT also wrote working code, so can this model spit out more lines of code?
@vcekal 5 months ago
Hey Matt, will you do a vid on the leaked early version of mistral-medium? Would be cool!
@NimVim 3 months ago
How did you manage to get a checkmark? I thought only 100k+ channels and pre-existing businesses could get verified?
@vcekal 3 months ago
@NimVim I found a security vulnerability in YouTube which allowed me to do that. It's patched now, though.
@K.F-R 5 months ago
1. Install ollama. 2. Run 'ollama run codellama:70b-instruct'. No forms or fees. Two or three clicks and you're running.
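Once the model is pulled as described above, Ollama also exposes a local REST API, so you can script the same model instead of using the interactive prompt. A minimal sketch (assumes an `ollama serve` daemon on the default port 11434; the prompt string is just an example):

```python
import json
import urllib.request

# Ollama's default local generation endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "codellama:70b-instruct") -> urllib.request.Request:
    """Build a POST request for one non-streaming completion.

    stream=False asks the server to return a single JSON object
    instead of a stream of chunks.
    """
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Example usage (only works with a running Ollama daemon):
# with urllib.request.urlopen(build_request("Write snake in pygame")) as resp:
#     print(json.load(resp)["response"])
```

The actual network call is left commented out so the sketch stays runnable without the daemon.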
@stickmanland 5 months ago
Me, looking on with my lovely 3GB Geforce GT 780
@first-thoughtgiver-of-will2456 5 months ago
Thank you for investing in LM Studio. I regard you as the most transparent AI Engineer journalist (for lack of a better term). Please keep up the important and quality work you've been doing for AI.
@dan-cj1rr 5 months ago
A dude on YouTube clickbaiting everyone about AI isn't an engineer.
@matthew_berman 5 months ago
❤️
@ChrisS-oo6fl 5 months ago
@matthew_berman I take it that although you invested in LM Studio, you'll still discuss other projects like oobabooga, open-llm, HuggingFace Chat, Silly, or the countless others if there's anything notable to cover, right? Or inform the public of the options and tools that are available. I do use LM Studio, but for some reason I personally don't trust it, especially with any uncensored model. Even as an extremely novice user I find it a little meh, so I often stick with oobabooga for most stuff. I also use other platforms for different use cases, like my Home Assistant LLM API. It's human nature to become biased and unintentionally push, showcase, or primarily feature a resource in which we are personally invested. I personally prefer my creators to remain neutral, with diverse content experiences, unless I sought them out for their products.
@michaelpiper8198 5 months ago
I already have a setup that can code snake that I plug into AI so this should be amazing 🤩
@CV-wo9hj 5 months ago
Love to see you running it locally. What specifications are needed to run it locally?
@footube3 4 months ago
At 4-bit quantization (the most compression you'll really want to perform) you'd need a machine with 35GB of memory to run it (whether it's CPU RAM, GPU RAM, or a mixture of the two). For it to be fast, you need that memory to be as high-bandwidth as possible. GPUs are generally the highest bandwidth, but some CPUs have pretty high memory bandwidth too (e.g. Mac M1/M2/M3 & AMD Epyc).
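That 35GB figure is easy to sanity-check: weight memory is roughly parameter count times bits per weight divided by 8, and runtimes add some overhead for the KV cache and buffers. A back-of-envelope sketch (the 20% overhead factor is an assumption, not a measured number):

```python
def model_memory_gb(params_billion: float, bits_per_weight: float, overhead: float = 0.20) -> float:
    """Rough memory footprint in decimal GB: weights at the given
    quantization, plus a fudge factor for KV cache / runtime buffers
    (the 20% default is an assumption)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9

# Code Llama 70B at 4-bit: weights alone land right on the ~35GB figure above.
print(model_memory_gb(70, 4, overhead=0.0))  # → 35.0 (weights only)
print(model_memory_gb(70, 4))                # → 42.0 (with assumed overhead)
print(model_memory_gb(70, 16, overhead=0.0)) # → 140.0 (fp16, hence the ~150GB estimate upthread)
```

This is why unquantized 70B models are out of reach for single consumer GPUs while 4-bit versions fit in 2x 24GB cards or a 64GB Mac.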
@CV-wo9hj 4 months ago
@footube3 Gah, when I got my Mac Studio M2, I couldn't imagine why I'd need more than 32 gigs 🤦
@emmanuelgoldstein3682 5 months ago
GPT-4 ranks at 86.6 on HumanEval versus CodeLlama's 67.8. Meta used the zero-shot numbers for GPT-4 in their benchmark comparison, which is pretty dishonest.
@michaeldarling5552 5 months ago
🙄👆👆👆👆👆👆👆👆👆👆👆👆👆
@romantroman6270 5 months ago
They used GPT-4's HumanEval score from all the way back in March.
@fabiankliebhan 4 months ago
Do you plan to test the new mistral-next model available on the LLM Chatbot Arena? It is crazy good. Possibly better than GPT-4.
@scottamolinari 4 months ago
Can I make a request? If you're going to highlight the text you're reading, just highlight the whole sentence with click-and-drag (which you do later in the video) and get rid of that highlighted cursor.
@knowhrishi 5 months ago
We need test video pleaseeeeee
@beelikehoney 5 months ago
please test this version!
@onoff5604 5 months ago
Yes, please try it out! (And let us know the results of your experiments with Snake, please...)
@user-iq8lr8wc8l 5 months ago
Coding as we know it will be replaced and a new programming paradigm will emerge. This is absolutely wonderful; I'm glad I lived to see this. I've only been experimenting with AI for about 2 months and I can't get enough of it.
@voncolborn9437 5 months ago
And then what? There will only be "programmers" who know how to ask questions and hope they get what they need? That doesn't sound very promising for the future of computing.
@chineseducksauce9085 5 months ago
@@voncolborn9437 yes it does
@benscottbongiben 5 months ago
It'd be good to see it running locally.
@miguelangelpallares8234 2 months ago
Please test on a MacBook Pro M2 Max.
@karolinagutierrez4383 4 months ago
Sweet, this LLaMA model crushes GPT-4 at coding.
@StephenRayner 5 months ago
Not watched yet, but I really want to fine-tune this bad boy. This will be so nuts!
@VictorMartinez-zf6dt 9 hours ago
This is cool, but I don’t honestly think it will make programmers obsolete.
@MetaphoricMinds 5 months ago
Thank you! Remember everyone, download while you can. Regulations are on their way!
@TheReferrer72 5 months ago
Don't be silly. These LLMs are not AGI.
@michaeldarling5552 5 months ago
@@TheReferrer72 You're assuming the government knows the difference!
@TheReferrer72 5 months ago
@@michaeldarling5552 Governments are much smarter than people give them credit for.
@punkouter24 5 months ago
Where is the .NET version?
@MEXICANOGOLD 5 months ago
META made a coding animal cooler than all the rest.
@Derek-bg6rg 5 months ago
This video has me wishing I was a coding LLaMA too.
@user-iq8lr8wc8l 5 months ago
Ask it what tests to perform!!!
@pebre79 5 months ago
Please run it on the M1 & M3.
@mazensmz 5 months ago
Hi Noobi, you need to delete the old prompt before prompting again, because it will consider the old prompts part of the context.
@avi7278 5 months ago
Yeah, because compared to GPT-4 it has the intellect of a chipmunk.
@mirek190 5 months ago
Mixtral 8x7B doesn't have such limitations; you can ask for completely different code later and it's not a problem. I think the llama2 architecture is too obsolete now.
@brunobergami6482 3 months ago
"I think this will make programming obsolete" - Matthew. Lol, why do people still believe that full trust in code will be handed over to AI?
@nannan3347 5 months ago
*cries in RTX 4080*
@mattmaas5790 5 months ago
On the other hand, I have a 4090 but still probably won't use the 70B version, because it'll be slower than a 30B version.
@kyrilgarcia 5 months ago
Same, but with a 3060. There is no such thing as enough VRAM 🤣
@theresalwaysanotherway3996 5 months ago
I'd be very interested to see this compared to the current best open-source programming model (excluding the recent alpha mistral-medium leak), DeepSeek 33B. As far as I can tell it's not as good, but maybe this 70B really is the new front-runner.
@user-oc2db7nm8o 4 months ago
LLaMA is super impressive at coding.
@MH-sl4kv 5 months ago
I'm surprised it didn't refuse and give you a lecture on the ethics of caging snakes and making them move around looking for food in a little box until they run out of room and die. The censorship on AI is getting insane.
@montserrathernandezgonzale6856 4 months ago
Looks like GPT-4 is getting put out to pasture.
@fuzzylogicq 5 months ago
A lot of these models seem to assume everything is Python; for most other low-level languages, no model can beat GPT-4. Yet!
@user-iq8lr8wc8l 5 months ago
and try not to piss it off!!!
@MelroyvandenBerg 5 months ago
Let's go Code LLama!
@GaelNoh 5 months ago
Llama is impressive!
@ReligionAndMaterialismDebunked 5 months ago
Early crew. Shalom. :3 Noice!
@user-yv1jj4ts6w 4 months ago
If this thing starts making TikToks we are all doomed.
@vladvrinceanu5430 5 months ago
Bro, LM Studio I guess messed something up with the new updates. I cannot run even old models on my MBP14 Pro M1 (M1 Pro with the highest core count) as I was able to before. Improvements to make: being able to use models for scientific purposes, such as generating molecule formulas and so on (there is not a single LLM scientific tool supported in LM Studio, even if the model is available on Hugging Face); and fix GPU Metal for the M1 MBP14, since I was able to use it before but not anymore.
@MelroyvandenBerg 5 months ago
If you are an investor in the project, also put it on screen in the video in the future, not just in the description. OK?
@liketheduck 5 months ago
TEST IT! :)
@rrrrazmatazzz-zq9zy 4 months ago
You've decided to do this anyway, but I want you to test Code Llama 70B.