
Boost Productivity with FREE AI in VSCode (Llama 3 Copilot) 

Mervin Praison
41K subscribers
30K views

Published: 7 Sep 2024

Comments: 68
@jeffwads 4 months ago
Very impressed with the 8B L3 with regard to coding. Amazing how much progress they have made.
@martin22336 4 months ago
I am using this and it's insane 😮 I think full-stack developers will not like their future, holy crap.
@carta-viva 3 months ago
This is pretty cool as a starting point. I think AI in the future will do many other things to help us achieve more productivity.
@Techonsapevole 4 months ago
Very nice, and it's just an 8B parameter model.
@joseeduardobolisfortes 4 months ago
Very good tutorial. You don't speak about the platform; can I assume it will work on both Windows and Linux? Another thing: what's the recommended hardware configuration to install Llama 3 locally on our computers?
@G3TG0T 4 months ago
Great video! How do I connect to my own local Ollama server running on my local machine with this?
@kesijack 2 months ago
It works on a MacBook Air M3 with 16GB RAM, a little slow but usable. Thank you Mervin.
@prestonmccauley43 4 months ago
This was a great quick lesson. One thing I was seeing if anyone figured out: often I need to refer to very new documents on APIs etc. Has anyone tied this into a RAG structure, so we are always looking at the latest documents?
@kannansingaravelu 3 months ago
Do we need both llama3:8b and instruct? Can we not work only with instruct? Also, I see your code works faster; could you specify your PC / system specs and config, as it takes a good amount of time on my iMac 2017?
@m12652 4 months ago
The buttons don't do anything... note I'm working offline. The 4 buttons at the bottom of the add-in's panel just copy the code to the chat window. They don't do anything else, and once clicked, the AI stops responding to questions. When I asked it what was wrong with "explain selected code", the AI responded "nothing, it's only meant to copy the code". Anyone know if this is broken for me or if it's simply an incomplete add-in...?
@Adrian-mu8gg 28 days ago
If I have a project with code separated into different files, can it read them all and debug?
@manojkr7554 2 months ago
ollama : The term 'ollama' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + ollama pull llama3:8b
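That PowerShell error usually means the Ollama install directory isn't on PATH, or the terminal was opened before the installer ran. A quick diagnostic sketch, assuming a POSIX shell; the Windows directory named in the comment is the installer's usual default, not something confirmed in the video:

```shell
# Report whether the ollama binary is reachable on PATH.
if command -v ollama >/dev/null 2>&1; then
  echo "ollama found on PATH at: $(command -v ollama)"
else
  # On Windows, restart the terminal after installing so PATH refreshes,
  # or add the install directory (typically %LOCALAPPDATA%\Programs\Ollama)
  # to PATH manually, then retry: ollama pull llama3:8b
  echo "ollama NOT found on PATH"
fi
```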
@McAko 3 months ago
I prefer to use the Continue plugin.
@MaorAviad 4 months ago
Amazing content! Maybe you can create a long video where you use this to create a full-stack application.
@Fonzleberry 4 months ago
Any ideas about how this works on large scripts? What's the context length?
@thebudaxcorporate9763 4 months ago
Waiting for an implementation on Streamlit, keep it up bro.
@maryamarshad871 2 months ago
Can you guide me step by step? It doesn't work on my laptop.
@rmnilin 4 months ago
SPOILER ALERT: this is not amazing, but you'll be able to make scrambled eggs on your laptop while it writes you a crud service that actually doesn't work
@ragtop63 21 days ago
SPOILER ALERT: Stop trying to run language models on a laptop.
@tumadrezocl 1 month ago
Why does `ollama pull llama3:8b` not work?
@bhanujinaidu 4 months ago
super video. Thanks
@JohnSmith762A11B 4 months ago
Excellent and useful tutorial! 👍
@m12652 4 months ago
Does CodeGPT require me to be logged in? I'm all set up, but if I ask it to explain something it just says "something went wrong! Try again." Then I have to either quit and restart VSCode, or disable and then enable the extension...
4 months ago
Thanks for sharing. I host the ollama server on a remote server. How do I make it connect to the remote machine instead of localhost?
@mohit_talniya 4 months ago
Reply here if you find this
@alirezanet 2 months ago
I prefer Continue; it is much more fun to work with.
@yagoa 4 months ago
How do I use another computer running Ollama on my LAN?
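For the remote-server and LAN questions in this thread, Ollama reads the `OLLAMA_HOST` environment variable on both the server and client sides. A config sketch, where `192.168.1.50` is a placeholder for your server's IP and the extension's exact setting name may vary by version:

```shell
# On the server machine: bind Ollama to all interfaces instead of localhost.
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# On the client machine: verify the server is reachable, then point the
# extension's Ollama URL at it instead of http://localhost:11434.
curl http://192.168.1.50:11434/api/tags
```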
@ah89971 4 months ago
Thanks, can you make a video of Pythagora using Llama 3?
@m12652 4 months ago
Nice one 👍
@djshiva 4 months ago
This is amazing! Thanks so much for this tutorial!
@m12652 4 months ago
This app looks like a good idea, but it's a long, long way from finished. The buttons (refactor, explain, document and fix bug in selected code) don't do anything but copy the selected code to the chat. If you use the clear button, it clears the selected model etc. but not the history. I just asked it to write a basic API call for SvelteKit and it wrote some pure garbage, based on assuming the previous selection was part of the current question. I'm using a 2019 MBP with 32GB RAM and it's too slow to add any value so far... for me at least.
@red_onex--x808 4 months ago
Awesome info.
@alinciocan5358 4 months ago
Does it slow down my laptop if I run it locally? Would I be better off running Haiku in the cloud? What would you recommend? I'm just getting into code.
@allxallthetime 4 months ago
I have 8 GB of VRAM, and when autocomplete is on for the Cody AI copilot, my laptop fans turn on full blast. I have 64 GB of RAM so it doesn't slow my PC down, but if it was running on your CPU and not your GPU it might slow your computer down. I don't think it will slow your computer down if you have enough VRAM or a ton of RAM, but it could, depending on your computer's specs. There is also an extension called "Groqopilot" in VSCode that requires you to supply it with a Groq API key, and when you do, it will create code for you lightning fast with llama3 70b, which is of course a better model than llama3 8b. It doesn't autocomplete, but it behaves very much like the tutorial we just watched.
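The VRAM point above can be roughed out with simple arithmetic: a model's weights need about (parameter count × bits per weight ÷ 8) bytes, plus overhead for the KV cache and runtime. A back-of-envelope sketch (the function name is mine, and real usage runs higher than the weights alone):

```python
def approx_weights_gib(params_billion: float, bits_per_weight: int) -> float:
    """Approximate GiB needed just to hold the model weights."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

# llama3:8b at the common 4-bit quantization: roughly 3.7 GiB of weights,
# which is why it fits in 8 GB of VRAM with room left for the KV cache.
print(round(approx_weights_gib(8, 4), 1))   # 3.7
# llama3:70b at 4-bit: roughly 32.6 GiB, out of reach for most single GPUs.
print(round(approx_weights_gib(70, 4), 1))  # 32.6
```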
@m12652 4 months ago
Does anyone else get the feeling that the way AIs answer questions is based on the old Microsoft "Clippy" assistant... annoyingly eager, and can't answer anything much without wrapping it in a paragraph or so of irrelevance? Very annoying to get 6 or 7 line answers where the only relevant bits are a number or a few words.
@Fonzleberry 4 months ago
If you're using ChatGPT you can change that in settings. I think in things like Ollama you can also change your settings so that it gets straight to the point.
@m12652 4 months ago
@@Fonzleberry I know, thanks... just haven't had much luck though lol. At one point I got fed up and added an instruction to "only answer boolean questions with a yes or a no"; I had to restart the model (bakllava) to get it to start answering properly again, as it answered all questions with "yes" or "no". I don't get why the default mode is to bury all answers in information that wasn't requested. I guess someone redefined the word "conversational". Can't even ask what's 2+2 without an explanation lol.
@Fonzleberry 4 months ago
@@m12652 It will improve with time and use cases. A model fine-tuned with Meta's Messenger/WhatsApp data would have a very different feel.
@ErfanKarimi-ep7ie 3 months ago
Guys, I installed it according to the vid but I can't run the AI, and I saw somewhere that I need to put it in PATH, but I don't know where the files are installed.
@SelfImprovementJourney92 4 months ago
Can I use it to write any code? I am a beginner, I don't know anything about coding, just starting from zero.
@MervinPraison 4 months ago
Yes, you can write in most popular programming languages.
@konstantinrebrov675 4 months ago
You would need to know at least the basics of coding, and how an application is designed and structured. This writes the code for you but if you cannot read the code or at least understand what it's doing at a high level, then it's too early for you. It gives you 2/3 of the finished product. You just need to know how to integrate that code into your application. You need to know how to create an application, what are the different parts of an application, how to deploy and run an application.
@sillybilly346 4 months ago
It only gives the option for codellama and not llama instruct, please help.
@AlexMelemenidis 4 months ago
I have the same issue. In the CodeGPT menu I only see the options "llama3:8b" and "llama3:70b", but not "llama3:latest" or "llama3:instruct", which I have available (when I go to a command line and do `ollama list`). When I select llama3:8b and enter a prompt, nothing happens. When I choose another model I have installed, like "mistral", it works just fine...
@AlexMelemenidis 4 months ago
Ah okay, so it seems to be the name, and CodeGPT has a set list of compatible model names? I did another `ollama pull llama3:8b` and now it works.
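The exchange above suggests CodeGPT matches models by exact tag: a model pulled as plain `llama3` lists as `llama3:latest` and won't match the menu's `llama3:8b` entry. The fix, as a sketch:

```shell
ollama list            # shows the tags you actually have, e.g. llama3:latest
ollama pull llama3:8b  # pull under the exact tag the CodeGPT menu expects
```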
@sillybilly346 4 months ago
@@AlexMelemenidis Yes, same here, thanks.
@MosheRecanati 4 months ago
Any option to use it with the IntelliJ IDE?
@haricharanvalleru4411 4 months ago
Very helpful tutorial.
@dorrakallel5303 4 months ago
Thank you so much for this video. Is it open source, please? Can we find the weight files and use them?
@ragtop63 21 days ago
Yes, it's released under the MIT License. It's open-source and free to use.
@abhijeetvaidya1638 3 months ago
Why not use codellama?
@harikantipudi8668 4 months ago
Latency is pretty bad when I'm using llama3:70b in VSCode with CodeGPT. I am on Windows. I guess it's the underlying machine. Anything that can be done here?
@ragtop63 21 days ago
Get a better GPU, preferably something with a lot of VRAM. If you're attempting to do this on a laptop, like many others here are, you're setting yourself up for failure. If you're serious about running large language models, don't run them on a laptop.
@srinivasyadav7448 4 months ago
Does it work for React Native code?
@programmertelo 3 months ago
Amazing.
@mafaromapiye539 4 months ago
AI technologies are making things easier, as they boost one's vast human general intelligence capabilities...
@amitkumarsingh4489 4 months ago
Could not see that screen @ 2:17 in my VSCode.
@MervinPraison 4 months ago
Click the settings icon; it was shown just before that.
@amitkumarsingh4489 4 months ago
@@MervinPraison Solved.
@Mr76Pontiac 4 months ago
I'm really not impressed with Llama:8B. I decided to skip Python and go to Pascal. I asked it to create a tic-tac-toe game, and have had nothing but problems with it. It CONSTANTLY forgets that Pascal requires declarations and forgets to include the variable definitions, especially the loop variables. When I ask it to revisit, this last time it decided to rewrite the function to draw the board with a console.log instead of a writeln. I mean, it rewrote the WHOLE function to be completely useless. I tried running the 70b, but the engine just kept prioritizing my GTX970 instead of my RTX3070. The documentation on the site, as well as the GitHub repo, just doesn't explain well enough where to put the weights or where the engine should calculate. I could pull the 970 out, but, meh.
@JohnDoe-ie3ll 3 months ago
Why are you guys using third-party plugins which have limits and then claiming it's free? Would be nice to see one which doesn't require that.
@beratyilmaz7951 4 months ago
Try the Codeium extension.
@MeinDeutschkurs 4 months ago
I can just see: "Something went wrong, try again."
@krishnak3532 4 months ago
If I run llama3 locally, will it require a GPU to see faster performance? @mervin
@ragtop63 21 days ago
Yes. All language models see vast performance improvements when using a supported GPU.
@iukeay 4 months ago
This would be amazing if the code for a workspace was stored in a vector store.