
How to run an LLM locally | Run Mistral 7B on a local machine | Generate code using an LLM

Joy Maitra
515 subscribers
4.3K views

Published: 13 Sep 2024

Comments: 14
@akilsghodi9057 · 7 months ago
Great job, bro. I was assigned this task and was struggling a lot, but your tutorial really helped me.
@joymaitra5414 · 7 months ago
Happy to hear that it was helpful.
@albrechtfeilcke2000 · 3 months ago
Hey, great video; thanks to you I got it working. Is there a way to change the code so that it doesn't download the first model file? I want to download Q4 Mistral, for example, but the code gives me Q2. I'm pretty new to this, sorry if this is a silly question.
@joymaitra5414 · 3 months ago
Thank you for reaching out! Good question. In another video I have shared different ways to load the models; with llama_cpp you can manually download the model of your choice and load it for inference.
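A minimal sketch of how selecting a specific quantization could look with the ctransformers library: the model_file argument picks one GGUF file (for example a Q4 build) instead of the library's default choice. The repo id and filename below are assumptions modeled on TheBloke's Mistral GGUF uploads; check the model page on Hugging Face for the exact names.

from ctransformers import AutoModelForCausalLM

# model_file selects a specific quantization (here Q4_K_M)
# instead of whatever file the library would pick by default.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-Instruct-v0.1-GGUF",           # assumed repo id
    model_file="mistral-7b-instruct-v0.1.Q4_K_M.gguf",  # assumed Q4 filename
    model_type="mistral",
)

print(llm("Write a Python function that reverses a string:"))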
@nasiksami2351 · 7 months ago
Great one! Can you make a tutorial on how we can fine-tune a model on a custom dataset locally and use that fine-tuned model to get domain-specific results locally?
@joymaitra5414 · 7 months ago
I will certainly try that.
@AbhijitRayVideos · 5 months ago
I see you have some good use cases. I've been working on similar projects. How about we have a little chat and exchange notes?
@joymaitra5414 · 5 months ago
Sure, you can reach out by email or LinkedIn.
@vbridgesruiz-phd · 9 months ago
Hi Joy, thank you for this video. What is the advantage of using the ctransformers library versus other libraries available on GitHub, such as OpenLLM? Is it just a matter of personal preference?
@joymaitra5414 · 9 months ago
Hi, thanks for your query. More than preference, I would say it is about understanding the core: OpenLLM is like a wrapper built on top of the basics that this video tried to capture.
@vbridgesruiz-phd · 9 months ago
@joymaitra5414 I prefer simple as well. I will try with just ctransformers to see if that improves my implementation. Right now there are too many wrappers out there, haha.
@wibuyangbaca2113 · 4 months ago
Thank you so much, this explanation is great! It really helped me a lot, but I'm stuck on adding my own GGUF models to my project. When I try to add one, my code doesn't detect it and downloads another version of the model id instead. Can I download the models manually from Hugging Face rather than downloading them from the script? The file the script downloaded isn't even a GGUF file or anything of that type.
@joymaitra5414 · 4 months ago
Yes, that is absolutely possible: you can download the model manually and pass its local file path to LlamaCpp; the same works with CTransformers.
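A minimal sketch of that flow, assuming huggingface_hub for the manual download and the llama-cpp-python package for inference; the repo id and filename are example placeholders to replace with your model of choice.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one specific GGUF file rather than letting a loader pick;
# repo_id and filename are assumed examples, substitute your own.
model_path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
    filename="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
)

# Point llama-cpp-python at the downloaded local file path.
llm = Llama(model_path=model_path, n_ctx=2048)

out = llm("Q: Write a Python one-liner that reverses a string.\nA:", max_tokens=64)
print(out["choices"][0]["text"])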
@wibuyangbaca2113 · 4 months ago
@joymaitra5414 Thank you again for helping me!