Fahd Mirza
Learn AI, Cloud, DevOps and Databases. Become a Better AI Cloud Engineer.
Dallah - Multimodal LLM for Arabic
6:15
4 hours ago
DreamCar - 3D Car Reconstruction
5:59
9 hours ago
Comments
@SafaaMohammed-tu3fp 37 minutes ago
Hello brother. Is there an open-source video and image enhancer GitHub project that I can run on Google Colab? Same question for text-to-speech and voice cloning, other than RVC and OpenAI, that can rival ElevenLabs quality. I would be really grateful.
@SafaaMohammed-tu3fp 35 minutes ago
I tried (2x video) but I ran into issues and errors when running it on Google Colab; the issues seem to come from the publisher.
@talonfive7938 38 minutes ago
Hi, where can I find the code for show_mask and show_points? Thank you in advance.
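These helper functions are not part of the SAM library itself; they usually come from Meta's Segment Anything demo notebook. A common version looks like the sketch below (assuming the video reused those same helpers):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line in a notebook
import matplotlib.pyplot as plt

def show_mask(mask, ax, random_color=False):
    """Overlay a boolean segmentation mask on `ax` as a translucent RGBA image."""
    if random_color:
        color = np.concatenate([np.random.random(3), [0.6]])
    else:
        color = np.array([30 / 255, 144 / 255, 255 / 255, 0.6])
    h, w = mask.shape[-2:]
    ax.imshow(mask.reshape(h, w, 1) * color.reshape(1, 1, -1))

def show_points(coords, labels, ax, marker_size=375):
    """Plot positive point prompts (label 1) in green, negative (label 0) in red."""
    pos, neg = coords[labels == 1], coords[labels == 0]
    ax.scatter(pos[:, 0], pos[:, 1], color="green", marker="*",
               s=marker_size, edgecolor="white", linewidth=1.25)
    ax.scatter(neg[:, 0], neg[:, 1], color="red", marker="*",
               s=marker_size, edgecolor="white", linewidth=1.25)
```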
@fahdmirza 1 hour ago
🔥Install SearXNG with Perplexica and Ollama Locally for AI Search Engine ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-LjTIy0FEkAQ.htmlsi=DBfI1i0JxSsOsIgd
@shivaharibargaje8374 5 hours ago
Nice Explanations...
@sauxybanana2332 9 hours ago
Tool use is a crutch until LLMs are trained to process logic, and AlphaProof seems to crack the code with a silver Olympiad result. For the time being, this is a good crutch.
@user-gr8on2pc5y 9 hours ago
Hi Fahd Mirza, can you build a full-stack eCommerce website using Claude 3.5 Sonnet with Aider?
@MohanishPrerna 13 hours ago
What do we need to provide as the token? Since it uses the OpenAI library, does it require an OpenAI API key? Just FYI, I am using Windows.
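On the token question: when the openai library is used only as a client for a local OpenAI-compatible server (Ollama, LM Studio and similar ignore the key), a placeholder string is usually enough. The helper below is a hypothetical sketch; the two environment variable names are the ones the official openai-python client reads, and the default base URL shown is Ollama's:

```python
import os

def resolve_openai_config(env=os.environ):
    # Hypothetical helper: many local servers expose an OpenAI-compatible
    # endpoint and ignore api_key, so a placeholder satisfies the client
    # library. On Windows you can set a real key persistently with
    #   setx OPENAI_API_KEY "sk-..."
    return {
        "api_key": env.get("OPENAI_API_KEY", "not-needed"),
        "base_url": env.get("OPENAI_BASE_URL", "http://localhost:11434/v1"),
    }
```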
@anaskhan-lz2hk 13 hours ago
If it is not open source then we are not excited
@informationtech-musa 16 hours ago
Sir, kindly speak more loudly.
@camelion14 19 hours ago
Thanks Fahd for this great video. I've installed all the packages and I can run the program without errors (after several fights with library conflicts), but it takes too much time to print the response. Why? Is it because I'm using a CPU? I don't have an NVIDIA card in my laptop. Is there a way to make it faster?
@deathdefier45 19 hours ago
Hey, thank you for the amazing tutorial. What is your PC config? How have you managed to load the 70B-parameter model locally? Also, will an M3 Mac Studio with 36 GB of unified memory be able to load the model?
@clarencetexada8869 20 hours ago
When I get to pip install -r requirements.txt, it doesn't work for me. Everything before that worked.
@PushpendraKumar-it4wf 1 day ago
Getting an error while uploading a PDF or other file... any remedy?
@testales 1 day ago
Are you sure that it used your functions? How can the function calls be verified?
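One way to verify tool use, rather than trusting the model's prose, is to inspect the structured part of the response. Assuming an OpenAI-style chat-completion schema (an assumption, since the video's exact stack isn't stated here), a tool invocation shows up as a `tool_calls` list on the assistant message:

```python
def extract_tool_calls(message):
    # If the model really invoked a function, an OpenAI-style assistant
    # message carries a structured `tool_calls` list; plain text answers
    # don't. Returns (function name, raw JSON arguments) pairs.
    calls = message.get("tool_calls") or []
    return [(c["function"]["name"], c["function"]["arguments"]) for c in calls]
```

An empty result for a question that should have triggered a tool is a strong sign the model answered from its weights instead.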
@teekamsinghgurjar7992 1 day ago
Could you create a video tutorial on how to fine-tune a large language model to memorize specific data?
@themax2go 1 day ago
Curious... why use a Jupyter notebook instead of just Python?
@fahdmirza 1 day ago
Haha. Because everyone was saying in the comments to use a notebook instead of Python. :)
@themax2go 1 day ago
@fahdmirza lol ok... in all seriousness though, have you noticed any benefit over "the traditional way"?
@phstella 1 day ago
How much VRAM does it end up using with the Prompt Guard and Llama Guard models? I can load the Llama 3.1 model, but when I try to run the Mesop app it uses too much memory for my system. Do you know if there is a way to disable the guards so it doesn't load the other models?
@Abid_Ali_Fareedi 1 day ago
Please can you send me the link to the AI Model Review tool!
@fahdmirza 1 day ago
Sure, it's in the video description.
@AndyBerman 1 day ago
nice find!
@fahdmirza 1 day ago
Thanks!
@4ohm531 1 day ago
chuddah
@aynray 1 day ago
The link in the video description is for the Gemma blog entry from Feb 2024, not the Llama 3 blog entry; can you please also link the Llama 3 RAG blog entry? Thanks for doing the tutorial and all the other interesting videos.
@fahdmirza 1 day ago
Let me check.
@ForTheEraOfLove 1 day ago
Having David Attenborough as the default is a real treat. I appreciate you for co-creating a magnificent future 🙏 P.S. this is confirmation that it works.
@fahdmirza 1 day ago
Glad you enjoy it!
@themax2go 1 day ago
Does this method/tool work with macOS / Metal?
@fahdmirza 1 day ago
I would have to check.
@mikew2883 1 day ago
Very cool tool! Are the available test prompts just a few hardcoded ones, or is the repository available to download so you can add and configure your own prompts?
@victordans5663 1 day ago
Where can I find the API reference documentation for the model?
@user-pr6nm2di6d 1 day ago
I must say you read our minds. We think of something, and I see a notification for it from your channel, sir. Huge shout-out to you, sir 🎉🎉🎉
@anothercoder452 1 day ago
Hi Fahd, thanks for always bringing us the latest and greatest in AI. The Google is not public; can you please make it public?
@gomgom330 1 day ago
Bro, I just discovered Transformers and Hugging Face. Now I wonder: when we use a model or pipeline, we use their model, right? Does it require internet, or does it work like an API? If so, could we use the model even on low-end hardware (like running an object-detection task with a huge model on a Raspberry Pi), because it runs over the internet and not locally?
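To the question above: a transformers pipeline is not an API call. The weights are downloaded once into a local cache and all inference then runs on your own CPU/GPU, so your hardware still limits the model size (a huge model will not fit on a Raspberry Pi). A hypothetical helper showing where the cached files live, under the usual Hugging Face conventions:

```python
import os
from pathlib import Path

def hf_cache_dir(env=os.environ):
    # Hypothetical helper: transformers/huggingface_hub download weights
    # once into this cache and reuse them offline afterwards; inference
    # itself runs locally, not over the network. HF_HOME (default
    # ~/.cache/huggingface) controls where the files land.
    return Path(env.get("HF_HOME", str(Path.home() / ".cache" / "huggingface"))) / "hub"
```

Internet is needed only for the first download of each model; after that the same pipeline call works offline.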
@TheSinbio 1 day ago
Will it work on Windows 10? I was trying to import a customizable model from Hugging Face but I can't create a "Modelfile" in Windows qwq
@themax2go 1 day ago
WSL.
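Besides WSL: an Ollama Modelfile is just a plain-text file with no extension, which Windows GUI editors make awkward because they tend to append ".txt". Writing it programmatically sidesteps that; the file and model names below are hypothetical placeholders:

```python
from pathlib import Path

# Write a minimal Ollama Modelfile (plain text, no extension).
# The GGUF path, parameter value, and system prompt are illustrative.
Path("Modelfile").write_text(
    "FROM ./my-model.gguf\n"
    "PARAMETER temperature 0.7\n"
    'SYSTEM "You are a helpful assistant."\n'
)
# Then, from the same directory in a terminal:
#   ollama create my-model -f Modelfile
#   ollama run my-model
```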
@lisag.9863 1 day ago
Thank you for the great video! I got an error that says "No text files found in input" even though my input clearly does have a *.txt file. Do you know what the problem could be?
@PushpendraKumar-it4wf 1 day ago
Does it support RAG architecture?
@Manash-t9q 1 day ago
Insightful video. Can you create a video about fine-tuning the Picard model for SQL generation?
@Kashif.Rehman. 1 day ago
Great work, keep it up Fahad bhai
@Manash-t9q 1 day ago
Insightful tutorial. Can you create a tutorial video on fine-tuning the Picard model?
@CarHound 2 days ago
What is the cost like to run this?
@gosolo1000 2 days ago
I have no problem installing it. It runs easily, but I would like a shortcut to start it quickly so I don't have to install it every time I need to use it.
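For the shortcut request: once the dependencies are installed into a virtual environment, a small launcher script can reuse that environment instead of reinstalling. This is a generic sketch with hypothetical paths (`.venv`, `app.py`); on Windows the equivalent batch file would call `.venv\Scripts\activate` instead:

```python
from pathlib import Path

# Generate a one-line launcher that reuses the existing virtual
# environment. Run it later with `sh run.sh` (or double-click an
# equivalent run.bat on Windows).
launcher = Path("run.sh")
launcher.write_text(
    "#!/bin/sh\n"
    "# Reuse the existing virtual environment instead of reinstalling.\n"
    ". .venv/bin/activate\n"
    "python app.py\n"
)
launcher.chmod(0o755)  # make it executable on Unix-like systems
```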
@hurairahsartandcraft4515 2 days ago
It would be better if you provided the commands.
@user-pr6nm2di6d 2 days ago
This opened the doors to multiple pathways. Thanks a lot, sir. You are amazing ❤❤❤❤❤. Unfortunately I can comment and like only once per video.
@fahdmirza 2 days ago
So nice of you.
@user-pr6nm2di6d 2 days ago
@fahdmirza Tried to pull sqlcoder 7b-2 from Defog with your code changes. Works like a charm. Thanks again, sir... you made my day.
@Cosmos_comedy 2 days ago
Can you scrape multiple websites at the same time? At large scale?
@fahdmirza 2 days ago
Yes you can, but you would have to customize it a lot.
@jeremiahgrove9201 2 days ago
What a waste of time. Why did you not just show how to access it publicly instead of rambling about how to use LM Studio? There are videos around showing how to use the software. I came to watch the video based on your title.
@fahdmirza 2 days ago
Did you really watch the video in full? It did show what the title says. Anyway, thanks for dropping by.
@TheEhellru 2 days ago
The unhelpful assistant was very unhelpful! Shrugging and remaining silent instead of answering the question. Impressively true to form. If I had to guess, I'd say it wasn't the manipulation of the hyperparameters but rather the removal of the word "unhelpful" from the prompt that improved the output here.
@fahdmirza 2 days ago
Good point, thanks.
@RoyalMamba. 2 days ago
How about using it for speculative decoding?
@fahdmirza 2 days ago
Yes, that's an idea.
@ROKKor-hs8tg 2 days ago
Thank you. If you allow, make a video about a browser add-on specifically for Ollama to create a chat with PDFs in Open WebUI, not running on Windows.
@fahdmirza 2 days ago
Noted.
@ROKKor-hs8tg 1 day ago
@fahdmirza Where?
@user-wr4yl7tx3w 2 days ago
How is this different from LangGraph, other than that it only uses Llama 3.1?
@fahdmirza 2 days ago
I think it tends to be more generic, but good point there.
@midipem 2 days ago
Nice tutorial, thanks! Any idea how to change the LM Studio background color? Not everyone likes the dark theme.
@fahdmirza 2 days ago
Let me check.
@franklyvulgar1 2 days ago
Thanks. A next interesting tutorial would be using Ollama on SageMaker to expose calls to the model to an EC2 instance, say with a Python web app that end users use: the EC2 instance serves the app, and the SageMaker-powered machine runs your machine-learning models on its GPUs.
@SWV_AI9 2 days ago
Cool model. Instead of trying your method, I saw the same models in LM Studio. I was able to successfully download several different quantized versions, but when I tried to load them in LM Studio, I got the following error message: "llama.cpp error: 'done_getting_tensors: wrong number of tensors; expected 292, got 291'". I verified that I have the latest version and that other models are working fine. If you have any thoughts, please let me know. Thank you so much for your videos. I tried the other quantized version listed on the Hugging Face page, which did work but was definitely censored per your example questions ("I can't help you with that").
@fahdmirza 2 days ago
Good insight.
@muhammaddevv 2 days ago
@fahdmirza The official Meta page actually recommends downloading directly to a local PC without Ollama or Hugging Face. I downloaded the 8B and there is no clear way to run it; the only option I've seen working is Hugging Face. What do you suggest? I don't want the 15 GB 8B download to go to waste. Can I still run it locally?
@abdulrahimannaufal5678 14 hours ago
llama.cpp has already fixed it; just update your LM Studio to get the fix, and then it should run without any problem. Ollama still throws the same error to date, expected to be fixed in the coming release. Fingers crossed.
@khushigupta5798 2 days ago
Hey, please tell me how to use this on my website as a chatbot. Is it possible?
@themax2go 2 days ago
No silly, it's a code assistant to be run in VS Code... read the title ffs.
@themax2go 2 days ago
Not to mention it runs locally on your computer. Unless you host your website on your own machine, you'd have to install Ollama and a chat interface on the web server that hosts your site, or use a paid online chat service.
@fahdmirza 2 days ago
There are other options for creating a chatbot around it; please search the channel for related videos, thanks.
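For the website-chatbot question above: one common pattern is a backend route that proxies requests to a local Ollama server's /api/generate REST endpoint (this requires Ollama running on, or reachable from, the web server). A minimal stdlib sketch; the model name and host are assumptions:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(prompt, model="llama3.1"):
    # Build the HTTP request a website backend would send to a local
    # Ollama server; stream=False returns one complete JSON object.
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask_ollama(prompt, model="llama3.1"):
    # Send the request and return the generated text. A web framework
    # route (Flask, FastAPI, etc.) could wrap this behind its own URL.
    with urllib.request.urlopen(build_request(prompt, model)) as resp:
        return json.loads(resp.read())["response"]
```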
@darkmatter9583 2 days ago
Please, if you sell courses I would buy them, or a Discord membership to assist with code. The 70B Ollama model has huge requirements, and even more so the 405B model locally. You are a DevOps engineer; I want to learn that way of running the models and modifying them. Thanks.
@fahdmirza 2 days ago
Maybe one day!
@vobbilisettyveera2973 2 days ago
Hi, very nice video!! Can you help me with an issue? I am using "lmdeploy serve api_server internlm/internlm-7b" to launch my FastAPI server for a VL model. It is working fine, but the VL endpoint of the API keeps accumulating memory, which eventually crashes my server. How can we stop the memory from accumulating?