Data Science In Everyday Life
An Intro To Retrieval Augmented Generation
1:02
6 months ago
DSPy - Does It Live Up To The Hype?
16:30
7 months ago
Self-RAG Could Revolutionize Industrial LLMs!
12:31
10 months ago
Learn Hooke's law | Springs | Physics
2:02
4 years ago
Ideal Gas Law
4:13
4 years ago
Hooke's law lab - springs in series
2:00
4 years ago
Hooke's law lab
2:23
4 years ago
Lengthscales in everyday phenomena
14:32
4 years ago
Science in Everyday Materials
2:31
4 years ago
Comments
@chrisder1814 8 days ago
Thanks
@sahil5124 1 month ago
Woah, the quality of content 👌 Liked and subscribed
@andrew.derevo 2 months ago
What happens under the hood of DSPy? How many tokens will it use during the fine-tuning process? Thanks
@malikrumi1206 2 months ago
21 open tabs! I'm *not* the only one! 🙃
@georgekuttysebastian1412 2 months ago
Great video. I'm pretty unfamiliar with the cloud; I just want to make sure that I can get an LLM to service multiple endpoints for multiple users. If so, how do I find out the number of users that can be serviced?
@mrsilver8151 3 months ago
Hi sir, thanks for your great work. My question is: is this considered generative Q&A or extractive Q&A?
@davidthiwa1073 4 months ago
Yes, what about costs? Any easier platforms you have tried?
@thankqwerty 4 months ago
What's the algorithm, though?
@MoThoughNo 4 months ago
Why is it that everyone skips the most important part of the AWS service for automation, which is how to create the Lambda code? Is there a resource on how to write the Lambda context/code?
@HarmonySolo 4 months ago
I would rather use prompt engineering than DSPy. The beauty of LLMs is generating content/code with natural language; now DSPy asks people to use a programming language again. There is also a steep learning curve for DSPy.
@ryanscott642 5 months ago
Any way you can make the font bigger? :)
@kiranhipparagi543 5 months ago
If I want to fine-tune on RFP documents, what is the best way? Please help.
@aniruddhakunte6358 6 months ago
What if I have a custom model (for which I also have the model artifacts) that I want to deploy to Azure ML?
@gabrielfraga2303 2 months ago
Any solutions?
@felipeekeziarosa4270 6 months ago
I'm trying QA in Portuguese, but the models I found on HF return only short answers. Would you suggest any model?
@Geekraver 6 months ago
In my experience the challenge is the metric function. Examples always seem to use exact match; getting a qualitative metric working is non-trivial. Try using DSPy to optimize prompting for summarizing video transcripts, for example; you'll probably spend more time trying to get the metric working than you would have just coming up with a decent prompt. You also need a metric function that will discriminate between prompts of different quality, which is also not as trivial as it might seem.
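A minimal sketch of a softer metric than exact match, assuming the standard DSPy metric signature (example, pred, trace=None) and an answer field on both objects; the token-overlap F1 here is only a stand-in for a genuinely qualitative metric:

def f1_metric(example, pred, trace=None):
    # Token-overlap F1 between the gold answer and the prediction.
    gold = set(example.answer.lower().split())
    guess = set(pred.answer.lower().split())
    overlap = len(gold & guess)
    if overlap == 0:
        return 0.0
    precision = overlap / len(guess)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)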
@mysticaltech 7 months ago
Bigger text would have been awesome for the notebook. Thanks for the info.
@vbywrde 7 months ago
Yes, this was useful. Thank you. But after watching the video I'm still not sure if DSPy lives up to the hype.

What I would want to see is a series of benchmark tests of "Before Teleprompter" and "After Teleprompter" results on a determinative set of tasks that cover a range of concerns, such as math questions, reasoning questions, and code generation questions, categorized into Easy, Medium, and Difficult. This should be done with a series of models, starting with, of course, GPT4, but including Claude, Groq, Gemini, and a set of HuggingFace open-source models such as DeepseekCoder-Instruct-33B, laser-dolphin-mixtral-2x7b, etc. I would want to see this done with the variety of DSPy compile options, such as OneShot, FewShot, using ReACT, using LLM-as-Judge, etc., where the concerns are appropriate. In other words, a formal set of tests and benchmarks based on the various, but not infinite, configuration options for each set of concerns. This would give us much better information and be truly valuable to those who are embarking on their DSPy journey.

Right now, it is very unclear whether compiling with the teleprompter actually provides more accurate results, and under what circumstances (configurations). I have seen more than one demo, and in some cases the teleprompter actually produced worse results, and the comment was "well, sometimes it works better than others". My proposed information set, laid out in a coherent format, would be tremendously useful to the community and would go a long way towards answering the question you posed: does DSPy live up to the hype? Because we don't have this information, the jury is still out; your video poses the right question but doesn't quite answer it, tbh. The benchmarking tests I am proposing would, and along with a thoughtful discussion of the discoveries they would be tremendously useful. That said, I did learn a few useful things here, and so thanks again!
@bluebabboon 7 months ago
OK, I did not understand one thing. Why did you include ANSWERNOTFOUND in the context? This seems to defeat the whole purpose of getting a correct answer. How would I know if the context is relevant to the question before the question is asked? Is it not similar to data leakage? The true test would be to just remove ANSWERNOTFOUND from the context, because we don't know what questions might be asked; or we could even create negative examples like we do in word2vec and just use them to train the ANSWERNOTFOUND case. Let me know if I make sense.
@scienceineverydaylife3596 7 months ago
You can think of this as similar to supplying labeled data for training ML models. In this example you are training a prompt for extracting answers in a particular format (ANSWERNOTFOUND is the label when no answer can be extracted from the other parts of the context).
@bluebabboon 7 months ago
@scienceineverydaylife3596 So in the test set, I am assuming, there won't be ANSWERNOTFOUND in the context, right?
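One way to read the reply above, as a minimal sketch (the dspy.Example field names here are illustrative, not the video's exact code): the sentinel is the label, so at test time the context itself need not contain it.

import dspy

# Hypothetical labeled examples for the answer-extraction prompt: when the
# context does not contain the answer, the label itself is ANSWERNOTFOUND.
train = [
    dspy.Example(context="Paris is the capital of France.",
                 question="What is the capital of France?",
                 answer="Paris").with_inputs("context", "question"),
    dspy.Example(context="Paris is the capital of France.",
                 question="Who wrote Hamlet?",
                 answer="ANSWERNOTFOUND").with_inputs("context", "question"),
]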
@thecryptobeard 7 months ago
Is there an easier way to do this with Ollama?
@MohammadHamzahShahkhan 7 months ago
Good review! My issue with these frameworks is that their limitations become more apparent with large and more complex use cases: the boilerplate code ends up being technical debt that needs circumventing, and the next iteration of the GPTs or Mistrals intrinsically solves some of the previous limitations that the models couldn't solve for.
@gursehajbirsingh8956 8 months ago
You haven't explained the code. What is the preprocessing code doing, and how does it work? It's a request to make a new video with a proper explanation of the code.
@wilfredomartel7781 8 months ago
🎉
@nonameyet5069 8 months ago
Haha, I am the 1000th subscriber :)
@scienceineverydaylife3596 8 months ago
Yay, thank you! Here's to the next 1000🥂 :)
@DescolaDev 9 months ago
Extremely helpful! Thanks
@scienceineverydaylife3596 8 months ago
Glad it was helpful!
@buksa7257 9 months ago
I'm having a bit of trouble understanding the following: I believe you're saying the Lambda function is calling the endpoint of SageMaker (the place where we stored the LLM). But then who calls the Lambda function? When is that function triggered? Does it need another endpoint?
@scienceineverydaylife3596 8 months ago
The Lambda function is called from the API Gateway (whenever a user invokes the API).
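As a minimal sketch of that wiring, assuming a SageMaker endpoint named llm-endpoint (a placeholder) and a JSON request body with a prompt field:

import json
import boto3

runtime = boto3.client("sagemaker-runtime")

def lambda_handler(event, context):
    # API Gateway delivers the user's request in the event body.
    body = json.loads(event["body"])
    # Forward the prompt to the SageMaker endpoint hosting the LLM.
    response = runtime.invoke_endpoint(
        EndpointName="llm-endpoint",  # placeholder: use your deployed endpoint's name
        ContentType="application/json",
        Body=json.dumps({"inputs": body["prompt"]}),
    )
    result = json.loads(response["Body"].read())
    return {"statusCode": 200, "body": json.dumps(result)}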
@MahadevanIyer 10 months ago
Can you run a 7B model on a normal i5, 16 GB DDR5 laptop without a GPU?
@scienceineverydaylife3596 8 months ago
Try LMStudio, combined with quantization.
@VivekHaldar 10 months ago
Great summary, thanks!
@scienceineverydaylife3596 8 months ago
Glad it was helpful!
@Kaushikshresth 11 months ago
Hey, I want to contact you. How can I?
@scienceineverydaylife3596 8 months ago
www.linkedin.com/in/skanda-vivek-01619311b/
@nuketube2650 11 months ago
Great explanation and prop choice 👍
@DihelsonMendonca 1 year ago
Your microphone is terrible, but the video is great. Thanks. 🎉🎉❤
@ShamimAhmed-by9ey 1 year ago
The text is so small; I'm facing difficulties.
@scienceineverydaylife3596 1 year ago
Hi, check out the GitLab/blog for the detailed code.
@amortalbeing 1 year ago
Thanks a lot for sharing this with us.
@amortalbeing 1 year ago
What are the requirements for the 30B and larger models when quantized? How much VRAM or system RAM is needed?
@scienceineverydaylife3596 1 year ago
It depends on the type of quantization. A rule of thumb: at 8-bit quantization the memory in GB roughly matches the parameter count, i.e. a 30B-parameter model at 8 bits would need about 30 GB of RAM (preferably GPU memory).
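That rule of thumb as a quick sketch (weights only; activations and the KV cache add overhead on top, so treat these as lower bounds):

def approx_weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    # Memory (GB) ~ parameters (billions) * bits per parameter / 8 bits per byte.
    return params_billions * bits_per_param / 8

print(approx_weight_memory_gb(30, 8))  # 30.0 GB: a 30B model at 8-bit
print(approx_weight_memory_gb(30, 4))  # 15.0 GB: the same model at 4-bit
print(approx_weight_memory_gb(7, 4))   #  3.5 GB: a 7B model at 4-bit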
@Derick99 11 months ago
@scienceineverydaylife3596 What would something like 128 GB of RAM but an 8 GB GPU (a 3070) do compared to having multiple graphics cards?
@scienceineverydaylife3596 10 months ago
@Derick99 You would probably need to figure out whether LMStudio can communicate with multiple GPUs. I know packages like HuggingFace Accelerate can handle multiple-GPU configurations quite seamlessly.
@RuturajPatki 1 year ago
Please share your laptop specifications. Mine runs so slowly...
@SAVONASOTTERRANEASEGRETA 1 year ago
Sorry, I did not understand which file needs to be modified for the external server, and where I should look in the LM Studio folder. Thank you :-)
@prestonmccauley43 1 year ago
Amazon SageMaker is pretty complex and the UI is horrible; any other ways to deploy? The compute model tried to charge me like 1000 dollars for the free usage, because it spins up like 5 instances, instances that don't show up in the console directly; you have to open the separate SageMaker instance viewer.
@scienceineverydaylife3596 1 year ago
Yes, you can deploy quantized models locally using desktop apps (ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-BPomfQYi9js.html&ab_channel=DataScienceInEverydayLife) or look at other third-party solutions like Lambda Labs.
@resourceserp 1 year ago
What's the memory requirement for Windows 10? Can it run in CPU mode?
@scienceineverydaylife3596 8 months ago
Yes, CPU is fine, although inference is slower than on a GPU.
@pablogonzalezrobles803 1 year ago
The same concept in SageMaker, please.
@vitalis 1 year ago
What's the difference between this and Oobabooga?
@ClubMusicLive 1 year ago
I would also like to know.
@scienceineverydaylife3596 8 months ago
I haven't played around with Oobabooga, but it looks like it has similar functionality (although I didn't see a .exe installation of Oobabooga). In my experience with LM Studio vs. other similar offerings, LM Studio was the best by far: book.premai.io/state-of-open-source-ai/desktop-apps/
@shiccup 1 year ago
How do I change the ChatGPT URL with this?
@CsabaTothMr 1 year ago
I'm deploying a LLaVA model (image + text); how can I invoke that?
@RamakrishnanGuru 1 year ago
It would really help if you could provide an approximate cost of trying out this tutorial on AWS. Is there any info someone can share?
@giuliobavaresco1183 2 months ago
+1
@trobinsun9851 1 year ago
Thanks! Do you know how to connect it with your own data?
@Derick99 11 months ago
PrivateGPT
@shiccup 1 year ago
Great information! Glad you are uploading this! But I would really appreciate it if you upgraded your mic :3
@alx8439 1 year ago
LM Studio is proprietary and should not be used when there are so many great open-source alternatives.
@SO-vq7qd 1 year ago
If you could show how to deploy a fine-tuned HF model and monetize it, you'll be rich.
@tufeeapp9393 1 year ago
When I tried to download a model and then run it, it wouldn't load the model in the Studio. Can you please help me with that?
@ayushrajjaiswal2646 1 year ago
How do I use the LM Studio server as a drop-in replacement for the OpenAI API? Please make a video on this ASAP.
@nosult3220 1 year ago
# Uses the openai<1.0 SDK; point api_base at the local LM Studio server.
import openai

openai.api_key = "your-actual-api-key"        # LM Studio ignores the key, but the client needs a value
openai.api_base = "http://localhost:1234/v1"  # LM Studio's local OpenAI-compatible endpoint

mistral_response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # LM Studio serves whichever model is loaded, regardless of this name
    messages=[
        {"role": "user", "content": PROMPT_TEMPLATE},  # PROMPT_TEMPLATE: your prompt string
    ],
    stop=["[/INST]"],
    temperature=0.75,
    max_tokens=-1,  # -1 generates until the model stops
    stream=False,
)
Here ya go
@WillyKusimba 1 year ago
Is the SaaS 100% free?
@scienceineverydaylife3596 8 months ago
LMStudio is free!
@sanjanaepari9173 1 year ago
Sir, do the same thing with a CSV/Excel file.