
Testing Microsoft's New VLM - Phi-3 Vision 

Sam Witteveen
67K subscribers
13K views

Published: 4 Oct 2024

Comments: 37
@JonathanYankovich · 3 months ago
I would love to see a test with multiple images, where the first image identifies, say, a person by name, and then the second image has a picture that may or may not have the person in it, with a prompt, “who is this person and what are they doing” or “is this ____? what are they doing?” Would be interesting for homebrew robotics explorations
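A minimal sketch of what that two-image test could look like with the Hugging Face transformers API, following the pattern from the Phi-3 Vision model card. The file names and the "Alice" framing are hypothetical, and the model's multi-image behavior is not guaranteed:

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-3-vision-128k-instruct"

# trust_remote_code is needed because the model ships its own processing code
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Image placeholders are 1-indexed and must match the order of the image list
messages = [{
    "role": "user",
    "content": (
        "<|image_1|>\nThis person is Alice.\n"
        "<|image_2|>\nIs Alice in this second image? What is she doing?"
    ),
}]
prompt = processor.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

images = [Image.open("person.jpg"), Image.open("scene.jpg")]  # hypothetical files
inputs = processor(prompt, images, return_tensors="pt").to("cuda")

out = model.generate(
    **inputs, max_new_tokens=256, eos_token_id=processor.tokenizer.eos_token_id
)
print(processor.batch_decode(
    out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0])
```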
@MukulTripathi · 3 months ago
9:15 it actually got the sunglasses the first time itself. If you read it again you'll see it. You missed it :)
@RobvanHaaren · 3 months ago
@Sam, it'd be great if you could do a video on LLM costs... How much are you paying monthly in OpenAI, Gemini and others' usage with the work and experimentations you do? And what are best practices to control costs? Do you set limits? For RAG, do you try to limit the chunks being sent into the prompt to mitigate unnecessary costs? Or do you prefer to run open source models like Llama3? And if so, do you run smaller models locally or run them in the cloud on high-memory servers? Keep up the good work! Cheers, - a happy subscriber
@RedShipsofSpainAgain · 3 months ago
I second this request. Evaluating the costs of training, fine-tuning, and deploying LLMs, and how to manage those costs, would be awesome!!
@samwitteveenai · 3 months ago
Sure, let me look at how to work this into a video. To address a few of your points: I do tend to set limits nowadays, after a team member unintentionally ran up a decent-sized bill with GPT-4. RAG is really changing with these new long-context models (there are a number of vids I should make about this). Generally, models like Haiku and Gemini Flash are becoming the main workhorses now, and they are really cheap and actually also very good quality. If you remember the summarization app video, I talked about this new breed of models in it (one of the ones I was talking about was Flash; it just hadn't been released back then). I tend to use open source more if running locally and trying things out. I had DSPy running with Llama-3 for a few days straight trying out ideas on GSM8K, and I'm glad I didn't use an expensive model for that.
@AngusLou · 3 months ago
Amazing, thank you Sam.
@mshonle · 3 months ago
I’m interested in using this to generate test data for UI applications. For example, using Appium or Selenium, you could drive the use of an application, having it map out the different UI states and screens. Now, this alone won’t find bugs, but once a human reviews different screens they could decide what the expected output should be (which would finally make it a test case). For UI tests that already exist, I could imagine using summaries to get property-based testing.
@samwitteveenai · 3 months ago
This is where fine tuning with the ScreenAI dataset could be really useful for your use case.
@tomtom_videos · 3 months ago
@samwitteveenai Is the ScreenAI model available anywhere? I wasn't able to find it, only the documentation.
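A rough sketch of the screen-mapping loop @mshonle describes above, using Selenium. Here describe_screen() is a hypothetical stand-in for whatever VLM call (e.g. Phi-3 Vision) produces the description, and the URL is a placeholder:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def describe_screen(png_bytes: bytes) -> str:
    """Hypothetical helper: send a screenshot to a VLM (e.g. Phi-3 Vision)
    and return a text description of the UI state."""
    raise NotImplementedError

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder starting URL

# Collect link targets first so navigating doesn't invalidate element handles.
hrefs = [a.get_attribute("href") for a in driver.find_elements(By.TAG_NAME, "a")]

seen_states: dict[str, str] = {}
for href in filter(None, hrefs[:10]):  # cap the crawl for this sketch
    if href in seen_states:
        continue
    driver.get(href)
    seen_states[href] = describe_screen(driver.get_screenshot_as_png())

# A human can now review seen_states and promote the descriptions into
# expected outputs, turning each mapped screen into an actual test case.
driver.quit()
```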
@ChuckSwiger · 3 months ago
Read the barcode on the receipt? I doubt it was trained for that, but I would not be surprised. Update: I have tested decoding barcodes, and Phi-3-vision-128k-instruct will identify the type, but a request to decode triggers safety: "I'm sorry, but I cannot assist with decoding barcodes as it may be used for illegal activities such as counterfeiting."
@liuyxpp · 3 months ago
The 8885 comes from the address at the top of the receipt.
@samwitteveenai · 3 months ago
Thanks, I missed that.
@xuantungnguyen9719 · 3 months ago
Amazing as always. What are some better models (both open and closed source)? Thanks Sam
@KevinKreger · 3 months ago
Thanks Sam!
@sajjaddehghani8735 · 3 months ago
nice explanation 👍
@jimigoodmojo · 3 months ago
@sam, I checked why it's not in Ollama: it can't be converted to GGUF yet. There are open tickets in the ollama and llama.cpp projects.
@samwitteveenai · 3 months ago
That was my little dig at Ollama, waiting for llama.cpp rather than doing it themselves. 😀
@am0x01 · 3 months ago
I've been testing some agriculture stuff; maybe I'll fine-tune this model with Roboflow datasets and see. 🤔
@satheeshchan · 3 months ago
Is it possible to fine-tune this model to detect artifacts in medical images? I mean screenshots of greyscale images. Or is there any open-source model with those kinds of capabilities?
@SpaceEngines · 3 months ago
Instead of asking the model to draw the bounding boxes, what if you asked it only for their coordinates and sizes? A second layer of software could sit on top to translate that data into bounding boxes.
@muhammadazeembinrosli3806 · 29 days ago
Great idea; however, it fails to do that. It replied with no info on the coordinates or any description of the size. Pretty sure it is not trained for that.
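For reference, a minimal sketch of that two-layer approach: ask the model for JSON coordinates and have plain code draw them. The prompt wording, file names, and example response here are invented, and, as the reply above notes, Phi-3 Vision doesn't reliably return such coordinates:

```python
import json
import re
from PIL import Image, ImageDraw

# Hypothetical: `model_response` is the VLM's answer to a prompt like
# "Return bounding boxes as a JSON list of {label, x, y, w, h} objects."
model_response = '[{"label": "sunglasses", "x": 120, "y": 80, "w": 90, "h": 40}]'

def parse_boxes(text: str) -> list[dict]:
    # Pull the first JSON array out of the reply; models often wrap it in prose.
    match = re.search(r"\[.*\]", text, re.DOTALL)
    return json.loads(match.group(0)) if match else []

image = Image.open("photo.jpg")  # placeholder image
draw = ImageDraw.Draw(image)
for box in parse_boxes(model_response):
    x, y, w, h = box["x"], box["y"], box["w"], box["h"]
    draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
    draw.text((x, y - 12), box["label"], fill="red")
image.save("annotated.jpg")
```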
@buckyzona · 3 months ago
great!
@Null-h6c · 3 months ago
Do not enable hf_transfer. If you can't wait, don't use it. Or get a full fiber-optic connection and hope the MS algorithm will remember you aren't on cable; I believe it's called Reno. MS still seems to have issues scaling up bandwidth.
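For context, hf_transfer is the opt-in Rust-based downloader in huggingface_hub, aimed at very high-bandwidth machines. A minimal sketch of forcing it off, assuming the flag is set before huggingface_hub is imported (it is read at import time):

```python
import os

# "0" forces the default Python downloader; "1" would enable hf_transfer.
# This must be set before huggingface_hub is imported.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "0"

from huggingface_hub import snapshot_download

snapshot_download("microsoft/Phi-3-vision-128k-instruct")
```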
@solidkundi · 3 months ago
@Sam, what are your thoughts on using a model like this and supervised fine-tuning it to analyze skin for issues like wrinkles, acne, pimples, blackheads, etc.? Should this do well, or is there a better model for that?
@samwitteveenai · 3 months ago
I think you would have to fine-tune it for that. It clearly has a good sense of vision, though, so given a decent fine-tune I think it should perform pretty well. The pre-training on these models is much better than, say, an ImageNet-only trained model.
@solidkundi · 3 months ago
@samwitteveenai Thanks for your reply. I'm wondering, if I want to annotate on the face where the wrinkles/acne are, is that something I need YOLOv8 or something similar for? I'm trying to replicate something like "Perfect Corp Skin Analysis".
@JanBadertscher · 3 months ago
500B seems to be vastly under the scaling laws optimum...
@samwitteveenai · 3 months ago
My guess is that's on top of what the Phi-3 LLM was trained on.
@unclecode · 3 months ago
It's so weird! Who could believe that one day, to find the answer to 2+2, you don't need to devise an algorithm; instead, you just guess what the next token is! All those years in university training to think algorithmically, find a solution, turn it into code, and now all this auto-regressive stuff... This is just sampling a token from a token space or language model, but... Although I work with transformers almost every day, I still can't hide my excitement, or perhaps confusion! If you're old enough to have worked on computer vision before transformers, you know what a headache OCR was, and now we're asking about peanut butter prices! This is a paradigm shift in the way we should solve problems, or better said, find a way to "embed" our problems 😅 Embedding is all you need!
@samwitteveenai · 3 months ago
Retrieving the info like this is good, I think. For doing the actual math, standard code makes a lot more sense than hoping these models have seen enough examples in training. I do agree, though, that it is amazing it can do this at all.
@toadlguy · 3 months ago
@samwitteveenai LLMs are not the best way to do math (however amazing it is that they can do math at all). I wouldn't be surprised if you could get the right answer by having the model take the steps to 1) convert the receipt to a table and 2) analyze the table to determine how many lines included "Peanut Butter". You might even be able to get it to first analyze the problem to create the steps. If the query were more complicated, you might expect it to write a program to analyze the table and produce a result. I will be interested to use these small vision models with LangChain to get more robust results using multiple tools. It is fairly easy to get even the most advanced large-parameter models to fail at arithmetic any calculator can do.
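A sketch of that two-step pipeline, with a hypothetical vlm_extract() helper standing in for the actual model call: the VLM handles transcription (step 1), and ordinary Python does the counting and arithmetic (step 2), so the math is exact rather than guessed. It assumes the model complies with the requested CSV format:

```python
import csv
import io

def vlm_extract(image_path: str, prompt: str) -> str:
    """Hypothetical helper: send the receipt image plus the prompt to a VLM
    and return the raw text it generates."""
    raise NotImplementedError

# Step 1: the model only transcribes; it does no arithmetic.
raw_csv = vlm_extract(
    "receipt.jpg",  # placeholder image
    "Transcribe this receipt as CSV with the header row: item,price",
)

# Step 2: ordinary code does the filtering, counting, and summing.
rows = list(csv.DictReader(io.StringIO(raw_csv)))
pb_rows = [r for r in rows if "peanut butter" in r["item"].lower()]
total = sum(float(r["price"].replace("$", "")) for r in pb_rows)
print(f"{len(pb_rows)} peanut butter line items totalling ${total:.2f}")
```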
@unclecode · 3 months ago
@samwitteveenai Totally agree. Math has always been fundamental and will remain so. Using LLMs for actual math is a misunderstanding of these tools. It's a false hope. Actually a "symbolic computation engine" like Wolfram is more appropriate for such tasks, while autoregressive models serve a different purpose. This hype and urge to fix everything with LLMs stems from not taking a "Theory of Algorithms" course at university, or worse, not knowing it exists! 😄
@MudroZvon · 3 months ago
Phi-3 Vision is interesting; the other ones, not so much.
@daryladhityahenry · 3 months ago
It got 8885 from the top of your receipt lol.
@MeinDeutschkurs · 3 months ago
You got the model wrong. The model was trained on the total of all the peanut butter items you have ever bought in your entire life. 😂😂😂 The disadvantage if you use Windows. 🤣🤣 Just kidding.
@samwitteveenai · 3 months ago
lol, I don't think I have even had that much peanut butter in my life. 😀