
Mastering Google's VLM PaliGemma: Tips And Tricks For Success and Fine Tuning 

Sam Witteveen
65K subscribers
10K views

Published: 27 Aug 2024

Comments: 20
@paulmiller591
@paulmiller591 3 months ago
This is an exciting sub-field. We have a lot of clients making observations, so we're keen to try this. Happy travels, Sam.
@amandamate9117
@amandamate9117 3 months ago
Excellent video, can't wait for more visual model examples, especially with ScreenAI for agents that browse the web.
@user-en4ek6xt6w
@user-en4ek6xt6w 3 months ago
Thank you for your video
@SonGoku-pc7jl
@SonGoku-pc7jl 3 months ago
Thanks, we will see Phi-3 with vision for comparison :)
@sundarrajendiran2722
@sundarrajendiran2722 9 days ago
Can we upload multiple images in the demo and ask questions whose answer is in any one of the images?
@unclecode
@unclecode 3 months ago
Fascinating. I wonder if there is any example of fine-tuning for segmentation; if so, the way we collate the data would need to be different. I have one question about the part around 15:30. I noticed a piece of code that splits the dataset into train and test, but after the split it says `train_ds = split_ds["test"]`; shouldn't it be "train"? I think that might be a mistake. What do you think? Very interesting content, especially if the model has the general knowledge to get into a game like your McDonald's example. This definitely has great applications in the medical and education fields as well. Thank you for the content.
@samwitteveenai
@samwitteveenai 3 months ago
Just look at the output from the model when you do segmentation and copy that format. Yes, you will need to update the collate function. The "test" part is correct because it is just setting it up to train on a very small number of examples; in a real training run, yes, use the "train" split, which is 95% of the data as opposed to the 5% in "test".
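For readers following along, here is a minimal sketch of the split/collate pattern being discussed, based on the public Hugging Face PaliGemma fine-tuning example rather than the exact Colab in the video; the dataset name, column names, and checkpoint below are assumptions and may differ.

```python
# Sketch only: split + collate roughly as in the public PaliGemma fine-tuning example.
# Dataset, column names, and checkpoint are assumptions, not necessarily the video's Colab.
import torch
from datasets import load_dataset
from transformers import PaliGemmaProcessor

ds = load_dataset("HuggingFaceM4/VQAv2", split="train[:10%]")
split_ds = ds.train_test_split(test_size=0.05)
train_ds = split_ds["test"]  # deliberately the tiny 5% slice, just to keep the demo training fast

model_id = "google/paligemma-3b-pt-224"
processor = PaliGemmaProcessor.from_pretrained(model_id)
device = "cuda"

def collate_fn(examples):
    # Task prefix + question as the prompt, the image as pixels, the answer as the suffix (labels).
    texts = ["answer " + ex["question"] for ex in examples]
    labels = [ex["multiple_choice_answer"] for ex in examples]
    images = [ex["image"].convert("RGB") for ex in examples]
    batch = processor(text=texts, images=images, suffix=labels,
                      return_tensors="pt", padding="longest")
    return batch.to(torch.bfloat16).to(device)
```

For a segmentation target, the suffix string would instead need to use PaliGemma's own location/segmentation output tokens (the same format the model emits for a `segment ...` prompt), which is the collate change referred to above.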
@unclecode
@unclecode 3 months ago
@@samwitteveenai Oh ok, that was just for the video demo, thanks for the clarification 👍
@unclecode
@unclecode 2 months ago
@@samwitteveenai Thanks, I get it now: the "test" split is just for the demo in this Colab. Although it would've been clearer if they had used a subset of, say, 100 rows from the train split. I experimented a bit, and the model is super friendly to fine-tuning. Whatever they did, it made this model really easy to tune. We're at a point where "tune-friendly" actually makes sense.
@SenderyLutson
@SenderyLutson 3 months ago
I think the Aria dataset from Meta is also open.
@samwitteveenai
@samwitteveenai 3 months ago
Interesting dataset. Didn't know about this. Thanks.
@ricardocosta9336
@ricardocosta9336 3 months ago
Ty my dude
@FirstArtChannel
@FirstArtChannel 3 months ago
Inference speed and model size still seem noticeably slower/larger than a multimodal LLM such as LLaVA, or am I wrong?
@samwitteveenai
@samwitteveenai 3 months ago
Honestly, it's been a while since I played with LLaVA, and mostly I have just used it on Ollama, so I'm not sure how it compares. Phi-3 Vision is also worth checking out. I may make a video on that as well.
@miguelalba2106
@miguelalba2106 2 months ago
Do you know how complete the dataset should be for fine-tuning? I have lots of image-text pairs of clothes, but some have more details than others, so I guess during training the model will get confused. E.g., there are thousands of images of dresses with only the color, and thousands of images with the color plus other details.
@AngusLou
@AngusLou 3 months ago
Is it possible to make the whole thing local?
@SenderyLutson
@SenderyLutson 3 months ago
How much VRAM does this model consume while running? And the Q4 version?
@samwitteveenai
@samwitteveenai 3 months ago
The inference was running on a T4, so it is manageable. The fine-tuning was on an A100.
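As a rough companion to the Q4 question, here is a minimal sketch (not from the video) of loading PaliGemma with 4-bit quantization via bitsandbytes so inference fits comfortably on a T4-class GPU; the checkpoint name, prompt, and image path are assumptions.

```python
# Sketch only: 4-bit (NF4) loading of PaliGemma for low-VRAM inference on a T4-class GPU.
import torch
from PIL import Image
from transformers import (AutoProcessor, BitsAndBytesConfig,
                          PaliGemmaForConditionalGeneration)

model_id = "google/paligemma-3b-mix-224"  # assumption: a mix checkpoint for inference demos

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # fp16 compute is the safe choice on a T4
)

model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # hypothetical local image
inputs = processor(text="caption en", images=image, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(out[0], skip_special_tokens=True))
```

The 3B model is only around 6 GB of weights in bf16, so it fits on a 16 GB T4 even unquantized; 4-bit loading roughly quarters the weight memory again for tighter setups.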
@willjohnston8216
@willjohnston8216 3 months ago
Do you know if they are going to release a model for real time video sentiment analysis? I thought there was a demo of that by either Google or OpenAI?
@samwitteveenai
@samwitteveenai 3 months ago
Not sure, but you can do some of this already with Gemini, just not in realtime (publicly at least).