
PaliGemma by Google: Inference and Fine Tuning of Vision Language Model 

AI Anytime
8K views

In this video I'm diving deep into PaliGemma, a new vision language model by Google! PaliGemma can analyze images and text, making it super versatile for tasks like image captioning and question answering. I'll show you how to use this powerful tool and get the most out of it through fine-tuning.
Don't forget to like and subscribe for more tech breakdowns!
Notebook: github.com/AIA...
PaliGemma HF: huggingface.co...
Join this channel to get access to perks:
/ @aianytime
To further support the channel, you can contribute via the following methods:
Bitcoin Address: 32zhmo5T9jvu8gJDGW3LTuKBM1KPMHoCsW
UPI: sonu1000raw@ybl
#google #ai #openai

Science

Published: 29 Sep 2024

Comments: 21
@chongdashu • 4 months ago
> processor = PaliGemmaProcessor(model_id)
gives the following errors:
    90         raise ValueError("You need to specify an `image_processor`.")
    91     if tokenizer is None:
    92         raise ValueError("You need to specify a `tokenizer`.")
    93     if not hasattr(image_processor, "image_seq_length"):
    94         raise ValueError("Image processor is missing an `image_seq_length` attribute.")
Should be `PaliGemmaProcessor.from_pretrained(model_id)`.
@ricorauschkolb2801 • 4 months ago
Is the model also good for OCR tasks?
@miguelalba2106 • 4 months ago
You need to fine-tune it to achieve good results; it is a good basis for any visual understanding task.
@Mesenqe • 4 months ago
Thank you for the tutorial. I have one question: how can we use our own fine-tuned model at inference time? Can you make a video on how to use our own fine-tuned PaliGemma model during inference, or suggest links to read? Thank you.
@clawbro • 3 months ago
Exactly, I have the same issue too. I can't use it, and save_pretrained is not working.
@SravanKumar-cj4uu • 4 months ago
Thank you for your detailed explanation. Your classes are quite interesting and are building confidence to move further forward. I need some suggestions: I saw a medical chatbot using Llama 2 on a CPU machine, which was all open source. Similarly, I need to build an image-to-text multimodal model on a CPU using all open-source tools. Please provide your suggestions.
@robinchriqui2407 • 3 months ago
Hi, thank you very much. Is it the same kind of process for any VLM model on Hugging Face?
@souravbarua3991 • 4 months ago
Please make a video on a multimodal/vision LM with video data: in place of an image, it takes video as input.
@latentbhindi837 • 4 months ago
Great vid! also united are gonna bottle the FA cup xd.
@AIAnytime • 4 months ago
🤞
@latentbhindi837 • 4 months ago
@@AIAnytime i am actually just a jinx
@AIAnytime • 4 months ago
We won 😅
@TaHa-nf5vc • 4 months ago
Bro, I love your channel. Your videos are high quality and so instructive. And that hairstyle, clearly DOPE; I personally think it's the one :D
@barderino5673 • 4 months ago
I'm still confused about why we target q, o, k, v, gate, up, down... targeting all the linear layers? Why all of them?
@nurusterling8024 • 4 months ago
Research shows that this is the closest to full fine-tuning in terms of performance
@astheticsouls7770 • 4 months ago
Can PaliGemma be good for RAG?
@JokerJarvis-cy2sw • 4 months ago
Sir, can I use this on my local machine or on a Raspberry Pi? I want to make a robot via Raspberry Pi. If not locally, can you please suggest an alternative via a (free) API?
@karthiksundaram544 • 4 months ago
@MegaClockworkDoc • 4 months ago
You put a lot of effort into this video, but your audio is terrible.
@AIAnytime • 4 months ago
Will improve in future videos...
@rizzlr • 4 months ago
@@AIAnytime could use ai to improve it too