I can't believe your video's view count, because your explanation is next level, dude. I thought it must have crossed at least 1 lakh (100k) by now, but I hope it will soon.
I just started to realize the potential of AI, and I already feel behind with all these new tools. Would love to see another video in the future about BlueWillow, which is completely free.
When I run the code on Streamlit, it shows two errors: 1. ValueError: axes don't match array. 2. ValueError: The name "conv2d" is used 2 times in the model. All layer names should be unique. How can I solve these?
Bro, let me salute you for doing such an incredible job at your age. BTW, let's get to the main point: since I'm in your comment section, you must have guessed that I'm having trouble understanding the attention mechanism and TensorFlow overall. I have to submit my paper in the next month and I'm running into many problems. It would be great if you could work with me in the speech domain. Please respond fast.
Hey, great lecture! I just need some help: the link for the Google Colab for image captioning with an RNN isn't working. It would be a great help if you could provide a new link. Thank you!!
It's a good tutorial, but I have a question regarding the attention mechanism. At 4:50, how does it know to focus on the dog when it gets the word "dog" as input? If it knows by detecting objects, then how does it know where to focus when it receives "It/The/There/Eating/Water/Flying"? Please make this clear.
Hey, I looked into your Kaggle notebook for the Transformer model with the COCO dataset. You mentioned that you only trained the model on 14k images. I'm a beginner in ML, so can you tell me what I should change in your code to increase the training dataset size beyond 14k?
I tried to use your project, but when I ran it, it showed this error:
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5f.pyx", line 106, in h5py.h5f.open
OSError: Unable to open file (file signature not found)
Can you please help me solve this error? Also, can you please share the link to the model.h5 file?
Hi, when I try to run it on Streamlit it displays the error "ImportError: cannot import name 'get_caption_model' from 'model' (C:\Users\z\model.py)". What am I doing wrong? Sorry, I'm totally new to this, so can you please help? (I also downloaded both of the H5 files.)
Have you downloaded the model.py file? If not, you can download it here: github.com/pritishmishra703/Image-Captioning/blob/master/model.py The 'get_caption_model' function is defined in this file.
Brother, this video is really great and I loved your explanation, but I am a beginner in AI/ML and want to learn this in detail. Can you please create a detailed video on this topic?
I want to do image captioning with unsupervised or semi-supervised learning, bro. If you have any reference code or an implementation, it would be helpful if you could share it.
Kind of a dumb question: why do we train the model again if we are already using a pre-trained CNN model as the encoder to extract the features? Still new to this area.
InceptionV3 is trained for image classification (cloud.google.com/tpu/docs/inception-v3-advanced#introduction), so we are *fine-tuning* it on our caption-generation task. In simple words: InceptionV3 is NOT specialized for image captioning, so fine-tuning helps the model learn task-specific features.
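Roughly, the fine-tuning setup looks like this (a minimal sketch of the idea, not the exact notebook code):

```python
import tensorflow as tf

# Load InceptionV3 pre-trained on ImageNet, without its 1000-class
# classification head -- we only want the convolutional features.
cnn = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')

# Option A: freeze the CNN entirely and train only the caption decoder.
cnn.trainable = False

# Option B (fine-tuning): unfreeze the top layers so the features can
# adapt to captioning; use a small learning rate to avoid wrecking them.
for layer in cnn.layers[-30:]:
    layer.trainable = True
```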
Yes, you can do this just by using batching. See the HuggingFace documentation for more info; it's easy to do. Post any errors/issues here if you encounter any.
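For instance, with a HuggingFace pipeline it looks roughly like this (the checkpoint name is just a public example, not the model from this video, and the image paths are placeholders):

```python
from transformers import pipeline

# Any image-to-text checkpoint works here; this one is a public example.
captioner = pipeline("image-to-text",
                     model="nlpconnect/vit-gpt2-image-captioning")

images = ["img1.jpg", "img2.jpg", "img3.jpg"]  # placeholder file paths
# batch_size groups the images into batches instead of one call per image.
results = captioner(images, batch_size=8)
for result in results:
    print(result[0]["generated_text"])
```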
Thanks for replying! I'm excited that we can use the model.h5 file and build the project directly without training it. Wouldn't it be nice if someone managed to produce a model.h5 file trained on the full COCO dataset...
How do I use the model.h5 file to make predictions? I tried using load_model, but it expects a checkpoint file; I also tried load_weights, but that still gives an error. Can you please show how to use this model.h5 file to make predictions?
You can use the `get_caption_model` function to load the model: github.com/pritishmishra703/Image-Captioning/blob/master/model.py#L299 Then, to make predictions, use the `generate_caption` function: github.com/pritishmishra703/Image-Captioning/blob/master/model.py#L270
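A minimal usage sketch, assuming model.py from the repo sits next to your script and the weight files are downloaded (the generate_caption argument order is an assumption; check the linked lines for the exact signature):

```python
from model import get_caption_model, generate_caption

# Rebuilds the architecture and loads the saved weights.
caption_model = get_caption_model()

# Argument order is an assumption; see model.py for the real signature.
caption = generate_caption('my_photo.jpg', caption_model)
print(caption)
```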
Hey, can you please share the code that you wrote in Streamlit? Also, how is the huge COCO dataset processed on localhost? And how did you host it on HuggingFace? Which one is hosted on HuggingFace: the RNN one, the Transformer one, or the COCO-dataset one? Please also tell me how I can run it on localhost without downloading the whole dataset on my machine.
The code that I wrote in Streamlit: huggingface.co/spaces/pritish/BookGPT/tree/main

How is the huge COCO dataset processed on localhost? Answer: I trained my model on the COCO dataset by loading the dataset once on Google Colab. Once the training was done, I saved the trained model weights to a file. Now, when I want to use the trained model for inference or fine-tuning (on my localhost), I only need to load the saved model from the file, not the entire COCO dataset.

How did you host it on HuggingFace? Answer: I created an app.py file that includes a user interface (UI) made with Streamlit, then pushed it to HuggingFace Spaces. Here's how to do it: huggingface.co/docs/hub/spaces-overview#creating-a-new-space

Which one is hosted on HuggingFace: the RNN one, the Transformer one, or the COCO one? Answer: The Transformer + COCO one is hosted on HuggingFace.

How do I run it on localhost without downloading the whole dataset on my machine? Answer: As I said, there's no need to download the whole dataset. You just need to load the model file ('model.h5') and then you can give it any image and it will generate captions. First clone the repository: git clone huggingface.co/spaces/pritish/BookGPT Then run the `app.py` file. This will take some time, as it imports all the modules and loads the saved model. It will raise an error if you don't have TensorFlow installed, so make sure it is installed!
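If it helps, the save-once / load-later idea in plain Keras looks roughly like this (a toy stand-in model; the real project rebuilds the full captioning architecture via get_caption_model before loading its weights):

```python
import tensorflow as tf

# Toy stand-in model, just to show the weight save/load round trip.
model = tf.keras.Sequential([tf.keras.layers.Dense(8, input_shape=(4,))])

# Done once, after training on Colab/Kaggle:
model.save_weights('demo_weights.h5')

# Later, on localhost: no dataset needed, just the weights file.
model.load_weights('demo_weights.h5')
```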
@@PritishMishra I want to use the model that you have deployed on HuggingFace. Is that possible? If so, can you please share your trained model with me?
Hi, can you help me please? When I call the get_caption_model() function, I get the following error: "ValueError: axes don't match array". Do you have any ideas?
Hi, I tried testing your model and it was not giving correct captions most of the time; whenever I uploaded a simple face image, it would always output "a man in a suit and tie". I am new to ML/DL and wanted to make my first project on this topic. How can I make it produce more accurate and diverse captions?
I haven't tried training it on the whole dataset, but I am sure the caption accuracy will increase if you do. Make sure the model doesn't overfit; that should improve the generalization capability of the model.
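For the overfitting part, one simple guard is early stopping on the validation loss; a minimal sketch (train_ds/val_ds are placeholders for your own datasets):

```python
import tensorflow as tf

# Stop training when validation loss stops improving, and roll back
# to the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=3, restore_best_weights=True)

# Placeholder fit call; plug in your own train_ds / val_ds:
# model.fit(train_ds, validation_data=val_ds, epochs=20, callbacks=[early_stop])
```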
@@PritishMishra Okay, I'll give that a try. For project building, should I opt for a pre-trained model like the ViT model from HuggingFace and use PyTorch for processing? The whole project is completed within 30 lines of code and the accuracy is extremely high as well. Do let me know your thoughts on that.
@@blackplagueklan7246 Can you share the notebook with me? I want to see the performance. I would be glad to share the link with everyone in the description!
@@PritishMishra Thanks. I have been training this for 10 epochs, but it stops at 8 epochs and the results are not very accurate. BTW, is it possible to retrain the model from saved weights? I have weights from 8 epochs with loss: 2.6367 - acc: 0.4514. Please reply ASAP.
An accuracy of 0.45 is great, I would say! If you want to increase it more, I recommend training on more data (I trained on 14K images; make it 24K or 30K). If you load my saved weights, you will save some epochs of training.
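Resuming from saved weights looks roughly like this (a sketch assuming the repo's model.py; train_ds is a placeholder for your dataset):

```python
# Rebuild the same architecture, load the epoch-8 weights, keep training.
from model import get_caption_model  # from the repo linked above

caption_model = get_caption_model()
caption_model.load_weights('model.h5')  # your saved 8-epoch weights

# Placeholder fit call; initial_epoch makes the logs continue from epoch 8:
# caption_model.fit(train_ds, epochs=10, initial_epoch=8)
```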
The image captioning model is a generative model, which means that it predicts a new caption for each image. You may be aware that the predictions are generated word by word: the model generates each new word depending on the words it predicted previously. Generative models are highly chaotic; a minor change in their initial conditions can completely alter the structure of the predicted captions. That's why accuracy is a hard metric for evaluating such models: even a single extra word in the model's prediction can wreck the accuracy. In short, 45% is moderately good accuracy for our model.
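A toy example of how brittle token-level accuracy is for generated text: one inserted word misaligns every token after it.

```python
# One extra word early in the prediction shifts all later tokens,
# so position-wise accuracy collapses even though the caption is fine.
reference  = "a dog runs on the beach".split()
prediction = "a big dog runs on the".split()  # one inserted word

matches = sum(r == p for r, p in zip(reference, prediction))
print(f"{matches / len(reference):.2f}")  # 0.17 -- only "a" lines up
```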
"with open(f'{BASE_PATH}/annotations/captions_train2017.json', 'r') as f:" what is this path in the code???? i can not get it and it is showing me directory error pllzzz reply me i m stuck since long!! i m geeting directorary error in each code
Hello, I originally made that notebook on Kaggle, so I forgot to include the downloading code on Colab. I'm adding it right now. However, I strongly advise you to run the notebook on Kaggle, because the COCO dataset is 27 GB and downloading it on Colab will take forever. To run the notebook on Kaggle, do the following:
1. Download the notebook from Colab (Go to File -> Download -> Download .ipynb).
2. Go to Kaggle and sign in.
3. On the left menu, click the big "+" button.
4. Select "Create Notebook".
5. You should now see the newly created notebook. Go to File -> Import Notebook.
6. Upload the file you downloaded in Step 1.
7. You should now see the entire notebook. On the right pane, click the "Add Data" button.
8. Look for Awsaf's "COCO 2017 Dataset" and add it. (This one: bit.ly/3Vcst64)
You're good to go! Run the notebook now, and everything should be fine; see the sketch below for what BASE_PATH should point to. If you encounter any new errors, please reply here and I will help you.
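On Kaggle, BASE_PATH then points at the mounted dataset. Something like this (the exact folder name below is an assumption; check the real path in the "Data" pane on the right):

```python
import json

# Kaggle mounts added datasets read-only under /kaggle/input.
# The folder name depends on the dataset; verify it in the Data pane.
BASE_PATH = '/kaggle/input/coco-2017-dataset/coco2017'

with open(f'{BASE_PATH}/annotations/captions_train2017.json', 'r') as f:
    annotations = json.load(f)
```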