This video explains what Transfer Learning is and how we can implement it for our custom data using Pre-trained VGG-16 in Keras. The code: github.com/anujshah1003/Trans... You can support me on Paypal :paypal.me/anujshah645?_ga=1.2...
Lots of thanks to you, it was very informative. I was a novice in transfer learning and had much confusion, but after watching this video I now have a great understanding of it.
Hi Anuj. Thanks for the great tutorial. This helped me a lot to understand VGG. I tried to freeze everything up to layer block5_conv2 and train the remaining weights on my custom data. Here is how I used it:

```python
last_layer = model.get_layer('block5_conv2').output
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(last_layer)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
x = Flatten(name='flatten')(x)
x = Dense(128, activation='relu', name='fc1')(x)
x = Dense(128, activation='relu', name='fc2')(x)
# Will add one layer over it.
out = Dense(num_classes, activation='softmax', name='output')(x)
# Creating the custom model using the Keras Model function.
custom_vgg_model = Model(image_input, out)
custom_vgg_model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])
for layer in custom_vgg_model.layers[:-7]:
    layer.trainable = False
custom_vgg_model.layers[7].trainable
custom_vgg_model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])
```

The trainable parameter counts are showing precisely, but every epoch gets the same val_acc and nothing gets trained. What is the mistake I have made here? Please kindly help me.
Can VGG-16 also be used for license plate text recognition? Would it work if my dataset had 0-9 and A-Z? Should I fine-tune VGG-16 or change only the last layer?
Hi Anuj, thanks a lot for making such a wonderful tutorial. For an input size of (8x8x1) with 2 classes, should I make all the layers trainable, or what would be the best approach? Please help me out.
Hello Anuj, I am new to AI. Your videos are very good and I like the step-by-step explanation. Please make videos covering the whole pipeline from A to Z (importing libraries, loading, splitting, defining the model, training, saving, testing, loading back, and running a new dataset through the saved model) on real-world datasets. That would be very useful for a lot of people. Thank you.
If you get an error on line 28, `from keras.applications.imagenet_utils import _obtain_input_shape`, you should replace that line with: `from keras_applications.imagenet_utils import _obtain_input_shape`
Hello sir, how did you set the labels in the code? Suppose I have only 2 categories; how would I set the labels then? Also, please tell me how to use the trained model to make predictions.
You can assign 0 to one class and 1 to the other class. I would suggest watching this video, where I already explain how to do prediction with a trained model: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-u8BW_fl6WRc.html
Hello Anuj Shah, when I ran your code I got an error:

```python
import numpy as np
from vgg16 import VGG16
from resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input
from imagenet_utils import decode_predictions
```

```
Traceback (most recent call last):
  File "", line 2, in <module>
    from vgg16 import VGG16
ModuleNotFoundError: No module named 'vgg16'
```

How do I solve it? Thanks.
It is a version problem, kindly check on that. I encountered it too, but I don't exactly remember how I fixed it; however, there are answers to this problem online already.
Anuj! Thanks for the videos. It is a 16-layer network, but when I print the layers with `print(len(custom_vgg_model.layers))` it shows 23. Can you kindly tell me how we can know the number of deep layers in our model? Thanks
+Wasif Khan 16 layers means 16 trainable layers. The other layers shown by Keras may be dropout or max-pooling layers. Print `model.summary()` and count only the trainable layers; it will be 16.
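A quick way to check this programmatically: Keras lists every layer object, while the "16" in VGG-16 counts only layers that carry weights (conv and dense). A minimal sketch with a hypothetical toy model (not VGG-16 itself, but the same counting applies):

```python
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Toy network: only Conv2D and Dense carry parameters.
model = Sequential([
    Input(shape=(32, 32, 3)),
    Conv2D(8, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(4, activation='softmax'),
])

n_total = len(model.layers)                             # every layer object
n_weighted = sum(1 for l in model.layers if l.weights)  # only layers with parameters
print(n_total, n_weighted)  # 4 total, 2 with weights
```

Running the same two counts on the custom VGG model explains the 23-vs-16 mismatch: pooling, flatten, and dropout layers inflate the total but carry no weights.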
While running the VGG16 code from the Keras implementation, `img = image.load_img(img_path, target_size=(224, 224))` gave me the error `IOError: [Errno 2] No such file or directory: 'elephant.jpg'`. How can I solve it? Please give some suggestions.
Hello Anuj, I have a doubt about how to give labels. For example, you have mentioned in the code: `labels[0:202]=0`, `labels[202:404]=1`, `labels[404:606]=2`, `labels[606:]=3`. Please, can you explain this?
Well, I assigned the labels that way because I loaded the samples of all 4 classes into a list in sequence, where the first 202 samples are from one class, the next 202 are from the second class, and so on.
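A minimal sketch of this labelling scheme (the 202-per-class counts and the 4 classes come from the video; the one-hot step is written in plain NumPy here so the sketch stays self-contained, where the original code would use Keras's `to_categorical`):

```python
import numpy as np

num_samples = 808   # 4 classes x 202 images, loaded class by class
num_classes = 4

# Integer labels: each block of 202 consecutive samples shares one class id.
labels = np.ones(num_samples, dtype='int64')
labels[0:202] = 0
labels[202:404] = 1
labels[404:606] = 2
labels[606:] = 3

# One-hot encode for categorical_crossentropy (what to_categorical would do).
Y = np.eye(num_classes)[labels]
print(Y.shape)  # (808, 4)
```

With only 2 categories the pattern is the same: assign 0 to the first block of samples and 1 to the second.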
Just remove the last layer and take the output of the second-to-last layer (or whichever layer you want); that will be the feature vector for that specific input. You can save the features in a txt, npz, or pkl file for later use.
For instance, if I have this model:

```python
'''This script goes along the blog post "Building powerful image
classification models using very little data" from blog.keras.io.
It uses data that can be downloaded at:
www.kaggle.com/c/dogs-vs-cats/data

In our setup, we:
- created a data/ folder
- created train/ and validation/ subfolders inside data/
- created cats/ and dogs/ subfolders inside train/ and validation/
- put the cat pictures index 0-999 in data/train/cats
- put the cat pictures index 1000-1400 in data/validation/cats
- put the dogs pictures index 12500-13499 in data/train/dogs
- put the dog pictures index 13500-13900 in data/validation/dogs
So that we have 1000 training examples for each class, and 400 validation
examples for each class. In summary, this is our directory structure:

data/
    train/
        dogs/
            dog001.jpg
            dog002.jpg
            ...
        cats/
            cat001.jpg
            cat002.jpg
            ...
    validation/
        dogs/
            dog001.jpg
            dog002.jpg
            ...
        cats/
            cat001.jpg
            cat002.jpg
            ...
'''
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense

# path to the model weights files.
weights_path = '../keras/examples/vgg16_weights.h5'
top_model_weights_path = 'fc_model.h5'
# dimensions of our images.
img_width, img_height = 150, 150

train_data_dir = 'cats_and_dogs_small/train'
validation_data_dir = 'cats_and_dogs_small/validation'
nb_train_samples = 2000
nb_validation_samples = 800
epochs = 50
batch_size = 16

# build the VGG16 network
model = applications.VGG16(weights='imagenet', include_top=False)
print('Model loaded.')

# build a classifier model to put on top of the convolutional model
top_model = Sequential()
top_model.add(Flatten(input_shape=model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))

# note that it is necessary to start with a fully-trained
# classifier, including the top classifier,
# in order to successfully do fine-tuning
top_model.load_weights(top_model_weights_path)

# add the model on top of the convolutional base
model.add(top_model)

# set the first 25 layers (up to the last conv block)
# to non-trainable (weights will not be updated)
for layer in model.layers[:25]:
    layer.trainable = False

# compile the model with a SGD/momentum optimizer
# and a very slow learning rate.
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])

# prepare data augmentation configuration
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary')

# fine-tune the model
model.fit_generator(
    train_generator,
    samples_per_epoch=nb_train_samples,
    epochs=epochs,
    validation_data=validation_generator,
    nb_val_samples=nb_validation_samples)
```

How can I extract features of 256 dimensions for a new image?
Could you please write a bit of code? Thanks for your help Anuj ji!!
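No reply was posted in the thread, but the usual pattern for this is to build a second `Model` whose output is the intermediate layer you care about (here, the 256-unit Dense), then call `predict` on it. A minimal sketch of that pattern using a tiny stand-in network (the layer name `'fc256'`, the toy shapes, and the random "image" are all hypothetical; with the script above you would wrap the fine-tuned model and take the output of its 256-unit Dense layer instead):

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense, Flatten

# Toy stand-in: 'fc256' plays the role of the 256-unit Dense in the script above.
model = Sequential([
    Input(shape=(8, 8, 3)),
    Flatten(),
    Dense(256, activation='relu', name='fc256'),
    Dense(1, activation='sigmoid', name='out'),
])

# Second model that stops at the feature layer.
feature_extractor = Model(inputs=model.inputs,
                          outputs=model.get_layer('fc256').output)

# "New image": a random array standing in for a loaded, preprocessed image batch.
img = np.random.rand(1, 8, 8, 3).astype('float32')
features = feature_extractor.predict(img)
print(features.shape)  # (1, 256)
```

The extracted features can then be saved with `np.save` or similar for later use, as the reply above suggests.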
Hi Anuj, I have a question: once you finish training your model on your own data and save the structure and the weights, how do you load them back and test what you have done on a single image?
Hi, kindly watch this video for a detailed explanation of saving and loading models along with weights, and predicting on test data: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-NUuMg5m42-g.html
Hey, nice tutorial. I have 2000 training samples and 1000 validation samples. I am also using the same VGG16 for cat and dog classification, with 'sigmoid' as the activation and 'binary_crossentropy' as the loss. My training accuracy is 100% and validation accuracy is 98.65%. Is that weird or something? You got only 83% on validation data, but I got 98.65%. Does increasing the number of samples increase the validation accuracy?
I ran the code on my own dataset and put my image instead of the elephant in test.py, but it still shows 10 predictions even though the classifier was trained for two classes.
Hey Anuj, great video, thank you. One question: you said this is just an example and one can implement transfer learning on one's own custom data. Here you have shown it with less data, which is why it's showing 100% accuracy on training (I guess). How much data per class do you think is enough for transfer learning, and does variety in the data for a class matter? Thanks again.
Once the model is trained, you just have to load an image and call model.predict. You can watch my video on training a CNN from scratch for more clarity, explanation, and code.
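A minimal sketch of that load-and-predict step (the toy 4-class model and the random array standing in for a loaded image are placeholders; with the real model you would load your saved weights and use `image.load_img` on an actual file):

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

# Toy 4-class classifier standing in for the trained custom VGG model.
model = Sequential([
    Input(shape=(16, 16, 3)),
    Flatten(),
    Dense(4, activation='softmax'),
])

# In the real workflow you would load and convert an image file, e.g.:
#   img = image.load_img('my_test_image.jpg', target_size=(224, 224))
#   x = image.img_to_array(img)
# Here a random array stands in for the loaded, preprocessed image.
x = np.random.rand(16, 16, 3).astype('float32')
x = np.expand_dims(x, axis=0)      # predict expects a batch dimension

probs = model.predict(x)           # shape (1, num_classes)
predicted_class = int(np.argmax(probs, axis=1)[0])
print(predicted_class)
```

`np.argmax` over the softmax probabilities gives the predicted class index, which you can then map back to your class names.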
Thanks for your work, it is very clear. Can you explain how to delete and add new layers? Why can't we do it with model.layers.pop() and model.add(new_layer)?
Hello sir, your explanation is really, really good; it helped me a lot. I just have one question: when you fine-tuned the VGG16 model, its weights must have changed, so where are those weights saved? I know it's a silly question, but please can you help me out?
Hi Vijay, the weights are saved after training with the command model.save_weights('fname.h5'). I would recommend watching the videos on CNN understanding and implementation to understand the theory as well as the Keras code for loading data, training, saving the model, reloading the saved model, doing prediction, etc.: part-1: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-u8BW_fl6WRc.html part-2: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-NUuMg5m42-g.html
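A minimal sketch of that save-and-reload round trip (the tiny model and the `fname.weights.h5` filename are placeholders; note that `save_weights` stores only the weights, so you must rebuild the same architecture before calling `load_weights`):

```python
import os
import tempfile
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Tiny stand-in model (the real one would be the fine-tuned VGG).
def build_model():
    return Sequential([Input(shape=(4,)), Dense(3, activation='softmax')])

model = build_model()
path = os.path.join(tempfile.mkdtemp(), 'fname.weights.h5')
model.save_weights(path)           # weights only, no architecture

# Later (or in another script): rebuild the same architecture, then load.
restored = build_model()
restored.load_weights(path)

x = np.random.rand(2, 4).astype('float32')
same = bool(np.allclose(model.predict(x), restored.predict(x)))
print(same)  # True: the restored model reproduces the original's outputs
```

Alternatively, `model.save(...)` stores the architecture together with the weights, so the model can be reloaded without rebuilding it in code.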
Hey Anuj. When I run `model = VGG16(include_top=True, weights='imagenet')` I get this error: `TypeError: _obtain_input_shape() got an unexpected keyword argument 'include_top'`
Predict the result; like in the video, he uses cat, dog, human, and horse. I want to add another image of a human (one that was not in the training data) and predict on it.
Condor Yes, you can do that. Download a human image into the same directory and, in the code, replace the elephant with the name of the human image. Hope this answers your question.
Hi Anuj, you did good work. I have seen almost all of your videos. I found an error while running the program in my IPython console. Please give a solution. Thanks.

```
File "", line 79, in <module>
    model = VGG16(input_tensor=image_input, include_top=True, weights='imagenet')
File "C:\Users\Murad Al Qurishee\Desktop\Deep learning\Book and code\Transfer-Learning-in-keras---custom-data-master\vgg16.py", line 102, in VGG16
    include_top=include_top)
TypeError: _obtain_input_shape() got an unexpected keyword argument 'include_top'
```
Replace `include_top=include_top` with `require_flatten=include_top` in vgg16.py. It is because of a change in the Keras API (the keyword argument was renamed). It worked for me and I successfully ran the program.
Hello Velpuri, I replaced `include_top=include_top` with `require_flatten=include_top` in vgg16.py, but it is not working. It gives the following error:

```
File "C:\Users\Murad Al Qurishee\Desktop\Deep learning\Book and code\Transfer-Learning-in-keras---custom-data-master\vgg16.py", line 102, in VGG16
    require_flatten=include_top)
TypeError: _obtain_input_shape() got an unexpected keyword argument 'include_top'
```
There is a flaw in the y_train and y_test shape; the model is not training because the shape of y_train and y_test is not proper. I don't know how it is working for your dataset.
@@anujshah645 The shape of y_train is 4-dimensional; however, when I go to train, it says the model can accept only a 2D tensor. That is actually correct, because the 1st dimension is the total number of pictures and the 2nd dimension is the total number of classes. But in your model it is still 4D, and it gives an error at my end while at your end it works.
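A minimal sketch of the fix being discussed (the shapes below are made up for illustration): if the one-hot labels have picked up extra singleton dimensions, reshape them back down to the 2D (num_samples, num_classes) shape that a Dense/softmax output expects:

```python
import numpy as np

num_samples, num_classes = 10, 4

# Correct one-hot labels: shape (10, 4).
y_train = np.eye(num_classes)[np.random.randint(0, num_classes, num_samples)]

# Hypothetical mis-shaped labels: (10, 1, 1, 4) instead of (10, 4),
# i.e. the 4D tensor the commenter is describing.
y_train_bad = y_train.reshape(num_samples, 1, 1, num_classes)
print(y_train_bad.shape)    # (10, 1, 1, 4)

# Collapse the singleton axes back to a 2D tensor for model.fit.
y_train_fixed = y_train_bad.reshape(num_samples, num_classes)
print(y_train_fixed.shape)  # (10, 4)
```

`np.squeeze(y_train_bad)` would achieve the same result here, since only singleton axes are removed and no label values change.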
Hi Anuj, can you make a video on how to use AlexNet and VGG-16 in parallel as feature extractors (I can send you paper links)? I need it for my FYP; kindly make a video or something like this. Thanks a lot.
I am trying it for VGG16 following your video, but when I used 4096 features instead of 128 in the feature-extractor training part of the code, it throws an error about some tensor values being too high, etc.
Hey Anuj, thanks for the great tutorial. I have tried both approaches, i.e., training a classifier and fine-tuning. For the first one I am getting 96% accuracy, but for the second one I am getting 25%. I have tried different datasets and it has the same problem. I have also tried longer training (100 epochs) for fine-tuning, but it didn't help.
Thanks for this great tutorial. When I was trying to run the code I got this error:

```python
import numpy as np
from vgg16 import VGG16
from resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input
from imagenet_utils import decode_predictions
# Using TensorFlow backend.

model = VGG16(include_top=True, weights='imagenet')
```

```
Traceback (most recent call last):
  File "", line 1, in <module>
    model = VGG16(include_top=True, weights='imagenet')
  File "C:\Users\dba\deep-learning-models-master\vgg16.py", line 170, in VGG16
    model.load_weights(weights_path)
  File "C:\Users\dba\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\topology.py", line 2643, in load_weights
    with h5py.File(filepath, mode='r') as f:
  File "C:\Users\dba\AppData\Local\Continuum\anaconda3\lib\site-packages\h5py\_hl\files.py", line 271, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "C:\Users\dba\AppData\Local\Continuum\anaconda3\lib\site-packages\h5py\_hl\files.py", line 101, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py\h5f.pyx", line 78, in h5py.h5f.open
OSError: Unable to open file (File signature not found)
```

I have tried everything I could and am still getting the same error. What is happening? Thanks.
Hello Anuj, this is a good tutorial on VGGNet transfer learning. I have a problem in my practice: this video does not cover calculating the confusion matrix, precision, recall, and F1 score. I tried to apply the code from your tutorial "Tutorial on CNN implementation for own data set in keras", but the values of these measures are all 0. You can see my results at drive.google.com/file/d/1wwv-Vu5mgLGzzhMgRuLsssRjrMbhYfoO/view?usp=sharing and drive.google.com/file/d/1xaX8XIxYwNQI6ztRzgUsDZs-bmWLWa1_/view?usp=sharing . I hope you can help me. Thanks a lot.
`TypeError: _obtain_input_shape() got an unexpected keyword argument 'include_top'` -- solution:
1. Clone this git repo: github.com/keras-team/keras-applications
2. Then import it: `from keras import applications`
3. Now import VGG16: `from keras.applications.vgg16 import VGG16`
This will work for Keras 2.2.4.