
Exploring Neurons || Transfer Learning in Keras for custom data - VGG-16 

Anuj shah
8K subscribers · 41K views

This video explains what Transfer Learning is and how we can implement it for our custom data using Pre-trained VGG-16 in Keras.
The code: github.com/anujshah1003/Trans...
You can support me on PayPal: paypal.me/anujshah645?_ga=1.2...
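The linked repo has the full code; as a rough orientation, the idea is to load the ImageNet-trained VGG-16, cut off its 1000-class head, and attach a new softmax for your own classes. A minimal sketch only (Keras 2.x assumed; num_classes, the optimizer, and the training arrays are placeholders, not the video's exact settings):

from keras.applications.vgg16 import VGG16
from keras.layers import Input, Dense
from keras.models import Model

num_classes = 4                                   # e.g. cats, dogs, horses, humans
image_input = Input(shape=(224, 224, 3))

base = VGG16(input_tensor=image_input, include_top=True, weights='imagenet')
last_layer = base.get_layer('fc2').output         # reuse everything up to fc2
out = Dense(num_classes, activation='softmax', name='output')(last_layer)

custom_vgg_model = Model(image_input, out)
for layer in custom_vgg_model.layers[:-1]:        # freeze all pre-trained layers
    layer.trainable = False
custom_vgg_model.compile(loss='categorical_crossentropy',
                         optimizer='rmsprop', metrics=['accuracy'])
# custom_vgg_model.fit(X_train, y_train, epochs=12, batch_size=32,
#                      validation_data=(X_test, y_test))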

Published: 2 Aug 2024

Comments: 147
@priyanshgupta9474 · 4 years ago
Lots of thanks to you, it was very informative. I was a novice in transfer learning and had a lot of confusion, but after watching this video I now have a great understanding of it.
@truliapro7112 · 6 years ago
Anuj Shah, you are outstanding. I like your simple teaching style; I rarely comment, but you earned a subscriber.
@anujshah645 · 6 years ago
Thank you
@navinpatle7651 · 5 years ago
I'm halfway through this tutorial but I have already started loving it. Good work, bro... keep it up.
@anujshah645 · 5 years ago
Thanks.
@gopuprakash160 · 5 years ago
Outstanding video. This was so helpful and clear!
@manasbudam7192 · 6 years ago
Great work, Anuj. Very useful.
@srikanthvelpuri2973 · 6 years ago
Great video, great explanation.
@ramchandracheke · 4 years ago
Thanks, Anuj! It's very helpful!
@GunnuBhaiya · 6 years ago
Thanks a ton, mate! Literally saved my academic year.
@deepquest · 6 years ago
Great, Anuj. You rock!
@hiroshiperera7107 · 6 years ago
Hi Anuj, thanks for the great tutorial. This helped me a lot to understand VGG. I tried to freeze up to layer block5_conv2 and train the weights on my custom data. Here is how I used it:

last_layer = model.get_layer('block5_conv2').output
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(last_layer)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
x = Flatten(name='flatten')(x)
x = Dense(128, activation='relu', name='fc1')(x)
x = Dense(128, activation='relu', name='fc2')(x)
# Will add one layer over it.
out = Dense(num_classes, activation='softmax', name='output')(x)
custom_vgg_model = Model(image_input, out)  # creating the custom model using the Keras Model function
custom_vgg_model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])
for layer in custom_vgg_model.layers[:-7]:
    layer.trainable = False
custom_vgg_model.layers[7].trainable
custom_vgg_model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])

The trainable parameters are showing correctly, but every epoch gets the same val_acc and nothing gets trained. What is the mistake I have made here? Please kindly help me.
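For comparison, a minimal sketch of fine-tuning only the last convolutional block in Keras 2.x (placeholder class count, no real data; Keras only applies changes to layer.trainable at compile time, so freeze first, compile once, then confirm the counts with model.summary()):

from keras.applications.vgg16 import VGG16
from keras.layers import Input, Flatten, Dense
from keras.models import Model

num_classes = 4
image_input = Input(shape=(224, 224, 3))
base = VGG16(input_tensor=image_input, include_top=False, weights='imagenet')

x = Flatten(name='flatten')(base.output)
x = Dense(128, activation='relu', name='fc1')(x)
out = Dense(num_classes, activation='softmax', name='output')(x)
model = Model(image_input, out)

for layer in base.layers:
    layer.trainable = layer.name.startswith('block5')   # train block5 only, freeze the rest
model.compile(loss='categorical_crossentropy', optimizer='adadelta',
              metrics=['accuracy'])
model.summary()   # verify the trainable-parameter count matches expectations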
@rohitsethi2085 · 2 years ago
Nice explanation, Anuj.
@sucream1004 · 6 years ago
Great tutorial, thanks!
@prabhacar · 4 years ago
Good demo, Anuj! Well done!
@anujshah645 · 4 years ago
Thanks
@pranitapradhan9652 · 6 years ago
Great work as usual! I am waiting for a tutorial on semantic segmentation.
@mahery_ranaivoson · 4 years ago
Looking forward to that as well. Please keep commenting so that he takes note of it.
@Out_of_Continent · 4 years ago
Very impressive and very nicely explained. I had to subscribe to learn more.
@niks0822 · 6 years ago
Thanks Anuj. This helped a lot.
@anujshah645 · 6 years ago
I am glad it helped. Cheers!
@sam41619 · 6 years ago
Thanks Anuj, it was very helpful.
@anujshah645 · 6 years ago
Welcome. Thanks.
@navjiwanhira3888 · 3 years ago
Thanks a ton!!!
@bharatbargujar3071 · 6 years ago
Great video, thank you. When should we expect transfer learning videos for object detection and semantic segmentation in Keras?
@poornachandrasandur3340 · 6 years ago
Thanks Anuj!!!
@halmouchdev3849 · 5 years ago
Nice video, it is very useful. I will recommend it to my fellow students!
@anujshah645 · 5 years ago
Thanks.
@1ashitguy · 6 years ago
Can VGG-16 also be used for license plate text recognition? Would it work if my dataset had 0-9 and A-Z? Should I fine-tune VGG-16 or change only the last layer?
@Dr.Bulla_Rajesh · 5 years ago
Can I use it for input shapes different from the original size?
@ShamimAhmed-wu4os · 6 years ago
Hi Anuj, thanks a lot for making such a wonderful tutorial. I need to know: for an input size of 8x8x1 with 2 classes, should I make all the layers trainable, or what would be the best approach? Please help me out.
@admir3429 · 6 years ago
Hello, how can we save the model to use it like with the elephant example?
@OmidSafarzadeh · 5 years ago
This is amazing...
@anujshah645 · 5 years ago
Thanks.
@amirlasry8426 · 6 years ago
Should I normalize my data with respect to its own statistics and then with respect to the VGG16 mean and std?
@PP-rj4gv · 6 years ago
Hello Anuj, I am new to AI. Your videos are very good and I like the step-by-step explanation. Please do videos from A to Z (importing libraries, loading, splitting, defining the model, training, saving, testing, loading back, and running a new dataset through the saved model) on real-world use-case datasets. That will be very useful for a lot of people. Thank you.
@anujshah645 · 6 years ago
OK, I have done something similar for CNN implementation in Keras: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-u8BW_fl6WRc.html
@haythemtellili4830 · 5 years ago
Very helpful.
@RnFChannelJr · 4 years ago
How do I test the model? Is it for RGB images, or must I convert to grayscale first?
@UnprivilegedDelhi · 5 years ago
In line 28, if you get an error on:
from keras.applications.imagenet_utils import _obtain_input_shape
you should replace that line with:
from keras_applications.imagenet_utils import _obtain_input_shape
@kresnasudiatmika9147 · 6 years ago
Thanks for the tutorial, very helpful!!
@allmightqs1679 · 5 years ago
If I don't want to use the pre-trained weights, how do I train on our own data?
@RnFChannelJr · 4 years ago
How do I test/predict with images?
@meenakshichoudhary4554 · 6 years ago
Could you please upload a video on how to fuse feature scores of multiple CNN models, i.e. a hierarchy of CNNs?
@karishmajoshi4283 · 6 years ago
Hey, can you please tell me what is the minimum number of images I need for training with transfer learning?
@kaustubhb4 · 4 years ago
250 images should be enough for a beginner.
@pritamnegi9207 · 6 years ago
Can you please make a similar video for Inception v3?
@akshayvarshney9680 · 5 years ago
Hello sir, how did you set the labels in the code? Suppose I have only 2 categories; how do I set the labels then? Also, please tell me how to use the trained model to make predictions.
@anujshah645 · 5 years ago
You can assign 0 to one class and 1 to the other class. I would suggest you watch this video, where I have already explained how to do prediction with a trained model: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-u8BW_fl6WRc.html
@GayatriDuwarah · 3 years ago
from vgg16 import VGG16 gives ModuleNotFoundError: No module named 'vgg16'. Please tell me how to solve it. I ran this code in Spyder.
@maxinteltech3321 · 4 years ago
Thanks.
@phamthang1386 · 4 years ago
Hello Anuj Shah, when I ran your code it gave an error:

import numpy as np
from vgg16 import VGG16
from resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input
from imagenet_utils import decode_predictions

Traceback (most recent call last):
  File "", line 2, in
    from vgg16 import VGG16
ModuleNotFoundError: No module named 'vgg16'

How do I fix it? Thanks.
@nehabindle3194 · 5 years ago
Hi, when I try to run the selection I get the following error: ImportError: cannot import name '_obtain_input_shape'. How do I resolve this?
@anujshah645 · 5 years ago
There is a version problem, kindly check on that. I encountered it too, but I don't exactly remember how I fixed it; however, there are answers to this problem online already.
@WKhan-jl3fv · 6 years ago
Anuj! Thanks for the videos. It is a 16-layer network, but when I print the layers with print(len(custom_vgg_model.layers)) it shows 23. Can you kindly tell how we can know the number of deep layers in our model? Thanks.
@anujshah645 · 6 years ago
+Wasif Khan 16 layers means 16 trainable layers. The other layers shown by Keras may be dropout or max-pooling layers. Print model.summary() and count the trainable layers only; it will be 16.
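A small sketch of that count, assuming Keras 2.x: the 16 layers with trainable weights (13 conv + 3 dense) versus the 23 layers Keras lists (input, pooling and flatten layers carry no weights):

from keras.applications.vgg16 import VGG16

model = VGG16(include_top=True, weights='imagenet')
weighted = [l.name for l in model.layers if l.trainable_weights]
print(len(model.layers))   # 23 entries in total
print(len(weighted))       # 16 conv/dense layers, which give VGG-16 its name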
@WKhan-jl3fv · 6 years ago
Thanks, Anuj. Your videos are really helpful.
@indausabiful · 6 years ago
Hey, how do I perform transfer learning when the first neural net is one that I have modelled myself?
@anujshah645 · 6 years ago
You can use your own trained model as the base model.
@indausabiful · 6 years ago
Is there anywhere I can find some code for that?
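A minimal sketch of that idea, assuming the first network was saved with model.save(...); the file name 'my_base_model.h5' and num_new_classes are hypothetical placeholders:

from keras.layers import Dense
from keras.models import Model, load_model

num_new_classes = 3                              # your new task's class count
base = load_model('my_base_model.h5')            # your previously trained model
features = base.layers[-2].output                # reuse everything below the old head
out = Dense(num_new_classes, activation='softmax', name='new_output')(features)

new_model = Model(base.input, out)
for layer in base.layers:                        # optionally freeze the old layers
    layer.trainable = False
new_model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])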
@prativadas4794 · 5 years ago
While running the VGG16 code from the Keras implementation, img = image.load_img(img_path, target_size=(224, 224)) gave the error IOError: [Errno 2] No such file or directory: 'elephant.jpg'. How can I solve it? Please give some suggestions.
@MegaSportsluver · 5 years ago
Download an image of an elephant and name it elephant.jpg. Put that image in the same folder as the code you're running.
@harshilcena · 5 years ago
It's not working for me. I'm still getting the same error.
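For reference, a minimal sketch of that pretrained-prediction test (Keras 2.x assumed; any image file saved as elephant.jpg next to the script will do):

import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from keras.preprocessing import image

model = VGG16(include_top=True, weights='imagenet')
img = image.load_img('elephant.jpg', target_size=(224, 224))
x = np.expand_dims(image.img_to_array(img), axis=0)    # shape (1, 224, 224, 3)
preds = model.predict(preprocess_input(x))
print(decode_predictions(preds, top=3)[0])             # top-3 ImageNet classes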
@akhilkatpally4188 · 6 years ago
Hey, when you say small or medium dataset at 6:17, what size are you talking about? A million, or a couple of thousand?
@anujshah645 · 6 years ago
+Akhil Katpally a small dataset would mean a few thousand, and a medium one would mean tens of thousands.
@akhilkatpally4188 · 6 years ago
Thanks Anuj.
@shujatalikhan7463 · 5 years ago
Hello Anuj, I have a doubt about how to give labels. For example, you have mentioned in the code: labels[0:202]=0, labels[202:404]=1, labels[404:606]=2, labels[606:]=3. Please, can you explain this?
@anujshah645 · 5 years ago
Well, I assigned the labels like that because I loaded all the samples of the 4 classes into one list in sequence, where the first 202 samples are from one class, the next 202 are from the second class, and so on.
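A small sketch of that labelling scheme, assuming the images were loaded into one array in class order (202 samples per class, 4 classes):

import numpy as np
from keras.utils import to_categorical

num_samples, num_classes = 808, 4
labels = np.ones(num_samples, dtype='int64')
labels[0:202] = 0      # first class
labels[202:404] = 1    # second class
labels[404:606] = 2    # third class
labels[606:] = 3       # fourth class
Y = to_categorical(labels, num_classes)   # one-hot targets for categorical_crossentropy
print(Y.shape)                            # (808, 4)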
@chiranjibisitaula · 6 years ago
Nice video! How do we extract features from the fine-tuned network once we finish fine-tuning?
@anujshah645 · 6 years ago
Just remove the last layer and get the output of the second-to-last layer (or whichever layer you want); that will be the feature for that specific input. You can save the features in a txt, npz, or pkl file for later use.
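A minimal sketch of that feature-extraction step, assuming a fine-tuned model saved as 'custom_vgg.h5' (hypothetical name) and a batch X of preprocessed images:

import numpy as np
from keras.models import Model, load_model

trained = load_model('custom_vgg.h5')
feat_extractor = Model(trained.input, trained.layers[-2].output)   # drop the softmax head

X = np.random.rand(2, 224, 224, 3).astype('float32')   # stand-in for real images
features = feat_extractor.predict(X)
np.savez('features.npz', features=features)            # reload later with np.load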
@chiranjibisitaula · 5 years ago
For instance, if I have this model:

'''This script goes along the blog post "Building powerful image classification models
using very little data" from blog.keras.io.
It uses data that can be downloaded at: www.kaggle.com/c/dogs-vs-cats/data
In our setup, we:
- created a data/ folder
- created train/ and validation/ subfolders inside data/
- created cats/ and dogs/ subfolders inside train/ and validation/
- put the cat pictures index 0-999 in data/train/cats
- put the cat pictures index 1000-1400 in data/validation/cats
- put the dogs pictures index 12500-13499 in data/train/dogs
- put the dog pictures index 13500-13900 in data/validation/dogs
So that we have 1000 training examples for each class, and 400 validation examples
for each class. In summary, this is our directory structure:
data/
    train/
        dogs/   dog001.jpg dog002.jpg ...
        cats/   cat001.jpg cat002.jpg ...
    validation/
        dogs/   dog001.jpg dog002.jpg ...
        cats/   cat001.jpg cat002.jpg ...
'''
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense

# path to the model weights files.
weights_path = '../keras/examples/vgg16_weights.h5'
top_model_weights_path = 'fc_model.h5'
# dimensions of our images.
img_width, img_height = 150, 150

train_data_dir = 'cats_and_dogs_small/train'
validation_data_dir = 'cats_and_dogs_small/validation'
nb_train_samples = 2000
nb_validation_samples = 800
epochs = 50
batch_size = 16

# build the VGG16 network
model = applications.VGG16(weights='imagenet', include_top=False)
print('Model loaded.')

# build a classifier model to put on top of the convolutional model
top_model = Sequential()
top_model.add(Flatten(input_shape=model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))

# note that it is necessary to start with a fully-trained classifier,
# including the top classifier, in order to successfully do fine-tuning
top_model.load_weights(top_model_weights_path)

# add the model on top of the convolutional base
model.add(top_model)

# set the first 25 layers (up to the last conv block)
# to non-trainable (weights will not be updated)
for layer in model.layers[:25]:
    layer.trainable = False

# compile the model with a SGD/momentum optimizer and a very slow learning rate.
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])

# prepare data augmentation configuration
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary')

# fine-tune the model
model.fit_generator(
    train_generator,
    samples_per_epoch=nb_train_samples,
    epochs=epochs,
    validation_data=validation_generator,
    nb_val_samples=nb_validation_samples)

How can I extract features of 256 dimensions for a new image? Could you please write a bit of code? Thanks for your help, Anuj ji!!
@prativadas4794 · 5 years ago
If your kernel keeps dying while training on the dataset, reduce your batch size to 16.
@MegaSportsluver · 5 years ago
Hi. I noticed that you changed the FC layers from 4096 units to 128 units. How do you decide how many units to use when fine-tuning?
@anujshah645 · 5 years ago
Well, I tried to reduce the number of parameters since I have fewer classes and a smaller dataset; I don't want my model to overfit on the new data.
@MegaSportsluver · 5 years ago
@anujshah645 Thank you for your reply!
@xellr · 5 years ago
Hi Anuj, I have a question: once you finished training the model on your own data and saved the structure and the weights, how do you load them back and test what you have just trained on a single image?
@anujshah645 · 5 years ago
Hi, kindly watch this video to get a detailed explanation of saving and loading models along with weights and predicting on test data: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-NUuMg5m42-g.html
@anoubhav · 5 years ago
At 30:00, we can directly take the flatten layer as the last layer instead of taking block5_pool, right? Thanks for the amazing video.
5 years ago
Yes, I also saw that, but the model was frozen up to the flatten layer, so it won't increase the trainable parameters.
@kishanlal676 · 4 years ago
Hey, nice tutorial. I have 2000 training samples and 1000 validation samples. I am also using the same VGG16 for cat and dog classification, and I used 'sigmoid' as the activation and 'binary_crossentropy' as the loss. My training accuracy is 100% and validation accuracy is 98.65%. Is that weird or something? Because you got only 83% on validation data, but I got 98.65%. Does increasing the number of samples increase the validation accuracy?
@anujshah645 · 4 years ago
No, that's perfectly fine. Also yes, increasing the data size helps in improving validation accuracy.
@raptoreeninefour467 · 6 years ago
Can you please clarify how you saved the new model so that I can use it for my own predictions?
@raptoreeninefour467 · 6 years ago
I ran the code on my own dataset and put my image instead of the elephant in test.py, but it still shows 10 predictions even though the classifier was trained for two classes.
@anujshah645 · 6 years ago
model.save("model_name.h5")
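A short sketch of the full save/reload/predict round trip, assuming the model was trained as in the video; the file names are placeholders and 'my_image.jpg' must exist:

import numpy as np
from keras.applications.vgg16 import preprocess_input
from keras.models import load_model
from keras.preprocessing import image

# after training: custom_vgg_model.save('model_name.h5')
model = load_model('model_name.h5')
img = image.load_img('my_image.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
probs = model.predict(x)[0]                        # one probability per trained class
print('predicted class index:', np.argmax(probs))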
@abhijeetsharma5715 · 6 years ago
Your VGG16 architecture image has a flaw. Following the input image, there are: 2 conv-64 + 1 pool, 2 conv-128 + 1 pool, [[3 conv-256 + 1 pool]]
@anooprawat6244 · 6 years ago
Hey Anuj, great video, thank you. One question: you said this is just an example and one can implement transfer learning with their own custom data; here you have shown it with less data, and that is why it's showing 100% accuracy on training (I guess). How much data per class do you think is enough for transfer learning, and does variety in the data for a class matter? Thanks again.
@anujshah645 · 6 years ago
+Anoop Rawat variation in the data is always better. And regarding data: the more, the better.
@geekinme561 · 5 years ago
Hello, after training a custom model, how do I use it to test another image?
@anujshah645 · 5 years ago
Once the model is trained, you just load the image and call model.predict. You can watch my video on training a CNN from scratch for more clarity, explanation, and code.
@geekinme561 · 5 years ago
@anujshah645 Thanks a lot, this really helped my final year project.
@mochammadrevaldi1790 · 4 years ago
Why use FC-128? Can I customize the FC layer?
@anujshah645 · 4 years ago
I just used it for demonstration; you can obviously customize it as per your problem.
@karishmajoshi4283 · 6 years ago
Great video... but I get an error while running this code: "No module named 'imagenet_utils'", even though I'm using the latest version of Keras (2.1.3).
@anujshah645 · 6 years ago
Hmm, it should work. A quick hack would be to copy imagenet_utils.py from my GitHub repo and put it in your working directory.
@Rochanism · 6 years ago
__init__.py
@alisaghi051 · 4 years ago
Thanks for your work, it is very clear. Can you explain how to delete and add new layers? Why can't we do it with model.layers.pop() and model.add("new layer")?
@anujshah645 · 4 years ago
You can do it that way as well.
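A sketch of the functional-API alternative, which is usually safer than layers.pop(): in many Keras versions pop() only edits the layer list without rewiring the underlying graph. num_classes is a placeholder:

from keras.applications.vgg16 import VGG16
from keras.layers import Dense
from keras.models import Model

num_classes = 4
base = VGG16(include_top=True, weights='imagenet')
penultimate = base.get_layer('fc2').output        # everything except the old softmax
new_out = Dense(num_classes, activation='softmax', name='new_predictions')(penultimate)
new_model = Model(base.input, new_out)
new_model.summary()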
@khalidalsaleh8883 · 2 years ago
Thanks bro! But why are you using the test data for validation?
@anujshah645 · 2 years ago
Hi, that is just for the sake of this video. In principle we shouldn't; you are right.
@omarlanda2394 · 1 year ago
Dear Sir, how much memory are you using in your GPU?
@anujshah645 · 1 year ago
I have an 11 GB GeForce GTX 1080 Ti.
@VijayRaj-zd9pc · 5 years ago
Hello sir, your explanation is really, really good; it helped me a lot. I just had one question: when you fine-tuned the VGG16 model, its weights must have changed, so where are those weights saved? I know it's a silly question, but please can you help me out?
@anujshah645 · 5 years ago
Hi Vijay, the weights are saved after training with the command model.save_weights('fname.h5'). I would recommend watching the videos on CNN understanding and implementation to understand the theory as well as the Keras code for loading data, training, saving the model, reloading the saved model, doing prediction, etc.: part-1: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-u8BW_fl6WRc.html part-2: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-NUuMg5m42-g.html
@muhammaduzairkhan2190 · 6 years ago
Thanks for the humble and good explanation. Could you please upload a video on one-hot encoding? Thanks.
@m.revaldianggara7793 · 4 years ago
ResourceExhaustedError: OOM when allocating tensor of shape [] and type float [[node predictions_4/random_uniform/min
@hardikapatel4490 · 5 years ago
from keras.applications.imagenet_utils import _obtain_input_shape
ImportError: cannot import name '_obtain_input_shape'
@harshilcena · 5 years ago
noob
@admir3429 · 6 years ago
Hey Anuj. When I run model = VGG16(include_top=True, weights='imagenet') I get this error: TypeError: _obtain_input_shape() got an unexpected keyword argument 'include_top'
@abhishekpatel7812 · 6 years ago
This can be solved by changing 'include_top=include_top' to 'require_flatten=include_top' in vgg16.py on line 102.
@admir3429 · 6 years ago
Thank you, I did just that after writing the question :) Do you happen to know how to test the network on other images, like with the elephant?
@abhishekpatel7812 · 6 years ago
I don't understand the question. Do you want to predict the result for another image, or train the model on a different dataset?
@admir3429 · 6 years ago
Predict the result, like in the video where he uses cat, dog, human, horse. I want to add another image of a human (one that was not in the training data) and predict on it.
@srikanthvelpuri2973 · 6 years ago
Condor, yes you can do that. Download a human image into the same directory and, in the code, replace 'elephant' with the name of the human image. Hope this answers your question.
@MuradAlQurishee · 6 years ago
Hi Anuj, you did good work. I have seen almost all of your videos. I found an error while running the program in my IPython console. Please give a solution. Thanks.

File "", line 79, in
    model = VGG16(input_tensor=image_input, include_top=True, weights='imagenet')
File "C:\Users\Murad Al Qurishee\Desktop\Deep learning\Book and code\Transfer-Learning-in-keras---custom-data-master\vgg16.py", line 102, in VGG16
    include_top=include_top)
TypeError: _obtain_input_shape() got an unexpected keyword argument 'include_top'
@srikanthvelpuri2973 · 6 years ago
Replace "include_top=include_top" with "require_flatten=include_top" in vgg16.py. It is because of a change in the Keras documentation. It worked for me and I successfully ran the program.
@MuradAlQurishee · 6 years ago
Thanks, Velpuri. You are an awesome person and a blessing to mankind.
@MuradAlQurishee · 6 years ago
Hello Velpuri, I replaced "include_top=include_top" with "require_flatten=include_top" in vgg16.py, but it is not working. It gives the following error:

File "C:\Users\Murad Al Qurishee\Desktop\Deep learning\Book and code\Transfer-Learning-in-keras---custom-data-master\vgg16.py", line 102, in VGG16
    require_flatten=include_top)
TypeError: _obtain_input_shape() got an unexpected keyword argument 'include_top'
@smilebig3884 · 5 years ago
There is a flaw in the y_train and y_test shape; the model is not training because the shape of y_train and y_test is not proper. I don't know how it is working for your dataset.
@anujshah645 · 5 years ago
What is the flaw that you observed? Can you share it so that others can also know?
@smilebig3884 · 5 years ago
@anujshah645 The shape of y_train is 4-dimensional; however, when I go to train, it says the model can accept only a 2D tensor. That is actually correct, because the first dimension should be the total number of pictures and the second the total number of classes. But in your model it is still 4D, and it gives an error at my end even though it works at your end.
@NMAAJIDKHANMIS · 4 years ago
What a guy. Top stuff mate :)
@aqsakiran9234 · 6 years ago
Hi Anuj, can you make a video on how to use AlexNet and VGG-16 in parallel as feature extractors (I can send you paper links)? I need it for my FYP, so kindly make a video or something like that. Thanks a lot.
@anujshah645 · 6 years ago
I guess you can do that yourself: just feed the same input to the two networks, get their features, and save them in a txt, npz, or pkl file.
@aqsakiran9234 · 6 years ago
I am trying it for VGG16 following your video, but when I used 4096 features instead of 128 in the feature-extractor training part of the code, it throws an error about some tensor values being too high, etc.
@viralthakar552 · 6 years ago
Hey Anuj, thanks for the great tutorial. I have tried both approaches, i.e. training a classifier and fine-tuning. For the first one I am getting 96% accuracy, but for the second one I am getting 25%. I have tried different datasets and it has the same problem. I have also tried longer training (100 epochs) for fine-tuning, but it didn't help.
@anujshah645 · 6 years ago
What is your dataset size? Try normalizing your input.
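Two common normalisation options discussed in this thread, sketched with a placeholder batch X of images in the original 0-255 range:

import numpy as np
from keras.applications.vgg16 import preprocess_input

X = np.random.randint(0, 256, size=(8, 224, 224, 3)).astype('float32')
X_scaled = X / 255.0                  # simple rescaling to [0, 1]
X_vgg = preprocess_input(X.copy())    # VGG-style: BGR channel order + ImageNet mean subtraction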
@viralthakar552 · 6 years ago
I am using the dataset you provided... cat, horse, dog, and rider. I have added images with the same labels from ImageNet. I will try normalizing.
@dr.adedejiolu6409 · 6 years ago
Thanks for this great tutorial. When I was trying to run the code I got this error:

import numpy as np
from vgg16 import VGG16
from resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input
from imagenet_utils import decode_predictions
Using TensorFlow backend.

model = VGG16(include_top=True, weights='imagenet')
Traceback (most recent call last):
  File "", line 1, in
    model = VGG16(include_top=True, weights='imagenet')
  File "C:\Users\dba\deep-learning-models-master\vgg16.py", line 170, in VGG16
    model.load_weights(weights_path)
  File "C:\Users\dba\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\topology.py", line 2643, in load_weights
    with h5py.File(filepath, mode='r') as f:
  File "C:\Users\dba\AppData\Local\Continuum\anaconda3\lib\site-packages\h5py\_hl\files.py", line 271, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "C:\Users\dba\AppData\Local\Continuum\anaconda3\lib\site-packages\h5py\_hl\files.py", line 101, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py\h5f.pyx", line 78, in h5py.h5f.open
OSError: Unable to open file (File signature not found)

Please, I have tried everything I could and am still getting the same error. What is happening? Thanks.
@anujshah645 · 6 years ago
+Adedeji Olu well, I have not encountered such an error. Did you Google it?
@dr.adedejiolu6409 · 6 years ago
I have Googled it and couldn't find the solution. I downloaded the VGG16 weights manually and placed them in the Keras models folder.
@RyeinGoddard · 6 years ago
Just click "keep theme" and "don't show this message again". Then that pop-up will disappear... haha.
@anujshah645 · 6 years ago
+Ryein Goddard haha, thanks mate.
@nayvigator30 · 6 years ago
Hello Anuj, this is a good tutorial on VGGNet transfer learning. I have some problems in my practice. This video does not include how to calculate the confusion matrix, precision, recall, and F1 score. I tried to apply the code from your tutorial "Tutorial on CNN implementation for own data set in keras", but the values of these measures are all 0. You can see my results at drive.google.com/file/d/1wwv-Vu5mgLGzzhMgRuLsssRjrMbhYfoO/view?usp=sharing and drive.google.com/file/d/1xaX8XIxYwNQI6ztRzgUsDZs-bmWLWa1_/view?usp=sharing . I hope you can help me. Thanks a lot.
@harshit1800 · 5 years ago
TypeError: _obtain_input_shape() got an unexpected keyword argument 'include_top'

Solution:
1. Clone this git repo: github.com/keras-team/keras-applications
2. Then import it: from keras import applications
3. Now import VGG16: from keras.applications.vgg16 import VGG16
This will work for Keras 2.2.4.
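With a reasonably recent Keras (or tf.keras), the bundled application can usually be used directly instead of the repo's local vgg16.py; shown as a sketch, since exact module paths differ slightly between versions:

from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions

model = VGG16(include_top=True, weights='imagenet')
model.summary()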