Mhathesh TSR
Simple and easy ways to implement and work with deep learning models and example applications. The code is kept flexible so you can adapt the base models to your own ideas; you will find example models for most common applications to build on. There is also reinforcement learning code that teaches a model to act in your environment and solve it. Most of the material here is kept simple and easy to plug into your own setup. As a great legend said, "Don't worry about it if you don't understand": maths is not allowed here, so don't worry, you won't find any.
Comments
@assaddoutoum7169 11 months ago
It is an interesting and very clear video, thanks. I used the code, but the accuracy reached about 92% and then started decreasing. Is this how it is supposed to work?
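Accuracy that climbs and then falls off is usually a sign of overfitting. A minimal sketch of one common mitigation, assuming plain Keras; the model, the dummy data and the layer sizes below are placeholders, not the code from the video: an EarlyStopping callback that rolls back to the best epoch.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping

# Dummy data just to make the sketch runnable; replace with the real dataset.
x_train = np.random.rand(500, 20)
y_train = np.random.randint(0, 2, size=(500, 1))

model = Sequential([Dense(16, activation='relu', input_shape=(20,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Stop when validation accuracy stops improving and keep the best weights.
# (On older Keras versions the metric key is 'val_acc' instead of 'val_accuracy'.)
early_stop = EarlyStopping(monitor='val_accuracy', patience=10, restore_best_weights=True)

history = model.fit(x_train, y_train, validation_split=0.2,
                    epochs=100, callbacks=[early_stop], verbose=0)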
@ajaykrishangairola3269 1 year ago
@mhathesh could you please share the code?
@043_fazlerabbi5 1 year ago
Could you share the code, please?
@043_fazlerabbi5 1 year ago
Thanks, Mhathesh TSR.
@littlerforest 3 years ago
I am having trouble when I run the code: at the line width_shift_range=0.2 I get "SyntaxError: invalid syntax" and can't seem to find any solution. Which part of my code is wrong?
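That kind of SyntaxError usually points at the line just before width_shift_range=0.2 (a missing comma or an unclosed bracket) rather than at the argument itself. A minimal sketch of a syntactically valid call, assuming the Keras ImageDataGenerator used in the video; the augmentation values are placeholders:

from keras.preprocessing.image import ImageDataGenerator

# Every argument ends with a comma and the call is closed by a single ')'.
datagen = ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=20,
    width_shift_range=0.2,   # the argument from the question
    height_shift_range=0.2,
    horizontal_flip=True,
)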
@aldotanuwiratama9733 3 years ago
Thanks for sharing this tutorial, but I want to ask: how can I improve the training accuracy on my dataset? It only reaches 3-4% on average when I train it.
@sagarsharma-np2kq 3 years ago
Mail me at sagar.sharma221998@gmail.com.
@sagarsharma-np2kq 3 years ago
I need your code, please help me.
@DEFINITE444 3 years ago
Please make a 2021 tutorial!
@DybaTube 3 years ago
!python setup.py doesn't work!
Traceback (most recent call last):
  File "setup.py", line 908, in <module>
    ENV = Environment()
  File "setup.py", line 57, in __init__
    self.installed_packages.update(self.get_installed_conda_packages())
TypeError: 'NoneType' object is not iterable
@sonoda7723 3 years ago
could you share the code or the notebook?
@woonie3134 3 years ago
Hey could I have the link for the colab??
@James-wd9ib 3 years ago
25 GB of RAM, are you serious, good god
@adarshsingh936 3 years ago
Which dataset are you using?
@superMg911 3 years ago
Do you know how I can adjust the layers in the CnnPolicy? I tried to find the code for configuring the CnnPolicy params but couldn't find it. Please help!
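The tutorial's exact setup is not shown in this thread, but if the question is about Stable-Baselines3, the usual way to change the layers behind CnnPolicy is to pass a custom features extractor through policy_kwargs. A minimal sketch under that assumption; SmallCNN, the layer sizes and the Breakout environment are placeholders, not anything from the video:

import gym
import torch as th
import torch.nn as nn
from stable_baselines3 import PPO
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor

class SmallCNN(BaseFeaturesExtractor):
    """Replaces the default CNN behind CnnPolicy."""
    def __init__(self, observation_space: gym.spaces.Box, features_dim: int = 128):
        super().__init__(observation_space, features_dim)
        n_channels = observation_space.shape[0]   # SB3 feeds images channel-first
        self.cnn = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with th.no_grad():
            n_flat = self.cnn(th.as_tensor(observation_space.sample()[None]).float()).shape[1]
        self.linear = nn.Sequential(nn.Linear(n_flat, features_dim), nn.ReLU())

    def forward(self, observations):
        return self.linear(self.cnn(observations))

env = gym.make("Breakout-v4")   # placeholder image-observation environment
model = PPO("CnnPolicy", env,
            policy_kwargs=dict(features_extractor_class=SmallCNN,
                               features_extractor_kwargs=dict(features_dim=128)))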
@superMg911 3 years ago
Hi, thank you so much, this video is helping me a lot with my thesis project!
@basithasnainhameed1854 3 years ago
While running model.fit_generator you got an error at first; how did you fix it? Kindly guide me, I am stuck.
@vishaks9666 3 years ago
You have to add model.add(Flatten()) before model.add(Dense(5, activation='softmax'))
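A minimal sketch of that fix, assuming a small Keras CNN along the lines of the tutorial; the layer sizes and the (64, 64, 3) input shape are placeholders:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, kernel_size=3, activation='relu', input_shape=(64, 64, 3)))
model.add(MaxPool2D())
# Flatten turns the (height, width, channels) feature maps into a 1-D vector,
# which is what the Dense classification head expects.
model.add(Flatten())
model.add(Dense(5, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()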
@meraphone2885 3 years ago
Please share your code
@043_fazlerabbi5 1 year ago
from keras.models import Sequential
from keras.constraints import *
from keras.optimizers import *
from keras.utils import np_utils
from keras import Model
from keras.layers import *
from keras.preprocessing.image import ImageDataGenerator

# original size (696, 520)
f = 256
s = 256

# first model
main_model = Sequential()
main_model.add(Conv2D(32, kernel_size=3, input_shape=(f, s, 1), activation='relu'))
main_model.add(BatchNormalization())
main_model.add(MaxPool2D(strides=(5, 5)))
main_model.add(Dropout(0.5))
main_model.add(Conv2D(32, kernel_size=3, activation='relu'))
main_model.add(BatchNormalization())
main_model.add(MaxPool2D(strides=(5, 5)))
main_model.add(Dropout(0.5))
main_model.add(Conv2D(64, kernel_size=3, activation='relu'))
main_model.add(BatchNormalization())
main_model.add(MaxPool2D(strides=(5, 5)))
main_model.add(Dropout(0.5))
# main_model.add(Conv2D(64, kernel_size=3, activation='relu'))
# main_model.add(BatchNormalization())
# main_model.add(MaxPool2D(strides=(5,5)))
# main_model.add(Dropout(0.5))
main_model.add(Flatten())

# lower features model - CNN2
lower_model1 = Sequential()
lower_model1.add(MaxPool2D(strides=(5, 5), input_shape=(f, s, 1)))
lower_model1.add(Conv2D(32, kernel_size=3, activation='relu'))
lower_model1.add(BatchNormalization())
lower_model1.add(MaxPool2D(strides=(5, 5)))
lower_model1.add(Dropout(0.5))
lower_model1.add(Conv2D(32, kernel_size=3, activation='relu'))
lower_model1.add(BatchNormalization())
lower_model1.add(MaxPool2D(strides=(5, 5)))
lower_model1.add(Dropout(0.5))
# lower_model1.add(Conv2D(64, kernel_size=3, activation='relu'))
# lower_model1.add(BatchNormalization())
# lower_model1.add(MaxPool2D(strides=(5,5)))
# lower_model1.add(Dropout(0.5))
lower_model1.add(Flatten())

# merged model
merged_model = Concatenate()([main_model.output, lower_model1.output])
x = Dense(128, activation='relu')(merged_model)
x = Dropout(0.25)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.25)(x)
x = Dense(32, activation='relu')(x)
output = Dense(3, activation='softmax')(x)  # add in dense layer: activity_regularizer=regularizers.l1(0.01)

final_model = Model(inputs=[main_model.input, lower_model1.input], outputs=[output])
final_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

traindir1 = "/content/drive/My Drive/DL_2_Dataset/bbrod/train"
traindir2 = "/content/drive/My Drive/DL_2_Dataset/bbrod/train"
testdir1 = "/content/drive/My Drive/DL_2_Dataset/bbrod/train"
testdir2 = "/content/drive/My Drive/DL_2_Dataset/bbrod/train"

input_imgen = ImageDataGenerator(rescale=1. / 255, rotation_range=80,
                                 width_shift_range=0.6, height_shift_range=0.5,
                                 horizontal_flip=True, zoom_range=0.8, vertical_flip=True,
                                 validation_split=0.4)
test_imgen = ImageDataGenerator(rescale=1. / 255)

batch_size = 16

def generate_generator_multiple(generator, dir1, dir2, batch_size, img_height, img_width, subset):
    genX1 = generator.flow_from_directory(dir1, target_size=(img_height, img_width),
                                          class_mode='categorical', batch_size=batch_size,
                                          shuffle=False, color_mode='grayscale',
                                          seed=7, subset=subset)
    genX2 = generator.flow_from_directory(dir2, target_size=(img_height, img_width),
                                          class_mode='categorical', batch_size=batch_size,
                                          shuffle=False, color_mode='grayscale',
                                          seed=7, subset=subset)
    while True:
        X1i = genX1.next()
        X2i = genX2.next()
        yield [X1i[0], X2i[0]], X2i[1]

inputgenerator = generate_generator_multiple(generator=input_imgen, dir1=traindir1, dir2=traindir2,
                                             batch_size=batch_size, img_height=f, img_width=s,
                                             subset="training")
testgenerator = generate_generator_multiple(input_imgen, dir1=testdir1, dir2=testdir2,
                                            batch_size=batch_size, img_height=f, img_width=s,
                                            subset="validation")

history = final_model.fit_generator(inputgenerator,
                                    # steps_per_epoch=trainsetsize/batch_size,
                                    steps_per_epoch=250, epochs=100,
                                    validation_data=testgenerator, validation_steps=100,
                                    shuffle=False)
@rbayi2310 3 years ago
Thanks! What loss and accuracy did you get with this model?
@MrDzuManji 3 years ago
I got this error: "Unexpected token < in JSON at position 6".
@hussainalaaedi 3 years ago
Please, may I get the code for this tutorial (DDPG agent, Actor-Critic, with Keras-RL for a continuous environment) by email: halaaedi@gmail.com
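For anyone looking for a starting point, a minimal sketch of a DDPG (Actor-Critic) agent with Keras-RL on a continuous-action Gym environment. This is not the tutorial's code; the network sizes, the Pendulum-v1 environment and the hyperparameters are placeholders, and it assumes the old keras-rl package with standalone Keras:

import gym
from keras.models import Sequential, Model
from keras.layers import Dense, Flatten, Input, Concatenate
from keras.optimizers import Adam
from rl.agents import DDPGAgent
from rl.memory import SequentialMemory
from rl.random import OrnsteinUhlenbeckProcess

env = gym.make('Pendulum-v1')          # any continuous-action environment
nb_actions = env.action_space.shape[0]
obs_shape = env.observation_space.shape

# Actor: maps an observation to a continuous action.
actor = Sequential([
    Flatten(input_shape=(1,) + obs_shape),
    Dense(32, activation='relu'),
    Dense(32, activation='relu'),
    Dense(nb_actions, activation='tanh'),
])

# Critic: maps (observation, action) to a Q-value.
action_input = Input(shape=(nb_actions,), name='action_input')
observation_input = Input(shape=(1,) + obs_shape, name='observation_input')
x = Concatenate()([action_input, Flatten()(observation_input)])
x = Dense(32, activation='relu')(x)
x = Dense(32, activation='relu')(x)
q_value = Dense(1, activation='linear')(x)
critic = Model(inputs=[action_input, observation_input], outputs=q_value)

memory = SequentialMemory(limit=100000, window_length=1)
random_process = OrnsteinUhlenbeckProcess(size=nb_actions, theta=0.15, mu=0.0, sigma=0.3)

agent = DDPGAgent(nb_actions=nb_actions, actor=actor, critic=critic,
                  critic_action_input=action_input, memory=memory,
                  nb_steps_warmup_critic=100, nb_steps_warmup_actor=100,
                  random_process=random_process, gamma=0.99, target_model_update=1e-3)
agent.compile(Adam(lr=1e-3), metrics=['mae'])
agent.fit(env, nb_steps=50000, visualize=False, verbose=1)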
@user-kq6gb6ej3d 3 years ago
A good tutorial, thanks!
@rohitapatil8819 3 years ago
What is the role of the encoder in super resolution? Can we use only the decoder?
@naveenkumarm8828 3 years ago
Dear Sir, with the above code the categorical_accuracy does not rise above 0.2 even after 100 epochs. How can I increase the accuracy?
@ellisiverdavid7978 3 years ago
Hi! I'm just wondering: after we obtain the most important features from the bottleneck of our trained neural network, is it possible to apply the denoising capability of the autoencoder to a live video feed that is somewhat highly correlated with the training images? Would this be better, or even recommended, compared to traditional OpenCV denoising filters for real-time video? I'd love to learn more from your expertise and advice as I explore this topic further. Thank you for the insightful explanation and demo, by the way! Subscribed! :)
@kmnm9463 3 years ago
Hi, can you please do videos on boosting and bagging algorithms in ML? Regards, KM.
@kmnm9463 3 years ago
Hi, excellent videos on Keras and image classification. Your explanation of each function parameter is really in-depth and a great help. Please keep up the excellent work. Regards, KM.
@jrohit1110 3 years ago
Hi! Is there a Discord channel for Stable Baselines?
@rubenvanderheyde6868 3 years ago
Can you please share the notebook code
@ayenainai 3 years ago
Hi, I have been watching your clips and they are very detailed, but I have a question: my GPU is an NVIDIA one, will that work?
@joshgibson539 3 years ago
x1.5 for standard Indian talking speed. Lol
@dawachyophel 3 years ago
Good! How can I get the code? I also have some further queries about other super resolution code; please assist me.
@bhuvaneshs.k638 3 years ago
Do you know how to merge two layers of shape (None, n), where n is some number, into a (None, n, n) layer using the Kronecker product in Keras?
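For two vectors of length n, the Kronecker product is just their outer product (reshaped), so one way to get a (None, n, n) tensor from two (None, n) layers is a Lambda layer around a batched outer product. A minimal sketch, assuming tf.keras; the input size, the Dense layers named layer_a and layer_b, and the model wrapper are placeholders standing in for the two existing layers:

import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

def batched_outer(tensors):
    a, b = tensors                        # both (batch, n)
    return tf.einsum('bi,bj->bij', a, b)  # (batch, n, n)

inp = Input(shape=(10,))
layer_a = Dense(8)(inp)                   # (None, 8)
layer_b = Dense(8)(inp)                   # (None, 8)
merged = Lambda(batched_outer)([layer_a, layer_b])  # (None, 8, 8)

model = Model(inputs=inp, outputs=merged)
model.summary()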
@intensewolf9101 3 years ago
Thank you very much. What is your GitHub? Could you please share this code?
@043_fazlerabbi5 1 year ago
Thanks, @MhatheshTSR.
@043_fazlerabbi5 1 year ago
(reposts the same code as in the reply above)
@engineersmenu 3 years ago
Is the code file available?
@043_fazlerabbi5 1 year ago
(reposts the same code as in the reply above)
@lakpatamang2866 3 years ago
So you are keeping the output at the same resolution as the input images. How is that super resolution? It should have been super-resolving, say, 64x64 up to 256x256.
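A minimal sketch of the input/output relationship the comment describes, assuming Keras; the layer sizes are placeholders and this is not the architecture from the video. The point is simply that the output resolution (256x256) is larger than the input (64x64):

from keras.models import Sequential
from keras.layers import Conv2D, UpSampling2D

# Input: 64x64 RGB image; output: 256x256 RGB image (4x upscaling).
sr_model = Sequential()
sr_model.add(Conv2D(64, kernel_size=3, padding='same', activation='relu', input_shape=(64, 64, 3)))
sr_model.add(UpSampling2D(size=2))   # 64x64  -> 128x128
sr_model.add(Conv2D(64, kernel_size=3, padding='same', activation='relu'))
sr_model.add(UpSampling2D(size=2))   # 128x128 -> 256x256
sr_model.add(Conv2D(3, kernel_size=3, padding='same', activation='sigmoid'))
sr_model.compile(optimizer='adam', loss='mse')
sr_model.summary()                   # final output shape: (None, 256, 256, 3)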
@lakpatamang2866 3 years ago
Yes, I get your idea, but it contradicts what super resolution really is.
@puvenes1994 3 years ago
Thanks a lot!
@md.saikatislamkhanbappy6085 3 years ago
How did you get InceptionResNetV2_ms_1.layers and InceptionResNetV2_ms_2.layers?
@NiksGMD 3 years ago
When I run '!python setup.py' I get the error 'Please run this script with Python version 3.7 or 3.8 64bit and try again.' What should I do?
@NiksGMD 3 years ago
God Complex, I'm using Google Colab and it didn't update.
@MikuDanceAnima 3 years ago
@@mhatheshtsr4212 It doesn't work:
Traceback (most recent call last):
  File "setup.py", line 15, in <module>
    from pkg_resources import parse_requirements, Requirement
ModuleNotFoundError: No module named 'pkg_resources'
@Traincraft101 3 years ago
man I really want to upload my own music
@xxxod 3 years ago
You can
@lovelyzkpop7352 3 years ago
You can
@miho1545 4 years ago
Please, please show me the hidden code; I really, really want to see it. Please show me the hidden code between 11 ~ 16. I can't understand how to check it after training. My email is wos1600@ajou.ac.kr
@Crasterius 4 years ago
I have a problem in the upsampling module:
NameError                                 Traceback (most recent call last)
<ipython-input-17-6ece540fb825> in <module>()
----> 1 Audio(f'{hps.name}/level_1/item_0.wav')
NameError: name 'Audio' is not defined
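That NameError usually just means the notebook cell never imported Audio. If this is the Jupyter/Colab display helper, a minimal sketch of the missing import (assuming that is the Audio being called; hps comes from earlier cells of the notebook):

from IPython.display import Audio

# Audio renders a playable widget for a .wav file in a notebook cell.
Audio(f'{hps.name}/level_1/item_0.wav')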
@justinm2581 4 years ago
If you were not willing to share at least the dataset, why the fuck did you upload this?
@sirduckoufthenorth 4 years ago
This was hard to follow
@MsFearco 4 years ago
You might have the knowledge to do a tutorial, but your pronunciation is not the best, plus there is background noise, which is...
@MsFearco 4 years ago
God Complex, I still watch your tutorial :) Try to buy a better microphone and practice before recording. That way you will sound much more confident and clear.
@samiulhaque3338 4 years ago
@@MsFearco This is India. Nothing can be done.
@mehuljan26 3 years ago
@@samiulhaque3338 Well at least you are getting the fucking content. So stop being a racist and suck it UP.
@pcsingh5217 4 years ago
Very nice
@BekeMan27 4 years ago
How can I plot both diagrams together, with different colors?
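If the two diagrams are the training and validation curves from a Keras History object, a minimal sketch with matplotlib; it assumes the fit call was stored in a variable named history and that the metric keys are 'accuracy'/'val_accuracy' (older Keras versions use 'acc'/'val_acc'):

import matplotlib.pyplot as plt

# Plot both curves on the same axes with different colours.
plt.plot(history.history['accuracy'], color='tab:blue', label='training accuracy')
plt.plot(history.history['val_accuracy'], color='tab:orange', label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()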
@각시탈 4 years ago
Thanks, random Indian!