
CNN Confusion Matrix with PyTorch - Neural Network Programming 

deeplizard

Published: 20 Oct 2024

Comments: 50
@deeplizard · 5 years ago
👉 Check out the blog post and other resources for this video: 🔗 deeplizard.com/learn/video/0LhiS6yu2qQ
@abdullahalbanyan930 · 5 years ago
Thank you again and again for the videos. For now, how do we add the validation data and testing data to our model? We only used the training data.
@weifanchen5602 · 4 years ago
The quality of this online course series is so high. Thank you for all the effort!
@rewangtm · 4 years ago
It always feels like watching a series with PyTorch as the hero; that ending is marvelous!
@tingnews7273 · 5 years ago
Thank you guys, great work. The confusion matrix is now clear to me and I know how to use it, not just call the function.
What I knew before this course:
1. There is a confusion matrix.
2. We can use a package to generate it.
3. I can read from it how many predictions are right.
What I learned:
1. Why it is called "confusion": it measures how confused the model is. As a human, I think that's clear.
2. Why we don't want to track gradients: for performance.
3. How we stop tracking gradients: one way is the with statement (a context manager), the other is the @ decorator. Both are Python features, and it's fun to read more about them.
4. How the confusion matrix is built: first create the 10x10 target tensor, then count into it.
Questions:
1. train_set.targets vs. labels: why don't they just use the same name?
2. What is the difference between cat and stack? They both combine two tensors into one.
3. Related to question 2: how can we understand tensors? Most courses teach tensors like this: this is a scalar, this is a vector, this is a matrix, and then extend the idea. I can see those in real life and even visualize them, but I can't see higher-dimensional tensors, and when it comes to cat/stack of tensors I can't picture it anymore.
@deeplizard · 5 years ago
Excellent learning!
1) Targets and labels mean the same thing. Both words have been used historically. It was "train_labels" though. In older versions of torchvision, you had to use train_labels or test_labels depending on whether the train argument was set to True or False, respectively. It looks like they are just consolidating that. Have a look at the issue where the change was proposed: github.com/pytorch/vision/issues/577
2) Check here: deeplizard.com/learn/video/kF2AlpykJGY Please see the description of that video.
3) It takes time. The shape gives us the best way to visualize the length of each axis.
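To make point 2 concrete before following the link, here is a tiny sketch of the difference between cat and stack (an illustrative example, not code from the linked video):

    import torch

    t1 = torch.tensor([1, 2, 3])
    t2 = torch.tensor([4, 5, 6])

    # cat joins tensors along an existing axis; stack inserts a new axis first.
    print(torch.cat((t1, t2), dim=0))    # tensor([1, 2, 3, 4, 5, 6])      -> shape (6,)
    print(torch.stack((t1, t2), dim=0))  # tensor([[1, 2, 3], [4, 5, 6]])  -> shape (2, 3)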
@rebeenali917 · 4 years ago
Thank you, this is amazing
@deeplizard · 4 years ago
No problem 😊
@chriskorfmann · 3 years ago
Good call on turning off the local gradient tracking. My memory usage dropped nearly 3GB with it off 😱
@aidenstill7179 · 5 years ago
Thanks
@urlocher54 · 4 years ago
Thank you for the excellent videos! It looks like there is a typo in the plot_confusion_matrix function on the website (to help future viewers). Running it as-is, I would get an "IndexError: tuple index out of range" on the line below. It errors out because cm.shape only has two axes (I think that's the right term), so it should be cm.shape[1] rather than cm.shape[2]: for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[2])):
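For future viewers, a minimal sketch of a plotting function in that style with the corrected index; this is an assumed reconstruction, not the exact code from the deeplizard site:

    import itertools
    import numpy as np
    import matplotlib.pyplot as plt

    def plot_confusion_matrix(cm, classes, title='Confusion matrix', cmap=plt.cm.Blues):
        plt.imshow(cm, interpolation='nearest', cmap=cmap)
        plt.title(title)
        plt.colorbar()
        tick_marks = np.arange(len(classes))
        plt.xticks(tick_marks, classes, rotation=45)
        plt.yticks(tick_marks, classes)

        thresh = cm.max() / 2.0
        # Corrected line: cm has only two axes, so the column range is cm.shape[1], not cm.shape[2].
        for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
            plt.text(j, i, format(cm[i, j], 'd'),
                     horizontalalignment='center',
                     color='white' if cm[i, j] > thresh else 'black')

        plt.ylabel('True label')
        plt.xlabel('Predicted label')
        plt.tight_layout()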
@petronetto · 3 years ago
Also, it is good practice to call network.eval() before evaluation (together with torch.no_grad()) and to call network.train() again before training.
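A minimal sketch of that pattern, using stand-in objects in place of the trained network and prediction loader from the earlier videos:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-ins for the trained network and prediction loader from the series.
    network = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    prediction_loader = DataLoader(
        TensorDataset(torch.rand(100, 1, 28, 28), torch.randint(0, 10, (100,))),
        batch_size=50,
    )

    network.eval()               # put dropout/batch-norm layers into inference mode
    with torch.no_grad():        # and stop gradient tracking, as in the video
        all_preds = torch.tensor([])
        for images, labels in prediction_loader:
            all_preds = torch.cat((all_preds, network(images)), dim=0)
    network.train()              # switch back before any further training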
@killersiggy · 3 years ago
For those not using the notebook: call plt.show() after plot_confusion_matrix().
@vasudhatapriya6315 · 4 years ago
"Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead." I got this error while making the confusion matrix using the sklearn code. Any suggestions?
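One common way around that error is to detach the prediction tensor before converting it for scikit-learn; a sketch with stand-in tensors in place of the real train_preds and train_set.targets:

    import torch
    from sklearn.metrics import confusion_matrix

    targets = torch.randint(0, 10, (1000,))
    train_preds = torch.rand(1000, 10, requires_grad=True)  # grads attached, like a forward-pass output

    # train_preds.numpy() would raise the error above; detach() returns a tensor
    # outside the autograd graph that numpy() will accept.
    preds_np = train_preds.detach().numpy()
    cm = confusion_matrix(targets.numpy(), preds_np.argmax(axis=1))
    print(cm.shape)  # (10, 10)

The other option is to generate the predictions inside torch.no_grad() in the first place, which is what the video does.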
@chenmargalit7375 · 5 years ago
At minute 13 we're getting good accuracy (88%), although we're using torch.no_grad, which means we're not really training on the data. How do we get good results without training?
@deeplizard · 5 years ago
Hey Chen - This is a continuation of the previous video when we trained the network. The accuracy that we are seeing here is from training in that video. During training, the network's weights are updated, and they stay updated. Let me know if you still have questions about this.
@Aditya-ne4lk · 4 years ago
@deeplizard So after we have trained on the data, the weights remain as they were at the end of the fifth epoch. When we call model(images) to get the predictions, we are essentially using the same weights from the fifth epoch, as they are already inside our model as part of its parameters, right? The only difference is that we turn off gradient tracking, because we are not doing any backprop, so there is no sense in tracking the gradients since they are not changing and it would only consume more memory. Is this understanding correct?
@deeplizard · 4 years ago
@Aditya Correct!
@nabeelnajeeb4394 · 4 years ago
Hi! I was trying the confusion matrix plotting part. My plot seems to be cropped, i.e. the data is not fitting in the figure. I am using the latest PyTorch and CUDA versions.
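If the cropping is a matplotlib layout issue rather than anything version-related (just a guess at the cause), a couple of common adjustments:

    import matplotlib.pyplot as plt

    plt.figure(figsize=(10, 10))                    # give the figure more room before plotting
    # plot_confusion_matrix(cmt, train_set.classes) # the plotting call from the video would go here
    plt.tight_layout()                              # pull tick labels inside the canvas
    plt.savefig('cm.png', bbox_inches='tight')      # avoid clipping in the saved image as well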
@sundarsanthanam6147 · 4 years ago
What is your torch.set_printoptions(linewidth = ??)? Is it 120?
@tonihuhtiniemi1222 · 5 years ago
When plotting a confusion matrix in the notebook:
----> 4 from resources.plotcm import plot_confusion_matrix
ModuleNotFoundError: No module named 'resources'
Any ideas? Thanks!
@deeplizard · 5 years ago
See this part of the video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-0LhiS6yu2qQ.html
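For readers hitting the same ModuleNotFoundError, the import in the traceback implies a folder layout along these lines (an assumption based on the import path, not a prescription from the video):

    # Expected layout implied by "from resources.plotcm import plot_confusion_matrix":
    #
    #   notebook.ipynb
    #   resources/
    #       __init__.py   # can be empty; marks "resources" as a package
    #       plotcm.py     # defines plot_confusion_matrix
    #
    from resources.plotcm import plot_confusion_matrix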
@ТамерланМустафаев-ч7н
When I write this:
preds_correct = get_num_correct(train_preds, train_set.targets)
print('total correct:', preds_correct)
print('accuracy:', preds_correct / len(train_set))
I always get an error that says: "The size of tensor a (10000) must match the size of tensor b (60000) at non-singleton dimension 0". How can I fix this?
@aidenstill7179 · 5 years ago
Can you make a video about implementing pattern recognition without frameworks?
@MohammedAwney84 · 5 years ago
What if I'm doing classification with about 109 different classes? Is a confusion matrix suitable for that?
@deeplizard · 5 years ago
I think so. I don't see why not. Even if the numbers are really small, we could just zoom in. The data is really what matters though. We just want to be able to quickly see where the high values are, especially for incorrect predictions. If the pairs and values were in a table, we could just order them from highest to lowest.
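Along the lines of the table idea, a sketch of pulling out the most-confused (off-diagonal) pairs when there are too many classes to eyeball, using a stand-in matrix:

    import torch

    num_classes = 109
    cmt = torch.randint(0, 50, (num_classes, num_classes))  # stand-in confusion matrix

    off_diag = cmt.clone()
    off_diag.fill_diagonal_(0)                       # ignore correct predictions on the diagonal
    values, flat_idx = off_diag.flatten().topk(10)   # the ten largest confusions
    for v, idx in zip(values, flat_idx):
        true_label, pred_label = divmod(idx.item(), num_classes)
        print(f'true {true_label} -> predicted {pred_label}: {v.item()}')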
@sundarsanthanam6147 · 4 years ago
This inference step should always be done after training, if I'm right? And why are we doing argmax with respect to the first dimension?
@deeplizard · 4 years ago
You can do inference any time. However, if the network is not trained, the predictions will be random. With the prediction tensor, dim=0 is the dim that contains the images. dim=1 is the 10 predictions for each image. Our task with the argmax is to get the highest prediction value for each image. Those values run along dim=1.
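A tiny illustration of those dimensions, with a random stand-in for the prediction tensor:

    import torch

    train_preds = torch.rand(60000, 10)            # dim=0: images, dim=1: ten class scores per image
    predicted_classes = train_preds.argmax(dim=1)  # highest-scoring class for each image
    print(predicted_classes.shape)                 # torch.Size([60000])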
@sundarsanthanam6147 · 4 years ago
@@deeplizard Thank you so much
@deepikapantola3203 · 5 years ago
Hello, I am using Python 3.7 through Spyder. While trying to import "resources.plotcm" I am getting the error "module resources.plotcm not found". Please help.
@deeplizard · 5 years ago
Hi Deepika - See this part of the video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-0LhiS6yu2qQ.html
@swagatochatterjee7104 · 5 years ago
How to do this for multilabel classification?
@tallwaters9708 · 5 years ago
This IS multilabel classification.
@swagatochatterjee7104 · 5 years ago
@tallwaters9708 This is single-label classification. I guess you are confusing binary and multiclass classification.
@deeplizard · 5 years ago
For the visualization, it's just a matter of creating the matrix tensor. In the matrix, you'd need to have squares for label_1, label_2, (label_1 and label_2) and so on. I think it would work like that.
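As a side note, for genuinely multi-label targets scikit-learn also offers a per-label breakdown; this is an alternative to the combined-squares matrix described above, not something covered in the video:

    import numpy as np
    from sklearn.metrics import multilabel_confusion_matrix

    y_true = np.array([[1, 0, 1],
                       [0, 1, 0],
                       [1, 1, 0]])
    y_pred = np.array([[1, 0, 0],
                       [0, 1, 1],
                       [1, 0, 0]])

    # One 2x2 (true/false negative/positive) matrix per label.
    print(multilabel_confusion_matrix(y_true, y_pred).shape)  # (3, 2, 2)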
@shashwatsingh2406 · 5 years ago
At 8:10, when I write the same code, train_preds.shape gives a size of 600x10. How can I resolve this?
@deeplizard · 5 years ago
Seems like something may be wrong with your get_all_preds function. Double check your code. Use the site as a reference: deeplizard.com/learn/video/0LhiS6yu2qQ
@SuperIdo1 · 4 years ago
This is really nit-picky, but at 16:38 you can just do j, k = stacked[0]; Python has native unpacking. Also, you can change the following operation to cmt[j, k] += 1. In any case, thanks for the videos :)
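For reference, the suggestion would look roughly like this, with stand-in tensors in place of the real predictions and targets from the video:

    import torch

    targets = torch.randint(0, 10, (60000,))    # stand-in for train_set.targets
    predicted = torch.randint(0, 10, (60000,))  # stand-in for train_preds.argmax(dim=1)

    stacked = torch.stack((targets, predicted), dim=1)  # shape (60000, 2) of (true, predicted) pairs
    cmt = torch.zeros(10, 10, dtype=torch.int64)

    j, k = stacked[0]     # native unpacking of the first pair, as suggested
    for j, k in stacked:  # the same idea applied while counting into the matrix
        cmt[j, k] += 1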
@deeplizard · 4 years ago
Hey Superldo1 - You are welcome for the videos! I like being "nit-picky," and I like your suggestions. I will say that just because something "can" be done, doesn't necessarily mean it "should" be done. Coding is a form of expression and much of this stuff is subjective. I'm usually nit-picky when a change will make code more readable or easier to understand, but even this can be subjective. I find that more verbose writing often leads to easier readability. Short-hand stuff can often make things harder to understand or require more in-depth knowledge. Many groups set up coding standards that developers are expected to follow so that code expression is uniform across a code base. That was just some additional thoughts. Chris
@SuperIdo1 · 4 years ago
@@deeplizard Respectable response :) I do agree readability should be a top concern. Cheers