Thank you guys, great work. Now the confusion matrix is clear to me and I know how to use it, not just call the function.

Before this course, what I knew:
1. There is a confusion matrix.
2. We can use a package to generate it.
3. It tells me how many predictions are right.

What I learned:
1. Why it is called "confusion": it measures how confused the model is. As a human, I think that's clear.
2. Why we don't want to track gradients: for performance.
3. How we stop tracking gradients: one way is with (a context manager), the other is the @ decorator. These are both Python features, and it's fun to read more about them.
4. How the confusion matrix is made: first create the target tensor (10x10), then count.

Questions:
1. train_set, targets, labels: why don't they just use the same name?
2. What is the difference between cat and stack? They both combine two tensors into one.
3. Actually, question 2 leads to a more interesting question: how can we understand tensors? Most courses teach tensors like this: this is a scalar, this is a vector, this is a matrix, and then they extend the pattern and say: tensors, you get it. I can see scalars, vectors, and matrices in real life and even visualize them, but I can't see higher-rank tensors. When it comes to cat and stack on tensors, I can't picture it anymore.
Excellent learning! 1) Targets and labels mean the same thing; both words have been used historically. It was "train_labels" though. In older versions of torchvision, you had to use train_labels or test_labels depending on whether the train argument was set to True or False, respectively. It looks like they are just consolidating that. Have a look at the issue where the change was proposed: github.com/pytorch/vision/issues/577 2) Check here: deeplizard.com/learn/video/kF2AlpykJGY and please see the description of that video. 3) It takes time. The shape gives us the best way to visualize the length of each axis.
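For anyone wondering about point 2 in the meantime, the cat vs. stack difference can be sketched in a few lines (assuming PyTorch is installed):

```python
import torch

t1 = torch.tensor([1, 2, 3])
t2 = torch.tensor([4, 5, 6])

# cat joins tensors along an EXISTING axis: the result is still rank-1
cat_result = torch.cat((t1, t2), dim=0)
print(cat_result.shape)    # torch.Size([6])

# stack joins tensors along a NEW axis: the result becomes rank-2
stack_result = torch.stack((t1, t2), dim=0)
print(stack_result.shape)  # torch.Size([2, 3])
```

So cat extends an axis that already exists, while stack inserts a new axis and places each input tensor along it.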
Thank you for the excellent videos! It looks like there is a typo in the plot_confusion_matrix function on the website (to help future viewers). Running it as-is, I would get an "IndexError: tuple index out of range" on the following line: for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[2])): It errors out since cm.shape only has two axes (I think that's the right term), so it should be cm.shape[1], not cm.shape[2].
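To illustrate the fix described above, here is a minimal sketch (cm here is a made-up 2-D confusion matrix, standing in for the one on the site):

```python
import itertools
import numpy as np

cm = np.array([[5, 1],
               [2, 7]])  # example 2-D confusion matrix

# cm.shape has only two entries (rows, cols), so the second range
# must use cm.shape[1]; cm.shape[2] raises IndexError
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
    print(i, j, cm[i, j])
```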
"Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead." I got this error while making the confusion matrix using the sklearn code. Any suggestions?
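A common way around this error, following the hint in the message itself (a sketch; the preds tensor here is a stand-in for the model's output):

```python
import torch

# stand-in for a prediction tensor that still carries autograd history
preds = torch.rand(4, 10, requires_grad=True)

# preds.numpy() would raise the error above; detach() returns a tensor
# that is cut off from the autograd graph and can be converted safely
as_numpy = preds.detach().numpy()
print(as_numpy.shape)  # (4, 10)
```

Generating the predictions inside a torch.no_grad() block avoids the problem in the first place, since the output then never requires grad.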
At minute 13, we're getting good accuracy (88%) although we're using torch.no_grad, which means we're not really training on the data. How do we get good results without training?
Hey Chen - This is a continuation of the previous video when we trained the network. The accuracy that we are seeing here is from training in that video. During training, the network's weights are updated, and they stay updated. Let me know if you still have questions about this.
@deeplizard So after we have trained the network, the weights remain as they were at the end of the fifth epoch. When we call model(images) to get the predictions, we are essentially using the same weights from the fifth epoch, as they are already inside our model as part of its parameters, right? The only difference is that we turn off gradient tracking because we are not doing any backprop, so there is no sense in tracking the gradients as they are not changing, and it would only consume more memory. Is this understanding correct?
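The understanding described above matches how inference is usually written. A minimal sketch (nn.Linear here is just a stand-in for the trained network from the previous video):

```python
import torch
import torch.nn as nn

model = nn.Linear(784, 10)     # stand-in for the trained network
images = torch.rand(64, 784)   # stand-in batch of flattened images

# The learned weights live inside model's parameters; no_grad only
# disables graph building, it does not change the weights at all
with torch.no_grad():
    preds = model(images)

print(preds.requires_grad)  # False: no graph is built, saving memory
```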
Hi! I was trying the confusion matrix plotting part. My plot seems to be cropped, i.e., the data is not fitting in the figure. I am using the latest PyTorch and CUDA versions.
When plotting a conf matrix in notebook: ----> 4 from resources.plotcm import plot_confusion_matrix ModuleNotFoundError: No module named 'resources' ? Thanks!
When I write this:

preds_correct = get_num_correct(train_preds, train_set.targets)
print('total correct:', preds_correct)
print('accuracy:', preds_correct / len(train_set))

I always get an error that says:

The size of tensor a (10000) must match the size of tensor b (60000) at non-singleton dimension 0

How can I fix this?
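The error says the two tensors have different lengths along dim 0, which suggests train_preds was generated from a 10,000-sample loader (the FashionMNIST test set size) while train_set.targets has 60,000 entries. A quick shape check can confirm this (a sketch; the tensors below are stand-ins for the real ones):

```python
import torch

train_preds = torch.rand(10000, 10)        # stand-in: preds from the wrong loader
targets = torch.randint(0, 10, (60000,))   # stand-in: train_set.targets

# Element-wise comparison needs matching sizes along dim 0
print(train_preds.shape[0], targets.shape[0])  # 10000 vs 60000: mismatch

# The fix would be to regenerate the predictions with a DataLoader
# built on train_set, so both tensors cover the same 60,000 samples.
```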
I think so. I don't see why not. Even if the numbers are really small, we could just zoom in. The data is really what matters, though. We just want to be able to quickly see where the high values are, especially for incorrect predictions. If the pairs and values were in a table, we could just sort from highest to lowest values.
You can do inference any time. However, if the network is not trained, the predictions will be random. With the prediction tensor, dim=0 is the dim that contains the images. dim=1 is the 10 predictions for each image. Our task with the argmax is to get the highest prediction value for each image. Those values run along dim=1.
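A small sketch of that argmax step (the numbers below are made up, and only 3 classes are used to keep it short):

```python
import torch

# 3 images, each with a score per class: dim=0 indexes the images,
# dim=1 indexes the class predictions for each image
preds = torch.tensor([[0.1, 0.9, 0.0],
                      [0.8, 0.1, 0.1],
                      [0.2, 0.2, 0.6]])

# argmax along dim=1 returns the index of the highest score per image
print(preds.argmax(dim=1))  # tensor([1, 0, 2])
```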
Hello, I am using Python 3.7 through Spyder. While trying to import "resources.plotcm" I am getting the error "module resources.plotcm not found". Please help.
For the visualization, it's just a matter of creating the matrix tensor. In the matrix, you'd need to have squares for label_1, label_2, (label_1 and label_2) and so on. I think it would work like that.
Seems like something may be wrong with your get_all_preds function. Double check your code. Use the site as a reference: deeplizard.com/learn/video/0LhiS6yu2qQ
This is really being nit-picky, but at 16:38 you can just do j, k = stacked[0]; Python has native unpacking. Also, you can change the following operation to cmt[j, k] += 1. In any case, thanks for the videos :)
Hey Superldo1 - You are welcome for the videos! I like being "nit-picky," and I like your suggestions. I will say that just because something "can" be done, doesn't necessarily mean it "should" be done. Coding is a form of expression and much of this stuff is subjective. I'm usually nit-picky when a change will make code more readable or easier to understand, but even this can be subjective. I find that more verbose writing often leads to easier readability. Short-hand stuff can often make things harder to understand or require more in-depth knowledge. Many groups set up coding standards that developers are expected to follow so that code expression is uniform across a code base. That was just some additional thoughts. Chris
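For anyone comparing the two styles from this exchange, here is how the suggested tweak might look (a sketch; stacked and cmt mirror the video's variables, but the data below is made up):

```python
import torch

# stand-ins: (true, predicted) label pairs and an empty 10x10 matrix
stacked = torch.tensor([[2, 2], [5, 3], [2, 2]])
cmt = torch.zeros(10, 10, dtype=torch.int64)

for p in stacked:
    j, k = p         # native unpacking instead of p[0], p[1]
    cmt[j, k] += 1   # pair indexing instead of cmt[j][k]

print(cmt[2, 2])  # tensor(2): the (2, 2) pair appeared twice
```

Both versions produce the same matrix; whether the shorter form is clearer is, as noted above, subjective.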