Great tutorial! A lot of useful information! Although, applying some pretrained models to my problem, I am left with a very strange situation. After running a prediction on test data, the model returned probabilities per label for the test images, but those probabilities are very similar to each other. For example, I get a minimum probability of 0.24992 and a maximum of 0.25801 for label X, a minimum of 0.396754 and a maximum of 0.399788 for label Y, etc. So basically the result is almost the same for every image. When training for more epochs, I even end up with identical probabilities for every image for each class label, and I can't find any answer as to why. I tried manipulating parameters like the learning rate and adding dropout, but it did not help. Could you help me, as an expert, with what might be the cause?
Try training with a very small number of images, and verify that you can get the training loss low and the training accuracy high on that small set. This should work because the NN will overfit those images (think of it as memorizing them). Only if that works should you try a larger number of images.
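To illustrate the sanity check above, here is a minimal sketch of "overfit a tiny subset" using hypothetical stand-in data (random vectors instead of real images) and a plain softmax classifier trained by gradient descent. With only 8 samples in 64 dimensions, the points are almost surely separable, so a working training loop should drive training accuracy to 1.0; if your pipeline can't do even this, the bug is in the pipeline, not the architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8 fake "images" (flattened) with 4 class labels -- hypothetical
# stand-in data for the tiny-subset check described above
n, d, c = 8, 64, 4
x = rng.normal(size=(n, d))
y = rng.integers(0, c, size=n)
onehot = np.eye(c)[y]

# plain softmax regression, trained by full-batch gradient descent
w = np.zeros((d, c))
b = np.zeros(c)

for step in range(500):
    logits = x @ w + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / n          # gradient of mean cross-entropy w.r.t. logits
    w -= 0.5 * (x.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

loss = -np.log(p[np.arange(n), y]).mean()
acc = (p.argmax(axis=1) == y).mean()
print(f"train loss={loss:.4f}  train acc={acc:.2f}")
```

The same check applies unchanged to a real CNN: just slice off the first handful of images and labels from your training set and confirm the loss actually falls well below the uniform-prediction baseline.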
@NeilRhodesHMC Thanks a lot for the quick answer! By a very small number of images, do you mean restricting the number of images per training epoch, or a smaller batch size? As for the loss, it is also pretty stable, hardly moving per epoch, sitting at 0.7 ± 0.1. It's very strange; I tested multiple architectures, but after x epochs I still get the same results :(
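One quick diagnostic worth adding here (my suggestion, not from the comments above): a loss that sits flat near 0.7 is suspiciously close to the cross-entropy a model gets by ignoring the input and predicting the class prior. That baseline equals the entropy of the label distribution, e.g. ln(2) ≈ 0.693 for two balanced classes. The class counts below are hypothetical; substitute your own.

```python
import math

# Cross-entropy of a model that always predicts the class prior.
# If your training loss plateaus at this value, the network has
# collapsed to predicting the prior and is ignoring the inputs.
counts = [500, 500]  # hypothetical label counts -- replace with yours
total = sum(counts)
baseline = -sum(k / total * math.log(k / total) for k in counts)
print(f"prior-only cross-entropy baseline: {baseline:.4f}")  # -> 0.6931 here
```

If your stuck loss matches this number, the usual suspects are a learning rate that is too high or too low, labels misaligned with inputs (e.g. shuffled independently), or frozen/never-updated layers in the pretrained backbone.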