I have two questions. How are the 1x1 and 3x3 convolutions trained to obtain their weight parameters? Also, shouldn't a 3x3 convolution with stride 1 change the dimensions? It keeps the number of channels the same, but the spatial size of the output feature map would have shrunk by 2.
How is this different from a U-Net? I think they're pretty similar: in a U-Net you go down in the encoder, up in the decoder, and sideways with the skip connections. It's like an upside-down U-Net.
This is quite informative and helpful. Could you please make a video on prediction heads in FPN, i.e. how to assign a predicted bbox to a particular feature map? That would be quite helpful.
Yes, I'm thinking of making some videos about different label-assignment techniques. As for your question: the more accurate phrasing of your request would be how to assign an anchor box to a particular feature map.
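For a concrete sense of what "assigning a box to a feature map" looks like: one common scheme is the RoI-to-level rule from the FPN paper, k = floor(k0 + log2(sqrt(wh)/224)), where 224 is the canonical ImageNet size and k0 = 4. The function name and clamping range below are illustrative choices, not from the video:

```python
import math

def fpn_level(w, h, k0=4, canonical=224, k_min=2, k_max=5):
    """Assign a box of size w x h to a pyramid level using the FPN
    paper's RoI rule: k = floor(k0 + log2(sqrt(w*h) / canonical)),
    clamped to the available levels [k_min, k_max]."""
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / canonical))
    return max(k_min, min(k, k_max))

print(fpn_level(224, 224))  # -> 4  (canonical-size box goes to P4)
print(fpn_level(112, 112))  # -> 3  (half the size, one level down)
print(fpn_level(10, 10))    # -> 2  (tiny box clamps to the finest level)
```

Anchor-based detectors instead tie each anchor scale to a fixed level at design time, but the intuition is the same: smaller boxes go to finer (higher-resolution) feature maps.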
I don't know if I got this wrong, but if I pass a 1x64x26x26 feature map through a convolution with K=3 and S=1, I will definitely not end up with 1x64x26x26 but with 1x64x24x24. Achieving the desired shape would require P=1. If I'm not correct, could someone please explain how the dimensions work out in this case?
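The comment's arithmetic checks out with the standard output-size formula, floor((size - K + 2P)/S) + 1; with P=1 a 3x3, stride-1 convolution preserves spatial size (which is why FPN's 3x3 smoothing convs use padding). A minimal check of both cases:

```python
def conv2d_out(size, kernel, stride=1, padding=0):
    """Standard conv output-size formula:
    floor((size - kernel + 2*padding) / stride) + 1."""
    return (size - kernel + 2 * padding) // stride + 1

# K=3, S=1, no padding: 26 -> 24 (shrinks by 2, as the comment says)
print(conv2d_out(26, 3, stride=1, padding=0))  # -> 24
# With P=1 ("same" padding for a 3x3 kernel), size is preserved
print(conv2d_out(26, 3, stride=1, padding=1))  # -> 26
```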
Though I understand the theory, I have just never implemented or used them myself. I prefer to share concepts that I have implemented myself and applied to some real-world problem. But I'm not saying no :) maybe one day. Thanks for the ask, though.
Instead of doing the upsampling via a PyTorch module and being angry about it, would it be more useful to train an additional layer to do the upsampling instead? I'm thinking of a layer analogous to the decoder layer in an autoencoder.
No need to be angry at it :) … yes, you could do that. In fact, the additional layers after upsampling are there to reduce its effects. The cost would be the number of parameters, so it is always a trade-off.