Dear Dr. Sreenivas, your videos have always been helpful. I wonder: if I am trying to blend georeferenced rasters that contain only probabilities from 0 to 1 instead of RGB, is it possible to use rasterio instead of OpenCV? Thank you.
Good video. In the part where you crop the input image to fit a certain size, I would also recommend trying padding, because the user normally does not expect any data loss in the output, if that makes sense. Thank you again for your work, Sreeni.
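To make the padding idea concrete, here is a minimal NumPy sketch (my own illustration, not code from the video): it pads height and width up to the next multiple of the patch size, so nothing is cropped away. The helper name `pad_to_multiple` and the reflect mode are just illustrative choices.

```python
import numpy as np

def pad_to_multiple(img, patch=256):
    """Pad H and W up to the next multiple of the patch size.
    Reflect padding tends to leave fewer border artifacts than zero padding."""
    h, w = img.shape[:2]
    pad_h = (-h) % patch  # extra rows needed to reach the next multiple
    pad_w = (-w) % patch  # extra columns needed
    pads = [(0, pad_h), (0, pad_w)] + [(0, 0)] * (img.ndim - 2)
    return np.pad(img, pads, mode="reflect")
```

After prediction you would simply crop the result back to the original height and width, so the user-visible output keeps its full extent.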
Does this work for 3D patches of irregular shape? I am trying to feed 256x256x128 images as 256x256x32 patches and I am getting errors. What would the window size be in this case?
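For what it's worth, non-overlapping 3D patching only requires that each dimension divide evenly by the patch size along that axis. Here is a pure-NumPy sketch of the same idea so the shapes are easy to check; `patchify_3d` is my own illustrative helper, not the patchify library's API:

```python
import numpy as np

def patchify_3d(vol, patch):
    """Split a 3D volume into non-overlapping patches (step == patch size).
    Returns a grid of shape (Z/pz, Y/py, X/px, pz, py, px)."""
    pz, py, px = patch
    Z, Y, X = vol.shape
    assert Z % pz == 0 and Y % py == 0 and X % px == 0, "patch must tile the volume"
    return (vol.reshape(Z // pz, pz, Y // py, py, X // px, px)
               .transpose(0, 2, 4, 1, 3, 5))

vol = np.zeros((256, 256, 128))
p = patchify_3d(vol, (256, 256, 32))
# grid is 1 x 1 x 4: four 256x256x32 slabs along the last axis
```

With the patchify library itself, the equivalent call would presumably be `patchify(vol, (256, 256, 32), step=32)`, since 256 and 128 are both divisible by 32; errors usually come from a step that does not tile the volume evenly.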
Hi, thanks for this very useful repo. Two things I don't understand: can we use it with a PyTorch model that predicts a mask directly rather than one-hot output? And what is the input dimension of pred_function?
Hi Sreeni, great effort. I had some issues with predicting on a large image; I got an error like "im = np.array(im)[:, ::-1] MemoryError: Unable to allocate 10.5 GiB for an array with shape (22564, 20810, 3) and data type float64". How can I solve this issue? Please help.
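The 10.5 GiB figure comes from the float64 default: 22564 x 20810 x 3 x 8 bytes is about 10.5 GiB. A common fix is to keep the image as uint8 (1 byte per value, an 8x saving) and avoid unnecessary copies; a sketch with scaled-down stand-in dimensions:

```python
import numpy as np

h, w = 2256, 2081  # stand-in for the real 22564 x 20810 image
im = np.zeros((h, w, 3), dtype=np.uint8)  # uint8: 1 byte/value vs float64's 8

flipped_view = im[:, ::-1]                    # a view: no new memory allocated
flipped = np.ascontiguousarray(flipped_view)  # copy only when contiguity is needed
```

If even uint8 does not fit in RAM, process the image window by window instead of loading it whole (e.g. rasterio's windowed reads), and only cast each patch to float32 right before feeding it to the model.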
Hello @DigitalSreeni: what about imagery with different spatial resolutions? How do you deal with those issues? I assume pooling would be a solution, but I am unsure. Thanks.
I get right to the part where I need to unpatchify; however, it keeps saying "The patches dimension is not equal to the original image size". Somehow you are able to unpatchify without the 3 from the RGB channels. That is the only thing preventing my unpatchify from working. Am I missing something? Is it a different version of patchify or something?
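In case it helps debugging: with RGB the shape bookkeeping is the usual culprit, since patchify on an H x W x 3 image with a (ph, pw, 3) patch returns a 6-D array with a singleton channel axis, and the patch grid must multiply back exactly to the original size. Here is a pure-NumPy reassembly sketch that keeps the 3 channels; `unpatchify_rgb` is my own helper, not the patchify API:

```python
import numpy as np

def unpatchify_rgb(patches, H, W):
    """Reassemble non-overlapping RGB tiles.
    patches: (n, m, ph, pw, 3) grid; n*ph must equal H and m*pw must equal W."""
    n, m, ph, pw, c = patches.shape
    assert n * ph == H and m * pw == W, "patch grid does not tile the image"
    # interleave the grid axes with the within-tile axes, then flatten
    return patches.transpose(0, 2, 1, 3, 4).reshape(H, W, c)
```

If you stay with the library's unpatchify, check that (H - ph) is divisible by the step along every axis and that you squeezed the singleton axis before reassembling; either mismatch produces exactly that "not equal to the original image size" message.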
I am trying to dehaze an image using DL, and I am also using patches, but when I combine the patches I can visually see boundaries around them. How can I use this method? Since I am using PyTorch everything is a tensor, and the available code is for NumPy arrays. Can you help me solve this problem?
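On the seam artifacts: the standard remedy is to predict overlapping patches and average them, which is the core of the smooth blending shown in the video. Below is a minimal NumPy sketch of that idea using uniform weights rather than the spline window from the video; for PyTorch, convert first with `tensor.detach().cpu().numpy()`. The helper names are my own.

```python
import numpy as np

def _positions(length, patch, step):
    """Patch start offsets along one axis, forcing coverage of the far border."""
    pos = list(range(0, length - patch + 1, step))
    if pos[-1] != length - patch:
        pos.append(length - patch)
    return pos

def blend_patches(pred_fn, img, patch=256, step=128):
    """Average overlapping patch predictions to hide seams.
    pred_fn is assumed to map a (patch, patch) array to a same-shape array."""
    H, W = img.shape
    out = np.zeros((H, W))
    weight = np.zeros((H, W))
    for y in _positions(H, patch, step):
        for x in _positions(W, patch, step):
            out[y:y + patch, x:x + patch] += pred_fn(img[y:y + patch, x:x + patch])
            weight[y:y + patch, x:x + patch] += 1
    return out / weight  # every pixel is covered at least once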
Hi, I'm using patchify in Colab to cut a UAV image into patches to make labels for a training dataset, but when I open the output in QGIS it has no georeferenced coordinates. Which step am I doing wrong?
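NumPy arrays carry no CRS or affine transform, so anything saved through patchify alone loses the georeferencing; each patch needs its own transform derived from the parent image's. A sketch of the origin arithmetic for a north-up raster (no rotation terms); with rasterio you would instead read through windows and write each patch with `rasterio.windows.transform(window, src.transform)` plus the source CRS:

```python
def patch_origin(x0, y0, px_w, px_h, row_off, col_off):
    """Top-left world coordinate of a patch starting at (row_off, col_off)
    in a north-up raster. px_h is conventionally negative (y decreases
    as rows increase)."""
    return (x0 + col_off * px_w, y0 + row_off * px_h)
```

Writing each patch as a GeoTIFF with its shifted transform and the parent CRS is what makes QGIS place it correctly.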
Thanks for the very informative video. Could you tell me how to measure objects in a large image when a few objects span into another patch? In that case the object metrics are not accurate. I am using a Mask R-CNN model.
Thank you very much for the effort. I wonder about the following: you put quite some effort into illustrating the advantage of smooth blending, and to be clear, it shows. However, why don't you calculate metrics such as IoU, Dice, or overall accuracy on both the non-smoothly blended and the smoothly blended results? These should show the advantage *quantitatively* rather than qualitatively, right?
Yes, you need to calculate IoU metrics to make sure you understand the accuracy of your final result. In this case I omitted that from my video as I try not to jam too many things in every video. Good point though... In general, you need to check all metrics when you are putting together a solution to an image analysis challenge.
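For anyone who wants to run that quantitative comparison themselves, IoU and Dice on binary masks are only a few lines of NumPy; scoring the smoothly and non-smoothly blended predictions against the same ground truth makes the difference measurable. A minimal sketch:

```python
import numpy as np

def iou_dice(pred, gt):
    """IoU and Dice for binary masks (anything castable to bool)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0   # empty masks agree perfectly
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```

For multi-class segmentation you would compute these per class and average (mean IoU), which is what most segmentation benchmarks report.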
Sorry, I do not have any insights into master's degrees in deep learning, as it depends on many factors, primarily your location. I am based out of the San Francisco Bay Area, and I can definitely tell you that Stanford, UC Berkeley, and UC Davis are all good universities for deep learning. In general, doing a master's in this field is a good idea, as more and more jobs are opening up in it.
It was covered in the previous lecture: github.com/bnsreenu/python_for_microscopists/tree/master/228_semantic_segmentation_of_aerial_imagery_using_unet