The grouping implemented between 01:15:00 and 02:15:00 lacks one vital component: each list of DICOM file paths returned by glob has to be sorted before being grouped. Otherwise the elements of each group will be in arbitrary order, so consecutive slices within a group will lie at random distances from each other, which causes an error when each DICOM group is converted to its corresponding NIfTI file. You can solve this by simply replacing every `glob()` call with `sorted(glob())`. With `sorted`, the file names are ordered and the grouped slices have uniform spacing along the axial direction, preventing the error mentioned above.
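A minimal sketch of the fix described above. The glob pattern and the group size are placeholders; adjust them to your own data layout:

```python
from glob import glob

# Unsorted glob order is filesystem-dependent, so slices end up shuffled.
# Sorting the paths keeps consecutive slices adjacent before grouping.
slice_paths = sorted(glob("dicom_files/*.dcm"))  # placeholder pattern

# Group the sorted slices into fixed-size chunks (e.g. 64 slices each).
group_size = 64
groups = [slice_paths[i:i + group_size]
          for i in range(0, len(slice_paths), group_size)]
```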
I love this course; it is really useful. I think at 1:50:57 there is a mistake with the enumerate: it should be `for j, file in enumerate(...)`, since you already use `i` in the outer loop.
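A hypothetical sketch of the nested loop being referred to (the group contents are placeholders): the outer loop already binds `i`, so the inner enumerate must use a different name such as `j`:

```python
# Placeholder groups of slice paths, standing in for the real DICOM groups.
groups = [["a.dcm", "b.dcm"], ["c.dcm"]]

indexed = []
for i, group in enumerate(groups):
    for j, file in enumerate(group):  # `j`, not `i`: avoid shadowing the outer index
        indexed.append((i, j, file))

print(indexed)  # [(0, 0, 'a.dcm'), (0, 1, 'b.dcm'), (1, 0, 'c.dcm')]
```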
Very good course. But there is one thing I have to mention: I don't think people who come here to watch an advanced course on medical image processing need to learn how to install Python or related software.
Hello, I followed your steps one by one, but when I import monai I get the following error:

AttributeError Traceback (most recent call last)
Cell In[5], line 1
----> 1 import monai
File D:\Anacoda\envs\liver_segmentation\Lib\site-packages\monai\__init__.py:58
 44 excludes = "|".join(
 45 [
 46 "(^(monai.handlers))",
 (...)
 54 ]
 55 )
 57 # load directory modules only, skip loading individual files
---> 58 load_submodules(sys.modules[__name__], False, exclude_pattern=excludes)
 60 # load all modules, this will trigger all export decorations
 61 load_submodules(sys.modules[__name__], True, exclude_pattern=excludes)
File D:\Anacoda\envs\liver_segmentation\Lib\site-packages\monai\utils\module.py:212, in load_submodules(basemod, load_all, exclude_pattern)
 210 try:
 211 mod = import_module(name)
--> 212 importer.find_module(name).load_module(name) # type: ignore
 213 submodules.append(mod)
 214 except OptionalImportError:
AttributeError: 'FileFinder' object has no attribute 'find_module'

I hope you can reply soon so I can complete the course.
Is there any actual prediction? It seems like the testing loop is just validation and still requires an image/label pair. How would I use the trained model to predict/segment a single scan and get the segmented output?
One thing that isn't clear to me is the purpose of the segmentation. If you save this segmentation, how can you read it back into Python so that the model focuses on features of a specific region of interest?
He selected a few cases, I think 10 from the volumes folder and 10 from the labels folder. Please note that the volumes and labels you select must match: for example, if you choose the Liver_24_3 volume, the label must be Liver_24_3 as well, and so on.
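A small sketch of how the matching could be checked automatically. The folder names "volumes" and "labels" and the file pattern are assumptions standing in for your own paths:

```python
import os
from glob import glob

# Sort both lists so corresponding cases line up by name.
volume_paths = sorted(glob(os.path.join("volumes", "*.nii.gz")))
label_paths = sorted(glob(os.path.join("labels", "*.nii.gz")))

# Each selected volume should have a label with the same file name,
# e.g. liver_24_3.nii.gz in both folders.
for vol, lab in zip(volume_paths, label_paths):
    assert os.path.basename(vol) == os.path.basename(lab), (vol, lab)
```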
Is there any difference between the MONAI networks (ResNet, DenseNet, UNet) and their standard versions? The MONAI documentation mentions that its networks are adapted from standard architectures, but I noticed a huge performance difference when training 2D hand X-rays on the MONAI ResNet and DenseNet compared to the same models trained in TensorFlow. Could it be because of the MONAI network design or the transforms?
When I try to load the .nii.gz file, I get this error: "Error occurred while loading the selected files. Click 'Show details' button and check the application log for more information." Can anyone help me out?
Hello sir, while writing the code for the preprocessing part, I encountered this warning: "FutureWarning: Class `AddChannel` has been deprecated since version 0.8. Please use MetaTensor data type and monai.transforms.EnsureChannelFirst instead." After that I replaced AddChanneld with EnsureChannelFirstd and also imported MetaTensor, but when I executed the scripts my kernel died without showing anything else. I also don't know where I am supposed to use MetaTensor. Kindly help me resolve this issue, sir.
Hi, yes indeed the AddChannel transform is deprecated. There is a simple solution: you can use a NumPy function that adds an axis to your array. Add an axis at the beginning of the array, and the rest of the code will work with no problem.
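A minimal sketch of the NumPy workaround described above, using a placeholder volume shape: add a channel axis at position 0 so a (H, W, D) volume becomes (1, H, W, D), mirroring what AddChannel did:

```python
import numpy as np

volume = np.zeros((128, 128, 64))        # placeholder (H, W, D) volume
volume = np.expand_dims(volume, axis=0)  # equivalently: volume[np.newaxis, ...]
print(volume.shape)  # (1, 128, 128, 64)
```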
Hi, can I use this approach for detecting changes in the same places over a period of time in satellite imagery? I just want to detect changes in satellite images taken at different times.
First, thank you very much for such an amazing explanation. I have a question: in my data, some patients have 20 DICOM slices and others have 400. I want to divide them into groups like you did. Should each group have 20 DICOMs, or 64, or something else, before converting each group into a NIfTI file?
I'm getting an error in `from monai.transforms import (Compose, AddChanneld, LoadImaged, Resized, ToTensord, Spacingd, Orientationd, ScaleIntensityRanged, CropForegroundd)`. Anything ending with 'd' raises an error! Can anyone help me?
Hi, this is helping me a lot in learning MONAI. By the way, could you please tell me how you construct, or where to get, the Training and Testing folders in the preprocessing part? I followed the video from the beginning, and the original dataset's folder structure is not like that.
Training a model on the exact original dimensions is not possible if either of these two conditions applies. (1) Using a sliding window: if the image is very large (which is the case about 70% of the time) and you don't have enough GPU memory, it will be impossible to train the model on the data as-is. (2) Not using a sliding window: you will not have the same number of slices for all cases, so you will need to normalize them, and you may also hit the problem described in the first point. So if you have either of these problems, you can't use the data as it is, and splitting the volumes into groups of 128 or 64 slices is one method I proposed. If you have a better idea, you can use it as well :)
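The splitting idea above can be sketched as follows. The helper name `split_slices` and the choice to drop incomplete trailing groups are assumptions for illustration; padding the last group is an equally valid option:

```python
def split_slices(paths, group_size=64):
    """Split a sorted list of slice paths into fixed-size groups.

    Incomplete trailing groups are dropped here; padding them instead
    is another option, depending on how you convert groups to NIfTI.
    """
    n_full = len(paths) // group_size
    return [paths[i * group_size:(i + 1) * group_size] for i in range(n_full)]

# Example: a case with 150 slices yields two full groups of 64 slices;
# the remaining 22 slices are dropped under this policy.
demo = [f"slice_{k:03d}.dcm" for k in range(150)]
groups = split_slices(demo)
print(len(groups), len(groups[0]))  # 2 64
```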
After running the training file I'm getting this error: RuntimeError: Sizes of tensors must match except in dimension 4. Got 64 and 63 (The offending index is 0)
Hey, I just had a quick question: can we train a U-Net model to detect multiple instances of an object in a single image, or is it only good for detecting a single instance per image?
It depends on the pattern under the masks in the labels used during training. If the network detects that particular pattern at different places in an image, it will produce segmentation instances at all of those places.
It's a possibility, but it must be approached logically and with policies in place. We would be using this planet's resources, and science is not yet being used to its maximum potential. Extending life to 200 years and slowing aging is more likely to happen, but only in ways that don't deplete our resources. We will decide on this; how the world behaves and processes things will define its own future. Right now we have the capability and capacity to terraform planets. I just need two nations, or several, and if the rest won't do what is needed to fix things, or they increase instability, I'm sorry. That's life.
The number of mistakes you made from minute 49 to minute 55 makes it hard to understand the difference between patients (or whatever it is) and DICOM slices. I think having a script to refer to while you are recording would be great.