Andrew Reader
Medical Imaging, Deep Learning, AI, Atomic Physics, Signals & Systems

Andrew J. Reader is a Professor of Imaging Sciences at King's College London, UK
Steepest Descent (Live Lecture)
25:01
1 year ago
Comments
@be419 3 days ago
Thank you so much for the informative video! I'm new to using Matlab -- where did you import the sinogram data from, for your code to work on it?
@AndrewJReader 2 days ago
Thanks for the feedback! In this code I used the phantom that is available in Matlab, and then used the radon function to create the sinogram data from that phantom. Hence I did not need to import any data for the sinogram.
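For anyone working in Python rather than Matlab, a rough analogue would be (a sketch only, using skimage, not the code from the video):
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, rescale
phantom_img = shepp_logan_phantom()                     # built-in test phantom
phantom_img = rescale(phantom_img, 0.32)                # shrink for speed
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom_img, theta=angles)             # simulate the sinogram from the phantom
print(sinogram.shape)                                   # (detector bins, number of angles)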
@tabasdezh 3 days ago
Thanks for sharing.
@AndrewJReader 2 days ago
Thanks for the feedback!
@leibaleibovich5806 6 days ago
Greetings! Professor Reader, I would like to ask you for help in creating a syllabus for self-learning, if I may. In a nutshell, I have a couple of old videos that I want to denoise and upscale. I decomposed the videos into images and tried a couple of existing solutions, e.g. ESRGAN in Google Colab. Some things worked better than others, but I was not happy with the result. Plus, upscaling takes a large amount of time, so it is not really doable on the free version of Colab. I am wondering if I can possibly run it locally. All in all, this topic got me interested. I would like to try more things on my own and experiment, rather than use a "pre-packaged" solution. However, even a cursory search showed me that computer vision / image processing is a vast domain! Right now I am in need of a syllabus or a roadmap. Could you kindly put me on the right track? I do have basic Python programming skills. As a hobbyist, where do I start? What do I need to learn? Having a structure has always been a problem in my self-education efforts. I would greatly appreciate any directions you could give. Thank you!
@aachacon31 11 days ago
Very good!
@AndrewJReader 2 days ago
Thanks for the feedback! There is a newer version of the video here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-G0V6ulOIlJc.html
@jimmea6317 16 days ago
I came up with a matrix-vector convolution kernel which is similar to yours, except that instead of flipping and shifting the matrix rows right, it flips the vector and shifts the matrix rows to the left, descending. I suppose both methods work, though; yours is just more intuitive.
@AndrewJReader 2 days ago
Thanks for the feedback. Convolution is only really defined in one way (if we overlook edge effects) - and that would mean that the kernel is not flipped in any way at all when placed into columns of the matrix, just shifted. Hopefully that would be true for your approach too?
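As a small illustration of that point (an untested sketch, not code from the video), a convolution matrix whose columns are just shifted copies of the kernel reproduces np.convolve:
import numpy as np
h = np.array([1.0, 2.0, 3.0])        # kernel
x = np.array([4.0, 5.0, 6.0, 7.0])   # input vector
N, K = len(x), len(h)
A = np.zeros((N + K - 1, N))
for j in range(N):
    A[j:j + K, j] = h                # column j: kernel shifted down by j, no flipping
print(A @ x)                         # matrix-vector product
print(np.convolve(x, h))             # matches the full convolution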
@jimmea6317 1 day ago
@AndrewJReader When the flipped vector is multiplied by the shifted matrix rows, the rows are not flipped but just shifted. The results end up being equivalent either way, possibly a consequence of convolution's intrinsic symmetry. I guess linear algebra is complicated in this way mostly because of all the moving parts, but it sure does do the job efficiently.
@rd-tk6js 28 days ago
Excellent explanation, very comprehensive and complete, thanks!
@AndrewJReader 22 days ago
Thanks so much for your feedback!
@anonymouscommentator 1 month ago
Thank you so much for the explanation!
@AndrewJReader 1 month ago
Many thanks for the feedback!
@Archimedez_ 1 month ago
Great explanation, first video that makes it clear for someone who just wanted to know after seeing it on Stranger Things 😅
@AndrewJReader 1 month ago
Great to hear, thanks so much for the feedback!
@miglena2s 1 month ago
"The higher the frequency, the more energy we need to make a unit." Very nicely explained, Thank you!
@AndrewJReader 1 month ago
Glad this helped, so simple on the one hand, yet with massive consequences!
@user-ss1dp7he8t 1 month ago
Can you make a course in Matlab?
@AndrewJReader 1 month ago
I have provided some example Python code here: ru-vid.com/group/PL557uxcMh3xwUsqofih09ZPqHMdhe-rui&si=FyNe0xAO62HU_Of3
In terms of Matlab, I have only done one video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-r5lzacT3HkE.html
@sAkIBtHewOlVeRiNe 1 month ago
Hello Sir, could you please explain the line "fig1.canvas.manager.window.move(0, 0)" (line 20)? For me, manager has no attribute named window. The line is optional, but why write something that does not exist?
@AndrewJReader 1 month ago
For the libraries / packages that I had installed on my system when I did this video, there was no problem at all. So for me at that time there was no error report. Other installations / environments / OS can of course vary. So it did exist for my setup. I used that line to move the window to the top left corner. Hope that helps, and thanks for the feedback.
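As a backend-tolerant sketch (illustrative only; whether manager has a window attribute, and what type it is, depends on the installed backend), something like the following could attempt the same window move:
import matplotlib
import matplotlib.pyplot as plt
fig1 = plt.figure()
mgr = fig1.canvas.manager
backend = matplotlib.get_backend().lower()
try:
    if "qt" in backend:
        mgr.window.move(0, 0)             # Qt backends expose a window with move()
    elif "tk" in backend:
        mgr.window.wm_geometry("+0+0")    # TkAgg exposes a Tk window instead
except AttributeError:
    pass                                  # other backends: just skip the repositioning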
@BAIZO777 2 months ago
Bro, I want these codes 😢
@ahmadayoubi1830 2 months ago
Great video, professor, thank you for sharing this valuable information! I am liking the series so far, but I was wondering if there is any book we can take as a reference in order to go deeper into this domain. Thank you so much!
@AndrewJReader 2 months ago
Many thanks for the feedback! In fact yes, I am working on a book now, but it might be a while yet for it to be released. I think the content I have presented here, and its level, is distinct from that in other courses / books.
@Rundas69420 2 months ago
Thanks for sharing this concept! I've swapped the system matrices with the "radon" and "iradon" function from skimage.transform. While the code runs without errors, the loss remains constant throughout all epochs. Converting between tensors and numpy arrays works using torch.from_numpy(array.astype(np.float32)).unsqueeze(0).unsqueeze(0) and tensor.detach().cpu().squeeze().numpy() respectively.
@AndrewJReader 2 months ago
Thanks for the feedback! If you use radon and iradon from skimage.transform, the gradients won't propagate for training due to the functions using numpy arrays, rather than torch tensors. So just converting between torch tensors and numpy arrays before/after use of radon or iradon would not be sufficient. Hope that makes sense (I assume your current version is not working, when you say that the loss remains constant for all epochs?).
@Rundas69420 2 months ago
@AndrewJReader Exactly. Although it's not a huge issue, since I can use the approach with the system matrix. I was just curious whether I can embed radon/iradon into this calculation, to avoid a file that is gigabytes in size.
@AndrewJReader 2 months ago
Yes you can use a radon / iradon function, it's just that it needs to work on torch tensors all the way through. Some people have already done this, for example take a look here: torch-radon.readthedocs.io/ (note that I have not used anything from that package)
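As a minimal sketch of why staying in torch matters (illustrative only, with a random stand-in system matrix rather than a real projector):
import torch
A = torch.rand(1800, 64 * 64)                    # stand-in system matrix (bins x pixels)
image = torch.rand(64 * 64, requires_grad=True)
sino = A @ image                                 # forward projection stays in the autograd graph
loss = torch.mean((sino - torch.rand_like(sino)) ** 2)
loss.backward()                                  # gradients reach the image
print(image.grad.shape)
backproj = A.T @ sino.detach()                   # backprojection via the transpose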
@nibirsankar 2 months ago
Now this is extremely profound! Thanks for this in-depth insight into the idea.
@AndrewJReader 2 months ago
Thanks so much for the feedback, appreciated! Yes, the physics of what happens is remarkable!
@grindstm 2 months ago
The image reconstruction series is so thorough and is really helping my intuition. Thanks!
@AndrewJReader 2 months ago
Many thanks for the feedback, appreciated!
@marvinkika7065 3 months ago
I wanted to express my gratitude for your work, because your videos and scientific papers on deep learned CT image reconstruction have been invaluable references in the making of my bachelor thesis. I wish you all the best and continued success, professor.
@AndrewJReader 3 months ago
Thank you so much for the feedback, means a lot. Likewise, very best wishes for your next steps.
@aopengli 3 months ago
Thank you for your video! I have a question: if there are multiple sources of noise with different distributions, isn't it harder to reduce the noise?
@AndrewJReader 3 months ago
Thanks for the feedback and question. This method for denoising is applicable whatever the type or distribution of noise present in an image. But of course, more severe noise levels will always mean it is harder to reliably reduce the noise while preserving the signal.
@jithinjoy4806 3 months ago
My favourite subject ❤❤❤
@AndrewJReader 3 months ago
Really good to know! Thanks for the comment!
@dauletserikbol9148 3 months ago
Thanks a lot for your video and effort sir!
@AndrewJReader 3 months ago
Many thanks for the feedback!
@davidbeckschulte8246 3 months ago
Very good explanation!
@AndrewJReader 3 months ago
Thanks for the feedback!
@055shubhamsonkar6 3 months ago
Dear professor, I have never been this clear about the concepts. Thank you for putting this much effort into each video. Please share the slides for the videos, if possible.
@jpbacano 4 months ago
Nice video, Andrew! I just have one question: What would you do if you didn't have the true image? Thank you!
@AndrewJReader 4 months ago
Many thanks! You can do a self-supervised approach - a very simple example being to artificially create a noisier version of the image that you have, and train the CNN to denoise that back to the version of the image that had not been made noisier. Then use that trained CNN on the original image. But there are other, more sophisticated methods of course.
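A very small sketch of that idea (the network, noise level and loop count are placeholders, not a recommended recipe):
import torch
import torch.nn as nn
noisy = torch.rand(1, 1, 128, 128)                   # the (noisy) image we actually have
cnn = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
for _ in range(200):
    noisier = noisy + 0.1 * torch.randn_like(noisy)  # artificially noisier version
    loss = torch.mean((cnn(noisier) - noisy) ** 2)   # learn the mapping: noisier -> original noisy
    opt.zero_grad(); loss.backward(); opt.step()
denoised = cnn(noisy)                                # finally apply the trained CNN to the original image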
@bsckr4993 4 months ago
Thank you!
@zhaobryan4441 4 months ago
Prof, could you please share the slides?
@chrisnewman7281 5 months ago
Thanks again for the very detailed description of your script. I do have a quick question which I hope you may be able to answer, in the realm of the deep image prior. I have a halftone image that I've taken from a newspaper. My question, in a nutshell, is: can this approach to DIP convert a halftone image to grayscale? It's not that the image is lacking detail, but there needs to be some kind of conversion from an apparent tonal value, which is actually black, to a corresponding grayscale value.
@AndrewJReader 5 months ago
Interesting question; it depends on the sampling of the image. In fact I would simply suggest that you could resize a highly (over)sampled version of the image (just regrid at lower resolution, assigning a value to each lower-resolution image pixel according to the sum of the corresponding pixel values in the high-res image). Hope that makes sense - there is no real need for DIP (which can do inpainting, and for a high-res halftone image it would possibly just fill the gaps in accordance with the neighbouring values - and if at core the image is binary valued, then the result could be binary as well). It all depends on many things though. I would start with a simple non-deep-learning approach for your problem.
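A sketch of that regridding (block size and array names are placeholders, not tuned for your image):
import numpy as np
high_res = np.random.rand(1024, 1024)      # oversampled halftone image (stand-in data)
block = 8                                  # downsampling factor
h, w = high_res.shape
low_res = (high_res[:h - h % block, :w - w % block]
           .reshape(h // block, block, w // block, block)
           .sum(axis=(1, 3)))              # each low-res pixel = sum of its high-res block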
@chrisnewman7281 5 months ago
@AndrewJReader That's gonna keep me fairly busy.
@goneshivachandhra7470 5 months ago
Can you share the code?
@hacr7hd 5 months ago
Hi Andrew, thanks a lot for these nice clips. I would like to request that you have a look at www.mathworks.com/matlabcentral/fileexchange/24479-pet-reconstruction-system-matrix and also the appendix code of my thesis at discovery.ucl.ac.uk/id/eprint/1450246/2/Munir_Ahmad_Final_Thesis.pdf, just in case it is of interest.
@user-gj4pr6pc8v 5 months ago
I really like the way you demonstrate things in the video, especially the intuition for the uncertainty principle. It is really, really straightforward for me.
@AndrewJReader 5 months ago
Thank you so much for the feedback, really appreciated.
@parisakhateri9858 5 months ago
Thank you Andrew, amazing lecture! I don't know anybody else who can explain complex matters in such a clear way. Often, there's missing information in the flow of lectures, but yours are high quality without noise or missing data ;)
@AndrewJReader 5 months ago
Thank you so much Parisa, your feedback really means a lot!
@ZinebElGourain 5 months ago
Thank you so much!
@AndrewJReader 5 months ago
Really appreciate the feedback
@aweirdguy9785 5 months ago
This is a very interesting video and a rare one in this field on RU-vid. Thank you for this. I have read many of your reviews on PET reconstruction and I'm very hopeful about the impact of CNNs (and GANs) on PET imaging. I'm a master's student in PET reconstruction and I'm currently assessing the potential of "histo-images" in reconstruction. I wonder whether you think this new format is simply a temporary *fad* in the literature or a potential new standard in clinical contexts (or something in between). Anyhow, thank you again for this video, and I will recommend your channel to my peers.
@AndrewJReader 5 months ago
Thanks so much for your feedback, really appreciated. I think histoimages have a definite future for practical and fast input to reconstruction networks. MLAP vs TOF-backprojection needs to be considered. Thanks also for recommending my channel to your peers!
@user-lp7cc3cd2j 5 months ago
Thank you very much, sir! This is the best explanation of the Fourier transform I have ever seen! But I have a question. If you want to determine whether there is a specific frequency in f(t), you need to calculate one integral. But there are countless frequencies, so do you need to calculate countless integrals to know the result?
@AndrewJReader 5 months ago
Many thanks for the feedback! Yes indeed, one would need to calculate an integral (a product of 2 functions and then sum) for each and every frequency of interest. However, there is only a finite number of frequencies in the case of discrete functions (the discrete Fourier transform is used), so it is feasible. As it can however be slow, this is where the fast Fourier transform (FFT) comes in - it dramatically speeds up the calculation of the discrete Fourier transform. For continuous functions, analytic integration can be done, to give the solution in the form of another function.
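A quick numerical sketch of that point (one sum per frequency versus the FFT; variable names are illustrative):
import numpy as np
N = 64
n = np.arange(N)
f = np.sin(2 * np.pi * 5 * n / N)                    # example signal
dft = np.array([np.sum(f * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])  # one sum per frequency k
print(np.allclose(dft, np.fft.fft(f)))               # True: the FFT gives the same result, much faster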
@jacobvandijk6525 5 months ago
@ 0:18 Hi Andrew. I was wondering, isn't the photon "going upward" deflected from its path by collisions with atoms in the brain cells?
@AndrewJReader 5 months ago
Hi Jacob, thanks for the question. Yes, there is a probability of Compton scatter (inelastic) or even a tiny probability of Thomson scatter (elastic) of the 511 keV photons with the electrons in the attenuating medium (i.e. the brain, skull, etc.). However, a good fraction of these high-energy photons do escape without any interaction at all. (Photoelectric absorption can also occur, with relatively low probability.)
@jacobvandijk6525 5 months ago
@AndrewJReader Aha, thanks! What is modern physics without probabilities ;-)
@simonhughes8139 6 months ago
Your videos are the best explanations of QM. I finally have an understanding of H, U, P and wave functions. Good stuff!
@AndrewJReader 6 months ago
Great to hear, thanks so much for the feedback!
@saswatadas5489 6 months ago
Can you do it without iradon?
@AndrewJReader 6 months ago
Great question - yes, you can. One approach would be to use the deep image prior representation (or even just a pixel grid in fact, if no early stopping or regularisation is sought) with a forward model only, and use an optimiser such as those used in AI (e.g. Adam), but it would be slow.
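A rough sketch of the pixel-grid variant (stand-in forward model and data, purely illustrative and, as noted, slow):
import torch
A = torch.rand(1800, 64 * 64)                       # stand-in forward model
measured = torch.rand(1800)                         # stand-in measured data
recon = torch.zeros(64 * 64, requires_grad=True)    # pixel-grid representation to be optimised
opt = torch.optim.Adam([recon], lr=1e-2)
for _ in range(500):
    loss = torch.mean((A @ recon - measured) ** 2)  # data fidelity only
    opt.zero_grad(); loss.backward(); opt.step()
image = recon.detach().reshape(64, 64)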
@saswatadas5489 6 months ago
@AndrewJReader I mean unfiltered backprojection without iradon.
@AndrewJReader 6 months ago
But backprojection is iradon (when used with no filter). The transpose of the radon transform can be found via iradon(no filter). If you mean to avoid using iradon as a function, then you can write your own version, such as, for example, what I did in this video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-BXXLoVyAT0Q.htmlsi=D6PoRXZRiT3Jm9z3 (see about 18 mins into the video)
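For reference, a hand-written unfiltered backprojection might look roughly like this (untested sketch; sign and orientation conventions may need adjusting to match a given radon implementation):
import numpy as np
from scipy.ndimage import rotate
def simple_backprojection(sinogram, angles_deg):
    num_bins, num_angles = sinogram.shape
    recon = np.zeros((num_bins, num_bins))
    for i, angle in enumerate(angles_deg):
        smear = np.tile(sinogram[:, i], (num_bins, 1))          # smear the 1D projection across the image
        recon += rotate(smear, angle, reshape=False, order=1)   # rotate the smear to its view angle and accumulate
    return recon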
@rowenyunzhao8790 6 months ago
Wonderful. This video and the part 1 video helped me do a quick literature review of ML in PET reconstruction.
@AndrewJReader 6 months ago
Really glad the videos have been helpful
@rowenyunzhao8790 6 months ago
Thanks Andrew. This video just made the intuition behind the MAPEM method clearer.
@AndrewJReader 6 months ago
Great to hear, thanks for the feedback
@mohamedkassar7441 6 months ago
Thanks
@AndrewJReader 6 months ago
Thanks also!
@mohamedkassar7441 6 months ago
Thanks!
@AndrewJReader 6 months ago
Thanks so much for the feedback!
@chrtjune 6 months ago
Incredibly helpful video.
@AndrewJReader 6 months ago
Many thanks for the helpful feedback
@marianiemusarudin1575 7 months ago
Hi Prof. Do you have any video explaining OSEM reconstruction using Matlab?
@AndrewJReader 7 months ago
Hi, thanks for the comment. I have not done a video on this yet, but it is not a difficult extension. It would look something like the following (the below is not optimised, nor tested, but hopefully clear enough to give the idea; feedback welcome):
% Use ordered subsets (OS): just update based on a subset of the measured data
number_subsets = 1;
for ii = 1:num_iterations
    for ss = 1:number_subsets
        subset_of_angles_to_use = azi_angles(ss:number_subsets:end);
        sensitivity_image = iradon(sinogram_ones(:,subset_of_angles_to_use), subset_of_angles_to_use, 'none', xd);
        % Forward model the current reconstruction estimate
        fp_rec = radon(mlem_recon, subset_of_angles_to_use);
        % Ratio
        ratio = noisy_sino_gt_psf(:,subset_of_angles_to_use) ./ (fp_rec + 0.001);
        % BP the ratio
        bp_ratio = iradon(ratio, subset_of_angles_to_use, 'none', xd);
        % Divide by SIGMA_i a_ij (the sensitivity image)
        correction_image = bp_ratio ./ (sensitivity_image + 0.001);
        mlem_recon = mlem_recon .* correction_image;
    end % Subset loop
end % Iteration loop
@sistajoseph 7 months ago
It was explicit, and it helped to consolidate what I was thinking about Schrodinger's equation. The last part with the square pulse is the clincher, because the equation is trying to model a particle. The pulse is 2D, but particles like electrons are 3D, I think. I try to understand it in 2D first. An electron would be a spherical version of the pulse. Many thanks.
@AndrewJReader 7 months ago
Glad my video was able to help a little!
@florentb8578 7 months ago
Excellent visualisation
@AndrewJReader 7 months ago
Many thanks!
@shishir8598 7 months ago
Is there any implementation code, paper, or other resource?
@AndrewJReader 7 months ago
Many thanks for your comment. I do have simple example code, for example in this video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-BXXLoVyAT0Q.html. Other resources are available, for example, here: github.com/Abolfazl-Mehranian
@anasssofti9271 7 months ago
😎 Best Fourier analysis! Thank you, dear professor, for this intuitive lecture!!
@AndrewJReader 7 months ago
Thanks so much for the feedback!!
@menhle1064 7 months ago
Thank you sir!! All my struggles were solved in this 33:21-minute video.
@AndrewJReader 7 months ago
Delighted to hear! Thanks for the feedback!
@anasssofti9271 8 months ago
Thank you for your wonderful simulation and implementation. I have a simple question: why is the BP operator (A.T) actually the transpose of the forward projection (A)? I couldn't get the relation; could you please guide me on this matter?
@AndrewJReader 7 months ago
Many thanks for the feedback! When deriving MLEM, we find the need for A^T. This means we need to swap the rows and columns of A. The matrix A contains rows which are lines, each line being used in a scalar product with the input vector. A^T therefore has columns which are lines. To apply A^T to an input vector means that the input vector contains weighting factors for each column (i.e. a weighted set of lines, which is backprojection). For more info please see this video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ARGHLWkf5yU.html
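A tiny numerical sketch of that row/column picture (an illustrative matrix, not a real scanner geometry):
import numpy as np
A = np.array([[1.0, 1.0, 0.0, 0.0],    # line through pixels 0 and 1
              [0.0, 0.0, 1.0, 1.0],    # line through pixels 2 and 3
              [1.0, 0.0, 1.0, 0.0]])   # line through pixels 0 and 2
image = np.array([1.0, 2.0, 3.0, 4.0])
sino = A @ image                       # forward projection: one scalar product per line (row)
backproj = A.T @ sino                  # weighted sum of those lines placed back into image space
print(sino, backproj)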
@anasssofti9271 7 months ago
@AndrewJReader Thanks for the clear response and the video recommendation. It's really clear in my mind now; your videos are like a knowledge reference for me. 😄
@reshailraza514 8 months ago
Hey Andrew, this is a very helpful series. Do you have diffuse optical imaging, and solving its inverse problem using deep learning, on your list?
@AndrewJReader 8 months ago
Many thanks for the feedback. I have no active work in optical imaging, so it's not on my list. Current modalities of focus: PET, MRI and beginning to look into ultrasound.
@travelthetropics6190 8 months ago
Thank you very much for the great explanation, professor. I am wondering what simulation tool you used at the end of the lecture. Is it publicly available?
@AndrewJReader 8 months ago
Many thanks for the kind feedback. I wrote code from scratch for the simulations used in this video, and I don't think my code would be suited to public release, but thanks for asking!