@1:58:57 You said you would explain the different procedures for generating different responses later, but I did not find that explanation before you start discussing Step 3. Could you elaborate further?
Many thanks for your excellent lectures, particularly those on diffusion models. I do have a few questions about conditional diffusion models. In cross-attention, can we use the text vectors as the query (Q) and the image vectors as the key (K) and value (V), instead of using the image vectors as the query (Q)?
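To make the question concrete, here is a minimal NumPy sketch of single-head cross-attention (projection matrices omitted; all variable names are illustrative, not taken from any particular model). It shows both the usual arrangement, where image latents act as queries attending to text embeddings, and the swapped arrangement the question asks about:

```python
import numpy as np

def cross_attention(queries, kv_tokens, d_k):
    """Scaled dot-product cross-attention, single head, no learned
    projections. K and V come from the same source sequence, as in
    standard cross-attention conditioning."""
    scores = queries @ kv_tokens.T / np.sqrt(d_k)            # (n_q, n_kv)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # row-wise softmax
    return weights @ kv_tokens                               # (n_q, d_k)

rng = np.random.default_rng(0)
d = 8
image_latents = rng.normal(size=(16, d))   # e.g. 16 spatial tokens
text_embeds = rng.normal(size=(4, d))      # e.g. 4 text tokens

# Usual conditioning: image tokens query the text.
out_img_q = cross_attention(image_latents, text_embeds, d)   # shape (16, 8)

# The swapped arrangement: text tokens query the image.
out_txt_q = cross_attention(text_embeds, image_latents, d)   # shape (4, 8)
```

One thing the sketch makes visible: the output length follows the query sequence, so swapping the roles changes the output shape from one vector per image token to one vector per text token. That is one practical reason text-to-image diffusion models keep the image latents as Q inside the denoising network, where the output must stay aligned with the spatial feature map.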
Thanks to Soheil for sharing the updated deep learning theory courses. I previously followed Soheil's lectures in 2020, where I learned the theoretical foundations of deep learning in terms of representation, generalization, and optimization. I noticed that this year's course schedule has shifted substantially toward state-of-the-art transformer-based technologies, such as large language models. I plan to catch up with Soheil's updated deep learning foundations course this year and really appreciate the new lecture videos.
Hi professor, I was also wondering whether you plan to add some content on disentanglement learning, such as nonlinear ICA, which I find theoretically interesting and important.
Thanks for updating this really amazing course. I've read this semester's syllabus and find it really interesting, especially the parts on generative models and multi-modal models. I hope to see more of the latest course videos. Thanks a lot for your effort in sharing the contents of this amazing course.