Harvard CMSA
The Center of Mathematical Sciences and Applications is a multidisciplinary research center in the Faculty of Arts and Sciences at Harvard University. By bringing together researchers from an extensive variety of disciplines and institutions, the Center serves as a fusion point for mathematics, statistics, physics, and related sciences. Harvard’s William Caspar Graustein Professor of Mathematics, Shing-Tung Yau, was the Center’s first director.

The Center of Mathematical Sciences and Applications hosts postdocs, faculty, and special programs, which are explored through various workshops. Seminars on topics ranging from Mirror Symmetry to the Social Sciences are held weekly.
Deep Learning 9/26/2024
1:00:29
4 hours ago
Deep Learning 9/24/2024
1:13:00
12 hours ago
Deep Learning 9/17/2024
1:04:41
14 hours ago
Deep Learning 9/12/2024
1:14:31
14 hours ago
Deep Learning 9/10/2024
1:08:44
14 hours ago
Marc Lackenby | The complexity of knots
1:09:19
16 hours ago
Comments
@joelwillis2043 25 minutes ago
timestamp where we import PyTorch?
@IUT-e8x 3 hours ago
Thank you
@TheKivifreak 1 day ago
Well-explained! This is a great research project :-)
@calcifer464 2 days ago
Please increase the resolution of the next lectures.
@AlgoNudger 2 days ago
Thanks.
@mawkuri5496 2 days ago
first!
@thatsfantastic313 3 days ago
These lectures are a real treasure!!
@DistortedV12 3 days ago
What’s her name?
@GodofStories 1 day ago
It's in the description, moron.
@RakshithML-vo1tr 3 days ago
The lectures are very good, but could you please change the camera angle so it faces the board straight on? That would make it much easier to concentrate.
@noumankhan2123 3 days ago
Eli Grigsby, you are incredible.
@MsTheLyubov 5 days ago
The materials always refer to a performance bottleneck. What is it?
@adamgohain3318 6 days ago
Incredible!
@AlgoNudger 6 days ago
Thanks.
@TheCrmagic 6 days ago
Thank you for sharing this.
@Tom-qz8xw 8 days ago
Understood 20% of this
@themartian9634 9 days ago
Why I'm even watching this, I don't know... but I'm still watching.
@christophermayfield6043 7 days ago
same
@timeslices7923 9 days ago
Interesting topic, but the audio quality is unlistenable.
@tankieslayer6927 12 days ago
Imagine your life's work contributing less than low-IQ matrix algebra.
@FaisalAlbulushi-x8t 24 days ago
Anyone here in 2024?
@tankieslayer6927 25 days ago
All these physically inspired flows are ultimately useless for image generation, since you can just use the most trivial flow possible. This is just like the rest of Tegmark's work.
@drakkhein 22 days ago
Please elaborate.
@tankieslayer6927 19 days ago
@drakkhein Seems like my reply got deleted (because of the link?). The purpose of an image generation model is to flow from noise to the target distribution given a prompt. To do this, you can simply learn the vector field. See the paper "Scaling Rectified Flow Transformers for High-Resolution Image Synthesis", which is used by the SOTA model FLUX.1. Tegmark had a mediocre career in physics and does not have anything meaningful to say about AI.
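For context on what "just learn the vector field" means in practice, here is a minimal PyTorch sketch of the rectified-flow objective that paper describes: sample a straight-line interpolation between noise and data and regress the constant velocity along it. The toy MLP, dimensions, and random stand-in data are illustrative assumptions, not FLUX.1's architecture, and prompt conditioning is omitted.

```python
# Minimal sketch of the rectified-flow training objective.
# The MLP and random data are stand-ins for illustration only.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Toy stand-in for the network that learns the vector field."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, dim)
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition the velocity prediction on the interpolation time t.
        return self.net(torch.cat([x_t, t], dim=-1))

dim = 16
model = VelocityField(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):
    x1 = torch.randn(64, dim)        # stand-in for data samples
    x0 = torch.randn(64, dim)        # noise samples
    t = torch.rand(64, 1)            # random interpolation times in [0, 1]
    x_t = (1 - t) * x0 + t * x1      # straight-line interpolant
    target = x1 - x0                 # constant velocity along the line
    loss = ((model(x_t, t) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```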
@revimfadli4666 27 days ago
I wonder if this can somehow link with state-space models like Mamba, or with liquid networks.
@xynonners 1 day ago
There's a paper proving that diffusion models and modern Hopfield networks are identical.
@emmanuelS19 1 month ago
Terrible ad.
@ablues15 1 month ago
Good content, terrible recording: bad audio, bad video, and missing audio in places.
@damickillah 1 month ago
Maybe it would be better to call Siamese networks "conjoined networks"?
@tankieslayer6927 1 month ago
This stuff is incredibly low-IQ. Seasoned mathematicians and physicists get bluffed by it because they are not familiar with the lingo.
@deeplearningexplained 1 month ago
Hey, just wanted to say that this was a very solid lecture. Absolutely love the quality of teaching 👍
@artukikemty 1 month ago
What proof or evidence is there that mathematics allows us to model at least human-level intelligence? If we continue using mathematics, which is, by the way, incomplete, we will get the same results: a waste of time and resources. All these math tricks are inherently limited.
@anntakamaki1960 1 month ago
I love France 🇫🇷
@aren6 2 months ago
Wow, what a brilliant explanation. The way he explains the most complex functions possible in the most easily understandable way is amazing.
@SnackFatson 2 months ago
❤ Pi factorization of the spherical square degree increases the precision of triangulation on curved surfaces. The truth about Physics: Don’t add a decimal place, multiply a factor!! ❤
@PKPTY 2 months ago
powerful talk
@Jaylooker 3 months ago
I think the 3-manifolds mentioned at 14:12 as part of arithmetic Chern-Simons theory are Calabi-Yau 3-folds. Calabi-Yau manifolds satisfy mirror symmetry (A-model, B-model). Calabi-Yau manifolds also carry the symplectic geometry of moduli spaces of Higgs bundles, which are equivalent to hyperkähler manifolds. Note that Kähler manifolds have a symplectic structure, a complex structure, and a Riemannian structure. This mirror symmetry gives them the duality mentioned throughout, as it matches the A-model side with the symplectic topology of Higgs bundles and the hyperkähler manifolds mentioned at 30:15.
@NhungLương-e7p 3 months ago
👏👏👏
@NhungLương-e7p 3 months ago
Thank you, Nazim Bouatta.
@yuriystakhiv9404 3 months ago
Welcome back
@young-jinahn6971 3 months ago
Easy to understand! Thank you
@양익서-g8j 3 months ago
I wonder how far the aliens have gotten in building this.
@21Zubair 3 months ago
<3 Thank you so much, respected Professor! The session is awesome!
@AlMa-xi8wu 3 months ago
The sound is very bad.
@bobbobson6867 3 months ago
Hi Professor Griess. I have heard about you through your algebra. It is over my head, but I hear it is very good 😊
@Times343 4 months ago
Isn't he amazing!
@SinergiasHolisticas 4 months ago
Love it!!!!!!!!!
@gilsantos3021 4 months ago
This professor's politeness and gentleness are something out of the ordinary.
@PeaceyKeen 4 months ago
✨🩵🕊️🩵✨
@sarthakparikh5988 4 months ago
great collection of lectures: very informative
@sarthakparikh5988 4 months ago
fantastic intro!
@mbrochh82 4 months ago
Here's a ChatGPT summary:
- Dan Freed introduces the Center of Mathematical Sciences and Applications at Harvard, highlighting its interdisciplinary research and events.
- Yann LeCun, Chief AI Scientist at Meta and NYU professor, is the speaker for the fifth annual Ding Shum Lecture.
- LeCun discusses the limitations of current AI systems compared to human and animal intelligence, emphasizing the need for AI to learn, reason, plan, and have common sense.
- He critiques supervised learning and reinforcement learning, advocating for self-supervised learning as a more efficient approach.
- LeCun introduces the concept of objective-driven AI, where AI systems are driven by objectives and can plan actions to achieve these goals.
- He explains the limitations of current AI models, particularly large language models (LLMs), in terms of planning, logic, and understanding the real world.
- LeCun argues that human-level AI requires systems that can learn from sensory inputs, have memory, and can plan hierarchically.
- He proposes a new architecture for AI systems involving perception, memory, world models, actors, and cost modules to optimize actions based on objectives.
- LeCun emphasizes the importance of self-supervised learning for building world models from sensory data, particularly video.
- He introduces the concept of joint embedding predictive architectures (JEPA) as an alternative to generative models for learning representations.
- LeCun discusses the limitations of generative models for images and video, advocating for joint embedding methods instead.
- He highlights the success of self-supervised learning methods like DINOv2 and I-JEPA in various applications, including image and video analysis.
- LeCun touches on the potential of AI systems to learn from partial differential equations (PDEs) and their coefficients.
- He concludes by discussing the future of AI, emphasizing the need for open-source AI platforms to ensure diversity and prevent monopolization by a few companies.
- LeCun warns against over-regulation of AI research and development, which could stifle innovation and open-source efforts.
- Main message: the future of AI lies in developing objective-driven, self-supervised learning systems that can learn from sensory data, reason, and plan, with a strong emphasis on open-source platforms to ensure diversity and prevent monopolization.
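To make the JEPA bullet above concrete, here is a heavily simplified PyTorch sketch of the joint-embedding predictive idea: encode a context view and a target view, predict the target's embedding from the context's, and measure the error in latent space rather than in pixel space. The linear encoders, sizes, and random stand-ins for visible and masked patches are assumptions for illustration only; real systems such as I-JEPA use ViT encoders, masking strategies, and an EMA-updated target encoder.

```python
# Simplified sketch of a joint-embedding predictive architecture (JEPA).
# All modules and data below are illustrative stand-ins, not the lecture's
# actual architecture.
import torch
import torch.nn as nn

dim, latent = 32, 16
context_enc = nn.Linear(dim, latent)   # encodes the visible "context" view
target_enc = nn.Linear(dim, latent)    # in practice an EMA copy of context_enc
predictor = nn.Linear(latent, latent)  # predicts target embedding from context

opt = torch.optim.Adam(
    list(context_enc.parameters()) + list(predictor.parameters()), lr=1e-3
)

x_context = torch.randn(8, dim)  # stand-in for visible patches
x_target = torch.randn(8, dim)   # stand-in for masked patches to predict

s_x = context_enc(x_context)
with torch.no_grad():            # stop-gradient: the target branch is frozen
    s_y = target_enc(x_target)

# Prediction error is measured in latent space, not pixel space --
# the key difference from generative (reconstruction-based) models.
loss = ((predictor(s_x) - s_y) ** 2).mean()
loss.backward()
opt.step()
```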
@rezajax 4 months ago
Immortality is beautiful.
@Garbaz 4 months ago
A correction to the subtitles: the researcher mentioned at 49:40 is not Yonglong Tian, but Yuandong Tian. For anyone interested in Yuandong and Surya's understanding of why BYOL and co. work, have a look at "Understanding Self-Supervised Learning Dynamics without Contrastive Pairs".