The Center of Mathematical Sciences and Applications is a multidisciplinary research center in the Faculty of Arts and Sciences at Harvard University. By bringing together researchers from an extensive variety of disciplines and institutions, the Center serves as a fusion point for mathematics, statistics, physics, and related sciences. Harvard’s William Caspar Graustein Professor of Mathematics, Shing-Tung Yau, was the Center’s first director.
The Center of Mathematical Sciences and Applications hosts postdocs, faculty, and special programs, which are explored through various workshops. Seminars on topics ranging from Mirror Symmetry to Social Sciences are held weekly.
The lectures are very good, but could you please adjust the camera angle and focus straight on the board so that we can follow along more easily?
All these physically inspired flows are ultimately useless for image generation, since you can just use the most trivial flow possible. This is just like the rest of Tegmark's work.
@drakkhein Seems like my reply got deleted (because of the link?). The purpose of an image generation model is to flow from a noise distribution to the target distribution given a prompt. To do this, you can simply learn the vector field. Have a look at the paper "Scaling Rectified Flow Transformers for High-Resolution Image Synthesis", which is used by the SOTA model FLUX.1. Tegmark had a mediocre career in physics and does not have anything meaningful to say about AI.
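To make the "just learn the vector field" point concrete, here is a minimal NumPy sketch of the rectified-flow idea: pair data with noise along a straight path, regress the constant velocity, then generate by Euler integration. The function names and the toy point-mass target are illustrative placeholders, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rectified_flow_pair(x1):
    """Sample a rectified-flow training pair.

    x_t interpolates linearly between noise x0 and data x1;
    the regression target is the constant velocity x1 - x0.
    """
    x0 = rng.standard_normal(x1.shape)       # noise sample
    t = rng.uniform(size=(x1.shape[0], 1))   # one time per example
    x_t = (1.0 - t) * x0 + t * x1            # point on the straight path
    v_target = x1 - x0                       # velocity to regress
    return x_t, t, v_target

def euler_sample(velocity_fn, shape, steps=100):
    """Generate by integrating dx/dt = v(x, t) from t=0 (noise) to t=1."""
    x = rng.standard_normal(shape)
    dt = 1.0 / steps
    for i in range(steps):
        t = np.full((shape[0], 1), i * dt)
        x = x + dt * velocity_fn(x, t)
    return x

# Toy check: if the data distribution is a point mass at mu, the optimal
# velocity field at (x, t) is (mu - x) / (1 - t), and integrating it
# should carry any noise sample to mu.
mu = np.array([[2.0, -1.0]])
v_opt = lambda x, t: (mu - x) / (1.0 - t + 1e-9)
samples = euler_sample(v_opt, (5, 2))
```

In practice `velocity_fn` is a neural network trained with a mean-squared error against `v_target`; the closed-form field above exists only for this degenerate toy target.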
What is the proof or evidence that mathematics allows us to model at least human-level intelligence? If we continue using mathematics, which is, by the way, incomplete, we will get the same results: a loss of time and resources. All these math tricks are inherently limited.
❤ Pi factorization of the spherical square degree increases the precision of triangulation on curved surfaces. The truth about physics: don't add a decimal place, multiply by a factor!! ❤
I think the 3-manifolds mentioned at 14:12 as part of arithmetic Chern-Simons theory are Calabi-Yau 3-folds. Calabi-Yau manifolds satisfy mirror symmetry (A-model, B-model). Moduli spaces of Higgs bundles are hyperkähler manifolds, and note that Kähler manifolds carry a symplectic structure, a complex structure, and a Riemannian structure. This mirror symmetry gives the duality mentioned throughout, matching the A-model side with the symplectic topology of Higgs bundle moduli spaces and the hyperkähler manifolds mentioned at 30:15.
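For context, the simplest numerical signature of mirror symmetry for Calabi-Yau 3-folds (a standard fact, added here for readers following the A-model/B-model remark above) is that a mirror pair $(X, X^\vee)$ exchanges Hodge numbers:

$$h^{1,1}(X) = h^{2,1}(X^\vee), \qquad h^{2,1}(X) = h^{1,1}(X^\vee),$$

so the A-model (symplectic) side of $X$ is matched with the B-model (complex-geometric) side of $X^\vee$.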
Here's a ChatGPT summary:
- Dan Freed introduces the Center of Mathematical Sciences and Applications at Harvard, highlighting its interdisciplinary research and events.
- Yann LeCun, Chief AI Scientist at Meta and NYU professor, is the speaker for the fifth annual Ding Shum Lecture.
- LeCun discusses the limitations of current AI systems compared to human and animal intelligence, emphasizing the need for AI to learn, reason, plan, and have common sense.
- He critiques supervised learning and reinforcement learning, advocating for self-supervised learning as a more efficient approach.
- LeCun introduces the concept of objective-driven AI, where AI systems are driven by objectives and can plan actions to achieve these goals.
- He explains the limitations of current AI models, particularly large language models (LLMs), in terms of planning, logic, and understanding the real world.
- LeCun argues that human-level AI requires systems that can learn from sensory inputs, have memory, and can plan hierarchically.
- He proposes a new architecture for AI systems involving perception, memory, world models, actors, and cost modules to optimize actions based on objectives.
- LeCun emphasizes the importance of self-supervised learning for building world models from sensory data, particularly video.
- He introduces the concept of joint embedding predictive architectures (JEPA) as an alternative to generative models for learning representations.
- LeCun discusses the limitations of generative models for images and video, advocating for joint embedding methods instead.
- He highlights the success of self-supervised learning methods like DINOv2 and I-JEPA in various applications, including image and video analysis.
- LeCun touches on the potential of AI systems to learn from partial differential equations (PDEs) and their coefficients.
- He concludes by discussing the future of AI, emphasizing the need for open-source AI platforms to ensure diversity and prevent monopolization by a few companies.
- LeCun warns against over-regulation of AI research and development, which could stifle innovation and open-source efforts.
- Main message: The future of AI lies in developing objective-driven, self-supervised learning systems that can learn from sensory data, reason, and plan, with a strong emphasis on open-source platforms to ensure diversity and prevent monopolization.
A correction of the subtitles: The researcher mentioned at 49:40 is not Yonglong Tian, but Yuandong Tian. For anyone interested in Yuandong & Surya's understanding of why BYOL & co work, have a look at "Understanding Self-Supervised Learning Dynamics without Contrastive Pairs".