
MIT 6.S087: Foundation Models & Generative AI. INTRODUCTION 

Rickard Brüel Gabrielsson
2.8K subscribers
17K views

Get ready to revolutionize your AI knowledge with MIT's introductory course (www.futureofai.mit.edu/) on Foundation Models & Generative AI! In this comprehensive program, you'll discover the latest breakthroughs in the AI world:
- ChatGPT
- Stable-Diffusion & Dall-E
- Neural Networks
- Supervised Learning
- Representation & Unsupervised Learning
- Reinforcement Learning
- Generative AI
- Self-Supervised Learning
- Foundation Models
- GANs (adversarial)
- Contrastive Learning
- Auto-encoders
- Denoising & Diffusion
Don't miss this opportunity to learn from the experts at MIT and take your AI skills to the next level. Subscribe now and be at the forefront of the AI revolution!

Science

Published: 17 Feb 2024

Comments: 15
@xinfeng9680 10 days ago
Thanks so much for sharing. You have explained the AI concepts and basic principles in such an easy way, and the language you use is straightforward and catchy even at a breakneck pace. I really enjoy learning all this, thanks!
@davidduan9449 5 months ago
Thank you VERY MUCH for posting this!!! Love it.
@Red_Blue_Green 5 months ago
Thank you so much!! :D
@youtube_fantastic 4 months ago
Thank you Rickard for sharing these MIT lectures! This is amazing and free knowledge. Can't wait to watch all the lectures as they come out.
@Red_Blue_Green 4 months ago
Thank you!!
@labsanta 3 months ago
🎯 Key Takeaways for quick navigation:

00:00 🎓 Introduction to Future of AI: Foundation Models and Generative AI
- Introduction to the lecture series on Foundation Models & Generative AI at MIT.
- Explanation of the purpose: understanding the current AI landscape, the underlying changes, and diving deep into various subjects beyond surface level.
- Overview of the previous year's topics and the excitement around them, leading into current advancements and the trajectory of the course.

02:14 🌐 Recent Developments in AI
- Discussion of recent advancements and the hype surrounding AI.
- Mention of increased investments, company valuations, regulatory actions, and industry drama.
- Exploration of questions about artificial general intelligence (AGI) and its current state.

04:05 🧠 Instructor's Background and Course Schedule
- Introduction to the instructor's background and expertise in AI.
- Overview of the course schedule, including topics to be covered in upcoming lectures.
- Mention of guest speakers and the thematic focus of each lecture.

06:35 📚 Core Concepts Covered in the Course
- Explanation of core concepts: neural networks, supervised learning, unsupervised learning, reinforcement learning, generative AI, foundation models, and self-supervised learning.
- Emphasis on providing intuitive understanding with examples from various domains.
- Objective to distinguish between hype and the foundational aspects of AI.

08:13 🧩 Understanding AI Through Human Learning Processes
- Analogizing human learning with AI learning processes.
- Examination of different influences on human learning: parents, genetics, academia, and environmental interactions.
- Comparison of human learning models to AI learning paradigms: supervised learning, reinforcement learning, and self-supervised learning.

11:25 🧠 Relational Understanding and Generative AI Models
- Explanation of how relational understanding contributes to learning in both humans and AI models.
- Illustration of how generative AI models comprehend concepts through relational context.
- Example of generating images to demonstrate contextual understanding in generative AI.

15:58 🤔 Philosophical Perspectives on AI Evolution
- Exploration of philosophical perspectives influencing AI development: learning vs. designing, chaos vs. order, and bottom-up vs. top-down approaches.
- Examination of historical influences, including ancient Greek philosophical ideas.
- Consideration of the limitations of top-down, ordered thinking in understanding complex systems like AI.

22:07 🔄 Bottom-Up Perspective and Adaptation in Chaotic Systems
- Argument for embracing a bottom-up perspective and adaptation when dealing with chaotic systems.
- Recognition of human adaptability, intuition, and flexibility in navigating chaotic environments.
- Critique of the over-reliance on top-down, ordered thinking in understanding complex phenomena.

23:15 🧠 Understanding Chaos and the Brain
- Chaos is inherent in the world, and the brain serves as a tool to navigate and learn within this chaos.
- Artificial neural networks attempt to replicate the brain's flexibility and adaptability.
- Supervised learning, while structured, faces limitations due to scalability and the inability to label all aspects of the world accurately.

32:34 🎓 Self-Supervised Learning and Its Applications
- Self-supervised learning relies on learning from data without human experts, making it scalable and applicable to a wide range of tasks.
- Predicting the future from past data and positive-pair contrastive learning are examples of self-supervised learning algorithms (see the sketch after this comment).
- Applications include understanding DNA sequences for protein structure prediction and analyzing consumer behavior in retail for targeted recommendations.

45:23 💡 Understanding Self-Supervised Learning
- Self-supervised learning is the foundation for training models like ChatGPT, enabling them to learn from unlabeled data.
- Foundation models and generative AI are the output of self-supervised learning algorithms.
- While the technical terminology can vary, the essence lies in leveraging self-supervised learning to train advanced AI models.
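The positive-pair contrastive learning mentioned around 32:34 can be made concrete with a small sketch. Below is a minimal, illustrative NumPy implementation of an InfoNCE-style loss over a batch of paired embeddings; it is not taken from the course materials, and the function name, array shapes, and temperature value are assumptions chosen for the example.

# Minimal sketch of a positive-pair contrastive (InfoNCE-style) loss.
# Not from the course materials; shapes and temperature are illustrative.
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """z_a, z_b: (batch, dim) embeddings of two views of the same items.
    Row i of z_a and row i of z_b form the positive pair; the other rows
    of z_b act as negatives for z_a[i]."""
    # L2-normalize so the dot product is a cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)

    # Pairwise similarity matrix, scaled by the temperature.
    logits = z_a @ z_b.T / temperature            # (batch, batch)

    # Cross-entropy where the correct "class" for row i is column i.
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy usage: two noisy "views" of the same random vectors.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 32))
positives = anchors + 0.05 * rng.normal(size=(8, 32))
print("contrastive loss:", info_nce_loss(anchors, positives))

Minimizing this loss pulls each anchor toward its own positive and pushes it away from the other items in the batch, which is the intuition behind positive-pair contrastive learning; no human labels are needed, only a way to generate two views of the same data point.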
@micbab-vg2mu 4 months ago
Great - easy to understand :)
@cooperlikens6476 5 months ago
Loved the video, thank you so much for sharing this. Will you be posting future lectures on this topic as well?
@Red_Blue_Green 5 months ago
Thank you! Yes! The whole course will be released here, week by week. Stay tuned!
@user-nz8jm2se5u 5 months ago
Great lecture. May I ask how to get access to the slides? Thanks.
@user-wr7kf4hl8u 5 months ago
Great lecture! Can I ask which room and time this is held? I'm at MIT. thx!
@Red_Blue_Green 5 months ago
Thanks!! Tuesdays and Thursdays 2:00-3:15pm in E25-111
@noadsensehere9195 4 months ago
Really great resource along with Andrej Karpathy's videos
@josephb6574 18 days ago
Question, why do AI models need so much data to learn but humans do not? Children do not need to read the whole internet to learn what a cat is. Why?
@Red_Blue_Green 18 days ago
Children do need to observe the world constantly for several years to learn what a cat is. Every observation gives new correlations and contrasts. They receive the visuals, the sounds, the smells, the touch, etc., from observing the world through their senses constantly -- this is a huge amount of data. Still, they are much better at incorporating this information effectively than any AI model comes close to.