
Markov Matrices 

MIT OpenCourseWare
5M subscribers
52K views

MIT 18.06SC Linear Algebra, Fall 2011
View the complete course: ocw.mit.edu/18...
Instructor: David Shirokoff
A teaching assistant works through a problem on Markov matrices.
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu

Published: 12 Sep 2024

Comments: 35
@boruiwang1738 2 years ago
Huge thanks to you!! Very clearly explained at a comfortable pace. It's nearly finals and my teacher is only covering the theorems and some calculation examples. This MIT series really showed me what matrices can achieve and the connections between concepts. (I especially like the Fibonacci part and this particle part.) Good job!
@user-fr1we2xk9u 8 months ago
Herein we observe an advantage of being left-handed. :)
@prajyot2021 2 years ago
Such a brief and impeccable lecture. Totally enjoying it.
@fedepan947 4 years ago
Thank you! Good explanation. But I think it is not necessary to compute the full decomposition A = U D U^-1. We know that the probability after k steps is p_k = c(λ1)^k x1 + d(λ2)^k x2, where x1 and x2 are the eigenvectors and λ1, λ2 the eigenvalues; with p_0 we can find the coefficients c and d by setting k = 0. After 100 steps the probability is p_k with k = 100.
@dexterity3696 4 years ago
Definitely, maybe he hasn't taken the course by Prof. Strang. LOL
@thedailyepochs338 4 years ago
lol I was expecting him to do that and he never did
@thedailyepochs338 4 years ago
@dexterity3696 He definitely didn't; if he had, he would have named the eigenvector matrix S and the diagonal eigenvalue matrix capital Lambda
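[Editor's note] A minimal NumPy sketch of the shortcut @fedepan947 describes above: solve for c and d at k = 0, then evaluate p_k directly, with no explicit U D U^-1. The matrix A below is an assumption, chosen only because it is column-stochastic with steady state (1/3, 2/3) as discussed further down in these comments; the recitation's actual entries may differ.

```python
import numpy as np

# Hypothetical 2x2 Markov matrix -- NOT necessarily the video's numbers.
# Columns sum to 1 and the steady state works out to (1/3, 2/3).
A = np.array([[0.6, 0.2],
              [0.4, 0.8]])
p0 = np.array([1.0, 0.0])   # particle starts at A with probability 1

# Eigenvectors x1, x2 are the columns of U; eigenvalues are in lam.
lam, U = np.linalg.eig(A)

# k = 0 gives p0 = c*x1 + d*x2, i.e. U @ [c, d] = p0.
c, d = np.linalg.solve(U, p0)

# p_k = c*(lam1)^k*x1 + d*(lam2)^k*x2, evaluated at k = 100.
k = 100
p_k = c * lam[0]**k * U[:, 0] + d * lam[1]**k * U[:, 1]

# Cross-check against the brute-force power A^100 @ p0.
assert np.allclose(p_k, np.linalg.matrix_power(A, k) @ p0)
print(p_k)   # approximately [1/3, 2/3]
```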
@nprithvi24 3 years ago
I guess the main point of recitation is not just to solve for an answer but to make students recall methods discussed earlier in the class. For example, calculating the inverse of a matrix was discussed 3-4 lectures before this one, and there's a good chance students might have forgotten it. This tutorial was a good refresher.
@nilslorand 1 year ago
love his enthusiasm :) Good video
@surajmirchandani4613 5 years ago
Best one yet. Really cleared everything up in this chapter.
@kostikoistinen2148 2 years ago
This guy can explain things well. He says, "Welcome back." Now I'm trying to find the first video to which this video is a sequel. Could someone tell me where that first video is?
@mitocw 2 years ago
The RU-vid playlist for the course: ru-vid.com/group/PL221E2BBF13BECF6C. The course materials on MIT OpenCourseWare: ocw.mit.edu/18-06SCF11. Best wishes on your studies!
@stephenclark9917 2 months ago
The Markov matrix A is the transpose of what is usually presented.
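[Editor's note] The convention gap is worth spelling out: many probability texts use a row-stochastic matrix P (rows sum to 1) acting on a row vector from the right, while 18.06 uses the column-stochastic transpose acting on a column vector from the left. A small sketch with made-up numbers:

```python
import numpy as np

# Row-stochastic convention (rows sum to 1), common in probability texts:
P = np.array([[0.6, 0.4],
              [0.2, 0.8]])
pi0 = np.array([1.0, 0.0])   # probabilities as a row vector
pi1 = pi0 @ P                # update multiplies P on the right

# Column-stochastic convention used in the lecture (columns sum to 1):
A = P.T
p1 = A @ pi0                 # update multiplies A on the left

assert np.allclose(p1, pi1)  # same chain, transposed bookkeeping
```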
@peterhind 1 year ago
So I sort of understand right up until the end. With the final probability for n = infinity being one third, one in two, how does that translate into the answer to the question "What is the probability it is at A and B after an infinite number of steps?" Is the answer that it's six times as likely to be at B than at A?
@Oleg86F 1 year ago
We start with matrix A and vector p0 = (1, 0), meaning a 100% probability the particle is at point A. After an infinite number of steps (which is A^n p0 as n grows) we approach the vector (1/3, 2/3), which means: the particle is at point A with probability 1/3 (~33%) and at point B with probability 2/3 (~67%).
@peterhind 1 year ago
@Oleg86F Thanks, it's making more sense now
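[Editor's note] A quick numeric check of the limit @Oleg86F describes, again with a hypothetical A assumed only to have the steady state (1/3, 2/3) stated in the reply:

```python
import numpy as np

# Hypothetical Markov matrix (columns sum to 1); steady state is (1/3, 2/3).
A = np.array([[0.6, 0.2],
              [0.4, 0.8]])
p0 = np.array([1.0, 0.0])    # start at point A

# Watch A^n @ p0 settle down as n grows.
for n in (1, 5, 20, 100):
    print(n, np.linalg.matrix_power(A, n) @ p0)

# The limit is the eigenvector for eigenvalue 1, rescaled to sum to 1.
lam, U = np.linalg.eig(A)
v = U[:, np.argmax(np.isclose(lam, 1.0))]
print(v / v.sum())           # [1/3, 2/3]: ~33% at A, ~67% at B
```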
@user-jz6ou6ce1z 9 months ago
Such an interesting lecture and problem on Markov matrices!
@Amit.58 11 months ago
Wow, quite an amazing problem ❤❤❤
@AnupKumar-wk8ed 6 years ago
Very good video and very clearly explained.
@abhilast6629 6 years ago
Hey Indian bro, do you love mathematics?
@AnupKumar-wk8ed 6 years ago
@abhilast6629 Sure I do.
@Maunil2k 5 months ago
Very well explained!!
@benbug11 3 years ago
Very well explained, thank you
@levihuddleston1020 16 days ago
Rad, thanks!
@ankanghosal 3 years ago
Very helpful video. Thanks MIT
@federizz686 3 years ago
Love this
@richard_guang 1 year ago
This guy reminds me of Will from Good Will Hunting
@theodorechan4343 5 months ago
this was great
@ricardoV94 2 years ago
I get different eigenvalues: (1, -0.2)
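[Editor's note] A trace check can settle eigenvalue disagreements quickly: every column-stochastic matrix has λ1 = 1, and for a 2×2 the eigenvalues sum to the trace, so λ2 = trace(A) - 1. Eigenvalues (1, -0.2) would require trace 0.8; the hypothetical matrix used in the sketches above has trace 1.4, hence λ2 = 0.4. Whichever entries the video actually uses, the same check applies:

```python
import numpy as np

A = np.array([[0.6, 0.2],    # hypothetical matrix from the sketches above
              [0.4, 0.8]])

# lambda1 = 1 for any Markov matrix; for a 2x2, lambda2 = trace - 1.
lam2 = np.trace(A) - 1.0
print(sorted(np.linalg.eigvals(A)), lam2)   # [0.4, 1.0] 0.4
```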
@EmanuelCohen-HenriquezCiniglio 9 months ago
Goat
@cssaziado 5 years ago
Thank you, m7
@GoatzAreEpic 1 year ago
ty fam
@fackarov9412 2 years ago
cool
@user-em4vq5cy4x 5 months ago
gg
@reginalnzubehimuonaka6659 2 years ago
For an MIT solution, it lacks some proof. We do not always see it; we need a detailed explanation. But it is fine.