Jakob Foerster
Machine Learning and Artificial Intelligence are talked about a lot in the press, yet they are rarely taught in undergraduate courses or in high school.
Luckily, there are fantastic resources available on RU-vid for the curious mind. This channel is an attempt to curate some of those courses. The courses listed here served me well during my PhD in Deep Reinforcement Learning.
Comments
@bertobertoberto242 28 days ago
uhm, 0.1 × 0.1 = 1?
@mahmoudehab8627 1 month ago
Absolutely one of the gems that are just there on the internet but hidden by some bullshit courses. We should keep digging to find more of these!
@eul3rr 2 months ago
As a math student interested in information theory and neural networks, I discovered this gem of a lecture series when I was looking for videos to fall asleep to! In fact, I first "finished" the lectures in my sleep :D Now I've decided to start it properly and have just finished watching this lecture and taking notes. I would love to send David an email when I finish the course. Thanks for leaving this behind, my man; rest in peace.
@gadepalliabhilash7575 2 months ago
Could anyone explain how the flip probability is 10^-13 and the 1% failure figure is 10^-15? Video reference is at 22:10.
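One plausible reading, sketched below. It assumes the lecture's disk-drive example with roughly 10^13 bits handled over the drive's lifetime (that bit count is an assumption here): if the expected number of flips N·f must stay at about 0.01 for a 1% failure chance, the per-bit flip probability must be 10^-15.

```python
# Sketch of the arithmetic (N is assumed: ~10^13 lifetime bits, as in the
# lecture's disk-drive example). With per-bit flip probability f, the chance
# of at least one flip is 1 - (1 - f)**N, which is ~ N * f for tiny f.
N = 1e13           # lifetime bit count (assumed)
target = 0.01      # acceptable failure probability: 1%
f = target / N     # required per-bit flip probability
print(f)           # 1e-15
```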
@dharmendrakamble6282 6 months ago
🎉
@SphereofTime 7 months ago
2:00
@hulksmash4311 8 months ago
???? Probability has nothing to do with it... Moron.
@elouantheot4639 8 months ago
I am a bit troubled by the result of the ticket lottery. I would expect to buy all tickets from the all-0s ticket up to the tickets with 123 ones to have a 99% chance of winning (so it would rather be the sum of (N choose n) with n going from 0 to 123). But here the teacher only considers the (1000 choose 123) tickets, which does not represent a 99% chance of winning the game. I can't see where I failed to understand this result... @Jakob Foerster?
@0XAI644 8 months ago
Mean of the binomial = N·p = 100, so P(X ≤ 100) ≈ 0.5; you need to go roughly 3 standard deviations above the mean to reach ~99%, etc.
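A quick numerical check of this reply (assuming the lecture's bent-coin lottery with N = 1000 digits and p = 0.1). Note that "all tickets with at most 123 ones" is exactly the cumulative sum over n = 0..123 that the question above proposes; that is what the CDF computes, and the sum is dominated by its largest term, which may be why the lecture counts only the (1000 choose 123) tickets.

```python
# Minimal check with scipy (N = 1000 and p = 0.1 assumed from the lecture):
from scipy.stats import binom

N, p = 1000, 0.1
print(binom.mean(N, p))      # 100.0 -> so P(X <= 100) is only about 0.5
print(binom.std(N, p))       # ~9.49
print(binom.cdf(123, N, p))  # ~0.99 -> tickets with <= 123 ones cover ~99%
```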
@bbarakaalaizer9011 9 months ago
Swahili😊
@eddiehazel1259 9 months ago
Clipping reduces as it goes on, information content regained; I think maybe you shout when you are nervous. More interstitial melodies, perhaps. The beautiful entropy.
@eddiehazel1259 9 months ago
yes this works at 36:27
@goulchat1 9 months ago
Cambridge chalk
@eddiehazel1259 9 months ago
I think there may be an oversufficient-entropy issue with the potato you are using to record this. Computer science's best minds 👍
@AtharvSingh-vj1kp 10 months ago
I'm a bit curious about when these lectures were recorded. Was it 2003?
@a_user_from_earth 10 months ago
What an amazing lecture and a great teacher. May you rest in peace; you will not be forgotten.
@johnxiaoyuzhang9048 11 months ago
Clear and intuitive, easy to understand! Thank you for sharing!
@artmaknev3738 1 year ago
After listening to this lecture, my IQ went up 20 points!
@christopheguitton7523 1 year ago
I've understood why repeating the same information three times in a row to my kids wasn't necessarily effective :-)
@phillustrator 1 year ago
I wish he had explained why the answers to the sequence of yes-or-no questions turned out to be the number sought (42) in binary.
@Qugfvraceysgvigaivys 10 months ago
It's because the first question is basically asking what the MSB is. For example, if the answer is yes, we know the binary number is "1xxxxx", where each 'x' is still unknown. The next question determines the next digit: "10xxxx". The last question determines the LSB: "101010". I think the LSB (even or odd) is the easiest to think about intuitively: if you ask whether the number is odd and the answer is "yes", you know the binary number has a '1' as its LSB.
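A tiny illustration of this reply: six yes/no answers, taken MSB first, spell out the number directly.

```python
# Six yes/no answers, most significant bit first, reassembled into a number.
answers = [True, False, True, False, True, False]  # "101010"
n = 0
for a in answers:
    n = (n << 1) | int(a)  # each answer appends one more bit on the right
print(n, bin(n))           # 42 0b101010
```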
@justascholar 1 year ago
I'm still learning from it in 2023, and enjoying it with great excitement. Thank you.
@sahhaf1234 1 year ago
21:30: why a mixture of Gaussians?
@sahhaf1234 1 year ago
The content is superb... the voice quality is atrocious...
@frasio3930 1 year ago
Thanks for uploading these excellent lectures (R.I.P. David).
@connorfrankston5548 1 year ago
Applying entropy for this weighing problem is peculiar, since the entropy depends on how we are defining the states of the system. For example, for the first weighing, it seems totally irrelevant whether the scale tips left or tips right. So in this view, it would be preferable to set up the first weighing so that it is equally probable for the scale to tip at all as opposed to being balanced. This would indicate that the most informative (greedy) first weighing is actually the case where we have a 50% chance of leaving the odd ball out, which is to weigh three against three and leave six aside. However, in that case I think there may be a conflation between the physical state of the scale and the epistemic state of the balls. The correct approach to a greedy solution is to maximize the epistemic entropy, which I believe is achieved by the 4 vs. 4 weighing.
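For what it's worth, here is the epistemic-entropy computation the last sentence points at, as a sketch. It assumes the standard 12-ball puzzle with 24 equally likely hypotheses (12 balls, each possibly heavier or lighter); under that reading, 4 vs. 4 does maximize the entropy.

```python
from math import log2

def entropy(counts):
    """Entropy in bits of outcome frequencies."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

# 24 equally likely hypotheses: 12 balls x {heavier, lighter}.
# A first weighing of k vs k tips left for 2k of them (k heavy balls on the
# left pan + k light balls on the right pan), tips right for 2k, and
# balances for the rest.
for k in (3, 4):
    left = right = 2 * k
    print(f"{k} vs {k}: {entropy([left, right, 24 - left - right]):.3f} bits")
# 3 vs 3: 1.500 bits; 4 vs 4: 1.585 bits (log2(3), the maximum)
```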
@yeni_nick 1 year ago
So many references to the book 😒
@edmoore 1 year ago
The book is freely available online in PDF form.
@d.langary7000 1 year ago
*Higher rates with feedback?!* Seems possible to me! 11:38 With a feedback line, to correctly send N bits, we need to transmit N·(1+f) bits on average. In other words, the average number of transmitted bits per source bit is 1+f. As a result, the capacity of the feedback channel would be 1/(1+f), which is slightly more than (1-f)!! Am I the only one who sees it this way? What am I getting wrong here?
@maxdickens9280 1 year ago
No, you have to send N · 1/(1-f) bits on average. Because the probability of receiving "?" is f (i.e. the probability of the symbol getting through correctly is 1-f), by the geometric distribution the average number of sends per source bit is 1/(1-f), NOT (1+f).
@maxdickens9280 1 year ago
To show that N·(1+f) is indeed wrong, suppose f = 1: then every bit is erased, so you would have to send infinitely many bits, instead of N·(1+1) = 2N.
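A small simulation of the resend-on-"?" scheme this thread discusses (the erasure probability f is just a parameter here). The average comes out at 1/(1-f), matching the geometric-distribution argument above, so feedback gives rate 1-f rather than 1/(1+f).

```python
import random

def sends_per_bit(f, trials=100_000):
    """Average transmissions needed per source bit over an erasure channel
    with erasure probability f, given perfect feedback (resend on '?')."""
    total = 0
    for _ in range(trials):
        sends = 1
        while random.random() < f:  # erased, so feedback requests a resend
            sends += 1
        total += sends
    return total / trials

print(sends_per_bit(0.5))  # ~2.0 = 1/(1 - f), clearly not 1 + f = 1.5
```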
@iirclife 1 year ago
Sigh ... totally lost ... back to the beginning of the video ...
@sahhaf1234 1 year ago
Towards 56:00 he uses the terms "bit error" and "block error" but doesn't define them properly.
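For anyone else stuck there, the usual textbook definitions (a paraphrase, not the lecture's exact wording): the block error probability p_B is the chance the decoded block differs from the source block anywhere, while the bit error probability p_b is the average fraction of bits decoded wrongly. A toy illustration with made-up blocks:

```python
# Made-up example blocks, purely to illustrate the two error measures.
source  = [[0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 0, 0]]
decoded = [[0, 1, 0, 1], [1, 0, 0, 0], [0, 0, 0, 1]]

n_blocks = len(source)
n_bits = sum(len(b) for b in source)
block_errs = sum(s != d for s, d in zip(source, decoded))
bit_errs = sum(x != y for s, d in zip(source, decoded) for x, y in zip(s, d))
print(block_errs / n_blocks)  # p_B estimate: 2/3 (any wrong bit spoils a block)
print(bit_errs / n_bits)      # p_b estimate: 2/12 (fraction of wrong bits)
```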
@maestro_100 1 year ago
Wow, great lecture, thank you! Could we please get a link to the submarine game, or maybe the source code?
@박재우학부재학전기전
channel immigration
@mohamedrabie4663 2 years ago
RIP Prof. David. You were, and still are, a great inspiration to us.
@eddiehazel1259 9 months ago
ah man sad to hear 😔
@reactiveland3111 2 years ago
Wow, one project... some additional exercises, and chapters 1-6 to read :)))) Cambridge it is.
@ridwanwase7444 2 years ago
The link for the free book is not working. Can anybody tell me where I can get it? Thanks in advance.
@jedrekwrzosek6918 2 years ago
I looove the lectures! Thank you for the upload!
@fireflystar5333 2 years ago
This is a great lecture. The design of the case problems is really helpful here. Thanks for the lecture.
@leodu561 2 years ago
41:10 Transition from random walk to Hamiltonian Monte Carlo
57:39 Overrelaxation (Adler's and Neal's methods)
1:07:50 Slice Sampling
1:22:10 Exact Sampling
@asam9203 2 years ago
That means I need to buy more than half of the tickets to be in the winners' zone.
@edmundkemper1625 2 years ago
5:45 - how are you more likely to land in the long code words? I don't understand.
@Qugfvraceysgvigaivys 10 months ago
Let's take an extreme example: you have two code words with equal probability; one is a="0", the other is b="1111111". If your message is abbaba, the output is "011111111111111011111110". If you choose a bit at random from the output, you can see that choosing a '1' is much more likely, not because b appeared more often, but because the code word for b is much longer.
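The same example in code:

```python
code = {"a": "0", "b": "1111111"}  # the two code words from the reply above
message = "abbaba"
output = "".join(code[s] for s in message)
print(output)  # 011111111111111011111110

# Probability that a uniformly random output bit lies inside a 'b' code word:
b_bits = message.count("b") * len(code["b"])
print(b_bits / len(output))  # 21/24 = 0.875, although b is only half the symbols
```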
@edmundkemper1625 2 years ago
What is the budget the prof is talking about when discussing the Kraft inequality (the supermarket analogy)?
@Sb142sf 2 months ago
Keeping the values, added together, within 1.
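A minimal sketch of that budget (the code word lengths below are illustrative choices): each code word of length l costs 2^-l, and the Kraft inequality says the total cost must not exceed 1.

```python
# Each code word of length l "costs" 2**-l out of a total budget of 1.
lengths = [1, 2, 3, 3]                  # e.g. the lengths of {1, 10, 100, 000}
print(sum(2 ** -l for l in lengths))    # 1.0  -> exactly spends the budget
print(sum(2 ** -l for l in [1, 1, 2]))  # 1.25 -> over budget: no such prefix code
```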
@edmundkemper1625 2 years ago
Isn't he wrong in claiming the code words at 41:10 are uniquely decodable? Say a is 1, b is 10, c is 100, and d is 000: isn't it obvious that a is a prefix of b and c, and b a prefix of c?
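In case it helps anyone puzzling over the same point: that code is indeed not a prefix code, but it can still be uniquely decodable, because the reversed code words {1, 01, 001, 000} are prefix-free, so a received string decodes unambiguously from right to left. A sketch of such a decoder (my own illustration, not from the lecture):

```python
# {a:1, b:10, c:100, d:000}: '1' prefixes '10' and '100', so this is not a
# prefix code -- but the REVERSED code words are prefix-free, so decoding
# greedily from the right-hand end is unambiguous.
code = {"a": "1", "b": "10", "c": "100", "d": "000"}
reversed_words = {w[::-1]: sym for sym, w in code.items()}

def decode(bits):
    s, out = bits[::-1], []
    while s:
        for rw, sym in reversed_words.items():
            if s.startswith(rw):
                out.append(sym)
                s = s[len(rw):]
                break
        else:
            raise ValueError("not a valid encoding")
    return "".join(reversed(out))

encoded = "".join(code[s] for s in "cabd")  # "100110000"
print(decode(encoded))                      # "cabd"
```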
@dharmendrakamble6282 2 years ago
Information theory
@GOODBOY-vt1cf 2 years ago
34:43
@GOODBOY-vt1cf 2 years ago
32:13
@GOODBOY-vt1cf 2 years ago
15:52
@GOODBOY-vt1cf 2 years ago
13:24
@GOODBOY-vt1cf 2 years ago
9:50
@giantbee9763 2 years ago
Rest in peace David :) Thanks for the lectures.
@LucasSilva-rf5mf 2 years ago
Sir D. MacKay was an inspiration. RIP.
@zzsong21 2 years ago
Can someone tell me why the error rate can only get arbitrarily close to zero and not just be zero?
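Not a complete answer, but the intuition: with any finite block length there is always some nonzero probability that the noise corrupts a codeword into something the decoder misreads; Shannon's theorem only promises that this probability can be pushed below any epsilon you name by using longer blocks. A concrete sketch with the repetition code R3 over a binary symmetric channel:

```python
# R3 over a BSC with flip probability f: a bit is decoded wrongly iff at
# least 2 of its 3 copies flip -- much reduced, but never exactly zero.
f = 0.1
p_err = 3 * f**2 * (1 - f) + f**3
print(p_err)  # 0.028: much better than 0.1, yet still strictly positive
```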
@Omar-th5vv 2 years ago
I'm new to information theory. At 4:24, why is the received signal "approximately", rather than "equal to", the transmitted signal plus noise? What else is there other than the transmitted signal and the noise? Thanks for sharing such helpful lectures.
@caribbeansimmer7894 2 years ago
It gets extremely difficult to model everything that affects the signal, but it's relatively easy to add noise to it; not just any noise, though, but noise assumed to be additive (hence the plus sign), white, and Gaussian (normally distributed). As you may know from statistics, a normal distribution has nice properties for estimation. This stuff gets really complex, but I guess you get the idea. Note that we can make other assumptions about the characteristics of the noise.
@nishantarya98 1 year ago
Received = Transmitted + Noise makes a *lot* of simplifying assumptions! In the real channel, the noise might be multiplied, not added. The transmitted signal might go through weird transformations, like you might receive log(Transmitted), or it might be modeled as a filter which changes different parts of your Transmitted signal in different ways!
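A one-line version of the additive white Gaussian noise assumption both replies describe (the signal values and sigma below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, -1.0, 1.0, 1.0])           # transmitted symbols (arbitrary)
sigma = 0.3                                    # noise level (arbitrary)
y = x + sigma * rng.standard_normal(x.shape)   # received = transmitted + noise
print(y)
```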