02 - Discrete probability recap, Naïve Bayes classification 

Alfredo Canziani
39K subscribers
1.1K views

Course website: atcold.github....

Published: Aug 28, 2024

Comments: 8
@user-co6pu8zv3v 2 months ago
Thank you, Alfredo!
@alfcnz 2 months ago
🥰🥰🥰
@wolpumba4099 2 months ago
*Summary*

*Probability Recap:*
* [0:00] *Degree of Belief:* Probability represents a degree of belief in a statement, not just true or false.
* [0:00] *Propositions:* Lowercase letters (e.g., cavity) represent propositions (statements); uppercase letters (e.g., Cavity) are random variables.
* [5:15] *Full Joint Probability Distribution:* Represented as a table, it shows probabilities for all possible combinations of random variables.
* [10:08] *Marginalization:* Calculating the probability of a subset of variables by summing over all possible values of the remaining variables.
* [17:04] *Conditional Probability:* The probability of an event happening given that another event has already occurred, calculated as the ratio of the joint probability to the probability of the conditioning event.
* [16:14] *Prior Probability:* The initial belief about an event before observing any evidence.
* [16:40] *Posterior Probability:* The updated belief about an event after considering new evidence.

*Naive Bayes Classification:*
* [32:48] *Assumption:* Features (effects) are assumed conditionally independent given the class label (cause). This simplifies probability calculations.
* [32:48] *Goal:* Predict the most likely class label given a set of observed features (evidence).
* [44:04] *Steps:*
  * Calculate the joint probability of each class label and the observed features using the naive Bayes assumption.
  * Calculate the probability of the evidence (observed features) by summing the joint probabilities over all classes.
  * Calculate the posterior probability of each class label by dividing its joint probability by the probability of the evidence.
  * Choose the class label with the highest posterior probability as the prediction.
* [36:24] *Applications:*
  * *Digit Recognition:* Classify handwritten digits based on pixel values as features.
  * [47:34] *Spam Filtering:* Classify emails as spam or ham based on the presence of specific words.
* [33:56] *Limitations:*
  * *Naive Assumption:* The assumption of feature independence is often unrealistic in real-world data.
  * [42:11] *Data Sparsity:* Can struggle with unseen feature combinations if the training data is limited.

*Next Steps:*
* [1:05:58] *Parameter Estimation:* Learn the probabilities (parameters) of the model from training data.
* [59:53] *Handling Underflow:* Use techniques like logarithms and softmax to prevent numerical underflow when multiplying small probabilities.

I used Gemini 1.5 Pro to summarize the transcript.
@alfcnz 2 months ago
They are a bit off. The first two titles should not be simultaneous, nor at the very beginning. Similarly, Gemini thinks that the first two titles of Naïve Bayes Classification are also simultaneous. I can see, though, how these could be helpful if refined a bit.
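
(Editor's note: the sketch below illustrates the naive Bayes workflow summarized in the comment above, doing the arithmetic in log space as suggested in the "Handling Underflow" item. The spam/ham classes, the two word features, and every probability value are made-up placeholders for illustration, not numbers from the lecture.)

```python
import math

# Hypothetical spam/ham model: class priors and per-class word likelihoods.
# All numbers are illustrative placeholders, not estimates from real data.
prior = {"spam": 0.4, "ham": 0.6}
likelihood = {
    "spam": {"offer": 0.30, "meeting": 0.02},
    "ham":  {"offer": 0.01, "meeting": 0.20},
}

def posterior(words):
    """Posterior P(class | words) under the naive Bayes assumption,
    computed in log space to avoid numerical underflow."""
    # Joint in log space: log P(class) + sum_i log P(word_i | class)
    log_joint = {
        c: math.log(prior[c]) + sum(math.log(likelihood[c][w]) for w in words)
        for c in prior
    }
    # Evidence: log P(words) = log sum_c exp(log_joint[c]),
    # computed stably with the log-sum-exp trick.
    m = max(log_joint.values())
    log_evidence = m + math.log(sum(math.exp(v - m) for v in log_joint.values()))
    # Posterior = joint / evidence, i.e. a subtraction in log space
    # (equivalently, a softmax over the log joints).
    return {c: math.exp(v - log_evidence) for c, v in log_joint.items()}

post = posterior(["offer", "meeting"])
print(post)                      # posterior probability per class
print(max(post, key=post.get))   # prediction: class with the highest posterior
```

Summing the per-class joints to obtain the evidence is the marginalization step from the recap; in practice the priors and likelihoods would come from the parameter-estimation step mentioned under "Next Steps".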
@joeeeee8738 2 months ago
What software do you use to present? Looks great!
@alfcnz 2 months ago
Microsoft PowerPoint 🙃
@datagigs5478 2 months ago
Do you cover the whole course on RU-vid?
@alfcnz 2 months ago
Please check out the first video of the playlist, where an overview of the course is provided. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-GyKlMcsl72w.html