
Day 7-BackPropogration In Recurrent Neural Network And NLP Application|Krish Naik 

Krish Naik
Subscribe · 1.1M subscribers
31K views

Published: 22 Oct 2024

Comments: 18
@simranvolunesia · 2 years ago
1st distribution: Pareto distribution — a power-law probability distribution; it underlies the 80-20 rule, i.e. 80% of outcomes are the result of 20% of causes.

Converting a Pareto distribution to a normal distribution: use the Box-Cox transformation (it converts the data to a nearly normal distribution). The formula is:
x_new = log(x_old) if lambda = 0
x_new = (x_old^lambda - 1) / lambda otherwise

Checking whether a distribution is normal (Gaussian) or not: draw a QQ plot (quantile-quantile plot), where we plot the actual quantiles of the data against the theoretical quantiles and then look at the deviation.

What is the standard normal distribution? A normal distribution with mean 0 and std 1.

Plot A: right skewed — mean > median > mode.
Plot B: left skewed — mode > median > mean.

Difference between fit_transform and transform:
- fit_transform does two operations on the input data in one go: it fits the transformer (computes the required statistics from the input data) and then applies those calculations to the input data.
- transform only applies the already-fitted transformer's calculations to the input data.

When do we use fit_predict, and what is it? fit_predict fits the model on the input data and then makes a prediction using the trained model. We generally use it in clustering algorithms like DBSCAN, where we can only do fit_predict or fit; predict on its own is not possible there.

What is the difference between standardization and normalization?
Normalization:
1. Basically min-max scaling.
2. Values lie in [0, 1] or [-1, 1].
3. Affected by outliers.
4. Useful when we don't know the distribution.
Standardization:
1. Transforms the data so that the mean of the transformed data is 0 and the std is 1.
2. No bound on the values.
3. Not much affected by outliers.
4. Useful when the data distribution is normal (Gaussian).

PS: Do correct me if I went wrong somewhere.
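To make the points in this comment concrete, here is a minimal sketch (assuming scipy and scikit-learn are installed) that applies Box-Cox to a Pareto-like sample and shows the fit_transform vs transform distinction; the variable names and sample sizes are illustrative:

```python
import numpy as np
from scipy import stats
from sklearn.preprocessing import StandardScaler, MinMaxScaler

rng = np.random.default_rng(0)

# Pareto-distributed sample (heavy right tail, power law); shifted by 1
# so all values are strictly positive, which Box-Cox requires.
pareto_sample = rng.pareto(a=3.0, size=10_000) + 1.0

# Box-Cox: x_new = log(x) if lambda == 0, else (x**lambda - 1) / lambda.
# scipy estimates lambda by maximum likelihood when it is not given.
transformed, lam = stats.boxcox(pareto_sample)
print("estimated lambda:", lam)
print("skew before:", stats.skew(pareto_sample), "after:", stats.skew(transformed))

X_train = pareto_sample[:8000].reshape(-1, 1)
X_test = pareto_sample[8000:].reshape(-1, 1)

# Standardization: mean 0, std 1.  fit_transform = fit (learn mean/std
# from the training data) + transform (apply them) in one call.
scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)  # learns mean/std, then applies
X_test_std = scaler.transform(X_test)        # applies the *train* mean/std only

# Normalization (min-max scaling): values squeezed into [0, 1].
X_train_mm = MinMaxScaler().fit_transform(X_train)
print("standardized train mean:", X_train_std.mean())
print("min-max range:", X_train_mm.min(), X_train_mm.max())
```

The key point of transform on the test set is that it reuses the statistics learned from the training data instead of refitting, which avoids data leakage.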
@vipulpatil7687 · 1 year ago
Thanks a lot
@aliali-sm3dq · 10 months ago
Thank you, sir. Would you please put all of the interview questions in a separate video?
@pankajkumarbarman765 · 2 years ago
Thank you sir for this amazing lecture 🥰🥰🥰🥰
@nik7867 · 2 years ago
RNNs can work on sentences of small length because there isn't any significant change in the weights with respect to the distance among the words, and the context is preserved. Am I correct?
@mainuddinali9561 · 2 years ago
great
@pranit7266 · 5 months ago
Sir please upload the class material
@mrityunjayupadhyay7332 · 2 years ago
Amazing explanation
@pranjalsingh1287 · 2 years ago
Do make a video on LSTM and RNN in the context of sound file processing.
@sandipansarkar9211 · 2 years ago
finished watching
@himanshuagarwal3788 · 1 year ago
Sir, can you please tell us what product you use for writing and presenting on screen...
@pranavrane5679 · 1 year ago
Not able to find course material on ineuron. Could you please help?
@TheNishi42 · 2 years ago
Is backpropagation always done using gradient descent only?
@solo-ue4ii · 1 year ago
NOPE IG
@MoosaMemon. · 5 months ago
No, there are other, better optimizers out there. The most popular and widely used optimizer for almost every scenario is the Adam optimizer.
@HarshaVardhan-hd2ts · 3 months ago
Since in an RNN the weights are the same when initialized, is that also the case for a CNN? Can someone help me with this?
@mengli6949 · 1 year ago
What if the hidden output o' has a different dimension than the input x? If the input x is 300-dimensional, how do we choose the length of o'?
@dhanushnayak1674 · 2 years ago
Hi Krish, are you fine? You're looking dull.