
Introduction to Random Forest | Intuition behind the Algorithm 

CampusX
212K subscribers
46K views

Get familiar with Random Forest in a straightforward way. This video provides an easy-to-understand intuition behind the algorithm, making it simple for beginners to grasp the basics of Random Forest in machine learning.
Code used: github.com/campusx-official/1...
============================
Do you want to learn from me?
Check my affordable mentorship program at : learnwith.campusx.in/s/store
============================
📱 Grow with us:
CampusX's LinkedIn: / campusx-official
CampusX on Instagram for daily tips: / campusx.official
My LinkedIn: / nitish-singh-03412789
Discord: / discord
E-mail us at support@campusx.in
⌚Time Stamps⌚
00:00 - Intro
00:29 - Intuition
14:35 - Code Demo

Published: 10 Jul 2024

Comments: 47
@user-df5no7ol1q · 1 year ago
You are a great teacher and a great data scientist. Hats off to you! Because of you, all the basic, small concepts are getting cleared. You are fab!
@siyays1868 · 1 year ago
I'm a great fan of you and your channel. Thank you so much for working so hard. Hardly anyone on YouTube has explained Random Forest this well. And not only Random Forest: your videos on all the other algorithms and Data Science concepts are the best of the best.
@siyays1868 · 1 year ago
Best ever channel and best ever teacher for data science. Thank you very, very much.
@rajatchauhan4410 · 8 months ago
Hi, thanks for the great explanation, but I think there is a small mistake: after we do column sampling, the data point for prediction needs to be given with features in the same order as those selected for each tree (at 30:00).
@bhupendersharma0428 · 1 year ago
It is the best channel for Data Science. It will reach 1M subscribers in 2023, I swear.
@lakshaychauhan380 · 6 months ago
It didn't happen, bro 🙃
@shashibhushanjha7325 · 2 months ago
It's good that fewer but genuine people are following this channel. Knowledge is for everyone, but only dedicated minds make use of it; others just follow their adrenaline rush.
@_pareekshithmcMcpareekshith · 4 months ago
The way you teach is absolutely amazing. Keep up the good work, thank you!
@anilkathayat1247 · 2 months ago
Your explanation of each small point is next level! Great job, sir.
@sourabhyadav8258 · 2 months ago
Great content, a totally different way of teaching!! Mark my words, nobody spends so much time on a single project, but here the story is different!!
@aounhaider8335 · 11 months ago
Your videos on ML are amazing. Following this playlist!!
@aniruddhadeshmukh3571 · 9 months ago
But a little bit long.
@sanjaisrao484 · 7 months ago
@aniruddhadeshmukh3571 Yes
@arpittrivedi6636 · 1 year ago
Very well done, sir. God bless you 🙏🙏
@johnson2784 · 6 months ago
Massive respect ❤, you are a great teacher.
@SourabhGupta108 · 1 year ago
I think there is a mistake in the column-sampling part: in the prediction step you are passing the same input array to all the decision trees, while it should differ according to each tree's sampled DataFrame.
@Shubham_gupta18 · 1 month ago
Yes, it should be.
@amolhire9482 · 15 days ago
Yes, we can't predict values like that when the trees were trained on different feature subsets; sir made a small mistake there.
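For reference, a minimal sketch of what the corrected prediction step could look like (the toy data and names such as `tree_cols` are illustrative, not the video's notebook): each tree remembers the column subset it was trained on, and the input row is sliced to that subset before voting.

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Toy data with named columns (illustrative, not the video's dataset)
X, y = make_classification(n_samples=200, n_features=6, random_state=0)
df = pd.DataFrame(X, columns=[f"f{i}" for i in range(6)])

rng = np.random.default_rng(0)
trees, tree_cols = [], []
for _ in range(5):
    cols = list(rng.choice(df.columns, size=3, replace=False))  # column sampling
    trees.append(DecisionTreeClassifier(random_state=0).fit(df[cols], y))
    tree_cols.append(cols)  # remember which columns this tree saw

def predict_one(row):
    # Slice the input row to each tree's own columns before predicting,
    # then aggregate the votes by majority.
    votes = [t.predict(row[c].to_frame().T)[0] for t, c in zip(trees, tree_cols)]
    return max(set(votes), key=votes.count)

print(predict_one(df.iloc[0]))
```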
@narendraparmar1631 · 5 months ago
Very well explained. Thanks, sir.
@vishnuvardhanjadava4186 · 4 months ago
There is a small mistake in the column + row sampling (the last one): df1, df2, df3 have different features and are trained on different models, yet while performing the prediction you passed the same features to all three models and took the majority (aggregation). Apart from this, the rest is awesome. I have a question: let's say I run a random forest on a dataset with columns F1 to F10 and target variable Y, using some sklearn model. I do a train-test split, fit, and now I want to predict. Since each DT model in the RF is trained on only a few columns (say 50%) while my test data has all the columns, is the test data passed to each DT model with only the respective features that model was trained on, or is there some other mechanism? Please explain.
@saikrishna-p9c · 10 days ago
Same doubt, bro. Comment if you get the answer.
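For what it's worth, scikit-learn itself sidesteps this question: RandomForestClassifier subsamples features per split (via max_features) rather than dropping columns per tree, so you always pass the full feature matrix at both fit and predict time. A minimal sketch:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# max_features=0.5: each *split* considers a random 50% of the features,
# but fit() and predict() always receive the full 10-column matrix.
rf = RandomForestClassifier(n_estimators=100, max_features=0.5, random_state=42)
rf.fit(X_train, y_train)
print(rf.score(X_test, y_test))  # full feature set is passed at predict time
```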
@balrajprajesh6473 · 1 year ago
best of the best!
@minalgupta7456 · 1 month ago
I'm a great fan of you and your channel.
@amolhire9482 · 15 days ago
Sir, we select random features and make predictions with each DT on them, but if the testing/input data comes with the whole feature set, how will prediction work? I think row sampling is fine, but I did not understand feature sampling.
@AbdulRahman-zp5bp · 2 years ago
THANK YOU 3000
@ABHISHEKKUMAR-gv4di · 2 years ago
I am not able to find the link for the visualisation tool.
@Justine20366 · 3 years ago
Hi, I have a small question about decision trees. Is it OK to have a decision tree with a max depth of 7? I noticed it produced a big tree but also had better training and test accuracy than when I reduced the max depth.
@campusx-official · 3 years ago
Yes, you can have a decision tree of max depth 7 or more. It depends on the data.
@Justine20366 · 3 years ago
@campusx-official Oh okay, both my train and test scores look fine in terms of not over- or underfitting! Btw, thank you sir, your video literally saved my life! Bless you.
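On the over/underfitting check above: a small sketch (toy data, assumed depth grid) of how one could validate a depth like 7 with cross-validation rather than trusting a single train/test split:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Cross-validation guards against reading too much into one lucky
# train/test split when picking a depth.
grid = GridSearchCV(DecisionTreeClassifier(random_state=0),
                    param_grid={"max_depth": [3, 5, 7, 9, None]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```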
@vishalkumar-us7ys · 2 years ago
While teaching preprocessing you mentioned that we check for duplicate rows and drop them, as they can create problems while building a model. So how does a random forest handle the duplicate data that comes from sampling-with-replacement techniques?
@campusx-official · 2 years ago
Good question, Atul. See, the problem with duplicate data is that it skews the model toward the repeated points, which may lead to overfitting. But because of the way RF works (each tree sees a different sample and the results are aggregated), it is able to handle this.
@ashutoshthokare2127 · 4 months ago
Thank you, sir.
@taslima5007 · 3 months ago
You are great
@souravaich6620 · 2 years ago
I am still confused about sampling without replacement. If we have 10 features and we select, say, 5 features for each DT, then my 10 features will be exhausted after only 2 DTs. So how is it distributed across, say, 100 DTs? Same with row sampling: if we give 25% of the rows to each DT, the rows should be exhausted after 4 DTs, so how are the other DTs trained? Please help me with this; I am totally confused by the 'without replacement' option.
@aazeebh3734 · 2 years ago
In sampling without replacement, suppose 5 columns are to be selected for the 1st tree: for the 1st pick, all 10 columns are available; after it is selected, the next pick can be any of the remaining 9, and so on. This avoids repeating columns within one tree. For the next DT, the process starts from scratch: all 10 columns are available again, and the same process follows. In sampling with replacement, after the 1st column is selected, the 2nd pick again chooses from all 10 columns, including the one already selected.
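A tiny numpy illustration of that reply (the feature names are made up): each tree samples independently from the full pool, so the pool is never used up across trees.

```python
import numpy as np

features = np.array([f"f{i}" for i in range(10)])
rng = np.random.default_rng(42)

# Without replacement: no repeats *within* one tree, but every tree
# restarts from the full pool, so 100 trees never "exhaust" 10 features.
for t in range(3):
    print("tree", t, rng.choice(features, size=5, replace=False))

# With replacement: the same feature may be picked twice for one tree.
print("with replacement:", rng.choice(features, size=5, replace=True))
```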
@kindaeasy9797 · 4 months ago
Wow
@sandipansarkar9211 · 2 years ago
Finished watching.
@RohitKumar-wb4pe · 1 month ago
The column sampling function is not working. Can anybody resolve it?
@brayanrai2880 · 6 months ago
Best one
@patelpavan5479 · 9 months ago
Can we take two outputs in RF?
@tanb13 · 1 year ago
I think the explanation of bootstrapping in the video is wrong. As per Wikipedia (en.wikipedia.org/wiki/Bootstrap_aggregating), in bootstrapping the sampled dataset has to be the same size as the original dataset; it is not a smaller subset of the original, as explained in the video. I think what you have explained (creating smaller samples from the original dataset) is sampling in the general statistics sense, not bootstrapping in the context of Random Forests. Still, I would be grateful if you could explain your reasoning with some related articles.
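For context on that definition: the textbook bootstrap draws n rows with replacement from an n-row dataset, so each tree sees roughly 63% unique rows plus duplicates. scikit-learn's default matches this, and its max_samples parameter (available in recent versions) controls the per-tree sample size. A quick sketch:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

n = 1000
rng = np.random.default_rng(0)

# Classic bootstrap: sample size == original size, drawn WITH replacement.
idx = rng.choice(n, size=n, replace=True)
print("unique fraction:", np.unique(idx).size / n)  # ~0.632 on average

# scikit-learn follows this by default: bootstrap=True, max_samples=None
# gives each tree n rows; max_samples=0.5 would shrink that to n/2.
rf = RandomForestClassifier(bootstrap=True, max_samples=None)
```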
@luffy6761. · 1 year ago
Sir, please provide notes for this ML playlist.
@purushottammitra1258 · 3 years ago
How come the 65th video comes after the 62nd (ensemble techniques)?
@campusx-official · 3 years ago
Check the playlist
@purushottammitra1258 · 3 years ago
@campusx-official Okay, got it!
@SatyamBonaparte · 3 months ago
This video was a bit confusing, ngl.
@hamzal.2986 · 4 months ago
Since you're speaking Hindi, at least write the title in Hindi!!! So we avoid wasting our time!