Implementing a Spam Classifier in Python | Natural Language Processing

Krish Naik
1M subscribers
115K views

Published: 4 Oct 2024

Comments: 170
@priyasinha2251 · 4 years ago
I am not a girl who generally comments on YouTube videos, but I am learning from your videos and this is my genuine comment: you are amazing and your concepts in data science are very clear and to the point. I am very happy that a teacher like you is present here. Superb job, Sir!
@amankukar7586 · 2 years ago
Who asked you whether you generally comment or not?
@unknownfacts3716 · 2 years ago
@amankukar7586 good one bro
@unknownfacts3716 · 2 years ago
Get lost at the first opportunity; don't be so formal here.
@moindalvs · 2 years ago
"I am not a girl" - okay, can't say these days. "Who generally comments on YouTube videos" - first of all, YouTube doesn't have any comment-history data to prove this. Second, how dare you call this just another YouTube video? How dare you generalise an educational video that is free of cost, while people pay a hefty price for such content? Shame on you!
@vipinbansal8886 · 4 years ago
I had been trying to understand NLP concepts from various books and videos for the last two months, but the concepts were not clear to me. This explanation is really awesome, explained in a very easy way. Thanks Krish.
@utkar1 · 5 years ago
Thank you, the whole NLP playlist is very helpful!
@arjyabasu1311 · 4 years ago
Exactly
@yonasbabulet3836 · 2 years ago
I have seen a lot of YouTube tutorials, but I can't find tutorials like yours which are as clear and precise. Keep going.
@piyushaneja7168 · 4 years ago
You are great, sir. It's very difficult to find a good channel that explains the code line by line ❤💥👏
@javiermarti_author · 5 years ago
You are an excellent teacher. Thanks for making and uploading these videos.
@mansoorbaig9232 · 4 years ago
Great work Krish. You have a knack for explaining things in a pretty simple manner.
@dipakwaghmare1228 · 1 year ago
Sir, my penance has been fulfilled by watching your lecture ❤️ Thank you so, so, so much, sir ❤️❤️❤️❤️
@niksvp93 · 3 years ago
The best possible tutorial on Data Science/Machine Learning on YouTube. Cheers to you, brother! :D
@navrozlamba · 4 years ago
I would say that to prevent leakage we should split our data before we fit_transform on the corpus. In other words, we are teaching the vocabulary to our model on the whole dataset, which defeats the purpose of splitting into train and test afterwards. The whole purpose of the test set is to test our model on data it has never seen before. Please correct me if I am wrong! Cheers!!
@cristianovivk4935 · 4 years ago
I agree, we should split before fit_transform to prevent leakage.
@iEntertainmentFunShorts · 4 years ago
+1
@tejashshah5202 · 3 years ago
Agree, split before getting the BOW.
@КаратэПацан-я6б · 2 years ago
Hi. The CountVectorizer is not an ML model, it just converts text to vectors (a matrix of numbers).
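To make the suggestion in this thread concrete, here is a minimal sketch of splitting before vectorizing; the toy corpus and variable names are placeholders, not the video's actual data:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Toy stand-in for the cleaned SMS corpus (1 = spam, 0 = ham)
corpus = [
    "free entry win cash prize now",
    "hey are we still meeting for lunch",
    "urgent claim your free reward today",
    "can you send me the notes please",
]
y = [1, 0, 1, 0]

# Split the raw text first, so test messages never influence the vocabulary
X_train_text, X_test_text, y_train, y_test = train_test_split(
    corpus, y, test_size=0.5, random_state=0, stratify=y
)

cv = CountVectorizer(max_features=5000)
X_train = cv.fit_transform(X_train_text)  # learn the vocabulary on training data only
X_test = cv.transform(X_test_text)        # reuse that vocabulary for the test data

model = MultinomialNB().fit(X_train, y_train)
print(model.score(X_test, y_test))

Fitting the vectorizer on the training split only is what keeps test-set words out of the vocabulary, which is exactly the leakage point raised above.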
@sauravkumar-cw5bm · 3 years ago
I used lemmatization and TF-IDF in text preprocessing and got an accuracy score of 0.971.
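For anyone who wants to reproduce that variant, a rough sketch of lemmatization plus TF-IDF preprocessing (the two example messages below are placeholders, not the SMS dataset):

import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer

nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('omw-1.4')

messages = ["Free entry!! Win a prize now", "Are we meeting today?"]
lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words('english'))

corpus = []
for msg in messages:
    words = re.sub('[^a-zA-Z]', ' ', msg).lower().split()   # keep letters only, lowercase
    words = [lemmatizer.lemmatize(w) for w in words if w not in stop_words]
    corpus.append(' '.join(words))

tfidf = TfidfVectorizer(max_features=5000)
X = tfidf.fit_transform(corpus).toarray()
print(X.shape)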
@matanakhni · 3 years ago
Best NLP videos of all time. A complete gist; mind you, not for the faint-hearted. Excellent job Krish. Initially I had given up on NLP completely, but now I have renewed vigour after such exemplary teaching.
@sivabalaram4962 · 2 years ago
You are a genius at explanation, Krish Naik Ji, you're the best 👍👌👌👌
@lifebytesss · 4 years ago
Just amazing, sir. I can't praise you enough; very useful sessions, thank you.
@ABHINAVARYA · 3 years ago
Best playlist to learn NLP. Thank you Krish.. 🙂
@dheerajkumar9857 · 3 years ago
Excellent, very happy to see this type of explanation @Krish Naik, we will definitely do well.
@rahuljaiswal9379 · 5 years ago
You are an awesome teacher, it is really helpful for me...... God bless you.
@DhananjayKumar-oh2hh · 3 years ago
You are really great, sir. You have explained each and every topic very well. Hats off to you.
@sanandapodder5027 · 4 years ago
Thank you very much sir, your videos are really very helpful. I am learning NLP from your channel for the first time. I don't know machine learning, that's why I am facing a little trouble.
@ushirranjan6713 · 4 years ago
It's really a fantastic video, sir. You explained many things in a way that can be understood very easily. Thanks a lot, sir!!!
@debatradas1597 · 2 years ago
Thank you so much Krish Sir...!!!
@mandeep8696 · 3 years ago
Thank you Krish for sharing the knowledge.
@lifeisbeautiful1111 · 11 months ago
Keep up the good work. Thanks.
@gauravpardeshi6056 · 2 years ago
Very good video sir... thank you.
@ashishn.c.7913 · 4 years ago
I am getting these accuracy values for the different combinations:
Stemming + CountVectorizer: 98.5650%
Lemmatization + CountVectorizer: 98.29596%
Lemmatization + TfidfVectorizer: 97.9372197309417%
Stemming + TfidfVectorizer: 97.9372197309417% (same as Lemmatization + TfidfVectorizer)
@ManiKandan-ol9gm · 3 years ago
Really, no words to describe you..... lots of love, sir ❤️ Thank you so much sir, it means a lot.
@AdityaKumar-cr9mc · 2 years ago
You are simply amazing
@mohammedsohilshaikh6831 · 2 years ago
I am so addicted to his videos that sometimes I even forget to like the video. 😂
@nehasrivastava8927 · 4 years ago
Thank you sir... for the wonderful explanation.
@farnazfarhand5957 · 3 years ago
It was so clear and helpful, thank you so much.
@AltafAnsari-tf9nl · 3 years ago
Thank you so much for sharing your knowledge with us.
@vinimator · 4 years ago
Hi Krish, I am the newest subscriber to your channel and I hope this video will help me complete a project of my own. Thank you so much. Will continue to learn.
@tarung7088 · 3 years ago
Here the dataset is highly imbalanced (i.e. ham: 4825, spam: 747), which is why we got the high accuracy.
@billyerickson353 · 10 months ago
🎯 Key Takeaways for quick navigation:
00:00 📚 Introduction to Spam Classifier Project
- Creating a spam classifier using natural language processing.
- Overview of the dataset from UCI's SMS Spam Collection.
- Reading and understanding the dataset structure.
01:47 📂 Exploring the Dataset and Data Preprocessing
- Explanation of the SMS spam collection dataset.
- Reading the dataset using pandas and handling tab-separated values.
- Data cleaning and preprocessing steps using regular expressions and NLTK.
05:46 🧹 Text Cleaning and Preprocessing
- Using regular expressions to remove unnecessary characters.
- Lowercasing all words to avoid duplicates.
- Tokenizing sentences, removing stop words, and applying stemming.
13:52 🎒 Creating the Bag of Words
- Introduction to bag-of-words representation.
- Implementation of count vectorization using sklearn's CountVectorizer.
- Selecting the top 5,000 most frequent words as features.
17:27 📊 Preparing the Output Data
- Converting the categorical labels (ham and spam) into dummy variables.
- Finalizing the output data with one column representing the spam category.
- Overview of the preprocessed data for training the machine learning model.
21:04 📊 Data Preparation for Spam Classification
- Data preparation involves creating independent (X) and dependent (Y) features.
- Explanation of dummy variable trap in categorical features.
- Introduction to the train-test split for model training.
22:30 🛠️ Addressing Class Imbalance and Training the Spam Classifier
- Discussion on the class imbalance issue in the data.
- Introduction to the Naive Bayes classification technique.
- Implementation of the Naive Bayes classifier using multinomial Naive Bayes.
24:22 📈 Evaluating Spam Classifier Performance
- Explanation of the prediction process using the trained model.
- Introduction to the confusion matrix for model evaluation.
- Calculation of the accuracy score for the spam classifier (98% accuracy).
27:50 🔄 Improving Spam Classifier Accuracy
- Suggestions for improving accuracy, including the use of lemmatization.
- Mention of addressing class imbalance for better performance.
- Recommendation to explore the TF-IDF model as an alternative to count vectorization.
Made with HARPA AI
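A condensed sketch of the pipeline outlined above, in the video's order of steps. The file name and column names for the UCI SMS Spam Collection are assumptions, and note that fitting the vectorizer on the full corpus before splitting mirrors the video, not the leakage-free practice discussed in earlier comments:

import re
import pandas as pd
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix, accuracy_score

nltk.download('stopwords')

# 1. Read the tab-separated SMS Spam Collection dataset
messages = pd.read_csv('SMSSpamCollection', sep='\t', names=['label', 'message'])

# 2. Clean: keep letters, lowercase, drop stop words, stem
ps = PorterStemmer()
stop_words = set(stopwords.words('english'))
corpus = []
for msg in messages['message']:
    review = re.sub('[^a-zA-Z]', ' ', msg).lower().split()
    review = [ps.stem(word) for word in review if word not in stop_words]
    corpus.append(' '.join(review))

# 3. Bag of words with the 5,000 most frequent terms
cv = CountVectorizer(max_features=5000)
X = cv.fit_transform(corpus).toarray()

# 4. Encode labels (spam = 1, ham = 0)
y = pd.get_dummies(messages['label'], drop_first=True).astype(int).values.ravel()

# 5. Split, train multinomial Naive Bayes, evaluate
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = MultinomialNB().fit(X_train, y_train)
y_pred = model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))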
@Skandawin78 · 5 years ago
Good job Krish with the NLP playlist.
@chandrakanthshalivahana1417 · 5 years ago
Hello sir, I am very happy that you are making videos.. please make more videos on Kaggle competitions...
@suvarnadeore8810 · 3 years ago
Thank you Krish sir
@mujeebrahman5282 · 4 years ago
Sometimes the error is good for health 😂
@indian-inshorts5786 · 4 years ago
Sir, you are too good.
@nehamanpreet1044 · 4 years ago
Please make videos on word embeddings like Word2Vec/GloVe/BERT/ELMo/GPT/XLNet etc.
@mbmathematicsacademic7038 · 2 months ago
I used logistic regression (multiclass was specified) and achieved 94.3% accuracy on the test data and 95.7% on the training data.
@amruthasankar3453 · 1 year ago
Thank you sir ❤️🔥
@sandipansarkar9211 · 4 years ago
Thanks Krish. Superb explanation once again. All my concepts about NLP are now crystal clear. I know a career in NLP is superb, but can you explain what its exact value is in terms of a data science career? Please guide and feel free to reply, as I am eagerly waiting. Thanks once again.
@sathishk8685 · 5 years ago
Hi Krish, excellent explanation.
@Anurag_077 · 3 years ago
Wonderful
@usaikiran96 · 10 months ago
How do you decide when to use CountVectorizer versus TF-IDF? How do you decide whether/when to use stemming or lemmatization? For example, in this case why didn't you use TF-IDF instead of bag of words? And why was lemmatization not used instead of stemming?
@nehamanpreet1044 · 4 years ago
Sir, please make videos on LDA, NMF, SVD and Word2Vec models.
@furkhanmehdi6405 · 4 years ago
Legend ❤️
@arjyabasu1311 · 4 years ago
Awesome work sir !!
@jinks6887 · 2 years ago
You are God for me, Sir.
@datascience3008 · 2 years ago
Awesome
@shahariarsarkar3433 · 2 years ago
Brother, you are making helpful content for us. Can you tell me how to remove the stopwords of other languages like Bangla or Hindi?
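One way to approach this, assuming NLTK's stopword corpus is being used: check which languages NLTK actually ships (Bengali appears in recent releases, Hindi does not) and fall back to a hand-made list otherwise.

import nltk
from nltk.corpus import stopwords

nltk.download('stopwords')
print(stopwords.fileids())   # languages bundled with NLTK's stopword corpus

# Use the bundled list when available, otherwise maintain your own
bangla_stops = set(stopwords.words('bengali')) if 'bengali' in stopwords.fileids() else set()
hindi_stops = {'और', 'का', 'के', 'है'}   # tiny illustrative list, extend as needed

sentence = 'और यह एक संदेश है'
print([w for w in sentence.split() if w not in hindi_stops])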
@soumyadev100 · 3 years ago
Hi Krish, good session. I have one comment: for getting the test corpus, better practice may be to use transform only. Fit_transform on train and only transform on test, and do the train-test split before we build the corpus. Let me know what you think.
@monicameduri9692 · 1 year ago
Thanks a lot!
@shreyasb.s3819 · 3 years ago
Thank you so much
@babyyoda5140 · 4 years ago
Boss, please also add sentiment analysis and topic modelling to your already wonderful repertoire!
@parimalbhoyar8579 · 4 years ago
Very helpful...!!!
@roshankumarsharma8725 · 4 years ago
Sir, in this model why have we used MultinomialNB and not BernoulliNB? And can we use BernoulliNB instead of MultinomialNB?
@nikhilsharma6218 · 4 years ago
I have 2 questions. First: why only MultinomialNB, is there a specific reason, can't we use BernoulliNB or GaussianNB? Second: if the dataset is imbalanced we would use ComplementNB, but how do we know whether the dataset is balanced or imbalanced?
@manikhindwan6790 · 4 years ago
BernoulliNB - when spam classification is done with a two-step (binary) decision approach, i.e. if word 'X' is present then 'spam', else 'not spam'.
GaussianNB - used when the feature values are continuous.
MultinomialNB - counts the presence of words and the frequency of occurrence to decide the decision boundary.
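To make that comparison concrete, a small sketch on toy data (not the video's dataset): MultinomialNB consumes word counts, while BernoulliNB models presence/absence, which CountVectorizer(binary=True) produces.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB, BernoulliNB
from sklearn.metrics import accuracy_score

corpus = ["free prize claim now", "see you at class", "win free cash today", "lunch at noon then"]
y = [1, 0, 1, 0]
Xtr_text, Xte_text, y_train, y_test = train_test_split(
    corpus, y, test_size=0.5, random_state=1, stratify=y
)

# MultinomialNB on raw term counts
cv_counts = CountVectorizer()
mnb = MultinomialNB().fit(cv_counts.fit_transform(Xtr_text), y_train)
print(accuracy_score(y_test, mnb.predict(cv_counts.transform(Xte_text))))

# BernoulliNB on binary presence/absence features
cv_binary = CountVectorizer(binary=True)
bnb = BernoulliNB().fit(cv_binary.fit_transform(Xtr_text), y_train)
print(accuracy_score(y_test, bnb.predict(cv_binary.transform(Xte_text))))

# GaussianNB expects dense, continuous features, so it is a poor fit for sparse word counts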
@puttacse · 4 years ago
Hi Krish, why are we hard-coding max_features=5000? What if this code is migrated to production as-is and faces more tokens/features in live data (e.g. if live data has 0.1 million, i.e. 1 lakh, features)? In this scenario, does our model fail?
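A small sketch of why this does not break at prediction time, assuming the fitted vectorizer is saved along with the model: max_features caps the vocabulary learned during fit, and transform on live data keeps exactly those columns, silently ignoring unseen words.

from sklearn.feature_extraction.text import CountVectorizer

cv = CountVectorizer(max_features=5000)
X_train = cv.fit_transform(["free prize now", "meeting at five", "claim free cash"])
print(X_train.shape[1], len(cv.vocabulary_))   # capped at 5000 columns

# Live data with words never seen in training: the shape still matches the model
X_live = cv.transform(["brand new promotional blaster message with free cash"])
print(X_live.shape[1] == X_train.shape[1])     # True, unseen words are simply dropped

The trade-off is that genuinely informative new words are ignored until the vectorizer is refit on fresh data.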
@gowrisancts · 4 years ago
Good one... actually you may need to use the Bernoulli Naive Bayes model, as it deals with binary values 0 and 1... correct me if I am wrong.
@JoshDenesly · 5 years ago
Hi Krish, please also make a project relating to bigrams and unigrams. Thank you.
@krishnaik06 · 5 years ago
Sure, I will do that.
@ranjanjena2996 · 5 years ago
I have created the model and saved it using joblib. I am not getting how to use the model for prediction. Is there any way I can pass the email text to it so the model can detect spam or ham? I am a newbie, please help. Thanks.
@aleenajames7609 · 4 years ago
Have you figured out how to do it? If yes, please let me know too.
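A rough sketch of what that workflow could look like (the file names and toy training data are made up for illustration): persist both the fitted vectorizer and the classifier, then reuse them on new email text.

import joblib
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Train on a toy corpus standing in for the tutorial's cleaned messages
cv = CountVectorizer()
X = cv.fit_transform(["free prize claim now", "see you at lunch"])
model = MultinomialNB().fit(X, [1, 0])

# Persist BOTH the vectorizer and the classifier
joblib.dump(cv, 'vectorizer.pkl')
joblib.dump(model, 'spam_model.pkl')

# Later, elsewhere: load them and score a new email body
cv_loaded = joblib.load('vectorizer.pkl')
model_loaded = joblib.load('spam_model.pkl')
label = model_loaded.predict(cv_loaded.transform(["claim your free prize now"]))[0]
print('spam' if label == 1 else 'ham')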
@premranjan4440 · 3 years ago
We could have used drop_first in get_dummies on the label instead of iterating over the whole array.
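For reference, a one-line sketch of that drop_first idea (toy labels):

import pandas as pd

labels = pd.Series(['ham', 'spam', 'ham'])
y = pd.get_dummies(labels, drop_first=True)   # keeps only the 'spam' column: 1 = spam, 0 = ham
print(y)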
@insightworld9910 · 5 years ago
By using the lemmatization method we get an accuracy of 97.6%.
@avinashsingh7698 · 4 years ago
Sir, can you please make a video on generating paraphrases from text using NLP?
@emajhugroo109 · 4 years ago
Hello sir, I would like to know how to classify a new message as ham or spam after building the NB model.
@yogeshprajapati7107 · 4 years ago
You can do it like this.
df=pd.DataFrame(['this message is a spam'],columns=['message'])
corpus=[]
for i in range(0,len(df)):
    review=re.sub('[^a-zA-Z]',' ',df['message'][i])
    review=review.lower()
    review=review.split()
    review=[ps.stem(word) for word in review if word not in stopwords.words('english')]
    review=' '.join(review)
    corpus.append(review)
df=cv.transform(corpus).toarray()
pred=spam_detect_model.predict(df)
label=pred[0]
if label==1:
    print('Spam')
else:
    print('Ham')
@joelkhaung · 3 years ago
@yogeshprajapati7107 how does the model handle the 2500 features when doing predict? I believe there will be a mismatch between the number of features from the new message and the number of features in the trained model. Can you share how to overcome this?
@juanelnino · 2 years ago
I have an ERROR saying "unhashable type: 'list'" even though all the steps are the same.
@maYYidtS · 5 years ago
Excellent........ sir, instead of using the max_features parameter at 16:43..... what if we apply PCA or LDA to all of those columns?
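Plain PCA needs a dense matrix, so on a bag-of-words it is usually TruncatedSVD (LSA) that gets applied; a small sketch under that assumption, with a toy corpus:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

corpus = ["free prize claim now", "see you at lunch", "win cash today", "notes for the class"]

cv = CountVectorizer()            # keep the full vocabulary instead of capping max_features
X = cv.fit_transform(corpus)      # sparse document-term matrix

svd = TruncatedSVD(n_components=3, random_state=0)   # project onto a few latent components
X_reduced = svd.fit_transform(X)
print(X.shape, '->', X_reduced.shape)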
@rajarshidgp2003 · 2 years ago
Instead of pd.get_dummies, sklearn.preprocessing.LabelEncoder can be used.
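A quick sketch of that alternative (toy labels):

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
y = le.fit_transform(['ham', 'spam', 'ham'])   # classes are sorted, so ham -> 0, spam -> 1
print(y, le.classes_)

For a two-class label column this gives the same 0/1 encoding as get_dummies with drop_first=True.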
@afaqueumer7968 · 2 years ago
Hello Sir... can you please make a video on topic analysis with LDA? There aren't any clear-cut videos on YouTube yet like yours.
@aninditadas832 · 3 years ago
Hello sir, why have we not used lemmatization here? Stemming may or may not give meaningful words, but we need meaningful words here, right?
@sardar92 · 5 years ago
Very nice, kindly post new videos.
@abhishekpurohit3442 · 4 years ago
Sir, is deep learning necessary to learn before coming to this playlist (as I see Keras and LSTM in the last videos)??
@jpssasadara3624 · 3 years ago
nice
@tapabratacse · 2 years ago
Why didn't you use a label encoder for the target column spam/ham?
@Lijoperumpuzhackal · 4 years ago
I have gone through the 7 videos in the playlist. Well explained in every video. Can you please tell me how to implement this program in a real scenario? Everyone completes their videos by only building the models, so please try to explain how we can use this model. If I have a text message, how do I find out whether it is spam or not using this model?
@krishnaik06 · 4 years ago
Check my deployment playlist and you will get to know.
@suhailhafizkhan9800 · 5 years ago
How can we visualize the actual result for clarification? Thanks
@ashwinbj · 3 years ago
Practically, how do we check whether a message is spam or ham? i.e. how do we pass the message to the model?
@anilkumar-dm8om · 5 years ago
Whatever you explained was awesome, sir. Parts of speech (POS) - can you do a video on this? :) We would also like bigram and unigram topics.
@salmankhan-vq7pc · 3 years ago
Hi, nice lecture. I have a dataset with 1.3 million rows. I used your code, and when I build the bag of words my Google Colab crashes. Any solution?
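One likely culprit at that scale is converting the sparse matrix to a dense array; a hedged sketch of the workaround, with toy data standing in for the 1.3 million rows:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

corpus = ["free prize now"] * 3 + ["see you at lunch"] * 3   # imagine 1.3 million rows here
y = [1, 1, 1, 0, 0, 0]

cv = CountVectorizer(max_features=5000)
X = cv.fit_transform(corpus)       # scipy sparse matrix, cheap to hold in memory
# X = X.toarray()                  # this densification is what typically exhausts Colab's RAM

model = MultinomialNB().fit(X, y)  # MultinomialNB accepts sparse input directly
print(model.predict(cv.transform(["free lunch prize"])))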
@akashr9973 · 3 years ago
Hi sir, please correct me if I'm wrong. In line number 30 you are applying the transform function to the whole data; won't that be data leakage? The transform has to be applied after splitting the data, right? Thank you.
@КаратэПацан-я6б · 2 years ago
Hi. The CountVectorizer is not an ML model, it just converts text to vectors (a matrix of numbers).
@dushyanthande1556 · 11 months ago
Sir, I have tried running the code but the shapes of X and y are not the same, so train_test_split is not working; it says "Found input variables with inconsistent numbers of samples: [11144, 5572]".
@saratht8223 · 2 months ago
Hi Krish, supposing we need to implement functionality for identifying spam afresh, how can we come up with a solution? The sample data used here already has messages tagged as spam and ham by someone, sometime, somewhere. In practice, do we need to have sample data upfront? Can you please advise?
@pradeepvaranasi · 2 years ago
Can we just use an if-else condition on the label column to derive the 0-1 (spam/ham) column? What is the purpose of using the get_dummies function for a binary class column?
@ashishgeorge2766 · 4 years ago
Can we apply a label encoder instead of one-hot encoding on the label column?
@awaisniaz5300 · 4 years ago
Yes, we can apply it when the feature has only two categories.
@abhishekpurohit3442 · 4 years ago
Sir, why did we go for bag of words and not TF-IDF? Is TF-IDF only used for sentiment analysis?
@thunuguntlaruparani2058 · 3 years ago
Wouldn't there be a data leakage problem if we use fit_transform on the entire data?
@kanishkapatel9077 · 4 years ago
How do we make a GUI for this project? Any idea about it? It would be of great help!
@Rohan-cw9gn · 3 years ago
You can use the Streamlit framework; without knowledge of HTML and CSS you can make beautiful web apps.
@devinpython5555 · 3 years ago
At time 20:44, I think we should consider the ham column as the dependent feature Y. Because, say, the first sentence is a positive sentence with ham=1 and spam=0; if you consider the spam column as the dependent feature it gets the opposite meaning, a negative sentence treated as positive and vice versa. Could someone correct me if I'm wrong?
@AarushiMishra-x3w · 1 year ago
Can't we make this code work in a Jupyter notebook instead of Spyder? Because I can't really see any output in Spyder.
@veeragandhamvenkatasubbara4286
xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 1, column 0
@Sonu-bc7sm · 4 years ago
Hi Krish, nice video. Just had a question: what if I put the model in production and a new message has a word which is not part of my training dataset? Then the features won't match and the model will give an error?
@deepanshupant8282 · 4 years ago
Sir, is this the full NLP playlist or will you add more? Do reply.
@pinkalshah5237 · 4 years ago
With lemmatization and max_features the accuracy is 97%.
@techbenchers69 · 4 years ago
Sir, what is the reason behind choosing the Naive Bayes classifier? Why not another classifier?
@praddhumnasoni3364 · 2 years ago
Sir, how do we predict on real-world text (meaning text from Gmail or something)?
@yashwanthsrinivas4590 · 4 years ago
Hello Krish, how can we handle multiple-label classification problems?
@deepakjoshi4699 · 2 years ago
I tried with TF-IDF but my score is better with bag of words? Is that possible or am I making some mistake?
@ashishanand9981 · 5 years ago
Hello sir, if we have a different number of labels or categories, such as business, sports, entertainment, politics, tech, history, then how can we get the dummy variables and the bag of words, and how do we find which samples belong to which labels?