This channel is dedicated to demystifying the fundamentals of data science through straightforward examples and accessible explanations. It is designed for individuals without prior knowledge of computer programming, statistics, machine learning, or artificial intelligence. Our content aims to provide a high-level understanding of data science concepts that can be easily comprehended by viewers from diverse backgrounds. The videos will focus on simplicity and clarity, ensuring that the material is approachable and engaging for everyone.
My music source: www.bensound.com/royalty-free-music
Thanks, Aman, for the amazing video. I have one question: what if multiple variables have a high VIF value? Can I remove them all at once, or should I recompute the VIFs after removing each feature and then decide which one to remove next?
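On the question above, the usual practice is to drop one high-VIF feature at a time and recompute, because removing one collinear feature changes the VIFs of all the others. A minimal sketch (the toy data, the helper names, and the threshold of 10 are all illustrative assumptions, not from the video):

import numpy as np

def vif(X, i):
    # VIF of column i: 1 / (1 - R^2) from regressing it on the
    # remaining columns plus an intercept.
    y = X[:, i]
    others = np.delete(X, i, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    return 1.0 / max(1 - r2, 1e-12)

def drop_high_vif(X, names, threshold=10.0):
    # Iteratively drop the single worst feature until all VIFs pass.
    X = X.copy()
    names = list(names)
    while X.shape[1] > 1:
        vifs = [vif(X, i) for i in range(X.shape[1])]
        worst = int(np.argmax(vifs))
        if vifs[worst] <= threshold:
            break
        X = np.delete(X, worst, axis=1)  # drop one, then recompute
        names.pop(worst)
    return X, names

# Toy data: x3 is nearly a linear combination of x1 and x2, so all
# three start with huge VIFs, but removing just one fixes the rest.
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(2, 200))
x3 = x1 + x2 + rng.normal(scale=0.01, size=200)
X = np.column_stack([x1, x2, x3])
Xr, kept = drop_high_vif(X, ["x1", "x2", "x3"])
print(kept)  # only two of the three features survive

Note how removing all high-VIF features at once would have discarded two columns here, when dropping a single one was enough.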
I thought this was called computational biology or biochemistry and has been in practice for drug development for over 20 years. Suddenly it has gotten a new name: AI. I was doing this 20 years ago, running a molecule through software to see which types of molecules can fit into the active site of an enzyme, receptor, or other regulatory site, to see how it can be modulated. In any case, the author has oversimplified how a molecule gets to clinical trials. The molecule has to be synthesized, tested in vitro (in a test tube or in cultured cells), and then put through a series of safety tests in two mammalian species; only then will the FDA approve an Investigational New Drug (IND) application for human trials. AI has not made medical discovery faster. They just renamed older technologies AI.
Thank you for your feedback. Drug discovery is not my core area, so I researched the topic and shared my views on how AI could be helpful. Any comments, knowledge, or feedback are more than welcome.
Hi, I have been learning data analytics, data science, and machine learning. Could you please suggest which AWS modules I should learn? I don't know anything about AWS. Where should I start, and which modules should I cover?
@UnfoldDataScience You are truly amazing. While many YouTubers tend to overcomplicate even the simplest concepts, you make everything so easy to understand. You're giving confidence to those who want to learn from the ground up. It would be great if you could launch your courses on platforms like Scaler, Udemy, Edureka, or Simplilearn, where people globally can benefit from your teaching. A lot of courses out there charge huge fees for minimal content, misleading new learners into thinking they've mastered data science. You could create bootcamps based on your YouTube content, offering a comprehensive syllabus that truly helps learners. Having a strong foundation in data science is essential, and I highly recommend your channel as the perfect starting point. Even though I've completed my M.Tech in Data Science and Engineering from a reputed institute, I still come back to your videos for refreshers and to brush up on concepts. Thank you for everything you do!
Awesome video. I could not see the code in the drive; maybe this will help others who are looking for it:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC
from sklearn.metrics import classification_report, confusion_matrix

# Understanding important parameters
df = pd.read_csv('/content/iris.csv')
df.head()

iris = datasets.load_iris()
X = iris.data[:, :2]  # we take only the first 2 features
y = iris.target
h = 0.2  # step size in the mesh

# We create an instance of SVM and fit our data. We don't scale the
# data since we want to plot the support vectors.
C = 1.0  # SVM regularization parameter
svc = svm.SVC(kernel='linear', C=C).fit(X, y)
rbf_svc = svm.SVC(kernel='rbf', gamma=0.7, C=C).fit(X, y)
poly_svc = svm.SVC(kernel='poly', degree=3, C=C).fit(X, y)

# Create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

# Titles for the plots
titles = ['SVC with linear kernel',
          'SVC with RBF kernel',
          'SVC with polynomial (degree 3) kernel']

plt.figure(figsize=(14, 10))
for i, clf in enumerate((svc, rbf_svc, poly_svc)):
    # Plot the decision boundary. For that, we assign a color to each
    # point in the mesh [x_min, x_max] x [y_min, y_max].
    plt.subplot(2, 2, i + 1)
    plt.subplots_adjust(wspace=0.4, hspace=0.4)
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)
    # Also plot the training points
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm)
    plt.xlabel('Sepal length')
    plt.ylabel('Sepal width')
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())
    plt.xticks(())
    plt.yticks(())
    plt.title(titles[i])
plt.show()

param_grid = {'C': [0.1, 1, 10, 100], 'gamma': [1, 0.1, 0.01, 0.001]}
grid = GridSearchCV(SVC(), param_grid, verbose=2)
grid.fit(X, y)
print(grid.best_params_)

# Note: if we change gamma to 700, we are saying we don't care how
# complex the model becomes as long as it classifies the training data
# perfectly; such a model will overfit. Similarly, if we change C to
# 500, the decision boundary becomes very complex in order to chase
# better training classification. Lower values of these parameters are
# usually better.
Very well explained, Aman. Until now I was not clear about how an LSTM retains information for short and long durations, and I kept wondering how it predicts new words in a sequence. Today it has become clear to me. Thank you again.
We use descriptive statistics (a measure of central tendency, the mean: x̄ = (x1 + x2 + …)/n) because you can't assume every apple in the basket has the same weight. But if you tell us you want the green ones and give us a sample, we can use inferential statistics, because we have a sample (one green apple), using a probability sampling method such as stratified random sampling. Is that right or not?
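The distinction in the comment above can be made concrete: a descriptive statistic summarizes the data you actually have, while inference estimates a population quantity from a sample. A small sketch, where the apple weights, color strata, and 20% sampling fraction are all made-up assumptions for illustration:

import random

random.seed(42)
# A made-up basket: 30 green apples (~150 g) and 70 red apples (~170 g).
apples = (
    [("green", random.gauss(150, 10)) for _ in range(30)] +
    [("red", random.gauss(170, 10)) for _ in range(70)]
)

# Descriptive statistics: the mean weight of the whole basket.
weights = [w for _, w in apples]
basket_mean = sum(weights) / len(weights)

# Inference via stratified random sampling: draw the same fraction
# from each color stratum (proportional allocation), then use the
# combined sample mean as an estimate of the basket mean.
sample = []
for color in ("green", "red"):
    stratum = [w for c, w in apples if c == color]
    k = max(1, round(len(stratum) * 0.2))  # 20% from each stratum
    sample += random.sample(stratum, k)
estimate = sum(sample) / len(sample)
print(round(basket_mean, 1), round(estimate, 1))

With proportional allocation, the combined sample mean is an unbiased estimate of the basket mean, which is why the two printed numbers land close together.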
Very well explained. I have watched the video multiple times to learn and freshen up RAG concepts. If you could also make a video about basics of LangChain concepts, that will be much appreciated. Thank you!
Really, you explain the concepts in simpler terms than Krish Naik or other YouTube channels. Awesome, and thanks a lot for teaching us for free :)
Core Point: A core point is a point that has enough neighboring points within a specified distance (called epsilon or eps). Specifically, if a point has at least min_samples points (including itself) within a distance of eps, it is considered a core point. Border Point: A border point is a point that doesn't have enough neighboring points to be a core point, but it is within the eps distance of a core point. Border points are on the edge of a cluster, but they are not dense enough to form their own core.
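The core/border definitions above map directly onto scikit-learn's DBSCAN, which exposes the indices of core samples via `core_sample_indices_`. A minimal sketch; the toy coordinates and the eps/min_samples values are chosen purely for illustration:

import numpy as np
from sklearn.cluster import DBSCAN

# A dense blob of 4 points, one point just within eps of the blob's
# edge (a border point), and one isolated point (noise).
X = np.array([
    [0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],  # dense cluster
    [0.4, 0.0],                                       # border candidate
    [3.0, 3.0],                                       # isolated point
])
db = DBSCAN(eps=0.35, min_samples=4).fit(X)

core_mask = np.zeros(len(X), dtype=bool)
core_mask[db.core_sample_indices_] = True
print("labels:", db.labels_)   # -1 marks noise
print("core:", core_mask)      # point 4 joins cluster 0 but is not core

Point 4 has only 3 neighbors within eps (itself included), so it fails the min_samples=4 test and cannot be core; but it sits within eps of a core point, so it is assigned to that cluster as a border point, exactly as described above.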
Hey brother, good explanation. You covered something that other people didn't touch. Well done! If I may comment on one thing: as a member of your audience, I got a bit distracted by one word you repeat again and again: "ok? ok?" Aman, you don't need to ask us for confirmation; we came to you to learn. Be the boss, and your videos will flow better. Good luck!