
Unsupervised Learning explained 

deeplizard
152K subscribers
111K views

Published: 23 Sep 2024

Comments: 66
@deeplizard 6 years ago
Machine Learning / Deep Learning Tutorials for Programmers playlist: ru-vid.com/group/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU Keras Machine Learning / Deep Learning Tutorial playlist: ru-vid.com/group/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL
@Otonium 4 years ago
Yes, please go deeper into autoencoders someday.
@deepaksingh9318 6 years ago
Yes, please do a video on autoencoders as well. And just to let you know, yours are the best videos I have found so far. The best and easiest for understanding the concepts.
@Waleed-qv8eg 6 years ago
I really love this playlist. It gives you a clear understanding of Machine Learning and Deep Learning! I have a comment: in Python, [1, 2, ...] is called a list, while a tuple looks like this: (1, 2, ...). [(1, 2), (1, 15)] is a list of tuples, and [[1, 2], [3, 4]] is a list of lists! Thank you so much!
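To make the terminology in the comment above concrete, here is a minimal Python snippet showing each structure (the height/weight values are made up for illustration):

```python
samples_list = [1, 2, 3]                      # a list: square brackets, mutable
samples_tuple = (1, 2, 3)                     # a tuple: parentheses, immutable
height_weight = [(1.70, 65), (1.82, 90)]      # a list of tuples
height_weight_ll = [[1.70, 65], [1.82, 90]]   # a list of lists

print(type(samples_list).__name__)      # list
print(type(samples_tuple).__name__)     # tuple
print(type(height_weight[0]).__name__)  # tuple
```

Either structure can hold a (height, weight) sample; the difference is only mutability and notation.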
@martinmartin6300 4 years ago
It might be worth mentioning that you can still validate unsupervised learning with accuracies. You can, for example, use a labeled validation set as a benchmark for the unsupervised learner. For example, suppose a speaker-recognition task. You can come up with a labeled data set for this purpose. Then you apply the data while the learner trains from scratch as it goes. Afterwards, you can validate against this data set. The assumption is that the learner will do similarly well for a new set of speakers. Note that it does not make sense to apply supervised learning in the first place, as the set of speakers might very well change from run to run.
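The validation idea in the comment above can be sketched in a few lines of plain Python. This is a hypothetical toy setup, not the speaker-recognition task itself: cluster unlabeled 1-D points, then score the result against a held-out labeled benchmark by mapping each cluster to its majority label.

```python
from collections import Counter

def kmeans_1d(xs, iters=10):
    """Tiny 2-means for 1-D data; returns a cluster id (0 or 1) per point."""
    c0, c1 = min(xs), max(xs)  # simple initialization at the extremes
    for _ in range(iters):
        assign = [0 if abs(x - c0) <= abs(x - c1) else 1 for x in xs]
        g0 = [x for x, a in zip(xs, assign) if a == 0]
        g1 = [x for x, a in zip(xs, assign) if a == 1]
        c0 = sum(g0) / len(g0) if g0 else c0
        c1 = sum(g1) / len(g1) if g1 else c1
    return assign

def benchmark_accuracy(assign, labels):
    """Map each cluster to its majority true label, then compute accuracy."""
    mapping = {}
    for c in set(assign):
        votes = Counter(l for a, l in zip(assign, labels) if a == c)
        mapping[c] = votes.most_common(1)[0][0]
    hits = sum(mapping[a] == l for a, l in zip(assign, labels))
    return hits / len(labels)

data = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]    # two well-separated toy groups
labels = ["a", "a", "a", "b", "b", "b"]  # labeled benchmark set
assign = kmeans_1d(data)
print(benchmark_accuracy(assign, labels))  # 1.0 on this easy toy data
```

The labels are never shown to the learner; they are used only after the fact, exactly as the comment suggests.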
@uchihashisui4597 3 years ago
New question proposed for the related quiz. Kudos for these amazing courses!! { "question": "Accuracy is typically a metric used in the unsupervised learning process", "choices": [ "False", "True", " ", " " ], "answer": "False", "creator": "Hivemind", "creationDate": "2021-08-17T22:53:53.216Z" }
@jrod238 5 years ago
Thank you for speaking clearly.
@maheshbabu-oe5vh 3 years ago
Hello, you actually sing so sweetly in all of your lecture videos. Kindly sing the autoencoders one too. Eagerly awaiting that.
@solanofurlan443 4 years ago
First things first, this is the best series on RU-vid about ML out there. But I want to know why you keep saying 'tuples' at 1:47 when referring to the list of height and weight samples? Is it a convention to call samples tuples, or should the data actually be in tuple format?
@deeplizard 4 years ago
Thank you :) For the example mentioned, each sample had two features: height and weight. A tuple, just being a finite ordered list of elements, would be one appropriate way to store such a sample. A list, an array, or any other type of data structure would be fine as well. Whatever you choose to store your data in, you'll likely need to process it to be in a particular format anyway before you send it to your model. Examples of such processing are shown in the Keras series.
@abdulhameedmalik4299 2 months ago
Best video madam
@tymothylim6550 3 years ago
Thank you very much for this video! I really enjoyed learning about this and the auto-encoder was new for me! Very interesting and helpful for me!
@viniciusneto6824 5 years ago
Hi! All previous videos have been great so far. Thank you! But for this particular one I felt you were talking mostly about Autoencoders and not Unsupervised Learning in general, as you did when covering Supervised Learning in the last video. Is there a more in-depth video in any playlist? Anyway, thanks a lot! I appreciate your work!
@deeplizard 5 years ago
Hi Vinícius - You're welcome! In general, unsupervised learning only means that we train our model with *unlabeled* data. This seems a bit abstract and hard to conceptualize in its own right, especially after only being exposed to supervised learning techniques. To illustrate how we can train models without labeled data, we explore the common unsupervised learning techniques of autoencoders and clustering. You may also find it helpful to study the corresponding blog for this video as well: deeplizard.com/learn/video/lEfrr0Yr684 It has mostly the same content as the video but is in written format. The top of the blog focuses on unsupervised learning in general before jumping into examples.
@justchill99902 5 years ago
Great explanation. You always pull it off so incredibly. Question - you said "accuracy" is not the metric to judge the performance of a clustering algorithm. So how do we judge its performance?
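Picking up the question above: without labels, clustering is usually judged with internal metrics such as inertia (within-cluster spread) or the silhouette coefficient, which rewards points that sit close to their own cluster and far from the nearest other cluster. A minimal pure-Python silhouette sketch for 1-D toy data (the values are made up, not from the video):

```python
def silhouette(xs, assign):
    """Mean silhouette coefficient for 1-D points with cluster ids."""
    scores = []
    for i, (x, c) in enumerate(zip(xs, assign)):
        # cohesion: mean distance to points in the same cluster
        same = [abs(x - y) for j, (y, d) in enumerate(zip(xs, assign))
                if d == c and j != i]
        # separation: mean distance to the nearest other cluster
        other = {}
        for y, d in zip(xs, assign):
            if d != c:
                other.setdefault(d, []).append(abs(x - y))
        a = sum(same) / len(same)
        b = min(sum(v) / len(v) for v in other.values())
        scores.append((b - a) / max(a, b))  # in [-1, 1], higher is better
    return sum(scores) / len(scores)

xs = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
good = [0, 0, 0, 1, 1, 1]  # matches the real structure
bad = [0, 1, 0, 1, 0, 1]   # arbitrary split
print(silhouette(xs, good) > silhouette(xs, bad))  # True
```

Libraries such as scikit-learn ship a `silhouette_score` for the general case; this hand-rolled version is just to show what the metric measures.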
@LatpateShubhamManikrao 2 years ago
That was some clear explanation there!
@canmetan670 6 years ago
For a 5 minute video, this was a great explanation. Thanks.
@deeplizard 6 years ago
Thanks, Can!
@mabasadailycode1781 2 years ago
Thank you, great video 🕺
@aliiabedii 2 years ago
thank you
@AmakanAgoniElisha 4 years ago
Hi, thanks for the video. Is it necessary to perform feature selection or extraction if you intend to perform unsupervised learning?
@arjunbakshi810 4 years ago
Would love to learn about autoencoding too
@eleccafe98 1 year ago
Perfect
@technicallyluke9993 2 years ago
finally understood :)
@jonyejin 1 year ago
Unsupervised learning: doing tasks without correct labels, i.e., learning good feature representations. Clustering task: mapping inputs to a representation in which the data can be clustered nicely. Autoencoder: learning a good vector representation so that noisy data can be denoised.
@afdanv 3 years ago
{ "question": "One common application of autoencoders is:", "choices": [ "Denoising data", "Detrending data", "Predicting labeled data", "Reducing Inputs" ], "answer": "Denoising data", "creator": "Alex D", "creationDate": "2021-10-01T11:50:50.123Z" }
@MyFlabbergast 1 year ago
I hope a full-fledged tutorial on autoencoders (concept + code) did get added later on.
@gustavomartinez6892 6 years ago
Very good information! Very simple. Great job, genius, excellent.
@qusayhamad7243 3 years ago
Thank you very much for this clear and helpful explanation.
@levtunik997 4 years ago
I couldn't understand from the video how two clusters without a label help us in solving something. BTW, great video. Short and concise.
@thenkprajapati 5 years ago
Please create a video on Autoencoders. If you already have, please share the link.
@deeplizard 5 years ago
Thanks for the recommendation, Naresh!
@Qornv 6 years ago
Thank you again for these videos
@deeplizard 6 years ago
You're welcome, member! Thanks for watching!
@AnimilesYT 4 years ago
Could an autoencoder also be used to heavily compress video footage so that we can get low bitrates while still getting good image quality? Maybe it could get one or two normally compressed frames per second and use those images as a reference to what the other images are supposed to look like, but this is just pure speculation and I have no clue if this could add any value to the network.
@sgrouge 6 years ago
Very clear. Thanks
@luistorres7661 2 years ago
{ "question": "What is the main difference between supervised and unsupervised learning?", "choices": [ "The input data is not labeled", "The input data must be reconstructed", "The data is clustered by its structure", "The loss function is a logarithm" ], "answer": "The input data is not labeled", "creator": "Luis Torres", "creationDate": "2022-01-05T14:11:32.851Z" }
@rohitjagannath5331 6 years ago
Great videos so far, presented in a concise way. Can I know when exactly you would be coming up with a video series on autoencoders (unsupervised learning)?
@deeplizard 6 years ago
Thanks, rohit! I currently don't have an exact time frame for the coverage of autoencoders, but it is definitely on my list!
@rohitjagannath5331 6 years ago
deeplizard I guess you can touch on GANs (Generative Adversarial Networks) as well. I'm really looking forward to those videos. My research on the latter starts in a few days... But great work from your end. Appreciate it.
@raajanand2 3 years ago
Could you post a link to this presentation?
@Waleed-qv8eg 6 years ago
Hello again! Do you think we could use autoencoders as an early step when building a model for image detection, for example, so we can get images with no noise and the training set will be clean before the prediction step? What I mean is: are autoencoders a good start for training an image detection model? Is this right? Thanks!
@deeplizard 6 years ago
Hey الانترنت لحياة أسهل - The autoencoder will first need to be trained on non-noisy images so that it can learn the important features of the data. Then, with what it has learned from training, it can accept noisy images and denoise them based on its knowledge of the images it was originally trained on. If you passed the model noisy images to begin with, it wouldn't have prior knowledge of the "important" features of the images, so it wouldn't be able to decipher between these features and noise. You'd have to train it first on clear images so the model could learn which features are important. Does this help clarify?
@Waleed-qv8eg 6 years ago
deeplizard Thank you, I got it. What about a function that removes noise first to make the images clear, and then passes them to a model built to perform whatever task we want? Sorry, I'm asking a lot, but the reason is that I'm interested in image processing in the field of machine learning! Have a great day!
@deeplizard 6 years ago
No problem, الانترنت لحياة أسهل. I'm not aware of a function myself that will do this from a neural network standpoint since the ones I'm aware of, like autoencoders for example, will need to be trained first to recognize what is important versus what is considered noise.
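The denoising idea in this thread can be sketched with a linear stand-in for an autoencoder. This is purely illustrative and not the network from the video: the "code" is just the coordinate along the structure of the clean data (here, a line y = 2x that the clean points are assumed to lie on), and decoding reconstructs onto that line, discarding the off-line component of the noise.

```python
import math

# Assume clean data lies on the line y = 2x. A denoising autoencoder would
# learn this structure from clean examples; here we hard-code it.

def encode(x, y):
    # least-squares projection coefficient onto the direction (1, 2)
    return (x + 2 * y) / 5

def decode(t):
    # reconstruct a point on the learned line
    return (t, 2 * t)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

clean = (1.0, 2.0)
noisy = (1.3, 1.7)  # the clean point plus noise pushing it off the line

recon = decode(encode(*noisy))
print(dist(recon, clean) < dist(noisy, clean))  # True: noise reduced
```

A real autoencoder learns a nonlinear version of this encode/decode pair from data, which is exactly why it must see clean examples first, as the reply above explains.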
@photographymaniac2529 4 years ago
You nailed it, ma'am 👏👏👏
@barney3142 6 years ago
(In theory) Can I train a model unsupervised with a lot of data and later on give labels (manually) to the groups and use the trained model for classification?
@deeplizard 6 years ago
Hey Barnabás - Perhaps, but the types of unsupervised models we used in this video would not work well for a classification task. For the scenario you described, it sounds like semi-supervised learning would be more appropriate. This topic is covered here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-b-yhKUINb7o.html
@MinhVu-fo6hd 5 years ago
I love your voice. You have such a beautiful voice. Cheers.
@MRGCProductions20996 4 years ago
you are insane
@lancemarchetti8673 1 year ago
Brilliant!
@thespam8385 4 years ago
{ "question": "Autoencoders output:", "choices": [ "A reconstruction of the input", "An encrypted form of the data", "An estimation of the input's label", "A feature map" ], "answer": "A reconstruction of the input", "creator": "Chris", "creationDate": "2019-12-12T04:06:11.601Z" }
@deeplizard 4 years ago
Thanks, Chris! Just added your question to deeplizard.com
@gerelbatbatgerel1187 5 years ago
ty
@muji_dipto 3 years ago
{ "question": "Which of the following is a use case of Autoencoders?", "choices": [ "Denoise data or images in the inputs", "Convert categorical data to numeric data", "Reduce overfitting ", "Denoise data or images in the outputs" ], "answer": "Denoise data or images in the inputs", "creator": "AresThor", "creationDate": "2021-06-29T12:36:39.046Z" }
@rodom.8753 5 years ago
It is the same learning, but in an organized auto-coding (pre-coded) way. AI is not smarter; it just records more variable input.
@primodernious 5 years ago
I think I know how we're supposed to make an artificial brain. We need not layers but separate single-layered networks, each specialized in a different type of data, then an output network that routes the different types of pretrained input data to another set of outputs, but by allowing this network to self-classify one type of data against another, so that its output goes into another network designed, for example, to synthesize speech. We need to think of input and output as a hierarchy, like a pyramid. We also need to take the output of a neural-network-based speech synthesizer and feed it into a sound-recognizer network as primary input, together with ordinary sound input, so that the larger middle network can hear itself, or see that what it sees is the same as what it already recognized. I ran someone's neural-network-based chatbot and got the impression it was not just thinking but had some reasoning skills too. That made me think the network was able to separate input data from previous input data, which would mean a neural network could just as well separate different types of input data as similar ones. It's the structure of the wiring that is the secret of an artificial brain model, not how many nodes and layers you have. If we want a robot structure, we need to feed different types of input into separate networks, which are then fed into a larger network, which is then fed into individual smaller networks again to create output. The reason we need smaller networks feeding into a larger one is that the larger one would act as the brain circuit of the self, while the smaller ones behave like lesser but more specific brains. I assume the sound in your ear goes into your brain lobes first and is then wired from network to network, higher and higher up the hierarchy, until it reaches the top of the pyramid, where it gets split into other specialized networks designed to do specific tasks.
In the human body, some networks would drive muscles and touch sensing, and others vocal-cord speech synthesis.
@Nandu369 6 years ago
Will there be a video on autoencoders?
@deeplizard 6 years ago
Hey ch - We have autoencoders on our list of potential topics to cover in future videos!
@Christian-mn8dh 5 years ago
So a GAN is an autoencoder?
@edmonda.9748 5 years ago
Autoencoders, autoencoders, autoencoders, ....please
@xiaomichina5884 4 years ago
After listening to such a sweet voice, my brain neurons are predicting how beautiful you are... Here is the prediction result: Train => (listen to sweet voice); Validation => 0.86; Testing => your reply.
@longmai9343 4 years ago
There is no such thing as unsupervised learning. There are only clustering, semi-supervised learning, and supervised learning.
@KshitizKamal 4 years ago
It's bad.