Machine Learning / Deep Learning Tutorials for Programmers playlist: ru-vid.com/group/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU Keras Machine Learning / Deep Learning Tutorial playlist: ru-vid.com/group/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL
Yes, please do a video on autoencoders as well. And just to let you know, yours are the best videos I have found so far. Best and easiest for understanding the concepts.
I really love this playlist. It gives you a clear understanding of Machine Learning and Deep Learning! I have a comment: in Python, [1, 2, ...] is called a list, while a tuple would look like this: (1, 2, ...). [(1, 2), (1, 15)] is a list of tuples, and [[1, 2], [3, 4]] is a list of lists! Thank you so much!
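The syntax distinction above can be checked directly in any Python interpreter; this tiny snippet is just an illustration of the comment's point:

```python
a_list = [1, 2, 3]                  # square brackets -> list (mutable)
a_tuple = (1, 2, 3)                 # parentheses -> tuple (immutable)
list_of_tuples = [(1, 2), (1, 15)]  # e.g. (height, weight) samples as tuples
list_of_lists = [[1, 2], [3, 4]]    # the same shape of data, but mutable rows

print(type(a_list).__name__, type(a_tuple).__name__)  # list tuple
```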
It might be worth mentioning that you can still validate unsupervised learning with accuracy. You can, for example, use a labeled validation set as a benchmark for the unsupervised learner. Suppose a speaker recognition task: you can come up with a labeled data set for this purpose. Then you apply the data while the learner trains from scratch as it goes, and afterwards you can validate against this data set. The assumption is that the learner will do similarly well on a new set of speakers. Note that it might not make sense to apply supervised learning in the first place, as the set of speakers might very well change from run to run.
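As a hypothetical pure-Python sketch of that idea (the tiny 1-D k-means and all data below are made up for illustration): cluster unlabeled data first, then use a separately labeled validation set only as a benchmark afterwards.

```python
def kmeans_1d(points, iters=20):
    """Cluster 1-D points into 2 groups; returns sorted cluster centers."""
    centers = [min(points), max(points)]  # simple spread-out initialization
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

def assign(p, centers):
    """Index of the nearest cluster center."""
    return min(range(len(centers)), key=lambda i: abs(p - centers[i]))

# Unlabeled training data: two well-separated groups of "speakers".
train = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers = kmeans_1d(train)

# Labeled validation set, used only for scoring after training. Because the
# centers are sorted, cluster indices happen to line up with the labels here.
labeled_val = [(1.1, 0), (0.9, 0), (9.2, 1), (8.8, 1)]
accuracy = sum(assign(x, centers) == y for x, y in labeled_val) / len(labeled_val)
print(accuracy)  # 1.0 on this toy data
```

In a real task the cluster-to-label mapping would itself have to be estimated (e.g. by majority vote per cluster), since an unsupervised learner has no notion of label order.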
New question proposed for the related quiz, kudos for these amazing courses!!
{
  "question": "Accuracy is typically a metric used in the unsupervised learning process",
  "choices": ["False", "True", " ", " "],
  "answer": "False",
  "creator": "Hivemind",
  "creationDate": "2021-08-17T22:53:53.216Z"
}
First things first, this is the best series about ML on RU-vid out there. But I want to know why you keep saying 'tuples' at 1:47 when referring to the list of height and weight samples. Is it a convention to call samples tuples, or should the data actually be in tuple format?
Thank you :) For the example mentioned, each sample had two features: height and weight. A tuple, just being a finite ordered list of elements, would be one appropriate way to store such a sample. A list, an array, or any other type of data structure would be fine as well. Whatever you choose to store your data in, you'll likely need to process it to be in a particular format anyway before you send it to your model. Examples of such processing are shown in the Keras series.
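As a small, made-up illustration of that last point (the samples and scaling constants below are invented for the example, not from the video): (height, weight) tuples being converted and scaled before going to a model.

```python
# Hypothetical (height_cm, weight_kg) samples stored as tuples.
samples = [(170, 65), (158, 52), (183, 80)]

# One common preprocessing step: convert to lists of floats scaled to [0, 1].
# The divisors are assumed maximums, chosen only for this sketch.
MAX_HEIGHT, MAX_WEIGHT = 200.0, 120.0
processed = [[h / MAX_HEIGHT, w / MAX_WEIGHT] for h, w in samples]

print(processed[0])  # first sample scaled: [0.85, ~0.54]
```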
Hi! All previous videos have been great so far. Thank you! But for this particular one I felt you were talking mostly about Autoencoders and not Unsupervised Learning in general, as you did when covering Supervised Learning in the last video. Is there a more in-depth video in any playlist? Anyway, thanks a lot! I appreciate your work!
Hi Vinícius - You're welcome! In general, unsupervised learning only means that we train our model with *unlabeled* data. This seems a bit abstract and hard to conceptualize in its own right, especially after only being exposed to supervised learning techniques. To illustrate how we can train models without labeled data, we explore the common unsupervised learning techniques of autoencoders and clustering. You may also find it helpful to study the corresponding blog for this video as well: deeplizard.com/learn/video/lEfrr0Yr684 It has mostly the same content as the video but is in written format. The top of the blog focuses on unsupervised learning in general before jumping into examples.
Great explanation. You always pull it off so incredibly. Question - you said "accuracy" is not the metric to judge the performance of a clustering algorithm. So how do we judge its performance?
Unsupervised learning: doing tasks without correct labels, i.e., learning good feature representations. Examples: Clustering: mapping inputs to a representation in which the data can be clustered nicely. Autoencoder: learning a good vectorized representation so that noise can be removed from noisy data.
Could an autoencoder also be used to heavily compress video footage, so that we can get low bitrates while still getting good image quality? Maybe it could get one or two normally compressed frames per second and use those images as a reference for what the other frames are supposed to look like, but this is pure speculation and I have no clue whether this could add any value to the network.
{
  "question": "What is the main difference between supervised and unsupervised learning?",
  "choices": ["The input data is not labeled", "The input data must be reconstructed", "The data is clustered by its structure", "The loss function is a logarithm"],
  "answer": "The input data is not labeled",
  "creator": "Luis Torres",
  "creationDate": "2022-01-05T14:11:32.851Z"
}
Great videos so far, presented in a concise way. Can I know when exactly you will be coming up with the video series on autoencoders (unsupervised learning)?
deeplizard I guess you can touch on GANs - Generative Adversarial Networks - as well. I'm really looking forward to those videos. My research on the latter starts in a few days... But great work from your end. Appreciate it.
Hello again! Do you think we could use autoencoders as an early step when building a model for image detection, for example, so we can get images with no noise and the training set will be clean before the prediction step? So what I mean is: are autoencoders a good start for training an image detection model? Is this right? Thanks
Hey الانترنت لحياة أسهل - The autoencoder will first need to be trained on non-noisy images so that it can learn the important features of the data. Then, with what it has learned from training, it can accept noisy images and denoise them based on its knowledge of the images it was originally trained on. If you passed the model noisy images to begin with, it wouldn't have prior knowledge of the "important" features of the images, so it wouldn't be able to distinguish between these features and noise. You'd have to train it first on clean images so the model could learn what features are important. Does this help clarify?
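To make the train-on-clean-data point concrete, here is a minimal pure-Python sketch (invented for illustration, and far simpler than a real Keras autoencoder): a tiny linear autoencoder with a 1-D bottleneck is fitted to clean 2-D points lying on the line y = x; afterwards, a noisy point fed through it is pulled back toward that learned structure.

```python
import random

random.seed(0)
# Clean training data: 2-D points on the line y = x (the "important" structure).
clean = [(t, t) for t in (random.uniform(-1, 1) for _ in range(200))]

w_enc = [0.1, 0.1]  # encoder weights: 2 inputs -> 1 code unit
w_dec = [0.1, 0.1]  # decoder weights: 1 code unit -> 2 outputs
lr = 0.1
n = len(clean)

# Gradient descent on mean squared reconstruction error over the CLEAN data.
for _ in range(500):
    g_enc = [0.0, 0.0]
    g_dec = [0.0, 0.0]
    for x in clean:
        code = x[0] * w_enc[0] + x[1] * w_enc[1]          # encode
        err = [code * w_dec[0] - x[0], code * w_dec[1] - x[1]]  # recon error
        g_dec[0] += code * err[0]
        g_dec[1] += code * err[1]
        back = err[0] * w_dec[0] + err[1] * w_dec[1]      # backprop to encoder
        g_enc[0] += x[0] * back
        g_enc[1] += x[1] * back
    for i in range(2):
        w_dec[i] -= lr * g_dec[i] / n
        w_enc[i] -= lr * g_enc[i] / n

def denoise(p):
    """Pass a (possibly noisy) point through the trained autoencoder."""
    code = p[0] * w_enc[0] + p[1] * w_enc[1]
    return (code * w_dec[0], code * w_dec[1])

print(denoise((1.0, 0.8)))  # ~ (0.9, 0.9): pulled back onto the learned line
```

The model never saw noisy points during training; it simply learned that the data lives on y = x, so any off-line input is projected back toward it - the same intuition behind training a real denoising autoencoder on clean images first.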
deeplizard Thank you, I got it. What if there were a function to remove noise first, to make the images clear, and then we passed them to the model we want to build for whatever task we want? Sorry I'm asking a lot, but the reason is that I'm interested in image processing in the field of machine learning! Have a great day!
No problem, الانترنت لحياة أسهل. I'm not aware of a function myself that will do this from a neural network standpoint, since the ones I'm aware of, like autoencoders, need to be trained first to recognize what is important versus what is considered noise.
(In theory) Can I train a model unsupervised with a lot of data, later assign labels manually to the groups, and then use the trained model for classification?
Hey Barnabás - Perhaps, but the types of unsupervised models we used in this video would not work well for a classification task. For the scenario you described, it sounds like semi-supervised learning would be more appropriate. This topic is covered here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-b-yhKUINb7o.html
{
  "question": "Autoencoders output:",
  "choices": ["A reconstruction of the input", "An encrypted form of the data", "An estimation of the input's label", "A feature map"],
  "answer": "A reconstruction of the input",
  "creator": "Chris",
  "creationDate": "2019-12-12T04:06:11.601Z"
}
{
  "question": "Which of the following is a use case of Autoencoders?",
  "choices": ["Denoise data or images in the inputs", "Convert categorical data to numeric data", "Reduce overfitting", "Denoise data or images in the outputs"],
  "answer": "Denoise data or images in the inputs",
  "creator": "AresThor",
  "creationDate": "2021-06-29T12:36:39.046Z"
}
I think I know how we are supposed to make an artificial brain. We need not layers but separate single-layered networks, each specialized in a different type of data, then an output network that routes the outputs of these different pretrained input networks to another set of outputs, allowing the network to self-classify one type of data against another, so that the output goes into another network designed, for example, to synthesize speech. We need to think of input and output as a hierarchy, like a pyramid.

We also need to take the output of a neural-network-based speech synthesizer and feed it into a sound-recognizer network as primary input, together with ordinary sound input, so that the larger middle network can hear itself, or see that what it sees is the same as what it has already recognized.

I ran someone's neural-network-based chatbot and got the impression it was not just thinking but had some reasoning skills too. That made me think the network was able to separate current input data from previous input data, and that would mean a neural network could just as well separate different types of input data as similar ones. It's the structure of the wiring that is the secret of an artificial brain model, not how many nodes and layers you have.

If we want a robot structure, we need to feed inputs of different types into separate networks that then feed into a larger network, which in turn feeds into individual smaller networks again to create output. The reason we need smaller networks feeding into a larger one is that the larger one would act as the brain circuit of the self, while the smaller ones behave like lesser but more specific brains. I assume that the sound in your ear goes into your brain lobes first and is then wired from network to network, higher and higher up the hierarchy, until it reaches the top of the pyramid, where it is split into other specialized networks designed to do specific tasks. In the human body, some networks would drive muscles and touch sensing, and others vocal-cord speech synthesis.
After listening to your sweet voice, my brain's neurons are predicting how beautiful you are... Here is the prediction result: Train => (listen to sweet voice); Validation => 0.86; Testing => your reply;