As a side note, one of the first machine learning programs I ever built was an email filter, which we called a "ham-spam classifier." Makes me almost want to get more emails! Almost.
If we made bagels as small on average as donuts, John Green Bot would get much lower precision, because diameter and mass have no real causal relation to what makes something a donut or a bagel. Learning depends on the real causal relations and on the procedure that was designed to experience them, and the procedure's design in turn depends on those causal relations.
Most of Algeria is in the Sahara desert, although most of the population lives on the Mediterranean coast. I have a feeling that you already knew that though ;P Tech fact!
It's important to note that Supervised Learning and Neural Networks don't always have to go together. You can train a neural network using a different method, and you can use supervised learning on AI models other than neural networks.
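To illustrate the point above, here is a minimal sketch of supervised learning without any neural network: a one-feature "decision stump" that learns a diameter threshold from labeled examples. The data points and labels are invented for illustration.

```python
# Supervised learning with a non-neural model: a one-feature decision stump.
# Training data (diameter in cm, label) is made up for illustration.
examples = [(7.0, "donut"), (8.0, "donut"), (9.0, "donut"),
            (10.5, "bagel"), (11.0, "bagel"), (12.0, "bagel")]

def train_stump(data):
    """Pick the midpoint threshold that misclassifies the fewest examples."""
    best_t, best_err = None, float("inf")
    xs = sorted(x for x, _ in data)
    for a, b in zip(xs, xs[1:]):
        t = (a + b) / 2
        # Count errors if we predict "bagel" for anything at or above t.
        err = sum((x >= t) != (label == "bagel") for x, label in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

threshold = train_stump(examples)

def predict(diameter):
    return "bagel" if diameter >= threshold else "donut"
```

The "learning" here is just picking the best threshold from labeled data, yet it is still supervised learning, with no neurons in sight.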
I was wondering that too. I also love bagels. Sure, I don't want to bite into a bagel when I'm expecting a donut, and I don't want to bite into a donut when I'm expecting a bagel, but like... they're both good.
With a name like Boudreau, I'd like to assume your science teacher in elementary school was your experience with Brain Pop, and that you went to a high school just across the street lmao
Guys, I just want to say thank you in the name of HUMANITY! Your job is so important, and trust me, I'm a communicator myself. I will spread this and work toward making it truly common knowledge. Thanks, for real, THANKS.
I disagree with the concept of toasting because I feel that it detracts from the texture and flavor of Noo Yawk bagels (and makes the cream cheese runny and messy). However, you've given me the idea of toasting donuts (particularly the "cake" type), in order to possibly achieve the consistency of home-baked corner pieces of cornbread!
An important point about 5:31: yes, the electrical action potentials between neurons are all the same size, but a real neuron also has different connection weights to other neurons, so real neurons can effectively produce different-sized signals too.
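The idea in the comment above can be sketched as an artificial neuron: every incoming spike is the same "size" (0 or 1), but each connection has its own weight, so inputs contribute different amounts. The weights and bias below are made-up illustrative values.

```python
# Sketch of an artificial neuron with per-connection weights.
# All input spikes are the same size (0 or 1), but weights differ,
# so each connection's effective contribution differs.
weights = [0.9, 0.2, -0.5]  # illustrative synaptic strengths
bias = -0.4

def neuron(spikes):
    """spikes: list of 0/1 inputs. Fires (returns 1) if the weighted sum plus bias is positive."""
    activation = sum(w * s for w, s in zip(weights, spikes)) + bias
    return 1 if activation > 0 else 0
```

A spike on the strong first connection alone fires the neuron, while the same-sized spike on the weak second connection does not.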
Well, at human synapses the communication is not only electrical but electrochemical. The levels of those chemical components are what actually produce the responses, the neuron's electrical firing, and those levels are also influenced by many other environmental, genetic, or individually inherent conditions. So it's not that easy to compare humans and AI, or to create AI by copying human behaviour.
@@TheTariqibnziyad Neuroscience is a multidisciplinary area. So why shouldn't an AI professional be talking about biology concepts? It's all about sharing knowledge, pal.
So I'm curious, is JohnGreen-Bot an actual AI computer? Or is it just hypothetical? I don't know much about computer science, but I really enjoy watching this course.
I did the initial lessons of the "artificial neural networks" course on Brilliant, which resemble this episode - and it was a light and helpful start. I would like to check the Coursera course too, ty :D
I think we’re using the word precise incorrectly. The word you want to be using is accuracy. If you always say you have one bagel or one donut then you are using absolute precision. If you say there’s a 75% chance that’s a bagel then you’re being precise to two significant figures. Neither of these has any impact on whether the answer is correct. If you get the answer right 80% of the time, that’s how accurate you were. An AI that always says “that’s a bagel or a donut” will have low precision but extremely high accuracy because it will always be right (assuming it’s only ever shown bagels and donuts)
That's how the terms are used in other fields. In machine learning (and statistical modeling more generally), they are different. Accuracy, precision, recall, F-1, sensitivity, specificity, and AUC are some of the metrics used that all have a specific meaning in this context. There are some great videos about it if you look up the term "confusion matrix". Heck, a few of my students made some that weren't too bad that might still be up somewhere.
@@shadebug, because the words "accurate" and "precise" are too broad in their common use to be useful in machine learning. We need to be able to specify how well it performs from different perspectives, in different situations, and within certain contexts. The type of language used in other fields just isn't sufficient. So, we use language borrowed from other fields like early 20th century botany, demographics in the 1950s, and sports reporting in the 1980s. Then we have a framework that unites the disparate names into a unified taxonomy.
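The metrics named in this thread can be computed directly from a confusion matrix. The counts below are invented for a toy "bagel detector"; only the formulas matter.

```python
# Toy confusion matrix for a hypothetical "bagel detector" (counts are made up):
tp, fp, fn, tn = 30, 10, 5, 55  # true positives, false positives, false negatives, true negatives

accuracy  = (tp + tn) / (tp + fp + fn + tn)  # fraction of all answers that were right
precision = tp / (tp + fp)                   # of everything called "bagel", how much really was
recall    = tp / (tp + fn)                   # of all real bagels, how many were caught
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
```

This is exactly why ML borrows these distinct terms: a model can have high accuracy while precision or recall is poor, and each number answers a different question.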
I know this is simplified for the purpose of explanation, but they really missed an opportunity to add a simple classification based on the ratio of mass to size: if it's large, it's a bagel, unless it's light. Picking light for donuts and heavy for bagels is good, but considering that all the data is there, it makes more sense to do it as a ratio.
At 2:53, this description of a neuron's working mechanism is so simplified that it's easy for students to misunderstand it. In reality, a neuron's working mechanism is much more complex than you might think (because of natural selection). At 3:10, neurons talk to each other by passing neurotransmitters (the electrical signal is not the only way one neuron talks to another), and a neuron doesn't send its electrical action potential directly into another neuron (this animation makes it easy to get that wrong). The electrical signal triggers neurotransmitters to be passed to the next neuron, and the receiving neuron won't spend energy responding to a neurotransmitter it doesn't recognize.
lol, I didn't catch the intro theme on the last video, so I just thought John Green was tossing some voice clips at you for your channel... Here I am on the second video discovering this is Crash Course. Good on ya!
Seriously, I'm worried about them not covering basics first. Is it assumed students came into this already having completed some other crash course series?
Of course it can; a standard audio cassette could hold around 130 MB of audio data (like that of a dial-up modem), and that's plenty for a simple AI. But cassette technology has come a long way: standard-sized data cassettes used for backup and storage can hold MANY gigabytes of data, and some newer ones can hold more than a hundred terabytes. ...The problem comes from random read cycles... it takes hours to load all of the data from the cassette.
Precision = likelihood of it actually being a positive, given that the model told you it was a positive. Recall = likelihood of the model calling it a positive, given that it actually is a positive. ...right? The explanation kinda confused me. Neither alone is a perfect measure, and it's best to factor both into how you measure your model's usefulness. Depending on what you're measuring, it can be a lot more tolerable for one to be bad than the other.
Why would you use mass and diameter to sort bagels and donuts? Wouldn't other characteristics like sweetness, appearance, and provenance be more useful? E.g., it looks like it has a frosted topping, it tastes sweet, and it came from a donut shop; therefore it's a donut.
Fraser McFadyen He did mention that including more characteristics would increase the processing power needed. And the original objective is classifying it before tasting it.
And here's me, watching all of these, patiently waiting for some mention of bottom up AI. Both videos have been interesting so far, but mimicking human brain function to me is more realistic, if exponentially harder, for getting the results we're ultimately looking for.
Yeah, this has been an incredibly strange way to present it. I'm used to much simpler things like KNN or decision trees being taught early on instead of trying to jump straight into neural networks.
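For comparison, KNN really is only a few lines. This is a minimal k-nearest-neighbors sketch over (mass g, diameter cm) points; the training data is made up for illustration.

```python
# Minimal k-nearest-neighbors classifier on (mass_g, diameter_cm) points.
# Training data is invented for illustration.
from collections import Counter
import math

train = [((100, 10), "bagel"), ((110, 11), "bagel"), ((95, 10), "bagel"),
         ((60, 9), "donut"), ((55, 8), "donut"), ((65, 9), "donut")]

def knn(point, k=3):
    """Label a point by majority vote among its k nearest training examples."""
    nearest = sorted(train, key=lambda item: math.dist(point, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

No training step, no weights, no backpropagation, which is why it's such a common first classifier to teach.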