I'm a software engineer by trade and have to say I love this series thus far. Shows a great simplification of neural networks and the concepts that help them run. I can't wait to see how the lab is put together :D
As an AI scientist, I'm echoing the comments from others: this was an excellent explanation of the basics of neural network training, without digging into the complexities of gradient descent. Great job, can't wait for the lab!
I'm a college student learning this stuff in class and you have helped me so much; also, the altitude metaphor is an amazing way of visualizing this. Thanks Jabrils!
Add-on fact: Computational optimization of molecular stability (i.e. in drug development) is done with a similar algorithm. The "plane" used in that case is entropy (or energy). Re-inserting energy into the system to watch it change molecular structure to a possibly lower energy level (higher stability) is a common step in that process.
I started my CS degree with the intention of going into game dev. But the BS in the game industry, carykh, and this are making me consider refocusing on AI. Thanks Jabril and Crash Course!
Just goes to show how important it is to talk to industry SMEs (Subject Matter Experts) when determining what's important. Anyone who has worked at a pool would tell you that whether a day is a weekend or school holiday, and whether the swim school or aqua classes are running, will have a significant effect on customer numbers, and therefore on staffing requirements. Basically, if you're not reducing your error bars, maybe you've missed a significant factor.
@@Rithmy I dunno, maybe it's a pool with a huge Chinese or Roma community nearby, and a stack of the patrons DO decide whether to swim based on horoscopes. You might want to ask more questions before arbitrarily dismissing things.
This assigning blame backpropagation thing reminds me how people work. Just add that some neurons are never to blame (they are bosses) and other neurons never change their behavior: boom, you have an average workplace.
Are the AI weights a sort of universal measure, i.e. single weights for a given area of analysis? In other words, are the weights adjusted as singular weights that are then relevant to any question that may be input into the neural network? Or are there a multitude of weights for each area of analysis (e.g., an area such as weather) that make the neural network more fluid and open to being adjusted for specific questions, rather than adjusted every time for every possible question that may come its way? Or does it depend on the neural network in question?
Well, that, but also there's a lot of matrix math involved in backpropagation, and GPUs are designed to optimize matrix math (since it's also commonly used for graphics transformations).
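To make the point above concrete, here is a minimal sketch of why backpropagation is "a lot of matrix math." For a single hypothetical linear layer `y = x @ W` (the layer, shapes, and variable names are all illustrative, not from the video), the backward pass boils down to two matrix products, exactly the kind of operation GPUs are built to parallelize:

```python
import numpy as np

# Hypothetical single linear layer: y = x @ W.
# grad_y is the error gradient arriving from the next layer.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 100))       # batch of 32 inputs, 100 features
W = rng.standard_normal((100, 10))       # layer weights
grad_y = rng.standard_normal((32, 10))   # incoming gradient

grad_W = x.T @ grad_y   # gradient w.r.t. the weights, shape (100, 10)
grad_x = grad_y @ W.T   # gradient passed back to the previous layer, (32, 100)

print(grad_W.shape, grad_x.shape)
```

Stacking many such layers means many such products per training step, which is where the GPU speedup comes from.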
Say the medications have available routes (IV, oral, etc.), and the assessment and implementation summaries have sections only for those routes. Every route for a medicine also carries that medicine's basic special-instruction text, and the amount of each ingredient may have a per-day limit that must be compared against other meds containing the same ingredient, so there are rules on amount per day. Programmatically, use AI to cut the text down, then have licensed staff review, authorize, and edit it. Then check patient outcomes for the causes of any errors (stress, alert fatigue, too much reference info on hand), feed that back in to produce a more precise summary, and repeat the licensed-staff review of that text. Battle-hardened and fast to generate, but do this with supervised role-play training to make up for the columns of data that information services don't cover yet.
All the weights change, every time. The question is - change to what? That's what we're trying to explain by the jungle analogy. The weights are the latitude and longitude, and the error is the altitude. The weights are adjusted so as to minimize the altitude/error.
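The jungle analogy above can be sketched as plain gradient descent. This is a toy illustration, not the video's actual lab code: the two weights play the role of latitude and longitude, a made-up `altitude` function plays the role of the error, and each step moves the weights downhill:

```python
import numpy as np

# Toy "jungle" error surface: altitude as a function of two weights.
# The lowest point is at (3, -1); all names and numbers are illustrative.
def altitude(w):
    return (w[0] - 3.0) ** 2 + (w[1] + 1.0) ** 2

def gradient(w):
    # Hand-derived gradient of the altitude above.
    return np.array([2 * (w[0] - 3.0), 2 * (w[1] + 1.0)])

w = np.array([0.0, 0.0])   # starting position in the jungle
lr = 0.1                   # step size
for _ in range(200):
    w -= lr * gradient(w)  # step in the downhill direction

print(w)                   # ends up near (3, -1), the lowest altitude
```

Real training works the same way, except the "altitude" is the network's error on the data and the gradient is computed by backpropagation instead of by hand.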
The majority of this video is essentially about gradient descent, but without actually mentioning it... PS: if you have a very large data set it can be faster to use stochastic gradient descent.
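For anyone curious about the PS: stochastic gradient descent estimates the gradient from a small random mini-batch instead of the whole dataset, so each step is cheap even when the dataset is huge. A minimal sketch on a made-up least-squares problem (all data and parameters here are invented for illustration):

```python
import numpy as np

# Fit y ≈ X @ w by mini-batch stochastic gradient descent.
rng = np.random.default_rng(1)
N, d = 10_000, 5
X = rng.standard_normal((N, d))
true_w = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = X @ true_w                        # noiseless synthetic targets

w = np.zeros(d)
lr, batch = 0.01, 32
for _ in range(2000):
    idx = rng.integers(0, N, size=batch)   # random mini-batch of 32 rows
    err = X[idx] @ w - y[idx]
    w -= lr * (X[idx].T @ err) / batch     # gradient of the mean squared error

print(np.round(w, 2))   # close to true_w
```

Each step touches only 32 of the 10,000 examples, which is the whole point: noisier steps, but far more of them per second.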
Oh, John Green Bot is Q*bert! Q is for quantum; let's do it in Q# with Microsoft AI flowchart programming. I do visual blocks better than raw math (like SignalR, honestly). I'm even better with 'Enhanced Intelligence', Ei Ei O, using Excel and people's actual workflow, but...
Don't worry: I'm a computer scientist who took several machine learning courses at uni and has since re-learned it several times... and I still don't know how to do it. The math is complicated.
My friend, I'm Arab and I'm watching this now even though my English isn't very good. All you have to do is download the screen translet app and turn on the English subtitles; it translates right on the screen.
Reminded of that experiment where lasers shot through a pinpoint hole for precision groupings were supposedly affected by thought intent, true nonrandom interference, a true fringe-science experiment.
I unsubscribed from this channel over a year ago. So why am I suddenly re-subscribed? Imagine my confusion when I saw this video pop up in my subscription box... I've heard of other people mentioning things like this before, but I always thought they were either lying or just not remembering correctly. Today I eat crow. Goodbye again, and hopefully this time it will stick.
I had high hopes for this series, but tbh I'm very disappointed... way too diluted with John Green Bot and poor storytelling analogies. The material is exciting enough. I will try to stick with it (no promises), mostly because your computer science series was so fantastic.
The subject is interesting, but honestly he doesn't feel like he actually knows what he is talking about; it comes across much more like he just reads stuff other people wrote. It is a combination of his intonation, vocal rhythm, and (quite frankly overly) exaggerated body language, especially with his hands. Not a single motion or phrase is made without him moving his hands a lot, and it is off-putting. I am sure he knows his stuff, but his demeanor doesn't support much of this notion.
So people who talk with their hands and rehearse before presenting in front of a camera are less likely to know the subject they're presenting? Hm, seems like your internal neural network has overfit and learned some spurious correlations!