
Neural Networks and Deep Learning: Crash Course AI #3 

CrashCourse
16M subscribers
341K views

Published: Sep 23, 2024

Comments: 178
@SavingSpace 5 years ago
By the end of this course John Green Bot will replace John Green.
@i_smoke_ghosts 5 years ago
Yea, dunno if this one's gonna take off mate. Back to the drawing board, ay
@fatimasalmansiddiqui1182 5 years ago
I think John Green will come in at least one episode of this series though.
@MisterJasro 5 years ago
Nah, this is actually a prequel to all other Crash Course series. There never was a "real" John Green.
@kavinumasankar6544 5 years ago
@@MisterJasro Then why is John Green in the credits?
@river_brook 5 years ago
@@kavinumasankar6544 Time travel, of course!
@knack8284 5 years ago
Just remember, the brain perceives things through a series of guesses. So with billions of neurons doing complex statistical analysis, nobody is as bad at math as they think :)
@druidiron 5 years ago
You have clearly never seen me do math.
@knowledgemagnet4077 4 years ago
@@druidiron egotist
@WolfiePH 5 years ago
8:56 > Answers simple question correctly > Moonwalks away. John Green Bot is basically every 1st grade elementary boy ever.
@radreed 4 years ago
Why only boys? What are you tryna say?
@maevab2923 4 years ago
@@radreed Because John Green Bot looks more like a boy than a girl and is also literally named John. Don't be so sensitive.
@Chelsieelf 5 years ago
This is perfect for me to understand AI since I'm taking Neuro! Thank you so much 💙💙
@yourbuddyunit 5 years ago
I'll never be able to articulate how wonderful it is to be learning this from a fellow black man. This is a blessing. Thank you infinitely my brotha. Thank you, you inspire me to be greater.
@idkmy_name7705 1 year ago
I wish Crash Course would make notes for the courses in written format, so I can recall the learnt material easily. Also this series is fireee
@JukeboxTheGhoul 5 years ago
Captcha uses this to teach computers as well as checking if we're human
@kilianblum8161 5 years ago
Neptune Productions small detail, but "teach computers" is misleading. In captcha you label data to train models. A computer would be used to train the model or apply the model later; it doesn't "learn" anything.
@Souchirouu 5 years ago
Yeah, it's kinda weird that the test to prove you're human is training computers to be able to do the same thing. So there will be a point where Captcha will have to change/become more complex, because computers can solve the current generation ones as well as humans can.
@thomas.02 5 years ago
@@Souchirouu Imagine solving complex calculus that even Wolfram Alpha couldn't handle just to sign up for something
@JukeboxTheGhoul 5 years ago
@@thomas.02 There are certain things that humans are really just better at doing than computers are. Take for example AI in video games. A human can probably execute a tactical maneuver, but they take time to process and plan. Computers can take instant action and rush the human before they have a chance to think. (I'm referring to Total War: Attila)
@avaavazpour2786 5 years ago
So they're teaching a bot how to verify that it is not a bot. Fine logic.
@lincolnpepper816 5 years ago
Single best explanation of neural networks I've seen.
@simpleskills7222 5 years ago
Crash Course is the place to be. I especially love this series. This channel has inspired me to create my own channel. It is new and I would love to get some support/guidance on how to improve.
@mikeywatson5654 5 years ago
Keep trying dude
@dejohnny2 5 years ago
Jabril, you hit a home run with this video. 5 stars dude!
@stevenfeldstein6224 4 years ago
I find it strange that Alex Krizhevsky (as of 12/20/19) doesn't have his own page on Wikipedia, nor does he appear in the Wikipedia pages for neural networks or machine learning, yet his work is cited over 84,000 times on Google Scholar.
@cameronhunt5967 5 years ago
You guys should put a link to Sethbling's video about MarI/O.
@MaksymCzech 5 years ago
AI is basically math. To understand how backpropagation learning in neural nets works, you need to know your multivariable calculus, the chain rule, and some undergraduate-level linear algebra. That's all there is to it.
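
A minimal worked example of the chain rule at the heart of backpropagation, for a single neuron with one weight; the numbers are made up for illustration:

```python
import math

# Forward pass: z = w*x, a = sigmoid(z), loss = (a - y)^2
w, x, y = 0.5, 1.5, 1.0
z = w * x
a = 1 / (1 + math.exp(-z))
loss = (a - y) ** 2

# Backward pass, pure chain rule: dloss/dw = dloss/da * da/dz * dz/dw
dloss_da = 2 * (a - y)
da_dz = a * (1 - a)        # derivative of the sigmoid
dz_dw = x
dloss_dw = dloss_da * da_dz * dz_dw

w -= 0.1 * dloss_dw        # one gradient-descent step on the weight
```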
@ShaneHummus 5 years ago
Curiosity made me watch this Crash Course AI series.
@lamidom 5 years ago
It means you have more than one neuron
@whocares2087.1 5 years ago
This is a really great series. **takes notes**
@beingpras 5 years ago
Deep learning and understanding is really what differentiates the most successful people, no matter what field they are in!!
@werothegreat 5 years ago
You have a very mellow, soothing voice. Just wanted to say that!
@MiguelAlastreP 5 years ago
This is awesome. Great big picture about AI spaghetti 5:48
@thepowerful7593 5 years ago
3% chance of that
@hoanganhnguyen9678 12 days ago
Wonderful. Just astonished to learn that as we dive into the "deeper" parts of the neural network, we understand less of what's happening behind the scenes. It can be so abstract that it evades our cognition.
@elihinze3161 5 years ago
I need you to narrate an audiobook. Your voice is so soothing..
@amanatee27 5 years ago
This is a great series, thank you all for taking the time to make it! For future videos, could Jabril's audio be turned up just a bit more? Sometimes the ends of his sentences get quieter and it's harder to catch all the info. Thank you!
@jweezy101491 5 years ago
I already know all this stuff but I love Jabrils and this series is so good.
@hollyg.5516 5 years ago
jweezy2045 had to make your absolute genius apparent, eh?
@28MUSE 5 years ago
Speaking of learning things fast 💟 AI is one industry that requires constant learning. Thanks for this video. This technology is so dynamic that if you don't keep up to date, your knowledge will get outdated.
@nezimar 5 years ago
Nice shout-out to MarI/O!
@Anonarchist 5 years ago
John_GreenBot learns "Dog"; by the end of the series he might learn how to drive, and then DESTROY US ALL!
@Danilego 5 years ago
I just love it when John Green Bot completely ignores what Jabril says and does something random lol
@thepowerful7593 5 years ago
Lol
@hugo54758 5 years ago
I LOVE THIS GUY
@mattkuhn6634 5 years ago
Oh man, I can't wait to see you guys talk about gradient descent! Great job so far!
@mikek4025 5 years ago
What about Google's deep learning neural network used in AlphaZero and AlphaGo? That's pretty cool
@theodorechandra8450 5 years ago
I believe that AlphaGo and AlphaZero feed each point of the Go or Chess board in as one input neuron; in Go they can encode a black piece as -1, an empty point as 0, and a white piece as 1. In Chess, each piece can be assigned a single number.
@mattkuhn6634 5 years ago
AlphaGo is notable less for the basic architecture of its network and more for the way it's trained. The reason Go was considered such a problem is that its decision space is huge. Chess is comparatively simple: it's played on an 8x8 grid, so it was feasible for a computer to calculate on the fly every possible board state given the current state, then simply pick the move that made it most likely to win. That's why computers beat grandmasters at chess during the 90's, almost 20 years before neural networks really took off.

Go, on the other hand, is played on a 19x19 grid, so the decision space is much too large for that kind of brute-force calculation. A simple multi-layer perceptron like this episode shows wouldn't work either, partly because there simply isn't enough data: you would need move-by-move dissections of hundreds of thousands of games at minimum, and probably millions or even billions. They'll get to why this is the case next week when they talk about optimization methods. It also wouldn't work because you would have no way of telling the network what makes a move "good".

The solution was a different method of optimization called a policy gradient, or more broadly, reinforcement learning. In a sense they simulated games of Go, with the computer playing against itself. At every turn, the network takes the board state as input and decides on an action to take; in the case of Go, which point to put a stone on. It starts off making decisions randomly, but you update the weights based on performance: it plays out whole games, probably against itself, and gives a reward to the winning set of actions and a punishment to the losing ones, weighting them up or down respectively. Over many, many games, the system learns a policy: what action to take given a board state. Importantly, it doesn't need to know anything about what moves were made to reach this state, nor does it have to calculate future permutations of the board. Much like the image recognizer, it learns "if I see stones in this configuration, this move is the most likely to win." In this way, it kind of provides its own training data.

AlphaGo is far more complicated than this simple description, of course, and if anyone knows its architecture better than me I'd be glad to hear more about it (I haven't read that paper), but that's the essence of it.
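
A minimal sketch of the policy-gradient (REINFORCE-style) idea described in the comment above, assuming a hypothetical two-action toy game; `env_step` and all sizes here are placeholders, not AlphaGo's actual training code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy policy: softmax over 2 actions, from a 4-feature board state.
W = rng.normal(scale=0.1, size=(2, 4))

def policy(state):
    logits = W @ state
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # action probabilities

def play_episode(env_step, state):
    """Sample actions until the toy game ends, remembering what we did."""
    trajectory, done, reward = [], False, 0.0
    while not done:
        action = rng.choice(2, p=policy(state))
        trajectory.append((state, action))
        state, reward, done = env_step(state, action)
    return trajectory, reward             # reward: +1 for a win, -1 for a loss

def reinforce_update(trajectory, reward, lr=0.01):
    """Weight winning actions up and losing actions down."""
    global W
    for state, action in trajectory:
        probs = policy(state)
        grad_logits = -probs              # grad of log prob w.r.t. the logits...
        grad_logits[action] += 1.0        # ...for the action actually taken
        W += lr * reward * np.outer(grad_logits, state)
```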
@mariafemina 5 years ago
@@mattkuhn6634 Wow, thank you so much for the explanation!! 😍 That's why I often like comments more than the actual vids
@mattkuhn6634 5 years ago
Maria Fedotova Glad it was enjoyable! I'm finishing up grad school on this topic now, and I had a seminar last semester that covered reinforcement learning extensively, so I find it super interesting!
@gadgetboyplaysmc 4 years ago
I can finally hear Jabrils opening and closing his mouth while talking.
@abrahammekonnen 5 years ago
So basically every layer lets you measure more parts of the picture, letting you be more accurate in classifications, right?
@inertiasquared6667 5 years ago
Yes and no: input layers let you measure more, while hidden layers allow the program to do more with the data and 'think about it' in a more complex manner. It also depends on how the weights have been calibrated, though. Hope I could help!
@ASLUHLUHC3 5 years ago
Every pixel is inputted at the start. Watch 3blue1brown's neural network series for a far deeper explanation.
@abrahammekonnen 5 years ago
Thanks for the answers, guys :)
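
A minimal sketch of the layer computation discussed in this thread, assuming a tiny fully connected network in NumPy; the sizes, weights, and "image" are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 16 input pixels -> 8 hidden neurons -> 1 output ("dog or not dog").
W1, b1 = rng.normal(size=(8, 16)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

pixels = rng.random(16)             # stand-in for a flattened image

hidden = sigmoid(W1 @ pixels + b1)  # each hidden neuron: weighted sum, then squash
output = sigmoid(W2 @ hidden + b2)  # read as "how confident are we it's a dog"
print(output)
```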
@kevadu 5 years ago
One thing that was glossed over (though understandably so, because this is just the intro) is how the input features are used. What he described is a simple 'fully connected' layer that treats every input pixel as a separate feature and looks at arbitrary combinations of them. But this is actually not very robust against things like translation: if you trained it on images of dogs that were all centered in the picture and then showed it an image in which the dog is off to the side, it probably wouldn't even recognize it as a dog, because the absolute location of each pixel is extremely important. What almost all image recognition algorithms use today are 'convolution layers'. Rather than training neurons that look at specific pixels, they train 'filters': small groups of weights that get scanned across an image. So the specific pixels input into the filter are constantly changing, but they're always in the same positions relative to each other. This emphasizes relative positions of pixels over absolute positions and makes the whole algorithm a lot more robust as well as easier to train.
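
A minimal sketch of the "filter scanned across the image" idea from the comment above, with a single hand-picked 3x3 kernel in plain NumPy (real systems learn many such filters):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter across the image (no padding, stride 1)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + kh, x:x + kw]
            out[y, x] = np.sum(patch * kernel)   # same weights at every position
    return out

image = np.random.default_rng(0).random((8, 8))
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [2.0, 0.0, -2.0],
                        [1.0, 0.0, -1.0]])       # classic vertical-edge kernel
print(convolve2d(image, edge_filter).shape)      # (6, 6) feature map
```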
@abrahammekonnen 5 years ago
@@kevadu So they basically program a group of pixels to be scanned as one large 'pixel', and that changes based on what you are looking at, right? (Just trying to synthesize the info you said so I can make sure I understand it)
@thelastone0001 5 years ago
I really like this host. I hope Jabroni sticks around for a long time.
@phasingout 5 years ago
Now I want to learn to program. Ty for this
@remuladgryta 5 years ago
3:14 MarI/O? Nice!
@reelsalih 5 years ago
I'm a simple human. I see cute doggos and I click.
@valhernandez4247 5 years ago
R I literally saw the pug and I clicked 🐕
@julioservantes8242 5 years ago
@@valhernandez4247 A pug is an abomination, not a cute doggo.
@zhongliangcai602 5 years ago
I love corgis
@discordtrolls5668 4 years ago
No one cares
@anke4347 7 months ago
This is a great video for complete beginners. Thank you!
@williamwebb2863 4 years ago
How did I not know Jabrils did an AI Crash Course series?!
@BrainsApplied 5 years ago
*I love this series* ❤️❤️❤️
@ОлегКозлов-ю9т 5 years ago
The world isn't just bagels and donuts. Sometimes it's bees and threes.
@nikolasgrosland9341 5 years ago
5:46 Spaghetti is spelled incorrectly, just a heads up.
@photophone5574 5 years ago
3:14 uncredited video from Sethbling.
@evanreidel22 5 years ago
I'm surprised you didn't decide to voiceover your Crash Course as well :)
@element9224 5 years ago
Amazing job with these videos! I'm excited for the next video because I'm stuck on backpropagation. Also, did you mention biases or not?
@IceMetalPunk 5 years ago
They didn't, but a bias can be thought of as an extra weight on each layer (or an extra neuron that's always inputting 1), so it's kind of captured by the simplified discussion of weights. Also, as someone who is a professional software engineer, who has a degree in computer science, and who took several in-depth machine learning courses at uni, let me tell you: I'm still stuck on backpropagation XD The overall idea is simple enough, but I have yet to be able to remember the math that goes into it.
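
A minimal sketch of the "bias as an always-on input" trick from the comment above, with made-up numbers:

```python
import numpy as np

w = np.array([0.5, -0.3])        # weights for two real inputs
b = 0.1                          # bias
x = np.array([0.8, 0.2])

# Standard form: weighted sum plus bias.
z1 = w @ x + b

# Equivalent form: append a constant-1 input and fold the bias into the weights.
w_aug = np.array([0.5, -0.3, 0.1])
x_aug = np.array([0.8, 0.2, 1.0])
z2 = w_aug @ x_aug

assert np.isclose(z1, z2)        # the two forms give the same pre-activation
```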
@element9224 5 years ago
IceMetalPunk Yeah, I ended up just running two versions of the same network, where the better one gets cloned to the other with slight changes to the weights and biases. It works, but takes more computation and time (as cloning the original with differences can make it worse). Also, my network's outputs aren't limited to between 0 and 1; I use different output ranges to tell me what it's saying. E.g., an output of 0.5-1.4 means blank, an output of 1.5-2.4 means blank, etc.
@IceMetalPunk 5 years ago
@@element9224 That is known as neuroevolution, my dude :) As for the output, there's functionally not really a difference between 0-1 and 0-2.4; you can always project one range onto any other proportionally :)
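
A minimal sketch of that clone-and-perturb loop (a simple form of neuroevolution), assuming you supply a fitness function; the toy fitness below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(weights, scale=0.05):
    """Clone the weights with slight random changes."""
    return weights + rng.normal(scale=scale, size=weights.shape)

def evolve(fitness, n_weights=10, generations=200):
    best = rng.normal(size=n_weights)
    for _ in range(generations):
        challenger = mutate(best)
        if fitness(challenger) > fitness(best):  # keep the better of the two
            best = challenger
    return best

# Toy fitness: negative distance to an all-ones weight vector.
target = np.ones(10)
best = evolve(lambda w: -np.sum((w - target) ** 2))
```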
@marcelocondori7761 4 years ago
Such a good and interesting explanation!
@horisontial 5 years ago
John Green-bot? Hahaha, I might be easily delighted
@cakezzi 5 years ago
Deeper neural networks with deeper layers? That's deep
@NawabKhan-vq5de 5 years ago
Amazing, sir, keep it up; we are waiting for more of your tutorials...
@chillsahoy2640 4 years ago
Seeing the cassette made me realize that for many younger viewers, this will be a strange type of old technology they may have never seen before.
@ac3_train3r_blak34 5 years ago
He's back, ladies and gentlemen!!!!
@i_smoke_ghosts 5 years ago
very good Gibraltar! my mans
@i_smoke_ghosts 5 years ago
You know his name is not Gibraltar, ay?
@abrahammekonnen 5 years ago
Thanks for the video, Jabril
@JC-vu6sn 2 years ago
Excellent lesson
@feyisayoolalere6059 5 years ago
Could you do a video comparing how early visual processing / the feedforward process works in the brain? Thanks for being cool!
@Vishal-np9pe 5 years ago
Hey Jabril! I want to thank you for such an informative and easy-to-comprehend lecture. The only thing is that I didn't get the gist of the math imagery. Could you help me out?
@nomobobby 5 years ago
vishal Ghulati What part did you stumble on?
@Vishal-np9pe 5 years ago
@@nomobobby How does the input layer send its input to the next layer?
@idkmy_name7705 1 year ago
I just finished high school. Is the math basically complex probability?
@starboyjadenn 5 years ago
Anyone else watch this _and_ Ted-Ed?
@ammaeaar 5 years ago
Fantastic video
@jasonsoto5273 4 years ago
Good vid!
@CuriousIndiaCI 4 years ago
Thanks... Crash Course
@LiaAwesomeness 5 years ago
Why red, green and blue and not red, yellow and blue (or magenta, yellow and cyan)? And how do the neurons "distribute tasks"? How do they "decide" which neuron of the hidden layers focuses on what?
@navidb 5 years ago
Watched the whole thing, don't understand anything, back to eating hot dogs.
@ASLUHLUHC3 5 years ago
This isn't a very good video imo. Watch Numberphile's neural network videos, and then 3blue1brown's neural network series.
@adammorley6966 5 years ago
Music theory, please?
@programmingjobesch7291 1 year ago
9:12 - Genuinely thought this was a rocket ship when it came on screen... 😐
@supernova5434 4 years ago
It's like Minecraft building: the bigger the scale, the more detail you get
@nickd7986 5 years ago
Wanted Johann Bon Greenvot to fire lasers and summon magic, but he slowly backed away.
@LashknifeTalon 5 years ago
I was expecting him to calculate the possibility that Jabril was a dog.
@robellyosief8820 1 year ago
Jabrill!!!
@1224chrisng 5 years ago
Who ships Jabrils with Carykh?
@gjinn5001 4 years ago
In the past I never even cared for AI, but when the world began to change and humans adopted AI, now I should study some things for future life 😂
@discordtrolls5668 4 years ago
I still don't care, I just gotta watch this for a class
@geoffreywinn4031 5 years ago
Educational!
@ananyapujar6797 4 years ago
Doesn't lighting change Face ID's recognition of people?
@chu8 5 years ago
John Green Bot? Literally Skynet
@sxndra.y543 4 years ago
Ok, can we talk about how big his beanie is compared to his head?
@freddypelo 4 years ago
Tibetan monks discovered this "neural network" long ago. Hundreds of them chant parts of a prayer independently so it is done in one second. The problem is that god is deaf.
@BinaryReader 5 years ago
Glossing over a few details there.
@thomas.02 5 years ago
Why can't the neurons within each hidden layer interact with each other? For example, if a neighbouring neuron got a high number, that would make another neuron (of the same layer) act differently. Is that arrangement helpful or just unnecessarily complicating things?
@openedsaucer 5 years ago
Thomas Chow The network described in the video is called a feedforward network. The kind of network you're talking about does exist and is called a recurrent neural network. It's usually used for sequential data, whereas feedforward nets are used for one-off computations.
@thomas.02 5 years ago
@@openedsaucer Where does sequential data pop up? I'll guess data about the location of a self-driving car?
@openedsaucer 5 years ago
@@thomas.02 Sequential data can come in a bunch of different forms. Usually it comes in the form of a time series. For example, you would use a feedforward net to classify images, and you would use an RNN to classify videos, with each video being a stream of images over a period of time. You're right in that RNNs are likely used in self-driving applications, as data is captured in real time, so to speak. Another place you might want to use an RNN is for something like speech-to-text, where the number of words/syllables in a sentence can vary. Typically you don't want to use RNNs for simple classification, as it's a bit overkill. You can make feedforward nets as big/complex as you want to approximate whatever function you're trying to map.
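
A minimal sketch of the recurrent idea in this reply: the same cell is applied at every time step and carries a hidden state forward; sizes and weights are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# One recurrent cell: the new state depends on the input AND the previous state.
W_in = rng.normal(scale=0.1, size=(4, 3))    # input features -> state
W_rec = rng.normal(scale=0.1, size=(4, 4))   # previous state -> state

def rnn_step(state, x):
    return np.tanh(W_in @ x + W_rec @ state)

sequence = [rng.random(3) for _ in range(5)]  # e.g. 5 frames of features

state = np.zeros(4)
for x in sequence:
    state = rnn_step(state, x)   # the state summarizes everything seen so far
print(state)
```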
5 years ago
What was the guy saying "nope, nope" with his head and closing the laptop looking at?
@gabedarrett1301 5 years ago
What about quantum computers?
@jeffthegangster6065 5 years ago
Is John Green Bot real or is it edited in?
@rosswebster7877 5 years ago
I'm going to guess a bit of both.
@jeffthegangster6065 5 years ago
@@rosswebster7877 Only they know
@jimmyshrimbe9361 5 years ago
I'm pretty sure it's full CGI.
@jeffthegangster6065 5 years ago
@@jimmyshrimbe9361 It seems too real to be edited
@nomobobby 5 years ago
I'm guessing real, but I wonder if he's actually computing the examples or if they're feeding him lines to say.
@rich.n3215 5 years ago
Bring back Crash Course Mythology... That's the first one I ever watched
@W0lfbaneShikaisc00l 5 years ago
Jabril: Hey, I'm y'bro! Yeah you are! Ahh, I'm kidding, it just sounds very like it.
@discordtrolls5668 4 years ago
Bruh
@16.t.badulla86 5 years ago
Apart from the science, the dog 🐕 is cute.
@JuBerryLive 5 years ago
Jakequaline?
@nappyjonze 5 years ago
Could you guys do an African American History Crash Course?
@randomguy263 5 years ago
But why?
@shortssquad1 5 years ago
Miss this line: "Yo guys, Jabrils here!"
@Bubbalikestoast 5 years ago
Hi
@markorendas1790 4 years ago
I'M SURE THERE WILL BE A PART OF ME IN AN AI VERSION AROUND ON THE INTERNET AFTER I'M GONE...
@Audrey-eg2zf 5 years ago
Yo
@erkins8818 4 years ago
How to implement?
@TonyTigerTonyTiger 4 years ago
Contradiction: at 9:30 he says that AlexNet needed "more than 60 million neurons", but at 2:33 we can see the abstract of the paper, and it says AlexNet used only 650,000 neurons (the paper's 60 million figure is its parameter count).
@dragonface528 5 years ago
Jabril!!!!
@Benimation 5 years ago
AS A HUMAN; I LOVE() EATING() SPAGEHTTI;
@FollowTheRabbitHole 5 years ago
No lie, I read the title as "neutral fireworks".
@yourbuddyunit 5 years ago
That's literally what neurons do. Literally. I think, therefore I am an amalgamation of neural fireworks.
@pojokindie 5 years ago
Yup, like hmmm I hate dogs but I love cats
@alanoudalhamdi8216 4 years ago
How can I translate this to Arabic?
@dylanparker130 5 years ago
So, they got everyone to do their work for nothing? How, er... ingenious
@IceMetalPunk 5 years ago
It's not doing "their" work. It's getting people to help with work that will help everyone.
@dylanparker130 5 years ago
@@IceMetalPunk If a psychology researcher wishes to carry out a study on human subjects, they have to pay those subjects. Yet somehow, people literally mucking in on the hard graft of research should do it for free?
@IceMetalPunk 5 years ago
@@dylanparker130 The people who are "doing it for free" are the same people who are benefiting from it directly. In your analogy, it'd be less like having free subjects and more like having other psychologists help with their research, something that happens all the time.
@dylanparker130 5 years ago
@@IceMetalPunk I don't believe the people who respond to crowdsourcing are typically fellow researchers, but rather interested amateurs. If they were fellow researchers, why would they agree to help other researchers in this way? To do so would be to help the competition, unless they were credited with authorship, which they would not be in such cases.
@IceMetalPunk 5 years ago
@@dylanparker130 First of all, many people in STEM fields are more concerned with advancing their field than with competing with others at the expense of their field. Secondly, whether you're an amateur programmer or a professional computer science researcher, advancing AI tech still helps you out. (Don't believe me? Take a look at Runway ML, a little software suite that's designed for amateur programmers but lets you use tons of advanced machine learning algorithms and pre-trained networks without having to understand the details of implementation if you don't want to. Machine learning advancements aren't just for professionals and researchers.)
@jimmyshrimbe9361 5 years ago
Haha, you said doggernaut.
@th3bear01 4 years ago
Kinda lame that you did not credit Sethbling for that clip of MarI/O.
@blacktommer3543 5 years ago
At 5:48 there's a typo that says spagehtti instead of spaghetti
@mattlangstraaat3508 5 years ago
Affirmative action solved... everyone replaced by JOHN GREEN BOTS.. sounds like a dem solution to me.
@donrichards1362 4 years ago
What's my animal that do is Two Princes of Darkness from the future or from the past from the present