Aaron Davis
Comments
@Joel_M 21 days ago
One thing that you might have missed, and what likely resulted in the worsening performance from your own implementation is that the researchers only replaced the best model if the newer one won by a margin of 55%, otherwise it would be rejected and train for N games again.
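The gating step this comment describes can be sketched in a few lines. This is a minimal, illustrative version (function and variable names are made up; the 55% figure is the one reported for AlphaGo Zero's evaluator):

```python
# Hypothetical sketch of the "gating" step: the candidate network only
# replaces the current best network if it wins at least 55% of evaluation
# games; otherwise the best network is kept and training continues.

WIN_THRESHOLD = 0.55  # promotion margin reported in the AlphaGo Zero paper

def should_promote(candidate_wins: int, games_played: int) -> bool:
    """Return True if the candidate network should become the new best."""
    if games_played == 0:
        return False
    return candidate_wins / games_played >= WIN_THRESHOLD

print(should_promote(220, 400))  # 55% win rate -> True
print(should_promote(219, 400))  # 54.75% win rate -> False
```

Skipping this check means a slightly weaker network can become the self-play opponent, which compounds over iterations.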
@thanapatrachartburut513 8 months ago
great
@kapilpoudel8452 8 months ago
This is the best video I have ever seen about AlphaZero!
@nossonweissman 8 months ago
This was amazing!
@karlbooklover 11 months ago
great video, activated notifications :)
@shmug8363 1 year ago
please make more videos man, the quality is great. Would you consider exploring some of the root concepts in this video in a bit more detail? Like maybe a dedicated video on convolutional neural networks? I'm a beginner programmer and I'm super interested in this stuff.
@stonefacehhr6638 1 year ago
What
@uku4171 1 year ago
Isn't AlphaFold 2 an even more impressive feat than the matrix multiplication thing?
@2ToTheNthPower 1 year ago
AlphaFold 2 is a very impressive domain-specific feat. The matrix multiplication advancement is more of a meta advancement... it has the potential to seriously improve the computational efficiency of training models like AlphaFold 2, so in my opinion the matrix multiplication improvement will have a much broader positive impact than AlphaFold 2 has had so far.
@applepaul 1 year ago
At 2:35 you mention that we visit the root node (of the subtree) 9 times. I don't get this. Don't we just visit it once and then continue our DFS (depth-first search) down the tree? So essentially, don't we visit it only once and not 9 times?
@2ToTheNthPower 1 year ago
We visit it 9 times in the sense that we've experienced 9 different game branches so far as a result of visiting that game state. If you're simulating games one at a time, then you will pass through that node 9 different times. If you simulate games in batches, then you can do what you're describing. For the sake of MCTS, though, I think the "visit count" essentially refers to the number of leaf nodes that result from visiting a particular node. In that sense, it doesn't matter if we simulate one game at a time, or if we simulate games in batches.
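The counting described in this reply can be sketched concretely. A minimal, illustrative version (node names are made up; real MCTS nodes also track value estimates):

```python
# Sketch of MCTS visit counting: every simulation that passes through a node
# increments that node's count during backpropagation, so a node "visited 9
# times" has contributed to 9 simulated game branches, even though a single
# depth-first pass only enters it once.

from collections import defaultdict

visit_count = defaultdict(int)

def backpropagate(path):
    """Increment the visit count of every node on one simulation's path."""
    for node in path:
        visit_count[node] += 1

# Nine simulations that all pass through the subtree's root "s0":
for leaf in range(9):
    backpropagate(["s0", f"child_{leaf % 3}", f"leaf_{leaf}"])

print(visit_count["s0"])       # 9
print(visit_count["child_0"])  # 3
```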
@emmettdja 1 year ago
Monte Carlo tree search
@MightyElemental 1 year ago
Deary me, "carbon emissions" being brought up in machine learning training 😩
@2ToTheNthPower 1 year ago
Where do you think the energy required to run an entire datacenter worth of TPUs comes from, exactly? Ethics, climate science, and machine learning are inseparably linked when we're talking about a project of this scale, and we can't escape that.
@josephmazor725 1 year ago
I’ve been looking for a great explainer for AlphaZero. Wish you the best of luck with all future videos; I’ll be there to continue watching them
@furbyfubar 1 year ago
Really nice video! One minor gripe is that at 5:30 you introduce a bunch of terms for the layers/steps of what's done with the data that are not explained further than the abstract graphics on the screen. This is sort of fine, but when the next section at 6:06 begins with "So now we understand all the pieces of the puzzle..." it feels more than a little hand-waved. I'd have liked either at least a few more sentences explaining each of those steps in slightly more detail, OR an acknowledgment that we're not going to get into the weeds of those things in this video. So it's more a disconnect in how your script's written than really a problem with the info itself. The video assumes that everyone gets what (for example) "a low-dimensional embedding of the gamestate" is and why it's needed here. I can sort of figure that out, but when it's thrown at me in between other sentences that are also dense with technical terms, delivered at that speed? Well, I really didn't feel like I understood all the pieces of the puzzle after that slide.
@2ToTheNthPower 1 year ago
That's fair, and thank you for the feedback! I've put a lot of thought into balancing between explaining things and assuming people know things, and I don't think I've got the balance quite right yet. I'd like to make a series that starts all the way at algebra and works up to state-of-the-art ML, but the amount of time and effort that would take with animations is enormous. Maybe this will become my life's work :)
@furbyfubar 1 year ago
@@2ToTheNthPower You certainly have a talent for making videos that explain stuff. So to keep making videos (while not biting off so much that it feels overwhelming to continue) is likely the best (only?) way forward to get better at it, and to see if there's a career in it for you in the long run. From what I've heard other science communicator youtubers say, having people who *can* ask the stupid, or at least less informed, questions is important. For example, @Numberphile works so well in part because Brady is *not* a mathematician, so he's constantly asking the obvious next question that other non-mathematicians might have. One way to get those questions early enough in the process that you can still answer them in a video is to have some sort of small group or community that reads your draft scripts and gives feedback on whether something is unclear, and to ask them after pretty much every paragraph: "Are there any obvious questions that this paragraph raises that maybe should be answered before we move on?". The tricky part is finding people who are interested enough in the subject to want to read/listen to those early drafts and give feedback, but who *don't* already know most of what's covered in the script. Once your channel grows it's possible to crowdsource this, but at first, having some friends who are willing to do it might be easier; community building is important and all, but it also takes a lot of time and effort that, early on, might be better spent on making more videos instead.
@JosephTarun 1 year ago
Let's go, a Ludbud in the computer science YouTube space!!
@2ToTheNthPower 1 year ago
😁
@marcotroster8247 1 year ago
I don't know why people in AI don't admit that this field requires technical excellence in high-performance computing to make training even feasible. It's important to get the results within a reasonable timespan, unlike Hitchhiker's Guide 😂 I've spent the past 3 months accelerating a training run from a month to a few hours. Let's make better use of our hardware instead of throwing money at the problem. Modern PC games can squeeze insane amounts of compute out of our machines. Let's do the same in our training runs as well 😉
@etsequentia6765 1 year ago
Zarathustra in the background... nice touch. All hail our new overlord, HAL 9000!
@fan5188 1 year ago
Hey Aaron, I love your video. Please keep up the good work 👏
@studgaming6160 1 year ago
NEXT VIDEO WHEN?
@reh.4919 1 year ago
If it takes just one atom to indicate the possible state of a game of Pente, would we run out of atoms?
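A rough back-of-the-envelope answer to this question (an upper bound that ignores move legality and captures): Pente is played on a 19×19 board, each intersection empty, white, or black, giving at most 3^361 configurations, versus roughly 10^80 atoms in the observable universe.

```python
# Upper bound on Pente board configurations vs. atoms in the observable
# universe: one atom per state runs out by ~92 orders of magnitude.

import math

board_states_upper_bound = 3 ** (19 * 19)  # 3 states per intersection
atoms_in_universe = 10 ** 80               # commonly cited order of magnitude

print(math.log10(board_states_upper_bound))        # roughly 172
print(board_states_upper_bound > atoms_in_universe)  # True
```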
@VPSOUNDS 1 year ago
And I am excited to find out what your next video is going to be.
@taptox 1 year ago
how about adding the #Some2 tag?
@2ToTheNthPower 1 year ago
I’m not sure I know how to add tags, but it’s a good idea and I tried. Thanks for the suggestion!
@yourfutureself4327 1 year ago
💚
@andrewrobison581 1 year ago
the same as it is large, so it is small. the only winning move is none at all. memento mori
@pananaOwO 1 year ago
I was the thousandth like
@JoelFazio 1 year ago
Good start, but holy shit there is way too much black screen time. Also your mic clips quite a lot. Keep at it, you'll be big on YouTube in no time.
@timtrix1449 1 year ago
Great work! That was a really nice intro. I especially liked the storytelling, which made it feel more like a movie than science education :)
@mxn5132 1 year ago
Neat
@georgevjose 1 year ago
Definitely the start of a great channel. Good luck!!
@NoNTr1v1aL 1 year ago
Absolutely amazing video! Subscribed.
@IqweoR 1 year ago
Only one complaint: too much emptiness. Even if you are talking about atoms in the universe, insert some pictures over your voice so we don't stare at a black screen for this long. For the first 5 seconds of the video I thought my YouTube app froze and was just outputting sound without video. But content-wise this is top notch, the composition is great, and the topic is good. Keep up the good work; there aren't enough channels with this quality talking about AI. You will be famous in no time :)
@2ToTheNthPower 1 year ago
Thanks for your input and the compliment! I’ll take them both to heart
@MsHofmannsJut 1 year ago
I disagree. At last someone not blasting us with multimodal excitation.
@favesongslist 1 year ago
@@MsHofmannsJut There is a balance here; I thought there was a video issue with just that black screen. So glad I kept watching.
@조셉0309 1 year ago
Amazing video!
@dr.kraemer 1 year ago
great work - keep at it! I bet you're correct that you need more space and more training data. Are you using a trie to represent your graph?
@2ToTheNthPower 1 year ago
I used a networkx directed graph data structure. Some sort of tree could have worked too, though theoretically I think it's more appropriate to describe it as a graph. Could be wrong though
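The graph-vs-tree distinction in this reply comes down to transpositions: two different move orders can reach the same position, which a graph stores once but a strict tree stores twice. A dependency-free sketch of that idea (the actual code reportedly used a networkx DiGraph; state names here are made up):

```python
# Why a directed graph fits game-state bookkeeping: transpositions let two
# move orders converge on one state, so the state is stored only once.

edges = set()

def add_move(state, next_state):
    """Record a directed edge between two game states."""
    edges.add((state, next_state))

# "a then b" and "b then a" reach the same position "after_ab":
add_move("start", "after_a")
add_move("start", "after_b")
add_move("after_a", "after_ab")
add_move("after_b", "after_ab")

nodes = {state for edge in edges for state in edge}
print(len(nodes))  # 4 distinct states; a strict tree would duplicate "after_ab"
```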
@jasonchiu272 1 year ago
Everyone is talking about AlphaZero but I am wondering when AlphaOne will be released.
@2ToTheNthPower 1 year ago
smh DeepMind needs to get it together
@yourcatboymaid 1 year ago
Great video, I hope you decide to make more!
@AA-gl1dr 1 year ago
Amazing video, thank you so much.
@captain_crunk 1 year ago
I am sub #320. Wait, 320?
@farpurple 1 year ago
when matrix multiplication with O(1)?
@2ToTheNthPower 1 year ago
I don't think that's possible? Even reading the input is already O(n^2), and the Strassen algorithm is around O(n^2.81)
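For reference on the complexities under discussion: classical matrix multiplication is O(n^3), Strassen's algorithm is O(n^log2 7) ≈ O(n^2.81), and O(1) is unattainable since just reading an n×n input is already O(n^2). A quick check of Strassen's exponent:

```python
# Strassen multiplies two n x n matrices with 7 (not 8) recursive half-size
# products, giving a running time of O(n^log2(7)).

import math

strassen_exponent = math.log2(7)
print(round(strassen_exponent, 3))  # 2.807
```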
@ofekshochat9920 1 year ago
Has been shared in the lc0 Discord. Good stuff
@2ToTheNthPower 1 year ago
Ooh! I bet there are a lot of people there who know more about this than I do. I'm looking forward to seeing their comments!
@ofekshochat9920 1 year ago
@@2ToTheNthPower I see, what's your handle? May I introduce you? Oh, I read it wrong; I thought you meant that you're there. Join along :)
@ofekshochat9920 1 year ago
@aarondavis5609 I have some stuff I'd like to say. It's a really cool video, and I'd definitely use it to introduce people to the concept (thanks!), but some stuff was a little confusing, like the highlighting of different terms in UCB. Also, P(s) is the policy, and shouldn't be there when we're talking about the stuff before the NN... But great video! Come join; we use transformers now as well
@2ToTheNthPower 1 year ago
Yeah, that makes sense. P(s) probably should've been brought in after the network was introduced. Ooh! Transformers are cool. I'm thinking about making a video on visual intuition for the attention mechanism, at least for NLP. We'll see how long that takes!
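For readers following the P(s) discussion, a sketch of the PUCT selection rule AlphaZero's MCTS uses: U(s,a) = Q(s,a) + c_puct · P(s,a) · √N(s) / (1 + N(s,a)). The prior P(s,a) comes from the network's policy head, which is why it only belongs in the formula once the network exists. The c_puct value below is illustrative; implementations vary:

```python
# PUCT: balances the current value estimate Q against an exploration bonus
# scaled by the network's prior P and the visit counts.

import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    """PUCT value for one action; higher scores are explored first."""
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

# An unvisited move with a strong prior can outrank a visited, mediocre one:
unexplored = puct_score(q=0.0, prior=0.6, parent_visits=9, child_visits=0)
explored = puct_score(q=0.1, prior=0.1, parent_visits=9, child_visits=5)
print(unexplored > explored)  # True
```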
@lachlanperrier2851 1 year ago
You are amazing, thank you for existing
@brady6968 1 year ago
very glad YouTube recommended this to me, good video!
@NewtonMD 1 year ago
I mean, who knows what he is talking about?! Just love the vid for the quality!
@kraketito8999 1 year ago
I am amazed by the quality of the animation. Which software is used for making these animations?
@2ToTheNthPower 1 year ago
Manim in Python! It was started by 3Blue1Brown. Definitely look it up!
@conando025 1 year ago
I started looking into implementing AlphaZero for a different game, and this is such a great overview. There are some details that don't quite seem to align with what I got from the paper and its materials, but they are so insignificant that they're not worth mentioning (besides, I'm not even sure I got it right). I commend you for getting things to run; it's a project that sounds easier than it is (or I'm just dumb). Right now I'm struggling with the NN part, especially since I cursed myself by deciding to write my implementation in Rust. Do you plan on publishing your source code?
@2ToTheNthPower 1 year ago
This is definitely not an easy project, and it sounds like you're making it a bit more difficult by trying to code NNs from scratch, so give yourself lots of credit on that front. I definitely left a good chunk of information out... I don't really discuss self-play here, or adding Dirichlet noise to the prior, or compressing arrays and making them hashable so I can put them in a graph without destroying my computer! There's also some ambiguity in my mind as to whether playouts are used at all after training is complete, and that seems pretty important, too. I have my code here ( github.com/2ToTheNthPower/Pente-AI ), and I recommend looking up "Accelerating Self-Play Learning in Go" on arXiv. I think it's a better resource than the original paper was. Hope that helps!
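The Dirichlet-noise step mentioned in this reply can be sketched briefly. An illustrative version (function names are made up; alpha=0.3 is the value the AlphaZero paper reports for chess, with Go and shogi using different alphas):

```python
# Root-exploration noise: mix a Dirichlet(alpha) sample into the network's
# prior at the root so self-play occasionally tries low-prior moves.

import random

def add_dirichlet_noise(priors, alpha=0.3, epsilon=0.25):
    """Return the root prior mixed with Dirichlet(alpha) noise."""
    # Sample a Dirichlet vector by normalizing independent Gamma draws.
    gammas = [random.gammavariate(alpha, 1.0) for _ in priors]
    total = sum(gammas)
    noise = [g / total for g in gammas]
    return [(1 - epsilon) * p + epsilon * n for p, n in zip(priors, noise)]

random.seed(0)  # deterministic for the example
noisy = add_dirichlet_noise([0.5, 0.3, 0.2])
print(abs(sum(noisy) - 1.0) < 1e-9)  # still a probability distribution
```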
@conando025 1 year ago
@@2ToTheNthPower It definitely will. Yeah, with the neural network I definitely shot myself in the foot, because TensorFlow in Rust is pretty bare bones. I'll also look into that paper, but I have to say what helped the most was not the actual paper but rather the pseudocode they provided; it just sadly skimps on the neural network front
@2ToTheNthPower 1 year ago
@@conando025 I gotcha. From what I've read about the neural network, they primarily used ResNets, since residual connections allow very deep networks to train and get better results than shallow networks. I don't know if the KataGo paper has any pseudocode, so it may not be helpful on that front.
@dubsar 1 year ago
9:05 "How will they be used tomorrow?" To keep the base of the social pyramid under control while the top 0.01% live as demigods.
@2ToTheNthPower 1 year ago
There are definitely some interesting ethical, social, and political issues that will emerge as AI becomes more and more capable.
@avananana 1 year ago
This is the sad state of machine learning technology: it has such great potential to do good, but the ones with the most power to utilise it will only use it for personal gain, which has been the case with how humanity works for centuries upon centuries.
@MrIndomit 1 year ago
Cool video! I'm leaving a comment here to help promote it :)
@SinanAkkoyun 1 year ago
More like this! Finally somebody painting the whole picture!
@Luredreier 1 year ago
Almost 4k views with less than 200 subscribers? Jeez. Well, you've got one now.
@Alex50969 1 year ago
You made it right into the YouTube algorithm. If you are able to produce a new video in the next few days, your channel will grow extremely fast. Quality animations, an interesting topic, and great commentary. Good job :)
@2ToTheNthPower 1 year ago
Thanks! If I can find time over the next week, I may start another video. Lots of ideas!