My name is Xander Steenbrugge, and I read a ton of papers on Machine Learning and AI. But papers can be a bit dry and take a while to read. And we are lazy, right?
On this channel I try to summarize my core take-aways from a technical point of view while making them accessible to a broader audience.
If you love technical breakdowns on ML & AI but you are often lazy like me, then this channel is for you!
You clearly have mastered what you explain, but you speak too fast for other people to follow, and it becomes even harder to understand when English is not the listener's native language.
One must stress what you say at the end of the video, at 28:20: although AlphaFold 2 can predict the native conformation of an amino acid sequence, there are other contributing factors, and the algorithm can't answer why, nor how proteins find their native state out of the vast combinatorial space of possible conformations (Levinthal's paradox).
This is cool, but after the third random jumpscare sound I couldn't pay attention to what you were saying; all I could think about was when the next one would come. Gave up halfway through since it was stressing me out.
This is a marvel. I read a book with similar content, and it was a marvel to behold. "The Art of Saying No: Mastering Boundaries for a Fulfilling Life" by Samuel Dawn
Thanks for sharing this! I may be misunderstanding something, but there seems to be a mistake in the explanation. Specifically, the claim at 12:50 that "this is the only region where the unclipped part... has a lower value than the clipped version". I think this claim might be wrong, because there is another case where the unclipped version would be selected: for example, if the ratio is 0.5 (and we assume epsilon is 0.2), the ratio is smaller than the clipped version (which would be 0.8), so the unclipped term would be selected. Is that not the case?
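The case this comment describes is easy to check numerically. Below is a minimal sketch of PPO's clipped surrogate objective, min(r·A, clip(r, 1−ε, 1+ε)·A); the function name and the choice of a positive advantage are my own for illustration:

```python
def ppo_surrogate(ratio, advantage, epsilon=0.2):
    """PPO clipped surrogate: min(r * A, clip(r, 1-eps, 1+eps) * A)."""
    clipped_ratio = max(min(ratio, 1 + epsilon), 1 - epsilon)
    return min(ratio * advantage, clipped_ratio * advantage)

# The commenter's case: ratio = 0.5 with a positive advantage.
# clip(0.5, 0.8, 1.2) = 0.8, so the UNCLIPPED term 0.5 * A is the minimum
# and gets selected, even though we are outside the clipping band.
print(ppo_surrogate(0.5, 1.0))  # → 0.5

# For comparison, a ratio above the band gets clipped:
print(ppo_surrogate(1.5, 1.0))  # → 1.2
```

So with a positive advantage, any ratio below 1 − ε also selects the unclipped term, which supports the objection raised above.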
Great Video!! I just watched 4 hours worth of lectures, in which nothing really became clear to me, and while watching this video everything clicked! Will definitely be checking out your other work
The term "activation" in the context of neural networks generally refers to the output of a neuron, regardless of whether the network is recognizing a specific pattern. The activation is indeed a numerical value representing the result of applying the neuron's activation function to the weighted sum of its inputs. Just posting here what ChatGPT told me, because the definition of "activation" in this video confused me.
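That definition can be made concrete in a few lines. This is a minimal sketch of a single neuron; the function name and the choice of sigmoid as the activation function are assumptions for illustration, not anything from the video:

```python
import math

def neuron_activation(inputs, weights, bias):
    """A neuron's activation: the activation function (here sigmoid)
    applied to the weighted sum of the inputs plus a bias."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# The activation is just a number, whether or not the neuron
# "detects" any meaningful pattern in the inputs:
a = neuron_activation([1.0, -2.0], [0.5, 0.25], 0.1)
print(a)
```

With a different activation function (ReLU, tanh, ...) the number changes, but the definition stays the same: activation = f(weighted sum of inputs).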