Capsule Networks: An Improvement to Convolutional Networks

Siraj Raval
Subscribers: 770K · Views: 142K
Published: 29 Aug 2024

Comments: 235
@UnboxingSve · 6 years ago
What I can say is just huge respect to you, Siraj. How fast you catch up with new things is just amazing!
@SirajRaval · 6 years ago
Thanks! I really love this stuff, so it's always fun to study it.
@whatcani2 · 6 years ago
I think you have a GPU in your brain to speed up learning all those new algorithms.
@randpaul9863 · 4 years ago
pip3 install tensorflow --upgrade
@allennelson1987 · 4 years ago
That I didn't understand it is more about me and my experience than about his explanation. I didn't understand it, but I upvoted it anyway because it was interesting.
@snzn3854 · 6 years ago
About time. I was waiting for him to publish something like this, because he keeps mentioning the many things wrong with backpropagation.
@SirajRaval · 6 years ago
same
@IyamwhoIyam · 6 years ago
Hi Siraj, I've been waiting for this paper. What a pleasant surprise to learn it was published only a few days ago! I have downloaded the paper and will read it over my second, third, and fourth cup of coffee. You've done an excellent job presenting this very complex topic.
@arriva1256 · 6 years ago
It's just amazing that Hinton has once again revolutionized neural nets, or AI if you want to call it that. Incredible guy!
@ladjiab · 6 years ago
Wish I had money to support you for all the good work you are doing. Thank you.
@diegoantoniorosariopalomin4977
I supported him for months and he never delivered the rewards
@diegoantoniorosariopalomin4977
If you read the comments on his older videos you will see me asking for the private chat for backers repeatedly
@diegoantoniorosariopalomin4977
And him giving increasingly vague answers
@SirajRaval · 6 years ago
thanks for listening :)
@unoqualsiasi7341 · 6 years ago
The rewards are the videos, the code and the knowledge he shares here. Man, there are people that play fking video games and receive thousands of dollars in donations for that. Stop complaining, please; this is useful knowledge, worth more than a dollar a month.
@yvanscher7555 · 6 years ago
It's incredible that after so much has been done on a dataset like MNIST, you can still get state of the art if you come up with something clever. In short, a capsule network adds a third dimension to the network shape. Cool.
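The "third dimension" here is that each capsule outputs a vector rather than a single scalar activation, with the vector's length read as a probability. A minimal NumPy sketch of the squash nonlinearity from the Sabour, Frosst, and Hinton paper (variable names and shapes are illustrative, not from the video's code):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squashing nonlinearity from "Dynamic Routing Between Capsules":
    # shrinks short vectors toward zero and long vectors toward unit length,
    # so a capsule's output length behaves like a probability.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / np.sqrt(sq_norm + eps)

# A toy batch of 10 capsules, each an 8-dimensional pose vector.
rng = np.random.default_rng(0)
caps = rng.standard_normal((10, 8))
out = squash(caps)
lengths = np.linalg.norm(out, axis=-1)
print(lengths.max() < 1.0)  # every squashed vector is shorter than 1
```

A plain CNN layer would emit one scalar per feature per location; here each "feature" is a whole vector, which is the extra dimension the comment describes.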
@SirajRaval · 6 years ago
great way of putting it! 'a 3rd dimension'
@thedevo01 · 6 years ago
I am so very grateful for your efforts to deliver all of this information. You're a very good educator. The way you explain these complex solutions is very demystifying and easy to understand, and showing it in practice to validate what we came to understand thanks to you gives a sense of success which is inspiring. It hasn't been long since I began watching you (summer 2017), but your passion for discovery and success in uncovering (teaching about) these developments has been an enlightening experience! Thank you!
6 years ago
Keep it up, Siraj. You are like "La Mouche du Coche" to me, energizing my will to carry on with the subjects that matter to us. Thank you.
@SirajRaval · 6 years ago
thanks Pierre!
@bomb3r422 · 6 years ago
I think capsule networks will be game-changing. Big ups Siraj, you never fail to amaze!
@011azr · 6 years ago
Dude, you don't have a PhD, but it feels like you're an expert in the deep learning field. Thanks for making the concept much easier for me to grasp.
@011azr · 6 years ago
Just stalked your LinkedIn profile; you seem to have a passion for teaching. I still wonder why you haven't pursued a PhD yet. Going to Stanford and doing a project with Andrew Ng on his Google Brain project sounds like so much fun for people like you. Anyway, thanks for the video. Even if you decide to continue your studies or do something else out there, please keep making useful educational videos like this. Thanks :).
@g0d182 · 6 years ago
To have a chance at the core of the Google Brain team, you probably need to produce three or more sequences of work that beat some non-trivial state of the art in a huge way.
@SirajRaval · 6 years ago
because of time. I'm full time making content for you guys. and I love it
@2xehpa · 6 years ago
You are wrong. They did test it on CIFAR-10, with less promising results (~10% when SOTA is ~3-4%). But this is not that important. They clearly state in the paper that this is not supposed to be a fully formed, amazing new architecture: "There are many possible ways to implement the general idea of capsules. The aim of this paper is not to explore this whole space but to simply show that one fairly straightforward implementation works well and that dynamic routing helps."
@AnimSparkStudios · 4 years ago
The way you teach is really unique
@ebertolo100 · 6 years ago
Only a few words about your video: amazing! And thanks so much for sharing!
@soumensinha305 · 6 years ago
Siraj, please make videos on reinforcement learning, as it can be very useful for general-purpose intelligence.
@debarko · 6 years ago
+1 to this one
@CKSLAFE · 6 years ago
+1
@avinashk8006 · 6 years ago
+1
@zishanahmedshaikh · 6 years ago
+1
@mpricop · 6 years ago
+1
@levinicklas7885 · 6 years ago
Really love the new video format. Definitely a step up from your old videos! Much easier to follow; I'm learning a lot more!
@Piyush2896 · 6 years ago
It would be interesting to see the results of a dynamic routing capsule model being attacked by the pixel attacks at 1, 3 or 5 pixels, as done in the paper you mentioned, and how it fares against CNNs.
@tanmaybhatnagar4849 · 6 years ago
Guys, just to be clear: the image of the neural net at 6:44 is not cropped. It is the original image that is in the paper. The publishers themselves published a cropped image by mistake. (I find it quite funny, actually.)
@EngIlya · 6 years ago
Hey Siraj, thanks for the video! A note: the advantage of CNNs over MLPs is not computational complexity but statistical efficiency: we use the "translational symmetry" of the image, teaching the net that, e.g., an eye at the top of an image is the same thing as an eye at the bottom of an image.
@Gannicus99 · 6 years ago
Loving the more serious format (memes dropped) and the good link documentation! This has really gotten better!
@davidm.johnston8994 · 6 years ago
Thanks, great video man. It's so much better when you are serious!
@DF-rd6zv · 5 years ago
Dude, great work. Your visual descriptions of these structures build a phenomenal image in my head. And a TF implementation? Siiick.
@darkhydrastar · 4 months ago
Great video. Well done.
@hayatitutar8429 · 6 years ago
Thanks Siraj. I think capsule networks will help us a lot in deep learning studies.
@Ruhgtfo · 3 years ago
Great explanation
@siarez · 6 years ago
Thanks for the video. I wish you had explained how the capsule network overcomes the shortcomings of a regular CNN.
@knexator_ · 6 years ago
Paper here: arxiv.org/pdf/1710.09829.pdf
@ehfo · 6 years ago
thanks
@y__h · 6 years ago
You're awesome
@SirajRaval · 6 years ago
good link
@DoctorKarul · 6 years ago
Hit Like when Siraj casually drops that "because [we all know] neural networks are universal function approximators."
@UsmanAhmed-sq9bl · 6 years ago
Thank you, Siraj, for an awesome presentation.
@Frankthegravelrider · 6 years ago
Always at it with those fresh vids!
@charlieyou97 · 6 years ago
Siraj, I absolutely love your videos and am incredibly impressed with how fast you can get videos out on novel concepts. If you'll allow me one critique, I do think your videos would benefit if you spoke slower, especially during the sections where you are explaining code. Quite frequently, I slow that part down to 0.75x so that my brain can absorb the connection between your words and the code I am seeing. Keep up the amazing work!
@gotel100 · 6 years ago
Geoff Hinton is awesome!
@kevinchweya3087 · 6 years ago
Siraj explains it like it's normal 1 + 1 math, but then when I get down to understanding the code, the calculus, the math in it, 😭😭😭😭
@RavinderRam · 6 years ago
awesome as usual
@sgaseretto · 6 years ago
Awesome video Siraj, as always! By the way, nice DeepMind shirt.
@unoqualsiasi7341 · 6 years ago
Thanks for the interesting video Siraj!
@AbeDillon · 6 years ago
Dammit, Hinton! You beat me to this idea!
@aigen-journey · 6 years ago
He has been talking about capsules for quite some time now. I think it's still not the final solution to equivariance, but a small step in the right direction.
@SirajRaval · 6 years ago
always with the ideas hah
@grekogecko · 6 years ago
Hahaha, he had this idea a long time ago, but it wasn't until now that Sabour put in the effort to materialize it :P
@itsSKG · 6 years ago
Siraj is back ❤
@carlosjosejimenezbermudez9255 · 6 years ago
Man, I definitely give you props for the change in your video style. It's still you, but it's now a lot easier to understand and follow. Quick question, if I may: do you think capsule-based neural networks could be a way to crack some of the issues of 3D generation with conv nets?
@nesmaashraf3427 · 4 years ago
Thanks a lot for your help. You're really talented. Respect for you from Egypt :)
@AbhimanyuAryan · 4 years ago
sweet fast introduction... thanks
@makokal10010 · 6 years ago
Not that I don't like the content or anything, but not mentioning the first author at all is absolutely not fair in terms of attribution. This is regardless of who had the original idea. Someone actually did the work to make this paper happen, and she deserves credit for that.
@bizzyvinci · 3 years ago
You're awesome! Thanks
@RishabhSaxena1996 · 6 years ago
Next up, do one on progressive learning and how accurate the outputs are for those.
@godbennett · 6 years ago
Congrats on the Google DeepMind job, Siraj.
@debarko · 6 years ago
I have been waiting for this...
@RoxanaNoe · 6 years ago
Great video Siraj!!!
@godsobsex · 6 years ago
You're simply just awesome.
@neilwang9124 · 6 years ago
Hi Siraj, thanks for the precise summarization of concepts. I would recommend setting an estimated knowledge level of the intended audience for each video, and trying to explain your ideas for different levels of audience, to avoid mixing easy and difficult stuff together.
@shivroh7678 · 6 years ago
If Geoffrey Hinton is god, then Siraj is the messenger....!! Hats off, man.....!!
@SirajRaval · 6 years ago
thanks
@godbennett · 6 years ago
Hinton's paper sounds quite similar to "Network in Network" by Lin et al., 2013, arXiv. Like Hinton's paper, "Network in Network": 1) captures abstractions in nested neural bundles, and is less susceptible to overfitting than prior works; 2) uses "global average pooling", but does so over the classification layer, not per capsule or neuron bundle as in Hinton's paper. arxiv.org/abs/1312.4400
@ehfo · 6 years ago
love it! thanks Siraj!
@larryteslaspacexboringlawr739 · 6 years ago
thank you for the capsule network video
@jonathansettle4839 · 6 years ago
Great video, well explained.
@hoyinchan343 · 6 years ago
thanks
@vladimirtchuiev2218 · 6 years ago
A correction to the difference between AlexNet and VGG: the author of AlexNet wasn't a computer science guy, so he relied a lot on his sharp intuition. AlexNet, while very significant in its own right, is very arbitrarily put together. VGG was made by computer science folks; it is very ordered, has more layers with a consistent layout, and is much simpler overall in its structure. VGG is still often used in DL research. Besides, the number of neurons in each layer is 2^x, where x is an integer, suggesting that it corresponds to the number of GPU threads (different versions of VGG for different GPUs). Also, it's worth mentioning that GoogleNet doesn't use fully connected layers at the end; it's purely convolutional. It's problematic in DL research because it works well but nobody really understands it. ResNet was a very deep network of 152 layers, and in theory shouldn't work at all, but I don't know the exact details.
@layeroftranslation · 6 years ago
Cool!
@sunnyppanchal · 6 years ago
Great job with the video!
@KulvinderSingh-pm7cr · 6 years ago
Yann LeCun was working on backprop earlier than Geoff, though Geoff's version got popularized more in the community.
@AndrewMelnychuk0seen · 6 years ago
Damn dude, you are on top of your stuff. I've been anticipating this tech since I did Hinton's Coursera class. Thanks for explaining.
@delenlawson1251 · 6 years ago
Great job!
@durand101 · 6 years ago
Siraj, don't you think that scoring better and better on MNIST is a bad target? 100% accuracy wouldn't make any sense, because there are quite a few digits in MNIST which are genuinely ambiguous. Why should new models achieve a rate much higher than the SOTA? Shouldn't we move on to more serious baselines?
@tonycatman · 6 years ago
I've thought a lot about this before, and I've seen some of the digits you are talking about. The digits are ambiguous to you (and me), but obviously they aren't ambiguous to the algorithm. The question is resolved by finding out whether the classification correctly represents the original intention of the person who wrote the digits, and it is reasonable to assume that their intention is correctly reflected in the 'y' target. I've had to come to the realisation over the last 15 years that some of the algorithms I've put together are simply much better at the task I've set them than I ever would be. Not just faster, but more accurate. In fact, my current test for when I've perfected an algorithm is when I am repeatedly convinced that the system has gotten it wrong, but on investigation I'm the one who is wrong.
@poojanpatel2437 · 6 years ago
CIFAR-10 is also used as a baseline, and CIFAR-10 is a much more elegant baseline than MNIST.
@011azr · 6 years ago
Exactly. They need to research a more useful "intro to deep learning" dataset.
@SirajRaval · 6 years ago
yea need moar baselines this was a start
@tobiasgehring2462 · 6 years ago
Durand D'souza, from watching another talk on capsule networks, it seems that "state of the art performance on MNIST" in this case doesn't mean "higher accuracy", but rather "the same accuracy with less supervision". It's not that it's trying to get 100% accuracy; instead it's getting similar accuracy to previous models while only requiring a fraction of the labelled data. This is really helpful because, for a more complicated problem, getting a large amount of high-quality labelled data can be a real issue, so if we can get similar accuracy with lots of unlabelled data and a small amount of labelled data, that seems like a serious win.
@souravgames · 6 years ago
Thanks for your videos; what you are doing is amazing. A small request: can you make a live video on recommendation systems or market basket analysis, like Apriori? Thanks a lot in advance.
@jayshah4016 · 6 years ago
Thank you SIRAJ.... Great video... Can you make a video explaining YOLO object detection?
@wafaawardah3264 · 6 years ago
Wow. Respect respect respect.
@theakitata · 6 years ago
It would be great to explain a little more about the architecture when you have that nice picture of the capsule network already! Thanks anyway :D
@Stan_144 · 3 years ago
Capsule networks are the right path forward. They also have some similarities to Jeff Hawkins' ideas.
@inlustrolearningprivatelim4868
Hey Siraj, I'm a huge fan of your vids. You are doing an awesome job with your lucid explanations. I am quite new to machine learning (deep learning in particular). Is there any particular order you would recommend going through your videos in, so as to get a comprehensive outlook of the content? Also, I heard one of your videos wherein you were talking about the intersection of AI and blockchain in the creation of DAOs. I am working on that right now. It's truly inspiring to see your enthusiasm. Hoping to see more videos on blockchains and DApps from your channel :) Once again, thanks for all the effort!!
@erikadistefano7582 · 6 years ago
Amazing!
@mackievanfleet8107 · 6 years ago
Seeing beyond what is seen.. very interesting
@mackievanfleet8107 · 6 years ago
Spooge YouTube are u ok.. this is a great class to learn from
@spoige7333 · 6 years ago
❤️❤️❤️
@mackievanfleet8107 · 6 years ago
Spooge YouTube hoping u r ok.. not understanding.. but .. what is nu.. be ok..
@spoige7333 · 6 years ago
Are you on reddit?
@mackievanfleet8107 · 6 years ago
Spooge YouTube nope..
@parthtrehan8668 · 5 years ago
I had a question: if a CNN does not capture spatial correlation, that would be because of using only one size of convolutional kernel (3x3 or 4x4), but Inception v3 uses 2x2, 3x3 and 4x4 together, which could capture that the eyes are above the nose. Does the Inception model also fail to capture spatial correlation?
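For context on this question: the loss of spatial arrangement that the video attributes to CNNs comes largely from pooling, not from kernel size, so mixing kernel sizes as Inception does doesn't by itself fix it. A toy NumPy illustration (the two inputs are hypothetical, chosen to collide under max pooling):

```python
import numpy as np

# Two 4x4 "images" whose active pixels sit in different arrangements.
# After 2x2 max pooling, both collapse to the same 2x2 map, so any layer
# consuming the pooled output cannot tell the two layouts apart.
def max_pool_2x2(x):
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.array([[1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0]])
b = np.array([[0, 1, 1, 0],
              [0, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 0]])
print(np.array_equal(max_pool_2x2(a), max_pool_2x2(b)))  # True
```

Capsules try to address exactly this by passing pose vectors upward instead of a single pooled maximum.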
@WildAnimalChannel · 6 years ago
So do the capsules store orientations of objects? I reckon the way humans recognise things is like this: first we might see some features, then we guess the object (also using context). Then we see what other features the object should have and where. Then we look to see if those features exist where we expect them to be. And if we don't recognise a feature we might look at sub-features, and so on, going up and down the hierarchy until we can say "ah, that's a five-legged dog with carrots for eyes."
@luck3949 · 6 years ago
You have a DeepMind T-shirt. Do you work there, or did you win it in some sort of competition? Or what?
@empyrerhomann6743 · 6 years ago
Early at last....., finally I'm part of the top 10 comments
@gpligor · 6 years ago
Thanks for keeping us up to date, but the intro was too long. It would have been better if you had spent some extra time explaining the capsule NN.
@honestexpression6393 · 5 years ago
Why aren't these kinds of comments upvoted more?
@anandsrivastava5845 · 4 years ago
Can I apply this network to text datasets? Because what you are explaining relates to image features.
@TheAAMvideos · 4 years ago
Yes you can! You have to turn your text into a matrix first. Check out this paper: arxiv.org/abs/1906.04898
@user-ro4mi2td1p · 6 years ago
What types of neural networks exist for data which are neither images nor sequences (supervised learning)?
@MalikKlc · 6 years ago
Siraj, what do you think about using the Go language in ML (and AI in general)? Do you think it can take over Python in this field once more libraries are available?
@fabian.hertwig · 6 years ago
What is the website he is scrolling through?
@SaveAsss · 6 years ago
I would argue that this is more or less what Numenta has been working on for a while now (old stuff). Maybe you can point out some differences I didn't notice?
@xuhaodu3921 · 6 years ago
Hi guys, the owner of this code is still updating his work, so if you are interested in it, please go to this repo to get the latest update: github.com/naturomics/CapsNet-Tensorflow
@nemanjaradojkovic1224 · 6 years ago
Please make more videos like this and fewer of the "epileptic" ones. Great job!
@ismaelgoldsteck5974 · 6 years ago
please do a comparison between two trained networks
@RAJATTHEPAGAL · 6 years ago
Hey Siraj, great videos as always. But there is one thing I don't really like about your content: every time, you start by explaining the basics again. Now, I know I can skip it, but I just feel it is not required. For example, anyone reading up on capsule networks already knows what a convolutional network is. So maybe skip the part explaining that. :-) ... Anyway, ciao... keep making great videos :-D
@ArtyAnikey · 6 years ago
I like your simple explanations. This is always good.
@antopolskiy · 6 years ago
Where is the link to the huge infographic about the development of NNs? I cannot find it anywhere.
@macshout6502 · 6 years ago
medium.com/@nikasa1889/the-modern-history-of-object-recognition-infographic-aea18517c318
@BRANDMAW · 6 years ago
Hey Siraj! I'm crying with laughter at your top meme pics. Maybe they're piled up somewhere? :3
@dimitriosmallios5941 · 6 years ago
What are the weaknesses of this model? I assume that because it maximizes a prediction and maps it to a specific entity (capsule), it recognizes only one class per image, right?
@WesleySoares · 6 years ago
Great video! You said that one big problem of some NNs is when the image is shifted, displaced, rotated, etc. Do you think this new technique can "interpret" CAPTCHAs?
@shawz4308 · 6 years ago
gooooood!
@razorintube · 6 years ago
love your videos... diverse topics... well researched... with insight of your own... adds a different flavor to each of your videos
@hello27216 · 6 years ago
Hey Siraj, could you please send a link to the webpage that you were using to demonstrate in this video? I couldn't find it in the description. Thanks!
@PSNAcademy · 6 years ago
github.com/llSourcell/capsule_networks/blob/master/Capsule%20Networks%20What%20Comes%20after%20Convolutional%20Networks%3F.ipynb
@deseofintech1449 · 6 years ago
Great stuff Siraj!!! Can capsule networks also improve performance on sentiment analysis tasks? What's your take on it?
@kingspp · 6 years ago
One has to give credit to the original author of the open-source implementation. Dig deeper and you will find that this is not a scalable architecture, due to the primitiveness and inefficiency of the dynamic routing algorithm; however, there is a new routing scheme, EM routing, which might improve the routing technique!
@thebutlah · 6 years ago
Please take some extra time explaining capsule networks more in depth; you spent only about 5 minutes on them but about 15 on regular CNNs. Thanks for the video though!
@alibaheri4614 · 6 years ago
Siraj, is there any guarantee that CapsNet leads to better overall performance in a deep Q-network? Can we apply it in deep reinforcement learning? What is your intuition?
@josephrejive4081 · 6 years ago
Have you ever thought about doing a programming language tutorial series? I would love to see one in C.
@jonaslai5867 · 6 years ago
How is the PrimaryCaps layer different from a grouped convolution (3 groups, 8 filters per group and a kernel of 9x9)?
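They are indeed closely related; in the paper's MNIST architecture, PrimaryCaps is essentially a regular convolution whose 256 output channels are reinterpreted as 32 groups of 8-dimensional capsule vectors. A rough NumPy sketch of that reinterpretation, assuming the paper's shapes (only the reshape is shown; the squash and routing steps are omitted):

```python
import numpy as np

# Hypothetical shapes from the paper's MNIST setup: the second convolution
# produces a [batch, 6, 6, 256] feature map. PrimaryCaps reads the 256
# scalar channels as 32 capsule types x 8 dimensions at each 6x6 position.
batch = 2
feature_map = np.random.default_rng(0).standard_normal((batch, 6, 6, 256))

# Flatten the spatial grid and capsule types: 6 * 6 * 32 = 1152 primary
# capsules, each an 8-D vector (this would then be squashed and routed).
primary_caps = feature_map.reshape(batch, 6 * 6 * 32, 8)
print(primary_caps.shape)  # (2, 1152, 8)
```

So the difference is not in the convolution itself but in what happens next: grouped convolution feeds scalars forward, while PrimaryCaps treats each group's outputs as one vector and routes whole vectors to the layer above.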
@donaldderrick1595 · 5 years ago
Hey Siraj, remember to check the quality of your audio; make sure it's not peaking in the red.
@brennan123 · 6 years ago
I didn't see the History of Object Recognition infographic in the description above. In case anyone else is looking for it, try here: github.com/Nikasa1889/HistoryObjectRecognition
@bilalshahid5005 · 1 year ago
Thank you!