
Einsum Is All You Need: NumPy, PyTorch and TensorFlow 

Aladdin Persson
80K subscribers
45K views

Published: 28 Sep 2024

Comments: 61
@udbhavprasad3521 · 3 years ago
Honestly, there is no channel that even compares to this level of quality.
@matt.jordan · 3 years ago
This is literally insane how well you explained this. I instantly subscribed; you deserve so much more attention.
@AladdinPersson · 3 years ago
Wow, thanks :)
@qiguosun129 · 3 years ago
This is literally the best and simplest explanation I've ever had, thanks.
@johngrabner · 4 years ago
Another perfect video. Most valuable because it provides a foundation for your other videos. Can't wait for your next einsum video.
@AladdinPersson · 4 years ago
Really appreciate your comment! :)
@stacksmasherninja7266 · 2 years ago
It almost felt like you implemented these functions yourself in those libraries! Great video.
@rajanalexander4949 · 1 year ago
Excellent tutorial on a very useful but sometimes confusing feature in NumPy. I would only add that "..." is syntactic sugar for omitting a bunch of indices.
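For readers who haven't used the ellipsis form, here is a minimal sketch of what that shorthand does (the array shapes are purely illustrative):

```python
import numpy as np

x = np.random.rand(2, 3, 4, 5)

# "..." stands for any leading axes we don't want to name explicitly.
# Swap only the last two axes, whatever comes before them:
y = np.einsum('...ij->...ji', x)
print(y.shape)  # (2, 3, 5, 4)
```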
@mayankkamboj4025 · 9 months ago
Wow, I finally get einsum! Thank you so much. And that LOTR reference was good.
@SantoshGupta-jn1wn · 2 years ago
One of the most important videos I've ever seen.
@haideralishuvo4781 · 4 years ago
Awesome, your channel is so underrated. I was struggling to find a good channel to learn PyTorch from; thankfully I got yours :D Can you cover pix2pix, CycleGAN, and R-CNNs? Would be grateful if you do.
@AladdinPersson · 4 years ago
Appreciate you 👊 Many people have requested that, so it's coming, but I can't promise when :)
@CrazyProgrammer16 · 1 year ago
Hey, but why does "i,j->ij" also produce a product? Nothing is repeated in the input here. Are there other rules?
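One way to state the rule being asked about: einsum multiplies the operands over every combination of the input indices and only sums over indices that are missing from the output. In "i,j->ij" no index is dropped, so nothing is summed and the result is the outer product. A small sketch:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([10, 20])

# "i,j->ij": every (i, j) pair gets the product a[i] * b[j].
# No index is dropped from the output, so nothing is summed -- the outer product.
outer = np.einsum('i,j->ij', a, b)
print(outer)
# [[10 20]
#  [20 40]
#  [30 60]]
print(np.allclose(outer, np.outer(a, b)))  # True
```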
@thecros1076 · 4 years ago
Learnt something new today ❤️❤️ ... I've always had a question: how and where did you learn everything?
@AladdinPersson · 4 years ago
I don't know all of this stuff. I research everything to try to make every video as good as I possibly can, so the process is usually that I learn something in depth and then decide to share it with you guys.
@thecros1076 · 4 years ago
@@AladdinPersson ❤️❤️❤️ Loved all of your videos... hard work and talent is a deadly combination... hope to see new project videos soon ❤️
@iva1389 · 2 years ago
I had to translate it to TensorFlow :) Very useful video for practice. Thank you!
@kenzhebektaniyev8180 · 1 year ago
Cool! TBH I didn't believe you could explain it, but you did.
@rekeshwardhanani920 · 1 year ago
Insane, brother. Excellent, just excellent.
@francesco_savi · 3 years ago
Nice explanation, very clear! Thanks!
@iskhwa · 2 years ago
Thanks, a perfect explanation.
@valeriusevanligasetiawan6967 · 10 months ago
This is great. I just want to know, however, whether I can do an FFT of a Green's function using einsum. Note: I've been trying for a week to implement the code and never got the correct result.
@danyalzia6958 · 3 years ago
So, basically, einsum is the DSL that is shared between these libraries, right?
@Raghhuveer · 2 years ago
How does it compare in terms of performance and efficiency to standard NumPy function calls?
@johnj7883 · 4 years ago
Thanks a lot, it saved my day.
@iva1389 · 2 years ago
Gives me an error for matrix-vector multiplication: torch.einsum("ij, kj->ik", x, v) -> "einsum(): operands do not broadcast with remapped shapes [original->remapped]: [2, 5]->[2, 1, 5] [1, 3]->[1, 1, 3]". Same in TF: "Expected dimension 5 at axis 1 of the input shaped [1,3] but got dimension 3 [Op:Einsum]".
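For context, that error says the repeated index j has two different lengths (5 in the matrix, 3 in the vector); a letter shared between operands must refer to axes of the same size. A sketch with compatible shapes (the shapes below are illustrative, not the commenter's):

```python
import torch

x = torch.rand(2, 5)
v = torch.rand(1, 5)   # the shared index j must match x's second dimension

out = torch.einsum('ij,kj->ik', x, v)
print(out.shape)  # torch.Size([2, 1])

# For a plain matrix-vector product, a 1-D vector works as well:
w = torch.rand(5)
print(torch.einsum('ij,j->i', x, w).shape)  # torch.Size([2])
```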
@Han-ve8uh · 2 years ago
One thing that wasn't mentioned in the video, which I realized halfway through, is that einsum is sometimes used on one operand and sometimes on two. I tried torch.einsum('ii->i', t, t) and got "RuntimeError: einsum(): more operands were provided than specified in the equation". This tells me that the number of operands must correspond to the number of comma-separated index groups on the left-hand side of ->.
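That matches how the equation string is read: each comma-separated group before "->" describes exactly one operand. A short sketch:

```python
import torch

t = torch.rand(3, 3)

# One index group before "->" means exactly one operand:
diag = torch.einsum('ii->i', t)          # OK: one group, one tensor
# torch.einsum('ii->i', t, t)            # RuntimeError: more operands than specified

# Two operands need two comma-separated groups:
prod = torch.einsum('ij,jk->ik', t, t)   # t @ t
print(diag.shape, prod.shape)            # torch.Size([3]) torch.Size([3, 3])
```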
@cassenav · 2 years ago
Great video, thanks :)
@ALVONIUM · 1 year ago
Absolutely incredible
@iva1389 · 2 years ago
einsum to rule them all, indeed.
@Choiuksu · 4 years ago
What a nice video!
@AladdinPersson · 4 years ago
Thank you so much :)
@SAINIVEDH · 3 years ago
Can someone explain how the matrix diagonal is "ii->i"?
@ericmink · 3 years ago
I think it's because if you wrote it as a nested loop, you would loop over all rows with a variable `i`, and for the columns you would reuse the same variable (every entry at coordinates (i, i) is on the diagonal). As for the result: if you left the `i` out, it would sum the diagonal elements up; if you keep it in, it creates a list instead.
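A minimal sketch of that explanation in code (NumPy used for illustration):

```python
import numpy as np

t = np.arange(9).reshape(3, 3)

# "ii->i": the same index is reused for rows and columns and kept in the output.
diag = np.einsum('ii->i', t)
diag_loop = np.array([t[i, i] for i in range(t.shape[0])])
print(np.array_equal(diag, diag_loop))      # True

# Dropping the output index sums the diagonal instead (the trace):
print(np.einsum('ii->', t) == np.trace(t))  # True
```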
@alfahimmohammad · 3 years ago
Will einsum work for model parallelism in Keras models?
@AladdinPersson · 3 years ago
I haven't tried that, but I would imagine that it works.
@alfahimmohammad · 3 years ago
@@AladdinPersson I tried it. It wasn't good. I was better off manually assigning each layer to each GPU in PyTorch.
@gtg238s · 4 years ago
Great explanation!
@AladdinPersson · 4 years ago
Thank you so much! :)
@AlbertMunda · 4 years ago
Awesome
@hieunguyentrung8987 · 3 years ago
np.einsum('ik,kj->ij', x, y) is actually much, much slower than np.dot(x, y) when the matrix size gets larger. Also, tf.einsum is slightly slower than tf.matmul, but torch.einsum is slightly faster than torch.matmul... only from the perspective of my laptop's configuration, though.
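A rough way to check this on your own machine (timings depend heavily on the BLAS build; the sizes below are arbitrary):

```python
import time
import numpy as np

x = np.random.rand(1500, 1500)
y = np.random.rand(1500, 1500)

t0 = time.perf_counter()
a = np.einsum('ik,kj->ij', x, y)
t_einsum = time.perf_counter() - t0

t0 = time.perf_counter()
b = np.dot(x, y)
t_dot = time.perf_counter() - t0

print(np.allclose(a, b))
print(f"einsum: {t_einsum:.3f}s  dot: {t_dot:.3f}s")
# np.dot dispatches straight to BLAS; plain einsum may not, so passing
# optimize=True to np.einsum often narrows the gap considerably.
```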
@misabic1499 · 4 years ago
Hi. Your model-building-from-scratch tutorials are really helpful. Eagerly waiting for more tutorials to come. I really appreciate it!
@AladdinPersson · 4 years ago
I appreciate the kind words! Any videos in particular that you thought were good, and do you have any specific suggestions for the future?
@leonardmensahboante4308 · 2 years ago
@@AladdinPersson Please do a video on Python hooks, that is, how to use a pre-trained model as the encoder for the UNET architecture for image segmentation.
@gauravmenghani4 · 2 years ago
Lovely. I always found einsum non-intuitive. Learnt a lot! Thanks :)
@ripsirwin1 · 3 years ago
This is so difficult to understand, I don't know if I'll ever get it.
@AladdinPersson · 3 years ago
Sorry, maybe I didn't explain it well enough :/
@ripsirwin1 · 3 years ago
@@AladdinPersson No, you're great. I just have to work at it.
@AndyLee-xq8wq · 7 months ago
Cool
@parasharchatterjee3223 · 2 years ago
It's the Einstein summation convention that's used very commonly in physics, and it just removes the clunky summation sign in pages-long calculations!
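For readers unfamiliar with the convention, a small illustration of what dropping the summation sign looks like, using matrix multiplication as the example (my own illustration, not from the video):

```latex
% Explicit sum:
C_{ij} = \sum_{k} A_{ik} B_{kj}
% Einstein convention: the repeated index k is summed implicitly:
C_{ij} = A_{ik} B_{kj}
% which is exactly what the string "ik,kj->ij" expresses in einsum.
```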
@michaelmoran9020 · 3 years ago
Are the "free indices" part of standard Einstein notation, or something made up to allow you to exclude array dimensions from the einsum entirely?
@jeanchristophe15 · 3 years ago
I am not sure the "batch matrix multiplication" example is correct, because i is used twice.
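For reference, a common way to write batch matrix multiplication with einsum (the index letters here are my own, not necessarily the video's): an index is only summed when it is absent from the output, so repeating the batch index across both inputs and the output is fine.

```python
import torch

x = torch.rand(8, 3, 4)   # (batch, n, k)
y = torch.rand(8, 4, 5)   # (batch, k, m)

# The batch index b appears in both inputs AND in the output, so it is not summed;
# only the shared inner index j is contracted.
out = torch.einsum('bij,bjk->bik', x, y)
print(out.shape)                              # torch.Size([8, 3, 5])
print(torch.allclose(out, torch.bmm(x, y)))   # True
```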
@epolat19 · 3 years ago
Does einsum mess up the auto-differentiation of TensorFlow?
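For what it's worth, tf.einsum is a differentiable op in TensorFlow 2.x, so gradients flow through it like any other; a quick check one can run (a minimal sketch):

```python
import tensorflow as tf

x = tf.Variable(tf.random.normal((3, 4)))
w = tf.Variable(tf.random.normal((4, 2)))

with tf.GradientTape() as tape:
    y = tf.einsum('ij,jk->ik', x, w)   # matrix multiplication via einsum
    loss = tf.reduce_sum(y ** 2)

grads = tape.gradient(loss, [x, w])
print([g.shape for g in grads])  # [TensorShape([3, 4]), TensorShape([4, 2])]
```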
@deoabhijit5935 · 3 years ago
Are you considering doing another video on advanced einsum?
@javidhesenov7611 · 1 year ago
Thanks for the awesome explanation.
@leofh1917 · 3 years ago
Thanks! This one is very useful!
@minma02262 · 3 years ago
Thank you for sharing this!
@jamgplus334 · 3 years ago
Nicely done
@rockapedra1130 · 3 years ago
Very cool!
@MorisonMs · 3 years ago
3:37 (Outer product) There is no need to sum; simply M[i,j] = A[i,k]*B[k,j].
@lewis2865 · 3 years ago
It's matrix multiplication.
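To reconcile the two comments above: the sum over the repeated index k is exactly what makes M[i,j] = sum_k A[i,k]*B[k,j] matrix multiplication, while the outer product is the case with no repeated index at all. A quick sketch of both:

```python
import numpy as np

# Matrix multiplication: k is repeated and dropped from the output, so it IS summed.
A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
print(np.allclose(np.einsum('ik,kj->ij', A, B), A @ B))          # True

# Outer product of two vectors: no index is dropped, so nothing is summed.
u, v = np.random.rand(3), np.random.rand(5)
print(np.allclose(np.einsum('i,j->ij', u, v), np.outer(u, v)))   # True
```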