DeepFindr
Hello and welcome to my channel :)

I make videos about all kinds of Machine Learning / Data Science topics and am happy to share what I've learned.

If you enjoy the content and want to support me (only if you want!), these are the current options:
►Share this channel: bit.ly/3zEqL1W
►Support me on Patreon: bit.ly/2Wed242
►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl

Contact: deepfindr@gmail.com
Website: deepfindr.github.io
Self-/Unsupervised GNN Training
12:09
2 years ago
Causality and (Graph) Neural Networks
16:13
2 years ago
Comments
@juanete69 13 hours ago
Why is it "x" and not "self.x"? And why self.training and not training?
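For context, a minimal sketch of the PyTorch pattern this question refers to (layer sizes are illustrative): x is a local argument of forward, while training is a flag the module itself carries.

```python
import torch.nn.functional as F
from torch import nn
from torch_geometric.nn import GCNConv

class GNN(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.conv = GCNConv(in_dim, hidden_dim)

    def forward(self, x, edge_index):
        # x is a local argument of forward (the node features passed in),
        # not something stored on the module, so it is not "self.x".
        x = self.conv(x, edge_index)
        # self.training is a flag every nn.Module carries; it is toggled
        # by model.train() / model.eval(), which is why dropout reads it
        # from self rather than from a local variable.
        x = F.dropout(x, p=0.5, training=self.training)
        return x
```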
@juanete69 1 day ago
Could you explain in more depth why we have 32 nodes and 9 features for this problem, please?
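A quick way to see where such numbers come from, sketched with the PyG shape conventions (the dataset name here is only illustrative): in a torch_geometric.data.Data object, x has shape [num_nodes, num_node_features], so 32 would be the node count of that particular graph and 9 the per-node feature dimension of the dataset.

```python
from torch_geometric.datasets import TUDataset

# Illustrative dataset; any PyG dataset follows the same conventions.
dataset = TUDataset(root="data", name="MUTAG")
data = dataset[0]                 # a single graph

print(data.x.shape)               # [num_nodes, num_node_features]
print(data.num_nodes)             # node count of this particular graph
print(dataset.num_node_features)  # feature dimension, fixed per dataset
```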
@the_random_noob9860 5 days ago
Amazing video! I have a question regarding the attributes of one data point. We have both x and y. The features themselves are the speeds, which are the ground truth, right? We want to train our model to predict the next 12 timestamps and compare them with the values in x. So what is the significance of y? (Though in the documentation it's given that y is the ground truth, both x and y should have the same values, but they differ. The only difference should be that x additionally has time of day as a second feature for each of the 12 timestamps per data point.) Could you kindly clarify this?
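A sketch of the sliding-window setup that usually explains this confusion: x and y are offset windows over the same ground-truth speed series, so they hold different values even though both come from the measurements. The 12-step horizon follows the video; array shapes and names are illustrative assumptions.

```python
import numpy as np

# speeds: ground-truth sensor readings, shape [T, num_nodes]
speeds = np.random.rand(100, 207)
window, horizon = 12, 12

samples = []
for t in range(window, len(speeds) - horizon):
    x = speeds[t - window:t]    # the 12 past steps the model sees
    y = speeds[t:t + horizon]   # the next 12 steps it must predict
    samples.append((x, y))

# x and y never overlap, which is why their values differ; extra input
# features (e.g. time of day) are appended to x only, not to y.
```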
@CarolineWu0719 8 days ago
Thank you for your great explanation.
@TrusePkay 9 days ago
You did not cover LDA and ICA.
@kshitijdesai2402 11 days ago
I found it hard to follow initially, but after understanding GCNNs thoroughly, this video is a gem.
@dr.aravindacvnmamit3770 14 days ago
I agree with your lecture, and it was very nice. How would one apply this to images like X-rays or CT scans?
@MaryamSadeghi-u6u 18 days ago
You have put a lot of time into creating these videos, and it is really valuable that after 3 years they are still very useful.
@MaryamSadeghi-u6u 18 days ago
Great video, thank you!
@luisperdigao6204 19 days ago
"... GitHub in the link below ..." Where is the link?
@hannespeter1484 19 days ago
Wow, what a great video. Thank you, it helped me a lot.
@giulliabraga9709 23 days ago
I just discovered your channel and THANK YOU!!
@王恺风 27 days ago
I really enjoy this video! It is so concise, comprehensive, and beautiful! And thanks a lot for the many useful links for further learning.
@ishara779 27 days ago
So how are the edge features used in the GCN algorithm? Are they completely ignored? Because according to this explanation, only the node features take part in the convolution process.
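On that question: in PyG, GCNConv can only use a scalar per-edge weight in its normalization, so multi-dimensional edge features are indeed ignored by it; layers such as GINEConv (or NNConv) consume edge_attr explicitly. A minimal sketch, with illustrative dimensions:

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv, GINEConv

x = torch.randn(4, 16)                        # 4 nodes, 16 features each
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
edge_weight = torch.rand(3)                   # one scalar weight per edge
edge_attr = torch.randn(3, 8)                 # 8-dim feature per edge

# GCNConv: only the scalar edge weight enters the normalization.
out1 = GCNConv(16, 32)(x, edge_index, edge_weight)

# GINEConv: edge_attr is injected into the messages from neighbors;
# edge_dim projects the 8-dim edge features to the node feature size.
gine = GINEConv(nn.Sequential(nn.Linear(16, 32)), edge_dim=8)
out2 = gine(x, edge_index, edge_attr)
```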
@VoltVipin_VS 29 days ago
The best part of Vision Transformers is the built-in support for interpretability, compared to CNNs, where we had to compute saliency maps.
@metehkaya96 1 month ago
Perfect video to understand GATs. However, I guess you forgot to add the sigmoid function when you demonstrate h1' as a sum of multiplications of the hi and attention values in the last seconds of the video: 13:51
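For reference, the GAT node update from the original paper, which the timestamped remark is about: the nonlinearity does wrap the attention-weighted sum.

```latex
h_i' = \sigma\!\Big( \sum_{j \in \mathcal{N}(i)} \alpha_{ij} \, W h_j \Big)
```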
@ashishkannad3021 1 month ago
Why are we adding the time embedding to the input features, like literally adding them together? Would a simple concatenation of input features and the time embedding be possible? Btw, dope video, thanks for sharing.
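Both variants appear in practice; addition keeps the channel count fixed so downstream layers are unchanged, while concatenation grows it and requires a wider next layer. A minimal sketch, with illustrative shapes:

```python
import torch

h = torch.randn(8, 64)       # hidden features: batch of 8, 64 channels
t_emb = torch.randn(8, 64)   # time embedding projected to 64 channels

# Addition: channel count stays 64, so the rest of the network is untouched.
h_add = h + t_emb

# Concatenation also works, but the next layer must accept 128 channels.
h_cat = torch.cat([h, t_emb], dim=-1)   # shape [8, 128]
```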
@urveesh09doshi62 1 month ago
I'm making a model from the sx-stackoverflow dataset of SNAP; it only has source, target, and timestamp, and I have no clue how to build a dataset for a TGN from that.
@DhananjaySarkar-y3i 1 month ago
One of the best videos I have ever seen.
@baklavatv4981 1 month ago
Can we do the same for a GIN? Just by changing the GCNConv to GINConv?
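Roughly yes, with one caveat: GINConv does not take (in, out) channel sizes directly but wraps an MLP. A minimal sketch of the swap, assuming the same hidden sizes as a GCNConv(16, 32):

```python
from torch import nn
from torch_geometric.nn import GCNConv, GINConv

gcn = GCNConv(16, 32)

# GINConv is built around a small network that transforms the aggregated
# neighborhood features, so you construct it from an MLP instead of dims.
gin = GINConv(nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 32),
))
```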
@PaxonFrady 1 month ago
Why would the attention adjacency matrix be symmetric? If the weight vector is learnable, then it does matter in which order the two input vectors are concatenated. There doesn't seem to be any reason to enforce symmetry.
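That intuition matches the GAT paper: the raw score depends on the concatenation order, because the learnable vector a weights the two halves differently, so the score matrix is not symmetric in general.

```latex
e_{ij} = \mathrm{LeakyReLU}\big( a^{\top} [\, W h_i \,\Vert\, W h_j \,] \big)
\quad\Rightarrow\quad e_{ij} \neq e_{ji} \ \text{in general}
```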
@anastassiya8526 1 month ago
It was the best explanation, and it gave me hope of understanding these mechanisms. Everything was explained and depicted so well, thank you!
@eransasson20 1 month ago
Thanks for this amazing presentation! This topic, which is not trivial, is also not easy to show in pictures, and you succeeded perfectly. A great help!
@PostmetaArchitect 1 month ago
It's almost as if it's just a normal neural network, but projected onto a graph.
@English-bh1ng 2 months ago
Well-organized video and description, with abundant references. I love this series. Keep it up!
@stevechesney9334 2 months ago
I really appreciate the information you shared in this video/playlist. Do you have an example of where you used the heterogeneous graph data to create a GNN or GCN?
@lw4423 2 months ago
mathematician reeee-man
@kenalexandremeridi 2 months ago
What is this for? I am intrigued. (I'm a musician.)
@tobiaspucher9597 2 months ago
I have trouble fine-tuning the model. Has anyone managed to recreate the results from the paper?
@shubhamtalks9718 2 months ago
Bro, you killed it. Best explanation. Trust me, I have watched all the tutorials, but all the other explanations were shitty. Please create one video on quantization.
@scaredheart6109 2 months ago
AMAZING!
@SaketRamBandi 2 months ago
This might be the best and simplest explanation of GAT one can ever find! Thanks, man.
@xxyyzz8464 2 months ago
Why use dropout with GELU? Didn't the GELU paper specifically say one motivation for GELU was to replace ReLU+dropout with a single GELU layer?
@mahathibodela 2 months ago
It's just awesome... the way you do research before making a video is really fascinating. Can you tell us how you collect relevant papers? It would help a lot when doing projects on our own.
@amulya1284 2 months ago
You make the best explanation videos ever! Is there one on how to train custom models using LoRA?
@RishabNakarmi-rn 2 months ago
Did not get it :( Moving on to other videos.
@SickegalAlien 2 months ago
Banger vid from big dog as always 🐶
@SickegalAlien 2 months ago
Collaborative filtering is such an underrated computational method, imho. Many real-life problems can be re-interpreted as a recommendation problem. It's just a matter of perspective, but the results can be huge!
@newbie8051 2 months ago
Ah, tough to understand; I guess I'll have to read more on this to fully understand it.
@VoltVipin_VS 29 days ago
You should have a deep understanding of the transformer architecture to understand this.
@nicksanders1438 2 months ago
It's a good, concise walk-through with good code implementation examples. However, I'd recommend avoiding ambiguous variable names in the code, like betas, Block, etc.
@pokhrel3794 2 months ago
The best explanation I have found on the internet.
@snsacharya1737 2 months ago
A wonderful and succinct explanation with crisp visualisations of both the attention mechanism and the graph neural network. The way the learnable parameters are highlighted, along with the intuition (such as a weighted adjacency matrix) and the corresponding matrix operations, is very well done.
@redalligator291 2 months ago
Hello, great video. I was just wondering about something you mentioned in the video. At 14:19 you say that graph VAEs might not be the best architecture for graph generation, and I was wondering what other models you recommend that might be better than GVAEs. I am asking because I am working on exactly what you are working on, just for another disease, so I was wondering if you have any recommendations to make the process a lot simpler.
@DeepFindr 2 months ago
Hi, I would look into transformer models like the Smiles Transformer, or diffusion models. In both cases the architecture is more suitable, in my opinion.
@redalligator291 2 months ago
@@DeepFindr Ok, thank you so much. Do you know any specific models among the Smiles Transformer or diffusion families that stand out to you as better than the rest? It would be really helpful to know which model I should build on.
@이길현-p7f 2 months ago
Perfect video.
@rajeshve7211 2 months ago
Fantastic explanation. You made it look easy!
@ycombinator765 2 months ago
bro is educated!
@vimukthisadithya6239 2 months ago
This is the best explanation of LoRA I have found so far!!!