I make videos about all kinds of Machine Learning / Data Science topics and am happy to share what I've learned.
If you enjoy the content and want to support me (only if you want!), these are the current options:
►Share this channel: bit.ly/3zEqL1W
►Support me on Patreon: bit.ly/2Wed242
►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl
Amazing video! I have a question about the attributes of a single data point. Each data point has both x and y. The features themselves are the speeds, which are the ground truth, right? We want to train our model to predict the next 12 timestamps and compare them with the values in x. So what is the significance of y? (The documentation says y is the ground truth, but then x and y should contain the same values, yet they differ. The only difference should be that x additionally has the time of day as a second feature for each of the 12 timestamps per data point.) Could you kindly clarify this?
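For anyone wondering the same: a minimal sketch of the shapes involved, assuming the dataset is METR-LA loaded via torch_geometric_temporal (an assumption on my part; the exact loader used in the video may differ):

    from torch_geometric_temporal.dataset import METRLADatasetLoader

    loader = METRLADatasetLoader()
    # 12 input timesteps in x, 12 future timesteps in y
    dataset = loader.get_dataset(num_timesteps_in=12, num_timesteps_out=12)

    snapshot = next(iter(dataset))
    print(snapshot.x.shape)  # [num_nodes, 2, 12]: speed + time of day for the 12 input steps
    print(snapshot.y.shape)  # [num_nodes, 12]: the 12 future speeds, i.e. the prediction target

If this is the setup in question, x and y cover different time windows: x holds the past observations (speed plus time of day) and y holds only the future speeds the model is trained against, which would explain why their values differ.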
So how are the edge features used in the GCN algorithm? Are they completely ignored? Because according to this explanation, only the node features take part in the convolution process.
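A quick sketch of what that means in code, assuming the PyTorch Geometric implementation (my assumption; the video's code may differ): GCNConv accepts at most one scalar edge_weight per edge, so multi-dimensional edge features are indeed ignored by a plain GCN.

    import torch
    from torch_geometric.nn import GCNConv

    x = torch.randn(4, 8)                       # 4 nodes, 8 features each
    edge_index = torch.tensor([[0, 1, 2],
                               [1, 2, 3]])      # 3 directed edges
    edge_weight = torch.rand(3)                 # one scalar per edge is all GCN can use

    conv = GCNConv(in_channels=8, out_channels=16)
    out = conv(x, edge_index, edge_weight)      # weights only rescale the aggregation

Layers such as NNConv, or GATConv with edge_dim set, are common choices when full edge feature vectors should participate in the convolution.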
Perfect video for understanding GATs. However, I think you forgot to add the sigmoid function when you demonstrate h1' as a sum of the products of the hi and the attention values, in the last seconds of the video: 13:51
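For reference, the node update in the GAT paper does wrap the weighted sum in a nonlinearity \sigma (written here in the paper's notation; the video's symbols may differ slightly):

    h_i' = \sigma\Big( \sum_{j \in \mathcal{N}(i)} \alpha_{ij} W h_j \Big)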
Why are we adding the time embedding to the input features, as in literally summing them? Would a simple concatenation of the input features and the time embedding also be possible? Btw, dope video, thanks for sharing!
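A minimal sketch of the two options (the tensor names here are illustrative, not taken from the video):

    import torch

    features = torch.randn(32, 64)   # batch of 32 inputs, model dimension 64
    time_emb = torch.randn(32, 64)   # time embedding projected to the same dimension

    summed = features + time_emb                             # addition: dimension stays 64
    concatenated = torch.cat([features, time_emb], dim=-1)   # concat: dimension grows to 128

Concatenation is possible too, but every downstream layer then has to accept the larger width, whereas addition keeps the model dimension fixed; that is the usual reason positional/time encodings are added rather than concatenated.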
Why would the attention adjacency matrix be symmetric? If the weight vector is learnable, then the order in which the two input vectors are concatenated matters. It doesn't seem like there would be any reason to enforce symmetry.
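To make the asymmetry concrete, the raw GAT attention score (in the paper's notation) is

    e_{ij} = \mathrm{LeakyReLU}\big( a^\top [\, W h_i \,\|\, W h_j \,] \big)

Swapping i and j reverses the concatenation, and since the learnable vector a treats the two halves differently, in general e_{ij} \neq e_{ji}; the matrix is only symmetric if symmetry is explicitly enforced, e.g. by averaging e_{ij} and e_{ji}.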
I really appreciate the information that you shared in this video/playlist. Do you have an example where you used heterogeneous graph data to create a GNN or GCN?
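Not from the video, but a minimal sketch of what heterogeneous graph data looks like, assuming PyTorch Geometric's HeteroData (the node and edge types below are made up for illustration):

    import torch
    from torch_geometric.data import HeteroData

    data = HeteroData()
    data['user'].x = torch.randn(3, 16)     # 3 'user' nodes with 16 features
    data['movie'].x = torch.randn(5, 32)    # 5 'movie' nodes with 32 features
    data['user', 'rates', 'movie'].edge_index = torch.tensor([[0, 1, 2],
                                                              [1, 3, 4]])

    # torch_geometric.nn.to_hetero() can then lift a homogeneous GNN onto this typed graph.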
Bro, you killed it. Best explanation. Trust me, I have watched all the tutorials, but all the other explanations were shitty. Please create a video on quantization.
It's just awesome... the way you do research before making a video is really fascinating. Could you tell us how you collect the relevant papers? It would help a lot when doing projects on our own.
Collaborative filtering is such an underrated computational method, imho. Many real-life problems can be reinterpreted as a recommendation problem. It's just a matter of perspective, but the results can be huge!
It's a good, concise walk-through with good code implementation examples. However, I'd recommend avoiding ambiguous variable names in the code, such as betas and Block.
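As an illustration of that point (the rename below is hypothetical, not from the video's code):

    import torch

    betas = torch.linspace(1e-4, 0.02, 300)                  # ambiguous out of context
    noise_schedule_betas = torch.linspace(1e-4, 0.02, 300)   # same values, self-explanatory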
A wonderful and succinct explanation with crisp visualisations of both the attention mechanism and the graph neural network. The way the learnable parameters are highlighted alongside the intuition (such as a weighted adjacency matrix) and the corresponding matrix operations is very well done.
Hello, great video. I was just wondering about something you mentioned in the video. At 14:19 you say that graph VAEs might not be the best architecture for graph generation, and I was wondering which other models you would recommend that might be better than GVAEs. I am asking because I am working on exactly the same thing you are, just for another disease, so I was wondering if you have any recommendations that would make the process a lot simpler.
@@DeepFindr Ok, thank you so much. Do you know of any specific SMILES Transformer or diffusion models that stand out to you as better than the rest? It would be really helpful to know which model I should build on.