
PyTorch Geometric tutorial: Graph Autoencoders & Variational Graph Autoencoders 

Antonio Longa
3.1K subscribers
26K views
Published: Oct 28, 2024

Comments: 28
@vlogsofanundergrad2034 (1 year ago)
This tutorial doesn't do latent-space visualization, a very important aspect of gauging whether VAEs are being trained correctly. Could you share a video that covers this too?
@JJab0n (2 years ago)
Hello everybody. First of all, thanks for this particular lecture and the whole course. I have one question: is it possible to produce an embedding related not only to the node features but also to the graph structure? I mean, on page 33 of the slides we have a graph with 3 nodes and several node features per node, and it is reduced to 3 nodes with 2 features per node. So, for example, would it be possible to condense the graph to 2 nodes with 2 node features per node and then decode it back to the original graph? Thanks in advance.
@Ripper346. (1 year ago)
Thanks for the tutorial. There's one thing I don't get about the VGAE test: why do we encode using the training edges and not the test set?
@nicolacalabrese5891 (2 years ago)
Hello, thank you for the video, it is very interesting. I want to ask a question, because I found another example that seems more correct to me. Why, in the test function, do you compute the encoding of the train data and then compute the loss, AUC, and AP considering the encoding of the test edges? This is what I mean:

####### in your code #######
def test(pos_edge_index, neg_edge_index):
    model.eval()
    with torch.no_grad():
        z = model.encode(x, train_pos_edge_index)
    return model.test(z, pos_edge_index, neg_edge_index)

####### variation #######
def test(pos_edge_index, neg_edge_index):
    model.eval()
    with torch.no_grad():
        z = model.encode(x, test_pos_edge_index)
    return model.test(z, pos_edge_index, neg_edge_index)
@swatityagi222 (2 years ago)
Thanks for the lecture. If we have text data without labels, can we convert it into a graph for this?
@1000nateriver (3 years ago)
Thanks for the lecture! I have one question: let's say you have a dataframe whose columns can be seen as nodes, and there is a link (edge) between columns based on domain knowledge. How do you convert this into an input that can be used for a VGAE?
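One common recipe for the setup described above, sketched with plain numpy (the column names, links, and features here are hypothetical placeholders): map each column to a node index, list the known links in both directions, since PyG represents an undirected edge as two directed ones, and attach one feature row per column-node.

```python
import numpy as np

# Columns of the dataframe treated as nodes; domain knowledge gives the edges.
columns = ["age", "income", "spending", "region"]
known_links = [("age", "income"), ("income", "spending")]

idx = {name: i for i, name in enumerate(columns)}
pairs = [(idx[a], idx[b]) for a, b in known_links]
# Store each undirected edge in both directions, shape (2, num_edges).
edge_index = np.array(pairs + [(b, a) for a, b in pairs]).T

# Node features x: one row per column-node, e.g. summary statistics of
# that column; an identity matrix is a common featureless placeholder.
x = np.eye(len(columns))
```

From there, `x` and `edge_index` are exactly the two arrays a `Data(x=..., edge_index=...)` object expects before splitting edges for training.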
@94longa2112 (3 years ago)
Hello :) We are going to do a talk about "load your own dataset". It is planned for the second week of May.
@鑫胡-s3x (2 years ago)
@@94longa2112 Hello, has this tutorial been uploaded yet?
@JJab0n (2 years ago)
@@鑫胡-s3x Check tutorials 14 and 15.
@chongtang7778 (3 years ago)
Thanks for the clear explanation! Just one question: normally we use a VAE (or here, a VGAE) as a generative model, but it seems that in your examples we cannot use the decoder on its own, right? How can we call this "InnerProductDecoder"? Something like VGAE.decoder?
@94longa2112 (3 years ago)
Yes, you can call it like Model.decode()
@chongtang7778 (3 years ago)
@@94longa2112 Cool!
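To expand on the exchange above: in PyG, GAE/VGAE store an InnerProductDecoder as `model.decoder`, `model.decode(z, edge_index)` forwards to it, and `model.decoder.forward_all(z)` returns the dense N x N probability matrix. A numpy sketch of what that decoder computes (my re-implementation for illustration, not PyG code):

```python
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def decode(z, edge_index):
    # Edge-wise inner product: one probability per queried edge.
    src, dst = edge_index
    return sigmoid((z[src] * z[dst]).sum(axis=1))

def decode_all(z):
    # Dense reconstruction: sigmoid(z @ z.T), an N x N probability matrix.
    return sigmoid(z @ z.T)

z = np.array([[ 1.0, 0.0],
              [ 1.0, 0.0],
              [-1.0, 0.0]])
A_recon = decode_all(z)   # similar embeddings -> high edge probability
```

So the decoder is usable stand-alone: hand it any latent matrix `z`, whether it came from the encoder or was sampled, and it returns edge probabilities.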
@davidoliveira7465 (3 years ago)
Great lecture, mate!
@EverythingDatawithHafeezJimoh
Beautiful lecture and tutorial. On slide 59, is it q(z|x) that we do not know, or p(z|x)? Can you confirm?
@zijiali8349 (1 month ago)
I believe it's p(z|x). q(.) is fully parameterized, so we know it.
@yumlembamrahul6457 (3 years ago)
Can it be applied to multiple graphs instead of a single graph? What will pos_edge_index and neg_edge_index be? It would help a lot if you could provide some pointers.
@guangxzhu4019 (3 years ago)
Here is my guess. We apply Sigmoid(z * z^T) to get the reconstructed adjacency matrix A_recon. z has shape (N, d), so A_recon has shape (N, N), the same as the original adjacency matrix. The sigmoid squashes each entry into the range (0, 1), giving a probability matrix: values near 0 mean no connection (neg_edge_index) and values near 1 mean a connection (pos_edge_index). We want A_recon to be as close as possible to the original A, so in the loss calculation (in the source code) we push the values at neg_edge_index lower and the values at pos_edge_index higher to reconstruct the original A. That is the use of pos and neg. Just my guess.
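The loss described in the comment above can be sketched directly. This mirrors the shape of PyG's `GAE.recon_loss`, a binary cross-entropy over positive and sampled negative edges, though the implementation here is my own simplification in numpy:

```python
import numpy as np

def edge_prob(z, edges):
    # Inner-product decoder for the queried edges.
    src, dst = edges
    return 1 / (1 + np.exp(-(z[src] * z[dst]).sum(axis=1)))

def recon_loss(z, pos_edge_index, neg_edge_index, eps=1e-15):
    # Push decoded values toward 1 at positive edges ...
    pos_loss = -np.log(edge_prob(z, pos_edge_index) + eps).mean()
    # ... and toward 0 at (sampled) negative edges.
    neg_loss = -np.log(1 - edge_prob(z, neg_edge_index) + eps).mean()
    return pos_loss + neg_loss

z = np.array([[2.0], [2.0], [-2.0]])
pos = (np.array([0]), np.array([1]))   # real edge: z0.z1 = 4  -> prob near 1
neg = (np.array([0]), np.array([2]))   # non-edge:  z0.z2 = -4 -> prob near 0
loss = recon_loss(z, pos, neg)         # small, since both terms are satisfied
```

Minimizing this loss over z is exactly the "push pos up, push neg down" behavior the comment describes; the VGAE adds a KL term on top of it.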
@yumlembamrahul6457 (3 years ago)
@@guangxzhu4019 Thank you!!! That makes a lot of sense.
@yumlembamrahul6457 (3 years ago)
@@guangxzhu4019 Will it be the same if I want to encode multiple graphs?
@guangxzhu4019 (3 years ago)
@@yumlembamrahul6457 Multiple graphs should work. I guess all the graphs should have the same shape, e.g. padded to the same shape (both A and X).
@francescserratosa3284 (2 years ago)
I suppose this is a very simple question; I'm sorry about that. The GCN receives a graph as input and produces a vector as output. Usually, the length of this output vector is the number of nodes in the graph. So how can you concatenate two GCNs? The input of the second one is going to be a vector instead of a graph. Thank you for your video.
@sarash3126 (2 years ago)
Hi, I think there is a mistake in the formulation of the variational autoencoder reconstruction loss.
@padisarala4114 (1 year ago)
How do you create a graph from a custom dataset that will work with the autoencoders?
@anowarulkabir4943 (2 years ago)
Thanks a lot for a very nice tutorial. I have a quick question. At 33:52, for learning mu and sigma, you said they share the learning parameter W_1; however, in VariationalGCNEncoder, self.conv_mu and self.conv_logstd were constructed as separate GCNConv instances, so they do not share the same parameters.
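The comment above points at a real subtlety: in the usual VariationalGCNEncoder only the first convolution is shared, while conv_mu and conv_logstd are two independent layers with their own weights. A numpy sketch of that layout (toy propagation rule and hypothetical weight names, standing in for the GCNConv layers):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))        # shared first-layer weights
W_mu = rng.normal(size=(4, 2))      # head producing mu       (NOT shared)
W_logstd = rng.normal(size=(4, 2))  # head producing log(std) (NOT shared)

def encode(x, adj):
    # One shared GCN-like layer with ReLU, then two separate linear heads,
    # mirroring conv1 -> (conv_mu, conv_logstd) in VariationalGCNEncoder.
    h = np.maximum(adj @ x @ W1, 0)
    return adj @ h @ W_mu, adj @ h @ W_logstd

adj = np.eye(5)                     # toy graph: self-loops only
x = rng.normal(size=(5, 8))
mu, logstd = encode(x, adj)
```

So "shared" applies to the trunk that produces `h`; the two heads are free to diverge, which is why mu and log(std) come out different for the same input.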
@carloamodeo7877 (3 years ago)
Thanks a lot for this fantastic series on ML applied to graphs. I have a couple of questions:
- Are the W0 and W1 terms in the GCNs factors that are randomly initialized and then optimized during the learning process?
- Regarding the code you implemented, the structure of the dataset you used is not clear to me: data.x is the matrix of node features, but what exactly is data.edge_index? Does it relate to the adjacency matrix?
- Why do we split the dataset into train and test sets if we are doing unsupervised learning?
- Suppose I have a whole dataset of sparse matrices (only 1s and 0s) that I want to treat as the adjacency matrices of my graphs; how can I create the data object needed as input to the variational autoencoder described (and implemented) in this video?
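On the data.edge_index question in the comment above: yes, edge_index is just the sparse (COO) form of the adjacency matrix, a 2 x num_edges integer array whose first row holds source nodes and second row holds target nodes. A small numpy illustration of converting in both directions:

```python
import numpy as np

adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]])

# Adjacency -> edge_index: row 0 = source nodes, row 1 = target nodes.
edge_index = np.stack(np.nonzero(adj))

# edge_index -> adjacency: scatter ones at the listed (src, dst) positions.
adj_back = np.zeros((3, 3), dtype=int)
adj_back[edge_index[0], edge_index[1]] = 1
```

This is also the answer to the last question: given a sparse 0/1 matrix, `np.nonzero` (or the equivalent on a torch tensor) produces the edge_index to put into a Data object.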
@motiurrahman (3 years ago)
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-qA6U4nIK62E.html - why are the edges not in the graph?
@94longa2112 (3 years ago)
Hello Motiur, we had a look at the original code from PyTorch Geometric, here: pytorch-geometric.readthedocs.io/en/latest/modules/utils.html#torch_geometric.utils.train_test_split_edges pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/utils/train_test_split_edges.html#train_test_split_edges
@陈宸-r7g (3 years ago)
cute