This tutorial doesn't cover latent-space visualization, which is a very important aspect of gauging whether a VAE is being trained correctly. Could you share a video that covers this too?
Hello everybody. First of all, thanks for this particular lecture and the whole course. I have one inquiry: is it possible to produce an embedding related not only to the node features but also to the graph structure? I mean, if we go to slide 33, we have a graph with 3 nodes and several node features per node, and it is reduced to 3 nodes with 2 features per node. So, for example, would it be possible to condense the graph to 2 nodes with 2 node features per node and then decode it back to the original graph? Thanks in advance.
Hello, thank you for the video, it is very interesting. I want to ask you a question, because I have found another example and it seems more correct to me. Why, in the test function, do you compute the encoding using the train data and then compute the loss, AUC, and AP on the test edges? This is what I mean:

####### in your code #######
def test(pos_edge_index, neg_edge_index):
    model.eval()
    with torch.no_grad():
        z = model.encode(x, train_pos_edge_index)
    return model.test(z, pos_edge_index, neg_edge_index)

####### variation #######
def test(pos_edge_index, neg_edge_index):
    model.eval()
    with torch.no_grad():
        z = model.encode(x, test_pos_edge_index)
    return model.test(z, pos_edge_index, neg_edge_index)
Thanks for the lecture! I have one question: Lets say you have a dataframe with columns that can be seen as nodes and there is a link(edge) between the columns based on domain knowledge. How do you convert this into an input that can be used for GVAE?
Thanks for the clear explanation! Just one question: normally we use a VAE (or here, a VGAE) as a generative model. But it seems that in your examples we cannot use the decoder individually, right? How can we call this "InnerProductDecoder"? Something like VGAE.decoder?
Can it be applied to multiple graphs instead of a single graph? What would pos_edge_index and neg_edge_index be? It would help a lot if you could provide some pointers.
I would like to offer my guess. We compute Sigmoid(z·zᵀ) to get the reconstructed adjacency matrix A_recon. Since z has shape (N × d), A_recon has shape (N × N), the same as the original A. The sigmoid squashes each entry into a probability in (0, 1): values near 0 mean no connection (neg_edge_index) and values near 1 mean a connection (pos_edge_index). We want A_recon to be as close as possible to the original A. So in the loss calculation in the source code, we push the values at neg_edge_index lower and the values at pos_edge_index higher, to reconstruct the original A. That is the use of pos and neg. Just my guess, and sorry for my poor English.
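The guess above matches how the VGAE reconstruction loss works. A self-contained sketch in plain PyTorch (shapes and edge indices are illustrative):

```python
import torch

N, d = 6, 4
z = torch.randn(N, d)                 # latent node embeddings, shape (N, d)

# Reconstructed adjacency: sigmoid(z z^T), shape (N, N), entries in (0, 1).
A_recon = torch.sigmoid(z @ z.t())

# Existing edges (pos) and sampled non-edges (neg), as index pairs.
pos_edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
neg_edge_index = torch.tensor([[0, 4], [5, 3]])

pos_prob = A_recon[pos_edge_index[0], pos_edge_index[1]]
neg_prob = A_recon[neg_edge_index[0], neg_edge_index[1]]

# Binary cross-entropy: push pos probabilities toward 1, neg toward 0.
recon_loss = -torch.log(pos_prob + 1e-15).mean() \
             - torch.log(1 - neg_prob + 1e-15).mean()
```

Minimizing recon_loss does exactly what the comment describes: it raises A_recon at the positive edges and lowers it at the negative ones.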
I suppose this is a very simple question; I'm sorry about that. A GCN receives a graph as input and produces a vector as output. Usually the length of this output vector is the number of nodes in the graph. So how can you concatenate two GCNs? The input of the second one would be a vector instead of a graph. Thank you for your video.
Thanks a lot for a very nice tutorial. I have a quick question. At 33:52, for learning mu and sigma, you said they have a shared learning parameter W_1; however, in VariationalGCNEncoder, self.conv_mu and self.conv_logstd were constructed as separate GCNConv instances, so they do not share the same parameters.
Thank you very much for this fantastic series on ML applied to graphs. I have a couple of questions:
- Are the W0 and W1 terms in GCNs factors that are initialized randomly and then optimized during the learning process?
- Regarding the code you implemented, the structure of the dataset you used is not clear to me: data.x is the node-feature matrix, but what exactly is data.edge_index? Does it relate to the adjacency matrix?
- Why do we split the dataset into a train set and a test set if we are doing unsupervised learning?
- Suppose I have an entire dataset of sparse matrices (only 1s and 0s) that I want to treat as the adjacency matrices of my graphs; how can I create the data type needed as input for the variational autoencoder described (and implemented) in this video?
Hello Motiur, we had a look at the original code from PyTorch Geometric, here: pytorch-geometric.readthedocs.io/en/latest/modules/utils.html#torch_geometric.utils.train_test_split_edges pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/utils/train_test_split_edges.html#train_test_split_edges