maxiaoba / GRAPE

MIT License
137 stars 29 forks source link

About the edge embedding #5

Open Abraham12580 opened 3 years ago

Abraham12580 commented 3 years ago

As the paper writes, the edge values are either continuous or discrete. But how are the edge values transformed into edge embeddings? What does an edge embedding look like? I don't see this explained clearly in the paper. Hoping the authors can kindly reply.

Abraham12580 commented 3 years ago

(screenshot attached)

maxiaoba commented 3 years ago

The initial edge embedding is the known attribute value from the data matrix; it is updated by equation 3 in section 3.3 of the paper.

Abraham12580 commented 3 years ago

Thanks for your reply. I have seen those words in your paper. I'm sorry that I haven't expressed myself very well. What actually puzzles me is the form of the edge embedding. In other words, is the edge embedding a real number taken from the data matrix, or a vector transformed from it? From Figure 1 in the paper, the edge embedding of the edge O1-F1 looks like a real number, 0.3. But I don't think a single real number can carry the complex information between sample nodes and feature nodes. And if the edge embedding is a vector, I don't know how it would be transformed from a single real number like 0.3.


maxiaoba commented 3 years ago

The initial edge embedding is a real number from the data matrix, and this is enough because it contains all we know from the original data matrix. Then, in the following layers, it is updated to a vector by equation 3 in section 3.3 of the paper, which encodes the additional statistical information gathered through message passing.
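A minimal NumPy sketch of the idea described above, with hypothetical choices (hidden size 16, a ReLU-activated linear map with random weights): the known scalar entry, e.g. 0.3, enters as the initial edge embedding, and the first layer lifts it to a vector that later layers keep updating.

```python
import numpy as np

# Hypothetical sketch, not the authors' exact code: the edge embedding
# starts as the raw scalar from the data matrix and a learned linear map
# (random weights here, purely for illustration) lifts it to a
# d-dimensional vector at the first message-passing layer.
rng = np.random.default_rng(0)
d = 16                        # hidden edge-embedding size (assumed)
e_uv_init = np.array([0.3])   # known entry D[u, v] from the data matrix
W = rng.standard_normal((d, 1))
b = np.zeros(d)
e_uv_layer1 = np.maximum(W @ e_uv_init + b, 0.0)  # ReLU(W * 0.3 + b)
print(e_uv_layer1.shape)  # (16,)
```

From layer 1 onward the edge embedding is a vector, so it can carry richer information than the original scalar.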

Alex-Mathai-98 commented 3 years ago

Thanks for this great repository! I had a question regarding equation (3).

@maxiaoba, @dingdaisy, @JiaxuanYou - in equation (3) I see that you use h_{v}^{l-1}, the previous-layer embedding of the node v that you want to update.

So if I understand correctly, to update node v we look at the embedding of node v itself and the connecting edge e_{uv}.

In the code, on line 64 of egsage.py, I see this concatenation

m_j = torch.cat((x_j, edge_attr),dim=-1)            
m_j = self.message_activation(self.message_lin(m_j)) 

From what I understand, x_j is the embedding of the neighbouring node (not the node itself). If this is true, I see a conflict between the equation and the code.

Can you help me to better understand this ?

I am on a tight deadline - I would really appreciate any help. Thanks so much !
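For what it's worth, a sketch of PyTorch Geometric's indexing convention (which egsage.py appears to build on) may clarify what `x_j` refers to: with the default flow `source_to_target`, tensors suffixed `_j` are gathered from the source (neighbour) node of each edge and `_i` from the target node being updated. NumPy is used here only to mimic the gather step.

```python
import numpy as np

# Sketch of PyTorch Geometric's convention: for an edge (u, v) with
# flow 'source_to_target', x_j = x[edge_index[0]] (neighbour / source
# embeddings) and x_i = x[edge_index[1]] (target-node embeddings).
x = np.array([[1.0], [2.0], [3.0]])   # node embeddings, one row per node
edge_index = np.array([[0, 1],        # row 0: source nodes u
                       [2, 2]])       # row 1: target nodes v
x_j = x[edge_index[0]]   # neighbour embeddings, shape (num_edges, 1)
x_i = x[edge_index[1]]   # embeddings of the nodes being updated
print(x_j.ravel())  # [1. 2.]
```

So `x_j` in the message function is indeed the neighbour's embedding; whether that matches or conflicts with equation (3) is the question left open in this thread.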

voladorlu commented 3 years ago

@maxiaoba I have a question about the edge embedding. I note that discrete and continuous features are handled differently: a discrete feature is transformed into a one-hot vector, while a continuous feature keeps its original value. From my understanding, the resulting edge features are then not aligned: the transformed discrete feature is a vector, while the continuous feature is a scalar, yet both node and edge features have to be stored in a single tensor. Could you please help me with this issue? It would be great if you could point out the piece of code that deals with this problem.
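One common way to fit mixed-width edge features into a single tensor is to zero-pad every feature vector to a shared width. The sketch below is purely illustrative and not necessarily how GRAPE's data loader does it; the `pad_to` helper and the feature widths are hypothetical.

```python
import numpy as np

# Hypothetical illustration: store a one-hot discrete edge feature and a
# scalar continuous edge feature in one tensor by zero-padding the
# shorter feature up to the maximum width.
def pad_to(vec, width):
    out = np.zeros(width)
    out[: len(vec)] = vec
    return out

discrete = np.array([0.0, 1.0, 0.0])  # one-hot over 3 categories
continuous = np.array([0.3])          # raw scalar value
max_dim = max(len(discrete), len(continuous))
edge_attr = np.stack([pad_to(discrete, max_dim),
                      pad_to(continuous, max_dim)])
print(edge_attr.shape)  # (2, 3)
```

Since the first GNN layer applies a learned linear map to the edge features anyway, zero-padded slots simply contribute nothing to the projection.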

guoanu commented 3 weeks ago

@voladorlu I have this problem too, did you solve it? My dataset is time-series data with 365 daily features, and I want to use this model to perform imputation.

Abraham12580 commented 3 weeks ago

Hello, I have received your email. Wishing you smooth work and happiness every day~ Tang Zichao