Wentao-Xu / HIST

The source code and data of the paper "HIST: A Graph-based Framework for Stock Trend Forecasting via Mining Concept-Oriented Shared Information".
241 stars 69 forks

Model architecture problem #6

Open mzh-lin opened 2 years ago

mzh-lin commented 2 years ago

Dear authors,

I have some questions about the differences between the code and the formulas in the predefined concept module. It seems that in the paper, market capitalization is used as the initial weight of the predefined concepts. I can understand these steps in the code:

        # Weight each stock's concept membership by its market cap
        market_value_matrix = market_value.reshape(market_value.shape[0], 1).repeat(1, concept_matrix.shape[1])
        stock_to_concept = concept_matrix * market_value_matrix

        # Per-concept market-cap totals, broadcast back to every stock
        stock_to_concept_sum = torch.sum(stock_to_concept, 0).reshape(1, -1).repeat(stock_to_concept.shape[0], 1)
        stock_to_concept_sum = stock_to_concept_sum.mul(concept_matrix)

        # Add ones so the division is well-defined where membership is zero
        stock_to_concept_sum = stock_to_concept_sum + (torch.ones(stock_to_concept.shape[0], stock_to_concept.shape[1]).to(device))
        stock_to_concept = stock_to_concept / stock_to_concept_sum
        # Aggregate stock features into concept representations
        hidden = torch.t(stock_to_concept).mm(x_hidden)

        # Drop concepts with no member stocks
        hidden = hidden[hidden.sum(1)!=0]
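For intuition, the weighting above can be sketched in plain Python with toy data (my own sketch, not the authors' code; `concept_matrix[i][j]` is 1.0 if stock `i` belongs to concept `j`):

```python
# Toy sketch of the market-cap weighting (hypothetical data).
concept_matrix = [
    [1.0, 0.0],
    [1.0, 1.0],
    [0.0, 1.0],
]
market_value = [3.0, 1.0, 2.0]  # hypothetical market caps

n_stock, n_concept = len(concept_matrix), len(concept_matrix[0])

# stock_to_concept[i][j] = membership * market cap of stock i
stock_to_concept = [
    [concept_matrix[i][j] * market_value[i] for j in range(n_concept)]
    for i in range(n_stock)
]

# Column sums = total market cap inside each concept
col_sum = [sum(stock_to_concept[i][j] for i in range(n_stock))
           for j in range(n_concept)]

# Denominator: per-concept total (masked by membership) plus 1,
# mirroring the `+ torch.ones(...)` term in the snippet above
weights = [
    [stock_to_concept[i][j] / (col_sum[j] * concept_matrix[i][j] + 1.0)
     for j in range(n_concept)]
    for i in range(n_stock)
]
```

Each column of `weights` distributes a concept across its member stocks roughly in proportion to market cap, and the `+ 1.0` keeps the division well-defined for empty concepts.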

But I can't understand the following steps, which I can't find in the paper's formulas:

        stock_to_concept = x_hidden.mm(torch.t(hidden))
        # stock_to_concept = cal_cos_similarity(x_hidden, hidden)
        stock_to_concept = self.softmax_s2t(stock_to_concept)
        hidden = torch.t(stock_to_concept).mm(x_hidden)

Does it aim to modify the stock-to-concept weights? I can't find the answer in the paper; can you give me some hints about this? I'd appreciate your response.
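The two matrix products with a softmax in between can be illustrated with a small pure-Python sketch (toy data, my own illustration, not the repo's implementation):

```python
import math

# Toy sketch: re-estimate stock-to-concept weights from the hidden
# features via dot products and a per-stock softmax over concepts.
x_hidden = [[1.0, 0.0], [0.0, 1.0]]   # 2 stocks, hidden dim 2 (toy)
hidden = [[1.0, 0.0], [0.0, 1.0]]     # 2 concept representations (toy)

def matmul(a, b):
    # Plain-Python matrix multiply
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def softmax_rows(m):
    # Numerically stable row-wise softmax
    out = []
    for row in m:
        e = [math.exp(v - max(row)) for v in row]
        s = sum(e)
        out.append([v / s for v in e])
    return out

scores = matmul(x_hidden, [list(c) for c in zip(*hidden)])  # x_hidden @ hidden^T
weights = softmax_rows(scores)       # softmax over concepts for each stock
new_hidden = matmul([list(c) for c in zip(*weights)], x_hidden)  # weights^T @ x_hidden
```

So the step replaces the initial market-cap weights with learned, feature-driven weights before re-aggregating the concept representations.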

BoruiXu commented 2 years ago

I think this part of the code corresponds to Section 4.2.2; it aims to correct the predefined concepts' representations.

Wentao-Xu commented 2 years ago

The reply from @a919480698 is right.

Michelia-L commented 2 years ago

Dear Authors,

I have a question about this part of the code, too. Why is Line 2 commented out? When reproducing your results, do we have to uncomment this line?

        stock_to_concept = x_hidden.mm(torch.t(hidden))
        # stock_to_concept = cal_cos_similarity(x_hidden, hidden)
        stock_to_concept = self.softmax_s2t(stock_to_concept)
        hidden = torch.t(stock_to_concept).mm(x_hidden)

BoruiXu commented 2 years ago

> Dear Authors,
>
> I have a question about this part of the code, too. Why is Line 2 commented out? When reproducing your results, do we have to uncomment this line?
>
>         stock_to_concept = x_hidden.mm(torch.t(hidden))
>         # stock_to_concept = cal_cos_similarity(x_hidden, hidden)
>         stock_to_concept = self.softmax_s2t(stock_to_concept)
>         hidden = torch.t(stock_to_concept).mm(x_hidden)

I think Line 1 and Line 2 are two different similarity methods. The author mentions the Line 2 method (cosine) in the paper. I tried uncommenting Line 2 and commenting out Line 1, and could also reproduce the results.

Michelia-L commented 2 years ago

> Dear Authors, I have a question about this part of the code, too. Why is Line 2 commented out? When reproducing your results, do we have to uncomment this line?
>
>         stock_to_concept = x_hidden.mm(torch.t(hidden))
>         # stock_to_concept = cal_cos_similarity(x_hidden, hidden)
>         stock_to_concept = self.softmax_s2t(stock_to_concept)
>         hidden = torch.t(stock_to_concept).mm(x_hidden)
>
> I think maybe Line 2 and Line 3 are different similarity methods. The author mentioned the Line 2 method (cosine) in the paper. I tried to uncomment Line 2 and comment out Line 3, and could also reproduce the results.

Do you mean Line 2 and Line 1? I think Line 3 is just a normalization step.

BoruiXu commented 2 years ago

Yes, sorry, I actually meant Line 2 and Line 1.

BoruiXu commented 2 years ago

> Yes, sorry, I actually meant Line 2 and Line 1.

> Thank you! I am not sure whether the author uses an attention mechanism instead of cosine similarity to describe the degree of connection between stocks and concepts. Does Line 1 correspond to the "attention mechanism"? Section 4.4 of the paper notes that "we apply the attention mechanism to learn the importance of each concept for a stock," but I can't find it in either the code or the paper's formulas.

I think the author uses Line 2 (cosine) to realize the attention mechanism; calculating cosine similarity is one implementation of attention. But I am not sure.
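For reference, a generic row-wise cosine-similarity matrix looks like the sketch below (my own illustration; the repo's `cal_cos_similarity` may be implemented differently):

```python
import math

def cos_sim_matrix(a, b):
    # Pairwise cosine similarity between the row vectors of a and b
    # (generic sketch; the repo's cal_cos_similarity may differ).
    def norm(v):
        return math.sqrt(sum(x * x for x in v)) or 1.0  # guard against zero vectors
    return [[sum(x * y for x, y in zip(u, v)) / (norm(u) * norm(v))
             for v in b] for u in a]

sims = cos_sim_matrix([[3.0, 4.0]], [[3.0, 4.0], [4.0, -3.0]])
# Identical direction gives 1.0; orthogonal vectors give 0.0.
```

Compared with the raw dot product in Line 1, cosine similarity normalizes away vector magnitudes, so the two lines differ only in that scaling before the softmax.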

Michelia-L commented 2 years ago

> Yes, sorry, I actually meant Line 2 and Line 1.

> Thank you! I am not sure whether the author uses an attention mechanism instead of cosine similarity to describe the degree of connection between stocks and concepts. Does Line 1 correspond to the "attention mechanism"? Section 4.4 of the paper notes that "we apply the attention mechanism to learn the importance of each concept for a stock," but I can't find it in either the code or the paper's formulas.

> I think the author uses Line 2 (cosine) to realize the attention mechanism; calculating cosine similarity is one implementation of attention. But I am not sure.

Thank you anyway!

mzh-lin commented 2 years ago

> I think this part of the code corresponds to Section 4.2.2; it aims to correct the predefined concepts' representations.

I got it, thanks very much! I am still a little confused about the step following formula <7> in Section 4.2.2. I can understand that this part aims to implement formula <6> in Section 4.2.2:

        stock_to_concept = x_hidden.mm(torch.t(hidden))
        # stock_to_concept = cal_cos_similarity(x_hidden, hidden)
        stock_to_concept = self.softmax_s2t(stock_to_concept)
        hidden = torch.t(stock_to_concept).mm(x_hidden)

Formula <7> then adds a fully connected layer, but the following code seems to jump to Section 4.4, and I cannot find the fully connected layer corresponding to formula <7>:

        concept_to_stock = cal_cos_similarity(x_hidden, hidden)
        concept_to_stock = self.softmax_t2s(concept_to_stock)

        e_shared_info = concept_to_stock.mm(hidden)
        e_shared_info = self.fc_es(e_shared_info)

Do we need to implement formula <7> before constructing concept_to_stock? @Wentao-Xu, thanks!
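As a toy illustration of the snippet above (my own sketch, not the repo's code): cosine similarity per stock-concept pair, a row softmax, then aggregation of the concept representations; the learned `fc_es` linear layer is omitted here, since it is just one more affine map on the result:

```python
import math

def cos_sim(a, b):
    # Cosine similarity between two vectors, guarding against zero norms
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def softmax(row):
    # Numerically stable softmax over one row
    e = [math.exp(v - max(row)) for v in row]
    s = sum(e)
    return [v / s for v in e]

x_hidden = [[1.0, 0.0], [0.0, 1.0]]   # stock features (toy)
hidden   = [[1.0, 1.0]]               # one concept representation (toy)

# concept_to_stock: per-stock softmax over concept similarities
concept_to_stock = [softmax([cos_sim(s, c) for c in hidden]) for s in x_hidden]

# e_shared_info = concept_to_stock @ hidden; the learned fc_es layer
# that would follow is left out of this sketch
e_shared_info = [[sum(w * c[d] for w, c in zip(row, hidden))
                  for d in range(len(hidden[0]))]
                 for row in concept_to_stock]
```

With a single concept the softmax row is trivially `[1.0]`, so each stock's shared information is just that concept's representation.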

sdumyh commented 2 years ago

Dear Authors, I have a question about the LeakyReLU activation function. It seems the function is used three times in your paper, but in the code it is used only once.

Wentao-Xu commented 2 years ago

Hi, there may be some errors in the paper's equations; please take the code as the correct standard.