nzc / dnn_ctr

A framework for the CTR prediction problem. The project contains FNN, PNN, DeepFM, NFM, etc.

For dcn model #6

Open jiarenyf opened 6 years ago

jiarenyf commented 6 years ago

In https://github.com/nzc/dnn_ctr/blob/c750fec4ba21134a08b8048be2d2ae992d587806/model/DCN.py#L232, x_0 * x_l should be replaced by torch.matmul(x_0, x_l.t()), right?

nzc commented 6 years ago

In my code, x_l's size is [batch_size, field_size * embedding_size]. In the DCN paper, x_l's size is [field_size * embedding_size], so the result of x_0 * x_l^T is rank-one. Expanding to [batch_size, field_size * embedding_size], the result's size should be [batch_size, 1]. If you use torch.matmul(x_0, x_l.t()), the size is wrong.
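A minimal NumPy sketch of this shape argument (the batch and dimension values are illustrative, not from the repo): with batched x_0 and x_l of shape [batch, d], a naive torch.matmul(x_0, x_l.t()) would produce a [batch, batch] matrix that mixes samples, while the per-sample x_0 x_l^T from the paper is a rank-one [d, d] matrix.

```python
import numpy as np

batch, d = 4, 6  # d stands in for field_size * embedding_size (illustrative)
rng = np.random.default_rng(0)
x0 = rng.normal(size=(batch, d))
xl = rng.normal(size=(batch, d))

# Transposing the whole batched tensor mixes samples across the batch:
wrong = x0 @ xl.T
assert wrong.shape == (batch, batch)  # not a per-sample cross term

# Per sample, x0 @ xl^T is a rank-one (d, d) matrix, as in the paper:
outer = x0[0][:, None] @ xl[0][None, :]
assert outer.shape == (d, d)
assert np.linalg.matrix_rank(outer) == 1
```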

nzc commented 6 years ago

@jiarenyf

jiarenyf commented 6 years ago

The shapes of x0, xl, wl, bl are all [fieldSize*embeddingSize, 1] (ignoring the batch axis), and the formulation should be x_{l+1} = matmul(matmul(x0, xl.T), wl) + bl + xl, as in the cross-layer equation from the DCN paper (equation image omitted) ...

I am not familiar with PyTorch, but in MXNet I use batch_dot to implement the calculation of xl, as in here ...

nzc commented 6 years ago

@jiarenyf The x_0 in my code is two-dimensional, and I do the same thing as batch_dot in my code.

jiarenyf commented 6 years ago

But the result of x0 * xl^T is not rank-one: its shape should be [batchSize, fieldSize*embeddingSize, fieldSize*embeddingSize], and the shape of x0 * xl^T * wl is [batchSize, fieldSize*embeddingSize] ... Here I use * to represent matmul ...
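The two readings can be reconciled: matmul is associative, so x0 * xl^T * wl = x0 * (xl^T * wl), and xl^T * wl is a per-sample scalar. An implementation can therefore scale x0 by that scalar without ever materializing the [batchSize, d, d] outer product. A NumPy sketch (shapes and variable names are illustrative, not the repo's code):

```python
import numpy as np

batch, d = 4, 6  # d stands in for field_size * embedding_size (illustrative)
rng = np.random.default_rng(1)
x0 = rng.normal(size=(batch, d))
xl = rng.normal(size=(batch, d))
wl = rng.normal(size=(d,))
bl = rng.normal(size=(d,))

# Explicit form: batched outer product (batch, d, d), then contracted with wl.
outer = np.einsum('bi,bj->bij', x0, xl)      # x0 @ xl^T per sample
explicit = np.einsum('bij,j->bi', outer, wl) + bl + xl

# Associativity trick: xl^T @ wl is a per-sample scalar of shape (batch, 1),
# so the cross term is just x0 rescaled; no (d, d) matrix is ever formed.
scalar = (xl @ wl)[:, None]
cheap = x0 * scalar + bl + xl

assert np.allclose(explicit, cheap)
```

This is why an elementwise product with a per-sample scalar can compute the same x_{l+1} as the explicit matmul(matmul(x0, xl.T), wl) form, at O(d) memory per sample instead of O(d^2).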