# Deep Interest Network for Click-Through Rate Prediction
This code is a demo that implements DIN on the Amazon dataset. Unfortunately, the code quality of this version is really poor and it has many problems, so we did a major refactoring and published the new version of the code in DIEN and XDL. We strongly recommend using DIEN. Moreover, an incorrect application of BatchNorm in DIN produced a faulty experimental result. We have fixed the bug and report the new results of DIN on Amazon (Electronics):
| Model | GAUC |
| --- | --- |
| PNN | 0.8679 |
| deepFM | 0.8683 |
| DIN | 0.8698 |
| DIN with Dice | 0.8711 |
The updated training log can be found in the DIN folder.
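GAUC here is the per-user AUC averaged with impression weights, as defined in the DIN paper. The snippet below is a minimal illustrative sketch of that metric, not code from this repo; the function name, input layout, and the scikit-learn dependency are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def gauc(user_ids, labels, scores):
    """Impression-weighted average of per-user AUC (illustrative sketch)."""
    user_ids, labels, scores = map(np.asarray, (user_ids, labels, scores))
    weighted_auc, total_weight = 0.0, 0.0
    for u in np.unique(user_ids):
        mask = user_ids == u
        y, s = labels[mask], scores[mask]
        if y.min() == y.max():
            # AUC is undefined when a user has only positives or only negatives
            continue
        weight = mask.sum()  # number of impressions for this user
        weighted_auc += weight * roc_auc_score(y, s)
        total_weight += weight
    return weighted_auc / total_weight
```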
This is an implementation of the paper *Deep Interest Network for Click-Through Rate Prediction* by Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Han Zhu, Ying Fan, Na Mou, Xiao Ma, Yanghui Yan, Xingya Dai, Junqi Jin, Han Li, and Kun Gai.
Thanks to Jinze Bai and Chang Zhou.
Bibtex:

```bibtex
@article{Zhou2017Deep,
  title={Deep Interest Network for Click-Through Rate Prediction},
  author={Zhou, Guorui and Song, Chengru and Zhu, Xiaoqiang and Ma, Xiao and Yan, Yanghui and Dai, Xingya and Zhu, Han and Jin, Junqi and Li, Han and Gai, Kun},
  year={2017},
}
```
Download the Amazon dataset and preprocess it into the `raw_data/` folder:

```bash
mkdir raw_data/
cd utils
bash 0_download_raw.sh
python 1_convert_pd.py
python 2_remap_id.py
```
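Conceptually, the preprocessing scripts convert the raw review and metadata dumps into pandas frames and remap the string IDs (users, items, categories) to contiguous integers for embedding lookups. The snippet below is only a sketch of that remapping idea; the file path and column names are assumptions, not the exact code of `2_remap_id.py`.

```python
import pandas as pd

def build_map(df, col):
    """Replace the string values in `col` with contiguous integer ids."""
    keys = sorted(df[col].unique())
    key2idx = {k: i for i, k in enumerate(keys)}
    df[col] = df[col].map(key2idx)
    return key2idx

# hypothetical intermediate file produced by the convert step
reviews = pd.read_pickle('../raw_data/reviews.pkl')
user_map = build_map(reviews, 'reviewerID')  # user id  -> int
item_map = build_map(reviews, 'asin')        # item id  -> int
print(len(user_map), 'users,', len(item_map), 'items')
```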
This implementation contains not only the DIN method but also the competitors' methods, including Wide&Deep, PNN, and DeepFM. The training procedure for all methods is as follows:
Step 1: Choose a method and enter its folder.

```bash
cd din
```

Alternatively, you can run the competitors' methods by entering their folders instead (`cd deepFM`, `cd pnn`, or `cd wide_deep`) and following the same instructions below.
Step 2: Build the dataset adapted to the chosen method.

```bash
python build_dataset.py
```
We also provide a preprocessed `dataset.pkl` in `DeepInterestNetwork/din`. Because of GitHub's 100 MB file size limit, it is split into three files, `aa`, `ab`, and `ac`. Reassemble it with:

```bash
cat aa ab ac > dataset.pkl
```
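A quick way to sanity-check the reassembled file is to unpickle it. The tuple layout below is an assumption about how `train.py` unpacks `dataset.pkl`; adjust it if your version differs.

```python
import pickle

with open('dataset.pkl', 'rb') as f:
    # assumed layout: training pairs, test pairs, item-to-category table, counts
    train_set, test_set, cate_list, (user_count, item_count, cate_count) = pickle.load(f)

print('train samples:', len(train_set))
print('test samples:', len(test_set))
print('users/items/categories:', user_count, item_count, cate_count)
```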
Step 3: Start training and evaluating in the background using the default arguments.

```bash
python train.py > log.txt 2>&1 &
```
Step 4: Check training and evaluating progress.

```bash
tail -f log.txt
tensorboard --logdir=save_path
```
## Dice
There is also an implementation of Dice in the `din` folder. You can try Dice by following the code annotations in `din/model.py`, or by replacing `model.py` with `model_dice.py`.
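For reference, Dice is the data-adaptive activation from the DIN paper: a PReLU whose rectification point follows the mini-batch statistics of its input, `f(s) = p(s) * s + (1 - p(s)) * alpha * s` with `p(s) = sigmoid((s - E[s]) / sqrt(Var[s] + eps))`. The NumPy sketch below shows only the training-time formula; the TensorFlow version in this repo additionally keeps moving averages of the statistics for inference.

```python
import numpy as np

def dice(s, alpha, eps=1e-8):
    """Training-time Dice activation over a mini-batch (illustrative sketch).

    s:     pre-activations of shape (batch, units)
    alpha: learnable negative-side slope of shape (units,)
    """
    mean = s.mean(axis=0, keepdims=True)
    var = s.var(axis=0, keepdims=True)
    # control gate p(s): how much of the identity branch to keep
    p = 1.0 / (1.0 + np.exp(-(s - mean) / np.sqrt(var + eps)))
    return p * s + (1.0 - p) * alpha * s
```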