Thank you very much for sharing!
I have read your paper and code, and I have a small question.
The paper states: "For Wide&Deep, DeepCross and AFM, we find that pre-training their feature embeddings with FM leads to a lower RMSE than a random initialization. As such, we report their performance with pre-training."
How exactly is this pre-training carried out on the command line?
When I run the AFM.py command from the README, it fails because the pretrained model cannot be found. Even after placing the FM pretrained model into the pretrain/afm_ml-tag_16 folder, it still cannot be loaded because the parameters in the meta file do not match. How should I handle this?
(Also, since FM defaults to hidden_factor=16, shouldn't --hidden_factor [8,256] in the AFM command line be changed to --hidden_factor [16,16]?)
Train the FM model and save it as the pretrain file:
python FM.py --dataset ml-tag --epoch 100 --pretrain -1 --batch_size 4096 --hidden_factor 256 --lr 0.01 --keep 0.7
Train the AFM model using the pretrained weights from FM:
python AFM.py --dataset ml-tag --epoch 100 --pretrain 1 --batch_size 4096 --hidden_factor [8,256] --keep [1.0,0.5] --lamda_attention 2.0 --lr 0.1
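For what it's worth, the key constraint is that the embedding table saved by FM must have exactly the shape that AFM declares, because AFM restores that single variable from the FM checkpoint. Below is a minimal sketch of this handoff, assuming TensorFlow 1.x (as in the repo) and hypothetical names (feature_embeddings, pretrain/fm_demo) that stand in for whatever FM.py/AFM.py actually use:

```python
import os
import tensorflow as tf  # TF 1.x, as used by the repo

# Hypothetical sizes for illustration. The embedding dimension saved by FM
# (FM.py's --hidden_factor) must equal the one AFM declares (the second
# entry of AFM.py's --hidden_factor); otherwise the restore fails with a
# shape mismatch -- the meta-file parameter error reported above.
n_features, embedding_size = 10000, 256
ckpt = 'pretrain/fm_demo/fm_demo'  # hypothetical checkpoint prefix
os.makedirs(os.path.dirname(ckpt), exist_ok=True)

# --- FM side: build the embedding table and save it ---
fm_graph = tf.Graph()
with fm_graph.as_default():
    feature_embeddings = tf.get_variable(
        'feature_embeddings', shape=[n_features, embedding_size],
        initializer=tf.random_normal_initializer(stddev=0.01))
    saver = tf.train.Saver({'feature_embeddings': feature_embeddings})
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # ... FM training would run here before saving ...
        saver.save(sess, ckpt)

# --- AFM side: restore only the embedding table into a new graph ---
afm_graph = tf.Graph()
with afm_graph.as_default():
    feature_embeddings = tf.get_variable(
        'feature_embeddings', shape=[n_features, embedding_size])
    # Restoring by name picks up just this one variable from the FM
    # checkpoint; the rest of the AFM graph (attention net, prediction
    # layer) keeps its fresh random initialization.
    restorer = tf.train.Saver({'feature_embeddings': feature_embeddings})
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        restorer.restore(sess, ckpt)
```

Because only the named embedding variable is restored, AFM's attention components still start from scratch; but if the embedding dimension differs between the two runs (e.g. FM's default 16 versus AFM's 256), the restore fails with exactly the parameter-mismatch error described above.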