patverga / torch-relation-extraction

Universal Schema based relation extraction implemented in Torch.

Problem Running Row-Less Universal Schema #7

Open bflashcp3f opened 6 years ago

bflashcp3f commented 6 years ago

Hi,

I am trying to run the row-less version with the "attention-both-lstm" configuration. I used the demo training data by changing the training variables in the script from

export LOG_ROOT="${TH_RELEX_ROOT}/models/fb15k/filtered/$NAME"
export TRAIN_FILE_ROOT="$TH_RELEX_ROOT/data/FB15K-237/fix_padding_no-min//"
export TRAIN_FILE="filtered-train-relations.torch"

to

export LOG_ROOT="${TH_RELEX_ROOT}/models/$NAME"
export TRAIN_FILE_ROOT="${TH_RELEX_ROOT}/data/"
export TRAIN_FILE="train-mtx.torch"

After running the command ./bin/train/train-model.sh 0 bin/train/configs/rowless/fb15k-237/attention-both-lstm, I got the error shown in the attached screenshot.

I also tried other lstm-related configurations like "max-relation-both-lstm", and they all produce the same error, while non-lstm configurations like "attention" work fine.
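
In case it helps, this is the kind of quick check I can run to see what the demo training file actually contains (just a sketch; the file path is taken from my config above, and I am simply printing whatever torch.load returns):

```bash
# Rough diagnostic sketch: dump the type and layout of the training data so
# its format can be compared against what the lstm-based encoders expect.
th -e "local d = torch.load('data/train-mtx.torch'); print(torch.type(d)); print(d)"
```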

Any ideas? I'd really appreciate your help!

@patverga @davidBelanger @strubell

patverga commented 6 years ago

I think the issue is that the demo data only works with the master branch version of the code and not the rowless version, because the rowless code expects a slightly different data format. You can try the processing script referenced here https://github.com/patverga/torch-relation-extraction/tree/rowless-updates#fb15k-237-and-rowless-models and see if that does the trick.
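
Roughly along these lines (just a sketch; the script's exact location in the repo and any arguments it takes are covered in that README, so treat the path below as an assumption):

```bash
# Sketch only: switch to the rowless branch and regenerate the FB15K-237
# training data with the processing script described in the linked README.
# The script path is an assumption here; check the README for the real
# invocation and any required arguments.
git checkout rowless-updates
./process-fb15k.sh
```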

bflashcp3f commented 6 years ago

> I think the issue is that the demo data only works with the master branch version of the code and not the rowless version, because the rowless code expects a slightly different data format. You can try the processing script referenced here https://github.com/patverga/torch-relation-extraction/tree/rowless-updates#fb15k-237-and-rowless-models and see if that does the trick.

Hi @patverga, I really appreciate your suggestions. I downloaded the fb15k dataset and preprocessed it with process-fb15k.sh. However, lstm-related configurations like "attention-both-lstm" still fail with the exact same error, while non-lstm configurations like "attention" work fine. Given that, I don't think the problem is the training data, so do you have any other ideas? Thanks! @patverga