xunjianyin / Seq2SeqOnData2Text

The source code and annotated data for the ACL 2022 paper: How Do Seq2Seq Models Perform on End-to-End Data-to-Text Generation?

What's the code? #1

Open Dinxin opened 2 years ago

xunjianyin commented 2 years ago

Hi, Dinxin

Due to my busy schedule, I have not yet organized my code and data. I'm sorry for that.

In the meantime, you can use some open-source code on GitHub; some of it is exactly what I used:

• T5-small, T5-base, BART-base: https://github.com/UKPLab/plms-graph2text
• Hierarchical transformer encoder + conditional copy: https://github.com/KaijuML/data-to-text-hierarchical
• Neural content planning + conditional copy: https://github.com/ratishsp/data2text-plan-py
• Pointer-generator: https://github.com/ymfa/seq2seq-summarizer
• BERT-to-BERT: https://github.com/google-research/google-research/tree/master/bertseq2seq
• Graph convolutional networks: https://github.com/diegma/graph-2-text
• Transformer: https://github.com/OpenNMT/OpenNMT-py
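If you just want a quick starting point for the pre-trained baselines, here is a minimal sketch with HuggingFace transformers. This is not my exact training setup; the example record and the linearization scheme are only illustrative:

```python
# Minimal sketch: fine-tuning T5-small on a linearized data-to-text pair.
# The record and linearization below are hypothetical, not the paper's setup.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Linearize a structured record into a flat source string.
source = "name[The Eagle] eatType[coffee shop] food[French] area[riverside]"
target = "The Eagle is a French coffee shop by the riverside."

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# One training step: standard seq2seq cross-entropy loss on the target text.
loss = model(**inputs, labels=labels).loss
loss.backward()

# Inference: generate text from new linearized data.
with torch.no_grad():
    out = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```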

If you have any questions, feel free to ask here or email xjyin@pku.edu.cn.

Dinxin commented 2 years ago

Thanks, I will give it a try in the near future.

By the way, I have another question: could a copy mechanism boost pre-trained models such as BART and T5?

xunjianyin commented 2 years ago

I think that is an interesting idea, but we did not run an experiment on it.
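Just to make the idea concrete, here is a purely illustrative sketch of mixing a pointer-generator-style copy distribution into the vocabulary distribution of a pre-trained seq2seq model. The function and tensor names are hypothetical, and this is not something from our paper:

```python
# Illustrative only: pointer-generator-style mixing of a copy distribution
# over source tokens into the decoder's vocabulary distribution.
import torch
import torch.nn.functional as F

def mix_copy_distribution(vocab_logits, copy_attn, src_token_ids, p_gen):
    """vocab_logits:  (batch, vocab)   decoder logits from BART/T5.
    copy_attn:     (batch, src_len) attention weights over source tokens.
    src_token_ids: (batch, src_len) vocabulary ids of the source tokens.
    p_gen:         (batch, 1)       generation gate in [0, 1]."""
    p_vocab = F.softmax(vocab_logits, dim=-1)
    # Scatter the copy attention mass onto the vocabulary positions of the
    # source tokens, then interpolate with the generation distribution.
    p_copy = torch.zeros_like(p_vocab).scatter_add(-1, src_token_ids, copy_attn)
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```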

XiaoqingNLP commented 1 year ago

@xunjianyin So the authors did not intend to open-source the data and code; they just said it was open source in the paper?