baokemi closed this issue 2 years ago
Hello @baokemi,
Have you seen any code in this repo related to this step in their paper?
Each utterance is converted to the [CLS] representation concatenated with the topic representation $z_n$ and knowledge representation $c_n$
In the code, the output from the topic-driven RoBERTa model is fed directly into the transformer classifier without any concatenation. Maybe I missed something. What do you think?
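For reference, the concatenation step quoted from the paper would look something like the following. This is a hedged sketch of the described operation, not the repo's actual code; all dimensions and variable names are illustrative assumptions.

```python
import numpy as np

# Illustrative dimensions (assumptions, not from the repo)
hidden = 768     # RoBERTa [CLS] hidden size
topic_dim = 100  # assumed size of the topic representation z_n
kg_dim = 300     # assumed size of the knowledge representation c_n

cls_repr = np.zeros(hidden)   # [CLS] representation of utterance n
z_n = np.zeros(topic_dim)     # topic representation z_n
c_n = np.zeros(kg_dim)        # knowledge representation c_n

# The paper's step: concatenate [CLS] with z_n and c_n
u_n = np.concatenate([cls_repr, z_n, c_n])
print(u_n.shape)  # (1168,)
```

If the released code skips this step, the classifier input size would stay at `hidden` rather than `hidden + topic_dim + kg_dim`, which is what the question above is pointing out.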
Hi there, thanks for your interest in our work. The extra knowledge is generated from COMET and SentenceBERT respectively. The files are too large to upload, and knowledge base construction is not in the scope of our work. I have uploaded a version you can refer to. Simply run `atomic_extractor.py` and `main_gen.py` to retrieve the knowledge and enhance the dataset. Please be aware that you need to download the ATOMIC dataset and the `comet_pretrained_models` and place them in the dataset directory.
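The steps above could be sketched as the shell workflow below. The directory layout is an assumption inferred from the comment (the exact subfolder names are not confirmed by the repo), and the download sources are left out deliberately.

```shell
# Hedged sketch of the setup described above; paths are assumptions.
mkdir -p dataset/atomic dataset/comet_pretrained_models

# 1. Download the ATOMIC dataset into dataset/atomic
# 2. Download comet_pretrained_models into dataset/comet_pretrained_models
# 3. Retrieve the knowledge, then enhance the dataset:
#    python atomic_extractor.py
#    python main_gen.py

echo "dataset directory prepared"
```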
How do you construct the Knowledge Graph? I want to know how this part is implemented.