mswellhao / PacSum

Unsupervised Extractive Summarization based on Position-Augmented Centrality

Does the model need to be trained or not #8

Closed · yalunar closed this issue 3 years ago

yalunar commented 4 years ago

After obtaining BERT as the sentence encoder, does the model still need to be trained? I can't find any code to train the model; it seems the model only needs to find the parameters lambda1 and lambda2 by enumerating candidate values, as in the function below. https://github.com/mswellhao/PacSum/blob/637dffeddb0e83a53e73012ca33727c773c2c158/code/extractor.py#L123 Am I wrong?
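For reference, here is a minimal sketch of what that enumeration amounts to: score each sentence with position-augmented centrality, then grid-search the lambdas on a validation set instead of doing gradient-based training. The helpers `summarize` and `evaluate_rouge` are hypothetical stand-ins, not functions from this repo, and the tie lambda1 + lambda2 = 1 follows the constraint stated in the PacSum paper.

```python
import numpy as np

def pacsum_scores(sim, lambda1, lambda2):
    """Position-augmented centrality: similarities to preceding sentences
    are weighted by lambda1, similarities to following ones by lambda2."""
    n = sim.shape[0]
    scores = np.empty(n)
    for i in range(n):
        scores[i] = lambda1 * sim[i, :i].sum() + lambda2 * sim[i, i + 1:].sum()
    return scores

def tune_lambdas(val_sims, val_refs, summarize, evaluate_rouge, step=0.2):
    """Grid-search (lambda1, lambda2) on a validation set: no gradient
    updates, just enumeration plus ROUGE evaluation of each candidate."""
    best_pair, best_rouge = None, -1.0
    for lambda1 in np.arange(-1.0, 1.0 + 1e-9, step):
        lambda2 = 1.0 - lambda1  # paper constrains lambda1 + lambda2 = 1
        rouge = np.mean([
            evaluate_rouge(summarize(pacsum_scores(sim, lambda1, lambda2)), ref)
            for sim, ref in zip(val_sims, val_refs)
        ])
        if rouge > best_rouge:
            best_pair, best_rouge = (lambda1, lambda2), rouge
    return best_pair
```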

mswellhao commented 4 years ago

I didn't include the training code in this repo, but you can find the fine-tuned BERT models here: https://drive.google.com/file/d/1wbMlLmnbD_0j7Qs8YY8cSCh935WKKdsP/view?usp=sharing
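If it helps, a rough sketch of loading such a checkpoint for inference, assuming it unpacks to a plain PyTorch state dict for a bert-base architecture; the file path here is illustrative, and the repo's own loading logic in extractor.py is authoritative.

```python
import torch
from transformers import BertModel

# Assumption: the downloaded archive contains a PyTorch state dict
# roughly compatible with bert-base-uncased; strict=False tolerates
# key mismatches if the repo wraps BERT in its own module.
model = BertModel.from_pretrained("bert-base-uncased")
state_dict = torch.load("pacsum_models/bert.model", map_location="cpu")
model.load_state_dict(state_dict, strict=False)
model.eval()  # inference only: the encoder stays frozen, only the lambdas are tuned
```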