## Why
The Machine Learning 輪講 (reading group) is a meeting whose goal is to raise the bar for what engineers can "solve with technology" by keeping up with the latest techniques and papers.
prev. #16
## What
If you have something you want to talk about, comment here!
Even if you've only just found something interesting, go ahead and at least declare that you'll talk about it!
- https://aclanthology.org/C14-1187/
- Hi,
Thanks for providing and presenting this nice work.
As mentioned in your paper, your attention pattern for modeling long sequences can be plugged into any pretrained transformer model.
I wond…
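For context, the excerpt appears to describe a Longformer-style sparse attention pattern for long inputs. Below is a minimal sketch of the core idea, a sliding-window attention mask; the function name and window size are illustrative, not taken from the paper:

```python
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """True where attention is allowed: each token attends only to
    tokens within `window` positions on either side of itself."""
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() <= window

# Applied to attention scores before softmax, this restricts the
# quadratic attention matrix to a band of width 2 * window + 1.
scores = torch.randn(8, 8)
mask = sliding_window_mask(seq_len=8, window=2)
attn = scores.masked_fill(~mask, float("-inf")).softmax(dim=-1)
```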
- Your code is good, and thank you for sharing it with the community. I ran your code against the following links and it didn't work well.
Please let me know your thoughts after testing.
"https://www.…
- Hello sir. @nreimers
I fine-tuned S-BERT on my summarization dataset. I tried training with Multiple Negatives Ranking Loss and the Triplet Evaluator. My anchor sentence is the title of an online new…
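For reference, a minimal sketch of the classic sentence-transformers setup the commenter seems to describe (MultipleNegativesRankingLoss with a TripletEvaluator). The model name and the title/body strings are placeholders, not the commenter's dataset:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder base model

# (anchor, positive) pairs: title as anchor, article text as positive.
# MultipleNegativesRankingLoss treats the other in-batch positives as negatives.
train_examples = [
    InputExample(texts=["Some article title", "First sentence of that article."]),
    InputExample(texts=["Another title", "Body text of the other article."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model)

evaluator = TripletEvaluator(
    anchors=["Some article title"],
    positives=["First sentence of that article."],
    negatives=["A sentence from an unrelated article."],
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=1,
    warmup_steps=10,
)
```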
- I am trying to satisfy your requirements. Kindly let me know where I can find the Boxer corpora mentioned in your README file. I would also like to know whether your program generates paraphrases …
- Hi,
This might be a dumb question, but I am not getting it.
This model is supposed to perform extractive summarization.
But when I look at the raw data (cnn_stories), they provide a t…
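The likely source of confusion: the raw CNN/DM stories only ship abstractive highlights, so extractive labels have to be generated during preprocessing by greedily picking source sentences that maximize overlap with those highlights. A simplified sketch of that oracle step, with plain word overlap standing in for the ROUGE score the real preprocessing uses:

```python
def greedy_oracle(doc_sents, reference, max_sents=3):
    """Greedily add the sentence that most increases word overlap with
    the reference summary; the selected indices become the extractive
    labels. BertSum-style preprocessing does this with ROUGE."""
    ref = set(reference.lower().split())
    selected, covered = [], set()
    for _ in range(max_sents):
        best, best_gain = None, 0
        for i, sent in enumerate(doc_sents):
            if i in selected:
                continue
            words = set(sent.lower().split())
            gain = len((covered | words) & ref) - len(covered & ref)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        selected.append(best)
        covered |= set(doc_sents[best].lower().split())
    return sorted(selected)
```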
- https://virtual2023.aclweb.org/paper_P3367.html
- Hello.
When doing extractive summarization of raw text with bertext_cnndm_transformer and trying to tweak the min/max length, the output is always the same.
e.g.
`python train.py -mode test_text …
- I am trying to understand the code for WikiSum. I understand that before training we need to download the data and then process it by using the extractive method. However, I am not sure why we are tra…
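For context on why an extractive step precedes training: in the WikiSum setup the combined source documents are far too long for the abstractive model, so a cheap extractive stage first ranks candidate paragraphs against the article title and keeps only the best ones as model input. A toy sketch of that ranking idea, with plain word overlap standing in for the stronger scorers (e.g., tf-idf) used in practice:

```python
def rank_paragraphs(title, paragraphs, top_k=5):
    """Toy extractive stage: score each candidate paragraph by word
    overlap with the title and keep the top_k as input for the
    abstractive model."""
    query = set(title.lower().split())
    return sorted(
        paragraphs,
        key=lambda p: len(query & set(p.lower().split())),
        reverse=True,
    )[:top_k]
```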