-
[Shared Task Description (ArgMining 2022 on Validity and Novelty)](https://phhei.github.io/ArgsValidNovel/)
[Task Description Paper](https://aclanthology.org/2022.argmining-1.7.pdf)
| Team(Pape…
-
https://www.kaggle.com/yamqwe/roberta-is-back-0-634-lb
Using this as a reference, understand the BERT (RoBERTa) implementation and take it all the way to submission.
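As an orientation sketch (the checkpoint name, CSV columns, and submission format below are placeholder assumptions, not the notebook's actual code), the core loop is: tokenize the test texts, score them with a fine-tuned RoBERTa classifier, and write the probabilities out as a submission file:
```python
# Minimal sketch: score test texts with a RoBERTa classifier and
# write a Kaggle-style submission. All file/column names are placeholders.
import pandas as pd
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-base"  # placeholder; point this at the fine-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

test = pd.read_csv("test.csv")  # assumed columns: "id", "text"
with torch.no_grad():
    batch = tokenizer(
        list(test["text"]), padding=True, truncation=True, return_tensors="pt"
    )
    probs = model(**batch).logits.softmax(dim=-1)[:, 1]  # P(positive class)

pd.DataFrame({"id": test["id"], "target": probs.numpy()}).to_csv(
    "submission.csv", index=False
)
```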
-
Hi, I think the released LayoutLM v1 model on Hugging Face is initialized with BERT weights. I do see the paper states that initializing with RoBERTa weights gives better results. Is there a reas…
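One hedged way to check the lineage without reading the training code: the released checkpoint's tokenizer and vocabulary show whether it sits on BERT's uncased WordPiece vocab (30522 entries) or on RoBERTa's byte-level BPE vocab (50265 entries).
```python
# Sketch: inspect the checkpoint's tokenizer class and vocab size.
# BERT-uncased lineage -> WordPiece-style tokenizer, vocab_size 30522;
# RoBERTa lineage would mean byte-level BPE and vocab_size 50265.
from transformers import AutoConfig, AutoTokenizer

cfg = AutoConfig.from_pretrained("microsoft/layoutlm-base-uncased")
tok = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
print(type(tok).__name__, cfg.vocab_size)
```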
-
I have downloaded the AlignScore-base.ckpt and am initializing `AlignScore` like so:
```python
align_evaluator = AlignScore(
    model="roberta-base",
    batch_size=4,
    device="cp…
```
-
I am facing an issue with implementing the [self-disclosure model](https://aclanthology.org/2022.findings-acl.83.pdf) to output self-disclosure scores for free-text responses. Steps followed:
1. Cl…
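Absent the repository's own inference script, here is a generic sketch of the usual pattern; the checkpoint path, label layout, and pooling below are all assumptions, not the paper's released code. It runs a fine-tuned token-classification model over a response and reduces per-token probabilities to one score.
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

CKPT = "path/to/self-disclosure-checkpoint"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForTokenClassification.from_pretrained(CKPT)
model.eval()

def disclosure_score(text: str) -> float:
    batch = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = model(**batch).logits.softmax(dim=-1)
    # assumption: label index 0 is "no disclosure"; probability mass on the
    # remaining labels marks disclosure spans, averaged into one score
    return probs[0, :, 1:].sum(dim=-1).mean().item()
```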
-
We should add additional data to train the RoBERTa model. This one looks good: https://www.kaggle.com/datasets/carlmcbrideellis/llm-mistral-7b-instruct-texts. It only contains AI-generated essays, whi…
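A sketch of folding it in (the CSV and column names below are guesses at the dataset's layout, so verify them first): since every row is AI-generated, everything gets label 1 before concatenation.
```python
import pandas as pd

# placeholder filenames/columns; check against the actual Kaggle dataset
extra = pd.read_csv("mistral_7b_instruct_texts.csv")
extra["generated"] = 1  # the dataset contains only AI-generated essays

train = pd.read_csv("train_essays.csv")
combined = pd.concat(
    [train[["text", "generated"]], extra[["text", "generated"]]],
    ignore_index=True,
).drop_duplicates(subset="text")
combined.to_csv("train_augmented.csv", index=False)
```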
-
Would it be possible to use LiLT with BigBird-RoBERTa-Base models?
If so, is there any feedback on the best approach? What would need changing in the LiLT repository to make it work?
https://hugging…
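For context, a sketch of what transformers supports out of the box today: LiLT already fused with a standard RoBERTa encoder. Pairing it with BigBird-RoBERTa would, at minimum, mean re-running LiLT's weight-fusion step and handling the longer position embeddings, which the stock classes do not do.
```python
# Baseline that works today: LiLT fused with a RoBERTa-base encoder.
# num_labels is an arbitrary example value.
from transformers import AutoTokenizer, LiltForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
model = LiltForTokenClassification.from_pretrained(
    "SCUT-DLVCLab/lilt-roberta-en-base", num_labels=7
)
```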
-
I have been using `BertPreTrainedModel` to load this RoBERTa model, which works well.
I noticed that `pytorch_transformers` also supports `Roberta`.
```
from pytorch_transformers import (BertCo…
```
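For comparison, a sketch of the dedicated classes `pytorch_transformers` ships; they account for RoBERTa's byte-level BPE vocabulary and its lack of token-type embeddings, which differ from BERT's:
```python
# Sketch: load RoBERTa through its dedicated pytorch_transformers classes.
from pytorch_transformers import RobertaConfig, RobertaModel, RobertaTokenizer

config = RobertaConfig.from_pretrained("roberta-base")
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base", config=config)
```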
-
Hi,
It seems from the source code that XLM-RoBERTa is fine-tuned with gradient updates based on the LSTM attention model. However, when I follow the README instructions and train the model on hi…
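To make the question concrete, here is a hedged sketch (not this repository's code) of an LSTM-attention head over XLM-R; whether the encoder itself receives gradient updates comes down to `requires_grad` on its parameters:
```python
import torch
import torch.nn as nn
from transformers import AutoModel

class LstmAttentionHead(nn.Module):
    def __init__(self, encoder_name: str = "xlm-roberta-base", freeze_encoder: bool = True):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        if freeze_encoder:
            for p in self.encoder.parameters():
                p.requires_grad = False  # no gradient updates reach XLM-R
        hidden = self.encoder.config.hidden_size
        self.lstm = nn.LSTM(hidden, hidden // 2, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        states = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        out, _ = self.lstm(states)
        weights = self.attn(out).masked_fill(attention_mask.unsqueeze(-1) == 0, -1e9)
        weights = weights.softmax(dim=1)
        return (weights * out).sum(dim=1)  # attention-pooled sentence vector
```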
-