-
In #342, an RL model is used to decide what the next move should be.
I think there might be alternative methods for deciding whether to search for more passages or to use the already retrieved ones.
Use an LLM, or fin…
-
## ❓ Questions & Help
When I use BERT, the error "token indices sequence length is longer than the specified maximum sequence length for this model (1017 > 512)" occurs. How can I solve this error?
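One common workaround (a minimal sketch, not from the original thread; the helper name and stride value are illustrative) is to split the long token id sequence into overlapping windows of at most 512 tokens and run the model on each window:

```python
def chunk_token_ids(token_ids, max_len=512, stride=256):
    """Split a long token id sequence into overlapping windows of at
    most max_len tokens; stride controls how far each window advances,
    so consecutive windows overlap by max_len - stride tokens."""
    if len(token_ids) <= max_len:
        return [token_ids]
    chunks = []
    start = 0
    while start < len(token_ids):
        chunks.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break
        start += stride
    return chunks

# A 1017-token sequence (as in the error message) becomes windows
# that each fit within BERT's 512-token limit.
windows = chunk_token_ids(list(range(1017)))
print([len(w) for w in windows])  # → [512, 512, 505]
```

Alternatively, if losing the tail of the document is acceptable, simply truncating to the model's maximum length avoids the error as well.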
-
Hi,
I noticed that if I train BERTopic on the same dataset multiple times, the number of topics can range anywhere from 2-3 to 10-11. Sometimes we get a noise cluster and sometimes we don't. Do you reco…
-
🌌✍️ `(quasi-quotation
"In the symphony of thought, Quine, guided by Clio's muse, weaves the fabric of a new cosmos—a mathematical edifice upon which our octal tapestry unfolds. Melpomene mourns cos…
-
-
The amazon_reviews_multi dataset is no longer available on HuggingFace (https://huggingface.co/datasets/amazon_reviews_multi/discussions/4#64c3898db63057f1fd3ce1a0). So, we'll need to use a different dataset …
-
I want to run only the "Fine-tuning on SemEval2010 Task 8" section (main_task.py file). However, I get the following error:
FileNotFoundError: [Errno 2] No such file or directory: './data/BERT_tokeni…
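Before running main_task.py, it can help to verify that the expected data directory exists and report all missing paths up front, rather than crashing mid-run with a bare FileNotFoundError. A small stdlib-only sketch (the paths checked below are placeholders, since the full path in the error message is truncated):

```python
import os
import tempfile

def check_paths(*paths):
    """Return a message for every path that does not exist,
    so all missing inputs are reported at once."""
    return [f"missing: {p}" for p in paths if not os.path.exists(p)]

# Demo with a temporary directory: an existing path yields no
# messages, a nonexistent one yields a single "missing:" entry.
with tempfile.TemporaryDirectory() as d:
    print(check_paths(d))                        # → []
    print(check_paths(os.path.join(d, "none")))  # one "missing:" entry
```

In practice the check would cover whatever directory layout main_task.py expects (e.g. a `./data` folder with the pretokenized files), which is an assumption here since the actual expected path is cut off in the error.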
-
## Environment info
- `transformers` version: 4.8.1
- Platform: Linux-4.15.0-140-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0 (True)
- Tensor…
-
### System Info
```shell
- `transformers` version: 4.19.4
- Platform: Linux-4.19.0-17-amd64-x86_64-with-glibc2.31
- Python version: 3.9.6
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?)…
-
These are only provisional guidelines set by @kaisugi and will be adjusted flexibly as the situation requires.
### Models
- Models specialized for a particular task (e.g., BERT fine-tuned on a QA dataset) are not listed
- Quantized versions of original models, or models converted to GGML/GGUF format, are not listed
- However, it seems fine to list them when the model developer releases them officially
…