-
Hi! Based on how the Latin BERT code is written, I think I might have to implement the pseudo-perplexity myself. This is because the tokenizer they use isn't a derivative of some other, more generic tok…
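For context, a minimal sketch of what "implementing pseudo-perplexity myself" usually amounts to: mask each position in turn, score the original token under the masked-LM, and exponentiate the negative average log-probability. The `log_prob` callable here is a hypothetical stand-in for a model forward pass (e.g. Latin BERT's); it is not part of any real API.

```python
import math

def pseudo_log_likelihood(tokens, log_prob):
    """Sum of log P(token_i | all other tokens), masking one position at a time.

    `log_prob(masked_tokens, original_token, position)` is a hypothetical
    stand-in for a masked-LM forward pass returning the log-probability
    the model assigns to the original token at the masked position.
    """
    total = 0.0
    for i, tok in enumerate(tokens):
        masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        total += log_prob(masked, tok, i)
    return total

def pseudo_perplexity(tokens, log_prob):
    # Exponentiate the negative average pseudo-log-likelihood.
    return math.exp(-pseudo_log_likelihood(tokens, log_prob) / len(tokens))
```

With a toy model that assigns every token probability 0.5, the pseudo-perplexity comes out to exactly 2, which is a quick sanity check before wiring in a real tokenizer and model.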
-
shcup updated
5 years ago
-
Thank you for the excellent work.
So I have been trying to leverage my labelled English data to do short-text (sentence) classification for Spanish.
First, I'm comparing the results for monolingua…
-
You will see the problem in the text below. This is with gpt-4o and version 0.5 of agent zero, but I have similar issues with other models.
User message ('e' to leave):
> Write a college level …
-
## Paper link
https://arxiv.org/abs/1810.04805
## Publication date (yyyy/mm/dd)
2018/10/11
## Summary
When solving supervised NLP problems such as Question Answering with fine-tuning-based models, the pre-training methods used until now use the input data's informa…
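As a concrete reminder of what BERT's pre-training looks like, here is a sketch of its masked-LM corruption scheme: about 15% of positions are selected as prediction targets, and of those, 80% are replaced with `[MASK]`, 10% with a random token, and 10% are left unchanged. The toy vocabulary is an assumption for illustration only.

```python
import random

MASK = "[MASK]"
VOCAB = ["cat", "dog", "sat", "ran"]  # toy vocabulary, for illustration

def mask_for_mlm(tokens, rng, mask_rate=0.15):
    """BERT-style masking: select ~15% of positions as targets; of those,
    80% become [MASK], 10% become a random token, 10% stay unchanged.
    Returns (corrupted tokens, targets mapping position -> original token)."""
    out, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = tok
            r = rng.random()
            if r < 0.8:
                out[i] = MASK
            elif r < 0.9:
                out[i] = rng.choice(VOCAB)
            # else: keep the original token (the 10% "unchanged" case)
    return out, targets
```

The model is then trained to predict the original token at each target position, which is what lets it condition on context from both directions.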
-
**Role of AI in XBRL tagging**
All companies registered on US, Indian, and European stock exchanges have to submit their quarterly financial statements with XBRL tagging.
1. Each numerical entit…
-
I'm fine-tuning BERT (using the Transformers library) to perform a regression task and I can successfully extract the attributions. However, how am I supposed to visualize the importance of different …
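One common way to visualize per-token attributions, once they are extracted, is to color each token by its normalized attribution score (libraries like Captum render something similar for text). A minimal stdlib-only sketch that emits an HTML snippet, with green for positive and red for negative attributions; the function name and color scheme are illustrative choices, not any library's API:

```python
def attributions_to_html(tokens, scores):
    """Render tokens with a background whose opacity tracks the normalized
    absolute attribution score; green = positive, red = negative."""
    peak = max(abs(s) for s in scores) or 1.0  # avoid division by zero
    spans = []
    for tok, s in zip(tokens, scores):
        alpha = abs(s) / peak
        r, g, b = (0, 200, 0) if s >= 0 else (220, 0, 0)
        spans.append(
            f'<span style="background: rgba({r},{g},{b},{alpha:.2f})">{tok}</span>'
        )
    return " ".join(spans)
```

For a regression head you would feed in the attribution of each input token toward the scalar output; saving the returned string to an `.html` file and opening it in a browser gives a quick heatmap over the sentence.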
-
Hello everyone,
May I ask whether the special tokens of XLNet are the same as BERT's? We all know the special tokens of BERT are [CLS] and [SEP], and many public introductions of XLNet also use [CLS] and…
lytum updated
4 years ago
-
May I ask how to fine-tune TrOCR on my own dataset? What format of dataset do I need to prepare?
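One layout that image-to-text fine-tuning scripts commonly expect is a folder of line images plus a labels file of `(file_name, text)` pairs; whether TrOCR's own examples use exactly this is an assumption here, so check the repo's fine-tuning guide. A minimal stdlib sketch of parsing such a labels file (the `file_name`/`text` column names are illustrative):

```python
import csv
import io

def load_labels(csv_text):
    """Parse a labels file with header 'file_name,text' into a list of
    (image path, transcription) pairs -- the pair format an image-to-text
    fine-tuning dataset is typically built from."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["file_name"], row["text"]) for row in reader]
```

Each pair would then be turned into a training example by loading the image with the model's processor and tokenizing the transcription as the decoder target.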
-
**Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-…