-
We would like a fast tokenizer for BertJapaneseTokenizer. This is because the current token classification script (run_ner.py) requires a fast tokenizer, but BertJapaneseTokenizer does not have …
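For context, a minimal check that shows where the gap appears (assuming the cl-tohoku checkpoint, which resolves to BertJapaneseTokenizer):

```python
from transformers import AutoTokenizer

# Loading this tokenizer requires fugashi + ipadic for Japanese word segmentation.
tok = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
print(type(tok).__name__)  # BertJapaneseTokenizer (slow, Python-only)
print(tok.is_fast)         # False, so run_ner.py's word_ids()-based label alignment rejects it
```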
-
I have been trying to get this to work for several days now and keep getting errors every time. I tried building the container image on my Mac and on an AWS p3.8xlarge instance, but failed each tim…
-
Dear author:
I have encountered this problem. My OS is Ubuntu 20.04. How can I solve it? Thank you!
```
(atm) khl@khl:~/khl/ATM/ATM$ python -m scripts.preprocess_libero --suite libero_spatial
Trac…
```
-
How can I use the confusion matrix for each class and the other metrics from this link: https://github.com/kaushaltrivedi/fast-bert/issues/17?
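If the predictions and labels are available as flat arrays, scikit-learn covers both in a couple of calls; a minimal sketch with made-up labels (not tied to the fast-bert API):

```python
from sklearn.metrics import classification_report, confusion_matrix

# Placeholder arrays; in practice these would be the validation labels and
# the argmax of the model's predictions.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 0, 0]

print(confusion_matrix(y_true, y_pred))                 # rows = true class, cols = predicted class
print(classification_report(y_true, y_pred, digits=3))  # per-class precision / recall / F1
```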
-
We are optimizing BERT performance now, and one important aspect is the contiguous op.
Our current implementation of contiguous for transposed input shapes uses a straightforward approach, but …
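As a frame of reference, this is the PyTorch-level pattern such a kernel has to reproduce: transpose() only permutes strides, so contiguous() is the step that pays for a strided copy.

```python
import torch

x = torch.randn(8, 128, 768)   # e.g. (batch, seq_len, hidden) in BERT
y = x.transpose(1, 2)          # (8, 768, 128): same storage, permuted strides
print(y.is_contiguous())       # False
z = y.contiguous()             # materializes the permuted layout with a copy
print(z.is_contiguous(), z.stride())  # True (98304, 128, 1)
```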
-
It appears Microsoft has a neat little BERT model for code search and basic comment generation, code translation, and simple refactoring. With faster inference in C++, perhaps someone can make a neat …
-
Since I cannot connect to huggingface, I downloaded the BERT model locally and modified the code as follows:
```
try:
    self.tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
except:
    self.tokenizer = BertTok…
```
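One way to make a local path work without any fallback is to pass the directory straight to from_pretrained and disable network lookups; a sketch with a hypothetical directory name:

```python
from transformers import AutoTokenizer

# Hypothetical local snapshot; the directory just needs the usual files
# (vocab.txt, tokenizer_config.json, ...).
local_dir = "./models/bert-base-chinese"

# local_files_only=True skips the Hub entirely, so no connection is attempted.
tokenizer = AutoTokenizer.from_pretrained(local_dir, use_fast=True, local_files_only=True)
```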
-
I am using ALBERT and a siamese network to train a subjective-question scoring model, with a training strategy based on your code; the siamese network consists of a bidirectional LSTM and fully connected layers. During training, I found that the accuracy does not improve and stays constant. It feels as if the weights are not being updated, perhaps because the gradients are too small and the weights barely change. Alternatively, there may be a problem with the training strategy, but I am not sure of the exact cause. Below is the accuracy during training:
![training](https://github.com/dragen1860/MAML-P…
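A quick way to test the "gradients too small" hypothesis is to print per-parameter gradient norms right after backward(); a self-contained sketch (a stand-in model, not the actual ALBERT + BiLSTM setup):

```python
import torch
import torch.nn as nn

# Stand-in for the real network; the check itself is model-agnostic.
model = nn.Sequential(nn.Linear(128, 64), nn.Tanh(), nn.Linear(64, 1))
loss = model(torch.randn(32, 128)).mean()
loss.backward()

# None means the parameter is frozen or detached from the loss;
# consistently tiny norms point at vanishing gradients or a too-small learning rate.
for name, p in model.named_parameters():
    print(name, None if p.grad is None else p.grad.norm().item())
```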
-
From the code (adapted from test_weight_mapper.py)
```
import torch
import torch.nn as nn
from fast_transformers.builders import TransformerEncoderBuilder
from fast_transformers.weight_mapper i…
```
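For reference, a self-contained builder invocation that the snippet presumably starts from; the hyperparameters are illustrative and follow the library's documented from_kwargs pattern:

```python
import torch
from fast_transformers.builders import TransformerEncoderBuilder

encoder = TransformerEncoderBuilder.from_kwargs(
    n_layers=2,
    n_heads=4,
    query_dimensions=64,
    value_dimensions=64,
    feed_forward_dimensions=1024,
    attention_type="full",  # plain softmax attention baseline
).get()

x = torch.randn(8, 128, 4 * 64)  # (batch, seq_len, d_model = n_heads * query_dimensions)
y = encoder(x)
print(y.shape)  # torch.Size([8, 128, 256])
```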
-
We really need a tiny checkpoint for tests. We currently include the smallest one we can (bert_base_uncased) via git-lfs, but I'd definitely like that to be smaller. Including it allows tests to run w…
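One low-footprint option (a sketch of the general technique, not the repo's actual fixture) is to save a randomly initialized model with a deliberately tiny config:

```python
from transformers import BertConfig, BertModel

# Arbitrary small sizes; the weights are random, which is fine for shape/IO tests.
config = BertConfig(
    vocab_size=1024,
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
)
BertModel(config).save_pretrained("./tiny-bert")  # hundreds of KB vs ~420 MB for bert_base_uncased
```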