-
### Model description
ColBERT is a fast and accurate retrieval model, enabling scalable BERT-based search over large text collections in tens of milliseconds.
[ColBERT github](https://github.com/sta…
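The speed/accuracy trade-off described above comes from ColBERT's late-interaction scoring: query and document tokens are embedded independently, and relevance is the sum of per-query-token maximum similarities (MaxSim). A minimal NumPy sketch of that scoring rule, assuming pre-computed token embeddings (this is an illustration, not the library's actual implementation):

```python
import numpy as np

def maxsim_score(query_embs, doc_embs):
    """Late-interaction (MaxSim) relevance score in the ColBERT style:
    for each query token embedding, take the maximum cosine similarity
    over all document token embeddings, then sum over query tokens."""
    # Normalize rows so plain dot products become cosine similarities.
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    sim = q @ d.T                 # shape: (num_query_tokens, num_doc_tokens)
    return sim.max(axis=1).sum()  # MaxSim per query token, summed
```

Because document token embeddings can be indexed offline, only the cheap MaxSim step runs at query time, which is what makes the tens-of-milliseconds latency plausible at scale.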
-
I followed your instructions using my data.
Since the batch_size was too large for my data, I changed it to 6.
Then I got this error during evaluation:
`
08/23/2019 17:50:14 - INFO - root - …
-
Hello,
I ran the BERT example on an MI250X using the command:
python3 examples/03_bert/benchmark_ait.py --batch-size 32 --seq-length 512 --encoders-only false
However, it aborted with the following er…
-
Hello! I was wondering if you would release your pretraining code for DNABERT-2 and NT? The DNABERT-2 website does not release the actual code that they used to pre-train, just a suggestion of two sim…
-
Ollama [added support for embedding models like BERT](https://github.com/ollama/ollama/issues/327). This is much faster than using a generative model, such as llama2, which is currently the default in…
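For context on what switching to an embedding model involves, here is a hedged sketch of a request against Ollama's local embeddings endpoint (`/api/embeddings` on the default port 11434; the model name `nomic-embed-text` is just an example and must be pulled separately):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # default local endpoint

def build_payload(prompt, model="nomic-embed-text"):
    """Build the JSON body for Ollama's embeddings API.
    The model name here is an assumed example, not a requirement."""
    return {"model": model, "prompt": prompt}

def embed(prompt, model="nomic-embed-text"):
    """Request an embedding from a locally running Ollama server.
    Requires `ollama serve` to be running with the model pulled."""
    data = json.dumps(build_payload(prompt, model)).encode()
    req = request.Request(OLLAMA_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]
```

A dedicated embedding model returns a vector in a single forward pass, which is why it is much faster than extracting embeddings from a generative model like llama2.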
-
Dear author:
I have encountered this problem. My OS is Ubuntu 20.04. How can I solve it? Thank you!
```
(atm) khl@khl:~/khl/ATM/ATM$ python -m scripts.preprocess_libero --suite libero_spatial
Trac…
-
Rationale: in our internal tests, the NEZHA model performs better than BERT variants such as bert-wwm-ext. In addition, as far as we know, FasterTransformer does not yet support NEZHA, and ONNX Runtime only applies generic optimizations to NEZHA, so it does not reach BERT-level speedups.
-
I tried to run the example code from the fast-bert page, but got a CUDA out-of-memory error:
Exception has occurred: RuntimeError
CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 6.00 GiB …
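The usual workaround for this error is to shrink the batch until it fits in GPU memory. A minimal, framework-agnostic sketch of that retry loop (the helper name and the backoff-by-halving policy are my own illustration, not part of fast-bert):

```python
def run_with_oom_backoff(step_fn, batch, min_batch=1):
    """Hypothetical helper: retry a training/eval step with a halved
    batch whenever the step fails with a CUDA out-of-memory error."""
    while True:
        try:
            return step_fn(batch)
        except RuntimeError as e:
            # Re-raise anything that is not an OOM, or if we cannot
            # shrink the batch any further.
            if "out of memory" not in str(e) or len(batch) <= min_batch:
                raise
            batch = batch[: len(batch) // 2]  # halve the batch and retry
```

In a real PyTorch run you would also call `torch.cuda.empty_cache()` before retrying; lowering `max_seq_length` or enabling fp16 are the other common fixes for a 6 GiB card.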
-
Thank you for creating a great repository.
I wonder why there is no BERT model included when converting a PyTorch model of MeloTTS to an ONNX model.
https://github.com/k2-fsa/sherpa-onnx/blob/963aaba82b01a425ae8…
-
Hi, I'm following the guide, and everything seems to work, except that when I create a predictor object I get:
File "bert.py", line 63, in
do_lower_case=False)
File "/home/w3pt/.local/lib/…