-
Trying to quantize the distilbert-base-uncased-distilled-squad model using the conversion tool in the Transformers library from Hugging Face.
python convert_graph_to_onnx.py --framework pt --opset 13 --pipe…
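For reference, a rough sketch of the same export through the script's Python API (`transformers.convert_graph_to_onnx`); the output path and pipeline name here are assumptions, since the command above is cut off:

```python
from pathlib import Path

from transformers.convert_graph_to_onnx import convert, quantize

output = Path("onnx/distilbert-squad.onnx")
convert(
    framework="pt",                                   # PyTorch, as in the command above
    model="distilbert-base-uncased-distilled-squad",
    output=output,
    opset=13,
    pipeline_name="question-answering",               # assumed; the --pipe… flag is truncated above
)
quantized_path = quantize(output)                     # writes a *-quantized.onnx next to the export
```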
-
Hi,
Following [Hugging Face's model page](https://huggingface.co/bandainamco-mirai/distilbert-base-japanese), I downloaded the DistilBERT-base-jp model. My question is that the tokenizer returns different output co…
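A small sketch (not the poster's code) for inspecting and comparing tokenizer output; it assumes fugashi/ipadic are installed, that `AutoTokenizer` can resolve both repos' tokenizer configs, and that cl-tohoku/bert-base-japanese is the comparison the question has in mind:

```python
from transformers import AutoTokenizer

text = "こんにちは、世界。"
for name in ["bandainamco-mirai/distilbert-base-japanese",
             "cl-tohoku/bert-base-japanese"]:
    tokenizer = AutoTokenizer.from_pretrained(name)
    print(name, tokenizer.tokenize(text))
```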
-
Fine-tune 3 (or more) popular models and compare their performance to DistilBERT on the movie sentiment analysis task (a baseline sketch for DistilBERT follows the list below).
Some choices:
GPT-3
LaMDA
Turing-NLG
XGen
Llama 2 (7 billion)
Gemini
Pic…
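A minimal baseline sketch for the DistilBERT side of the comparison, assuming the IMDB dataset stands in for "movie sentiment analysis" (the listed models would each need their own fine-tuning or prompting recipe):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")  # binary positive/negative movie reviews

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="distilbert-imdb",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"].shuffle(seed=42).select(range(5000)),
    eval_dataset=encoded["test"].shuffle(seed=42).select(range(2000)),
)
trainer.train()
```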
-
Sweep: in text similarity 2, change the model from bert-base-uncased to distilbert
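A hypothetical before/after for the swap (the surrounding code is assumed, not taken from the repo):

```python
from transformers import AutoModel, AutoTokenizer

# model_name = "bert-base-uncased"        # before
model_name = "distilbert-base-uncased"    # after

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```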
-
## Information
The problem arises in chapter:
* [ ] Introduction
* [ ] Text Classification
* [ ] Transformer Anatomy
* [ ] Multilingual Named Entity Recognition
* [ ] Text Generation
* [ ] …
-
from neuspell import BertChecker
checker = BertChecker()
checker.from_pretrained(
    bert_pretrained_name_or_path="distilbert-base-cased",
    ckpt_path=f"{data_dir}/new_models/distilbert-base-…
-
Sorry if this is a noob question, but I'm wondering if there is a straightforward way to use this model for basic sentiment analysis (positive/negative), similar to how "distilbert-base-uncased-finetu…
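A minimal sketch using the Transformers pipeline API; the model id below is an assumption about the checkpoint the question refers to:

```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # assumed checkpoint
)
print(classifier("I really enjoyed this movie!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```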
-
This is a "living issue". Editing is appreciated.
### Context:
- Most prominent benchmark for embedding models: https://huggingface.co/spaces/mteb/leaderboard
- We can choose to index the pdf dat…
-
### Bug Description
[This warning](https://github.com/run-llama/llama_index/blob/6816ad9addf8c04a1ff00905dd87a2445dadb236/llama-index-integrations/llms/llama-index-llms-huggingface/llama_index/llms…
-
I have seen curious behavior when running the `encode` method of a `sentence-transformers` model inside a `ThreadPool`.
Look at this code, which runs with no problem and constant memory consumption:
…
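The original snippet is truncated above; a generic sketch of this kind of setup (not the poster's code, and the model name is an assumption) might look like:

```python
from concurrent.futures import ThreadPoolExecutor

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # model name is an assumption
texts = [f"example sentence {i}" for i in range(1000)]
batches = [texts[i:i + 100] for i in range(0, len(texts), 100)]

def encode_batch(batch):
    # every worker thread shares the same model instance
    return model.encode(batch, show_progress_bar=False)

with ThreadPoolExecutor(max_workers=4) as pool:
    embeddings = list(pool.map(encode_batch, batches))
```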