-
OS: Windows 11
Model: [maidalun1020/bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)
Command:
```sh
cargo run --features mkl --example bert --release -- --model-…
-
Hi,
`from sentence_transformers import SentenceTransformer`
`sentences = ["This is an example sentence", "Each sentence is converted"]`
`model = SentenceTransformer('sentence-transform…
-
Repro:
transformers: `pip install git+https://github.com/huggingface/transformers`
setfit: `pip install setfit==1.1.0`
code:
```python
dataset = load_dataset("sst2")
train_dataset = samp…
-
@onnx_op(op_type="GPT2Tokenizer",
inputs=[PyCustomOpDef.dt_string],
outputs=[PyCustomOpDef.dt_int64, PyCustomOpDef.dt_int64],
attrs={"padding_length": PyCustomOp…
-
I'm interested in applying this promising pretrained model to the similarity task. It was unclear from the example provided on Hugging Face how it might be used for similarity tasks.
Two sentences …
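Once both sentences have been encoded into embedding vectors, similarity is typically scored with cosine similarity. A minimal sketch in plain Python, using toy vectors as stand-ins for real model embeddings (the values are hypothetical, not produced by the model):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors:
    # dot(a, b) / (|a| * |b|), in [-1, 1] for real-valued vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for the embeddings of two sentences (hypothetical values).
emb1 = [0.1, 0.3, 0.5]
emb2 = [0.2, 0.1, 0.4]
score = cosine_similarity(emb1, emb2)
```

In practice the embeddings would come from the model's encode step, and a score closer to 1.0 indicates more similar sentences.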
-
Thanks for your great work!
I'm just wondering how large a dataset is recommended for training from scratch in other languages?
Thank you!
-
When using `sentence-transformers/all-MiniLM-L6-v2` on an M1 Pro Mac (32 GB RAM) with `--features metal` enabled,
attempting to generate embeddings for 1000 sentences and attempting to convert the final tensor…
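A common workaround for memory pressure when embedding a large number of sentences in one pass, regardless of backend, is to process them in fixed-size batches and concatenate the results. A minimal chunking sketch in plain Python (the batch size of 64 is an arbitrary illustration, not a recommended value):

```python
def batched(items, batch_size):
    """Yield successive fixed-size chunks so each forward pass
    only has to embed batch_size sentences at a time."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# 1000 placeholder sentences split into chunks of 64.
sentences = [f"sentence {i}" for i in range(1000)]
batches = list(batched(sentences, 64))
# 16 batches: 15 full batches of 64 plus one final batch of 40.
```

Each batch would then be passed to the embedding model separately and the resulting tensors stacked at the end.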
-
I ran the code in "gts", but I found a mistake when using the RoBERTa model in "contextual_embeddings.py".
`
class RobertaEncoder(nn.Module):
    def __init__(self, roberta_model='roberta-base', de…
-
With the new release of version 3.2.0, the use of ONNX has become much easier, but initial local tests led to various errors, meaning that it was not possible to use ONNX Runtime via Sentence Transform…
-
Hello,
Instead of using `sentence-transformers/all-MiniLM-L6-v2`, I wanted to try out a custom embedding model from Hugging Face. I read previously opened and closed issues and found the following app…