-
Hi, does the code repo include the code for interpolation without fine-tuning E5?
-
In `from_pretrained()` in `model.py`, if `config` is none of `BertConfig`, `RobertaConfig`, or `DistilBertConfig`, the `tensor` is never initialized.
I peeped the codebase and found that `tensor…
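For illustration, here is a minimal sketch of the dispatch pattern described above. The class and tensor names are hypothetical stand-ins (the actual `model.py` may differ); the point is that an `if`/`elif` chain with no `else` leaves `tensor` unbound for any config type outside the listed three.

```python
# Hypothetical stand-ins for the config classes; not the real transformers types.
class BertConfig: ...
class RobertaConfig: ...
class DistilBertConfig: ...
class OtherConfig: ...  # any config type the branches do not handle

def pick_tensor(config):
    # Mirrors the branch structure described in the issue: each known
    # config type assigns `tensor`, and there is no fallback branch.
    if isinstance(config, BertConfig):
        tensor = "bert-weights"
    elif isinstance(config, RobertaConfig):
        tensor = "roberta-weights"
    elif isinstance(config, DistilBertConfig):
        tensor = "distilbert-weights"
    # For any other config type, `tensor` was never bound, so the
    # return below raises UnboundLocalError.
    return tensor

def pick_tensor_fixed(config):
    # One defensive fix: fail early with an explicit, descriptive error
    # instead of an UnboundLocalError far from the real cause.
    mapping = {
        BertConfig: "bert-weights",
        RobertaConfig: "roberta-weights",
        DistilBertConfig: "distilbert-weights",
    }
    for cls, tensor in mapping.items():
        if isinstance(config, cls):
            return tensor
    raise ValueError(f"Unsupported config type: {type(config).__name__}")
```

With the fix, an unsupported config produces a clear `ValueError` naming the offending type rather than an obscure unbound-variable error.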
-
### Tested versions
pyannote.audio==3.1.1
### System information
Ubuntu 20.04
### Issue description
My code:
```python
from pyannote.audio import Pipeline
from pyannote.audio.pipelines import Speake…
-
### System Info
- `transformers` version: 4.43.0
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.10.9
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate versi…
-
I am trying to import the aforesaid model using the following command:
```
eland_import_hub_model --url --hub-model-id dunzhang/stella_en_400M_v5 \
--task-type text_embedding --es-username e…
-
Thanks for your brilliant work!
I have some questions about the implementation.
1.
According to the configuration files and preprocessing code, the input image intensity is rescaled to the ran…
-
```python
from pyannote.audio import Pipeline  # needed for the call below
from pyannote.audio.pipelines.utils.hook import ProgressHook

pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization")
```
On my Mac, the default `pipeline.embedding_batch_size…
-
I'm trying to use this for PDF files, but I don't see any PDF examples.
-
@mlissner @legaltextai As we agreed here, we can discuss the architecture of the microservice that will generate the embeddings required for semantic search.
From my understanding we'd require two s…
-
Hi, based on my understanding, we can extend the LangBridge approach to seq2seq models that have a {model_name}EncoderModel in HuggingFace.
However, how about seq2seq models which only have genera…