-
Create a directory named `trained_models` (or similar) where all trained and fine-tuned models used in the pipeline are stored, so that each time you need them you can load them directly …
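A minimal sketch of what such a cache layout could look like; the directory name follows the suggestion above, but `model_path`/`load_model` and the loading call are illustrative placeholders, not part of any existing pipeline.

```python
# Hedged sketch: keep every trained/fine-tuned model under one
# trained_models/ directory and resolve them by name. The helper names
# and the loading step are illustrative assumptions.
from pathlib import Path

TRAINED_MODELS_DIR = Path("trained_models")

def model_path(name: str) -> Path:
    """Return the on-disk location for a named model, e.g. 'sentiment-bert'."""
    return TRAINED_MODELS_DIR / name

def load_model(name: str) -> Path:
    path = model_path(name)
    if not path.exists():
        raise FileNotFoundError(f"train or download the model into {path} first")
    # Placeholder: real code would call something like
    # AutoModel.from_pretrained(path) here instead of returning the path.
    return path
```

With this layout, each pipeline stage asks for a model by name instead of hard-coding paths.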
-
https://github.com/myshell-ai/MeloTTS
I tried to modify the `export-onnx-ljs.py` script and got as far as `get_text`. MeloTTS returns tones as well as phonemes; is this easy to support in sherpa…
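For context, since the frontend yields one tone id per phoneme, an export would have to keep two parallel int sequences aligned. A minimal sketch of that alignment, assuming the usual VITS blank-interspersing trick; the variable values are illustrative, not actual MeloTTS output.

```python
# Hedged sketch: MeloTTS's frontend produces a tone per phoneme, so an
# exported model would take two parallel int sequences. Both must be
# interspersed with blanks identically so they stay aligned.
def intersperse(seq, item=0):
    """Insert `item` between and around every element, as VITS does with blanks."""
    out = [item] * (2 * len(seq) + 1)
    out[1::2] = seq
    return out

phones = [12, 45, 7]  # phoneme ids (illustrative values)
tones = [0, 3, 1]     # one tone id per phoneme (illustrative values)

phones_in = intersperse(phones)
tones_in = intersperse(tones)
assert len(phones_in) == len(tones_in)
```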
-
Embedded using the API
Significantly underperforms vs other models
In most cases, each embedding covers the full text of a Supreme Court decision
Indexed with HNSW.
Should I use a different…
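One common alternative to embedding an entire decision as a single vector is to split each document into overlapping chunks and index a vector per chunk. A minimal chunking sketch; the window and overlap sizes are illustrative, and the embedding/index calls are left as placeholders.

```python
# Hedged sketch: word-window chunking before embedding, instead of one
# embedding per full decision. Sizes are illustrative assumptions.
def chunk_words(text: str, size: int = 200, overlap: int = 50):
    """Split text into overlapping windows of `size` words."""
    words = text.split()
    step = size - overlap
    return [
        " ".join(words[i:i + size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]

# Each chunk would then be embedded and added to the HNSW index with an
# id that maps back to the parent decision (embedding call omitted here).
chunks = chunk_words("word " * 500)
```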
-
When I run this code:
from allennlp.predictors.predictor import Predictor  # import needed for the call below
predictor = Predictor.from_path("https://s3-us-west-2.amazonaws.com/allennlp/models/bert-base-srl-2019.06.17.tar.gz", cuda_device=opts.cuda_device)
it raises an error:
Traceback (m…
-
### Model description
[Monarch Mixer BERT](https://hazyresearch.stanford.edu/blog/2023-07-25-m2-bert) is sub-quadratic in sequence length. This has enabled the development of [32k-context retrieval m…
-
### What model would you like?
So far, Ollama supports LLM and embedding models. I wonder if it could support popular reranking models later, such as:
1. [BAAI/bge-reranker-large](https://huggi…
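For reference, a reranker scores each (query, document) pair jointly rather than comparing precomputed embeddings. A minimal sketch of that interface; the word-overlap scorer is a toy stand-in for a real cross-encoder such as bge-reranker, not its actual behavior.

```python
# Hedged sketch of the reranking interface: score (query, doc) pairs
# jointly, then sort documents by score. toy_score is an illustrative
# placeholder for a cross-encoder model call.
def toy_score(query: str, doc: str) -> float:
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank(query, docs):
    return sorted(docs, key=lambda d: toy_score(query, d), reverse=True)

docs = ["ollama runs local models", "bananas are yellow"]
top = rerank("local ollama models", docs)[0]
```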
-
After training a summarization model on CNN/DM, I tried to run validation on my checkpoint.
The script I used is as follows:
`export BERT_DATA_PATH=../bert_data/
export MODEL_PATH=../model/
python train.py \
…
-
![Screen Shot from awesomescreenshot.com](https://www.awesomescreenshot.com/api/v1/destination/image/show?ImageKey=tm-12284-44299-c032a954489dc4e41f5d98e137a434ad)
---
**Source URL**:
[https://chatg…
-
I'm not sure what to put for the params; are there any docs?
```
outputNames: new Map([
["encoder", "output"],
]),
tokenizerParams: {
bosTokenID: 0,
padTokenID…
-
A current disadvantage of doing NER with large language models is that they cannot match the performance of a fine-tuned BERT. Is there any way to address this, for example through prompting? If the lar…
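One common workaround is few-shot prompting: ask the LLM to emit entities in a fixed format and parse the reply. A minimal sketch; the prompt wording and the pipe-separated output convention are illustrative choices, and no real model client is shown.

```python
# Hedged sketch: few-shot NER prompting. The prompt format and the
# `text|type` output convention are illustrative assumptions; the
# actual LLM call is omitted.
PROMPT = """Extract named entities as `text|type` lines.

Example:
Input: Barack Obama visited Paris.
Entities:
Barack Obama|PER
Paris|LOC

Input: {sentence}
Entities:
"""

def parse_entities(reply: str):
    """Parse `text|type` lines from a model reply into (text, label) pairs."""
    pairs = []
    for line in reply.strip().splitlines():
        if "|" in line:
            text, label = line.rsplit("|", 1)
            pairs.append((text.strip(), label.strip()))
    return pairs

# Parsing a hypothetical model reply (no real LLM call here):
entities = parse_entities("Angela Merkel|PER\nBerlin|LOC")
```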