-
Hi,
I am getting around 3% WER with fast-beam-search and greedy-search. However, I get 70% WER when I use fast-beam-search-ngram. My decode configuration is below. I am using pruned_tran…
-
```python
from transformers import AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = Model()
model.init(model_name, use_quant=True, weight_dtype="in…
```
-
In most text-generating architectures, beam search provides a quality improvement by generating more natural text.
**Is it useful to use beam search with XLNet ?**
---
As far as I understand, s…
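The quality trade-off discussed above can be illustrated with a minimal, self-contained beam search sketch. The vocabulary and probability table here are invented for illustration; a real model such as XLNet would supply the log-probabilities:

```python
import math

# Toy next-token probability table (invented for illustration; a real
# language model would supply these scores).
NEXT = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.3, "</s>": 0.2},
    "a":   {"dog": 0.7, "cat": 0.2, "</s>": 0.1},
    "cat": {"</s>": 1.0},
    "dog": {"</s>": 1.0},
    "</s>": {},
}

def beam_search(start="<s>", beam_width=2, max_len=5):
    """Keep the beam_width highest-scoring partial sequences at each step."""
    beams = [([start], 0.0)]  # (token sequence, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            options = NEXT.get(seq[-1], {})
            if not options:  # sequence has ended
                finished.append((seq, score))
                continue
            for tok, p in options.items():
                candidates.append((seq + [tok], score + math.log(p)))
        if not candidates:
            beams = []
            break
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    finished.extend(beams)
    return max(finished, key=lambda c: c[1])

best_seq, best_score = beam_search()
print(best_seq)  # highest-probability complete sequence found
```

With `beam_width=1` this degenerates to greedy search; widening the beam lets lower-probability prefixes survive long enough to win overall.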
-
### Your current environment
```text
The output of `python collect_env.py`
```
### 🐛 Describe the bug
For example, in https://github.com/vllm-project/vllm/pull/646, if you examine the outputs g…
-
In https://github.com/freelawproject/courtlistener/issues/4597 we found that we have received some uploads for main documents with attachment numbers 0 or 1.
We should review whether it's possible to…
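A hedged sketch of the kind of check described (the record shape below is an assumption for illustration, not CourtListener's actual schema): flag main-document uploads whose attachment number is 0 or 1.

```python
# Records are illustrative dicts; the real schema differs.
def flag_suspect_uploads(records):
    """Return main-document uploads with attachment number 0 or 1."""
    return [
        r for r in records
        if r.get("is_main_document") and r.get("attachment_number") in (0, 1)
    ]

uploads = [
    {"id": 1, "is_main_document": True,  "attachment_number": 0},
    {"id": 2, "is_main_document": True,  "attachment_number": None},
    {"id": 3, "is_main_document": False, "attachment_number": 1},
]
print(flag_suspect_uploads(uploads))  # only record 1 is flagged
```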
-
As beam search is widely used in machine translation, could you support beam search sampling and evaluate the model?
-
Though I can see you have implemented this function in your other repository, 'video-caption-openNMT.pytorch', it is hard to comprehend. Would you please make it available in this repository? Than…
-
I am trying to decode a model trained with the recipe **pruned_transducer_stateless7_streaming**. I am able to decode successfully with fast beam search (without LG); however, when I try to decode with LG …
-
Right now, we're using beam search as an off-the-shelf component.
It would be great if:
* the search embedded some kind of patch-quality knowledge: the first patch generated should have a better…
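One way to sketch folding such quality knowledge into the search (the `quality` heuristic and the weighting are hypothetical stand-ins, not this project's actual scorer): re-rank each step's candidates by model log-probability plus a weighted quality bonus.

```python
# Sketch: quality-aware ranking inside one beam-search step.
# `quality` is a hypothetical stand-in; a learned patch-quality
# model would replace it.

def quality(seq):
    # Hypothetical heuristic: prefer shorter patches.
    return -0.1 * len(seq)

def rank_step(candidates, beam_width=2, lam=1.0):
    """Keep the top candidates by model score plus weighted quality bonus."""
    scored = [(seq, lp + lam * quality(seq)) for seq, lp in candidates]
    scored.sort(key=lambda c: c[1], reverse=True)
    return [seq for seq, _ in scored[:beam_width]]

candidates = [
    (["fix", "a"], -0.5),        # (partial patch, model log-prob)
    (["fix", "a", "b"], -0.35),
    (["fix"], -1.0),
]
print(rank_step(candidates))
```

Raising `lam` shifts the ranking from pure model likelihood toward the quality signal, so the first patch emitted is more likely to be a good one.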