-
Can we pass `inputs_embeds` directly to the `generate` function, as in the following PyTorch Transformers usage?
```
generated_ids = self.model.generate(
inputs_embeds=input_t…
```
-
Hi authors,
Thank you for your nice library! I am trying to use it to run Mistral 7B with CoT on GSM8K. I have several questions about the code when using `HFModel`:
- Which mistral 7B m…
-
Expose metrics from the `eos quota` subcommand.
```
eos quota ls -m -n
```
```
quota=node uid=xxxx space=/eos/test/ usedbytes=0 usedlogicalbytes=0 usedfiles=2 maxbytes=1000000000000 maxlogicalbytes=500…
```
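Since the `-m` (monitoring) flag prints space-separated `key=value` pairs, an exporter could parse each line with something like the sketch below. The sample line is illustrative, not captured from a real instance:

```python
# Sketch: parse one line of `eos quota ls -m -n` monitoring output.
def parse_quota_line(line: str) -> dict:
    """Split a monitoring-format line into a key -> value dict."""
    out = {}
    for token in line.split():
        if "=" in token:
            key, _, value = token.partition("=")
            out[key] = value
    return out

sample = ("quota=node uid=xxxx space=/eos/test/ usedbytes=0 "
          "usedlogicalbytes=0 usedfiles=2 maxbytes=1000000000000")
metrics = parse_quota_line(sample)
print(metrics["usedfiles"], metrics["maxbytes"])  # → 2 1000000000000
```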
-
This work is licensed under a Creative Commons Attribution 4.0 International license.
-
Conversation from #11: although OpenEOS lets you customize your own manager by providing only interface helpers, it might help to have some sort of starting point. Where should this example be hosted? …
-
Analyse external user test data for Typography + Color page
**Executive summary** of study
* Method: why and how was it structured?
* Key takeaways: what did we learn?
* Next steps
**Macchiato EOS …
-
### Enhancement summary
It would be nice if eos_designs generated a containerlab topology artifact.
### Which component of AVD is impacted
eos_designs
### Use case example
If you wanted to pre-depl…
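A generated artifact might look roughly like the sketch below. This is only an illustration of the idea; the node names, the `ceos` kind, and the `intended/configs/` paths are assumptions, not actual AVD output:

```yaml
# Illustrative containerlab topology; names, kinds, and paths are assumptions.
name: dc1
topology:
  nodes:
    dc1-spine1:
      kind: ceos
      startup-config: intended/configs/dc1-spine1.cfg
    dc1-leaf1:
      kind: ceos
      startup-config: intended/configs/dc1-leaf1.cfg
  links:
    - endpoints: ["dc1-spine1:eth1", "dc1-leaf1:eth1"]
```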
-
from bark import SAMPLE_RATE, generate_audio, preload_models
import sounddevice
from transformers import BarkModel, BarkProcessor
import torch
import numpy as np
from optimum.bettertransformer im…
-
- Some models output the wrong EOS token, so this is important.
- Special tokens show up as blank in the output because we use `llama_token_to_piece` with `special=False`, so they aren't even considered for ou…
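The `special=False` behavior can be illustrated with a small toy sketch. This is a Python stand-in for the concept, not the real llama.cpp API, and the token tables are made up:

```python
# Toy model of the behavior: with special=False, a special token
# renders as an empty piece, so it is invisible in the output stream.
SPECIAL_TOKENS = {2: "</s>"}              # e.g. an EOS token id (illustrative)
NORMAL_TOKENS = {5: "Hello", 6: " world"}

def token_to_piece(token_id: int, special: bool = False) -> str:
    if token_id in SPECIAL_TOKENS:
        # Only rendered as text when special=True.
        return SPECIAL_TOKENS[token_id] if special else ""
    return NORMAL_TOKENS.get(token_id, "")

print("".join(token_to_piece(t) for t in [5, 6, 2]))                # Hello world
print("".join(token_to_piece(t, special=True) for t in [5, 6, 2]))  # Hello world</s>
```

Because the EOS piece is empty under `special=False`, downstream string matching never sees it, which is why the stop-token logic has to work on token ids instead.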
-
I see in `RemoteExperienceMaker._generate_vllm()`, [line 375](https://github.com/OpenLLMAI/OpenRLHF/blob/4e15591a5abd19a14e4a72415603bce76c3e1567/openrlhf/trainer/ppo_utils/experience_maker.py#L375) t…