-
Hi,
Thank you for your great work.
If you don't mind, could you provide us with minimal code or instructions to reproduce the results from the paper?
Or, the minimal script to run the code woul…
-
Hi, first of all, thank you so much for providing pre-trained models from so many experiments. What I want to ask is: I would like to fine-tune the pre-trained VCTK model on my own multi-speaker dataset. …
-
**Following the readme.md, I tried to run RAP for gsm8k using exllama, with the recommended instruction:**
`CUDA_VISIBLE_DEVICES=0,1 python examples/RAP/gsm8k/inference.py --base_lm exllama --exlla…
-
Instead of having them fixed, get them dynamically:
```
// Thanks to Zibri for this routine.
async function getPreferredModel(apiKey) {
  try {
    …
```
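A minimal Python sketch of the same idea as the (truncated) routine above: query the provider's model list instead of hard-coding model names, then pick the first available match from a preference order. The endpoint path, the `PREFERRED` list, and both function names here are assumptions for illustration, not part of the original routine.

```python
import json
import urllib.request

# Preference order is an assumption for illustration only.
PREFERRED = ["gpt-4o", "gpt-4-turbo", "gpt-3.5-turbo"]

def list_models(api_key, base_url="https://api.openai.com"):
    """Fetch available model IDs from an OpenAI-style /v1/models endpoint."""
    req = urllib.request.Request(
        f"{base_url}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return [entry["id"] for entry in payload.get("data", [])]

def pick_preferred(available, preferred=PREFERRED):
    """Return the first preferred model the API actually offers,
    falling back to the first available model."""
    for name in preferred:
        if name in available:
            return name
    return available[0] if available else None
```

The selection logic is kept separate from the HTTP call so it can be reused (and tested) without network access.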
-
Hello,
Thank you for this great package!
I would like to know on which datasets, and how, the two models used when running `OmniEvent.infer` were fine-tuned. That is, the two models which …
-
Recently made public:
https://openai.com/blog/whisper/
https://github.com/openai/whisper
Interesting, they have some multilingual models that can be used for multiple languages without fine tunin…
-
This is an umbrella issue for implementing a tuning infrastructure. By tuning we mean a type of Profile Guided Optimization flow where we compile a program/model with extra instrumentation and use the…
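Stripped of compiler specifics, the flow described above reduces to a measure-and-select loop. A generic Python sketch (the `measure` callback and candidate configurations are placeholders; a real PGO flow would recompile with the collected profile rather than merely pick a winner):

```python
import statistics

def autotune(candidates, measure, repeats=3):
    """PGO-style selection loop: measure each candidate configuration
    several times and keep the one with the lowest median cost.

    candidates: opaque config objects
    measure: callable(config) -> float, one instrumented run's cost
    """
    best_cfg, best_cost = None, float("inf")
    for cfg in candidates:
        cost = statistics.median(measure(cfg) for _ in range(repeats))
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost
```

Repeating each measurement and taking the median guards the selection against noisy individual runs.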
-
There has been some community appetite for classification tasks #1249 #1124. Incidentally, due to the use of classification models for RLHF, we already have some of the necessary components to support…
-
Hello,
I am using ecospat to run ESMs and I am encountering an error when I try to tune Maxent parameters. I received this error both when I tried to tune my own data and the data provided in t…
-
Are you planning on releasing code/documentation for training LoRAs for Lumina-Next-T2I/Lumina-Next-SFT? Full fine-tuning is great, but smaller models would be nice to have too.