-
H2O hosts a number of strong open-source models, such as falcon-40b, llama-65b, and vicuna-33b fine-tuned for instruction following.
It would be nice if they could be added.
They are hosted here: https://gpt.h2o.ai/
…
-
@bin123apple Where do you share the before (Fortran) vs. after (C++) translation pairs so @peihunglin can start an initial manual grading of them?
-
Yi Coder 1.5b is potentially a great model for fine-tuning on one's own codebase. It'd be great if you could release some SAMPLE portions of the dataset used for base training & instruction tuning (an…
-
Is there a script for this?
-
Currently, only the OpenAI API seems to be readily supported; integrating LiteLLM might prove more useful for working with Hugging Face models (such as https://huggingface.co/tiiuae/falc…
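As a hedged sketch of why a unified layer like LiteLLM helps here: a single entry point can route calls by a model-name prefix, so switching from OpenAI to a Hugging Face-hosted model becomes a one-string change. The `route_model` helper below is hypothetical and illustrates the idea only; it is not LiteLLM's actual implementation or API.

```python
# Hypothetical sketch of prefix-based routing, the kind of dispatch a
# unified LLM client (e.g. LiteLLM) performs internally so that callers
# only change the model string to switch providers.
def route_model(model: str) -> tuple[str, str]:
    """Split a model string into (provider, model_id)."""
    if "/" in model:
        provider, model_id = model.split("/", 1)
        return provider, model_id
    # In this sketch, bare names fall back to OpenAI-style routing.
    return "openai", model

# Example: a Hugging Face-hosted model vs. an OpenAI model name.
print(route_model("huggingface/tiiuae/falcon-40b"))  # ('huggingface', 'tiiuae/falcon-40b')
print(route_model("gpt-4"))                          # ('openai', 'gpt-4')
```

With a scheme like this, the rest of the application code stays provider-agnostic, which is the main argument for going through LiteLLM rather than hard-coding the OpenAI client.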
-
1. **Support plan**
When will the version supporting llava-llama3-70b be released?
Meanwhile, will you consider supporting unofficial versions, e.g. using an LLM such as llama3-120b?
huggin…
-
How do I add an EOS token?
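For context, at the token-id level "adding an EOS token" usually amounts to appending the tokenizer's end-of-sequence id to each encoded example. The sketch below shows only that mechanic with plain lists; `add_eos` and the id value `2` are hypothetical, and in practice you would use your tokenizer's own `eos_token_id` and its built-in options.

```python
# Hypothetical sketch: append an EOS id to a sequence of token ids.
# Real tokenizers expose the correct id as something like eos_token_id.
def add_eos(token_ids: list[int], eos_id: int) -> list[int]:
    """Return a copy with eos_id appended, unless it already ends with it."""
    if token_ids and token_ids[-1] == eos_id:
        return list(token_ids)
    return list(token_ids) + [eos_id]

# Example with a made-up eos_id of 2 (the value depends on the tokenizer).
print(add_eos([5, 17, 9], 2))  # [5, 17, 9, 2]
print(add_eos([5, 17, 2], 2))  # [5, 17, 2]
```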
-
#### The inference code in `inference.ipynb` takes 3 minutes to run on a Colab L4 GPU. Is there any way to speed up inference?
@swastikmaiti
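One common lever for this kind of slowdown, sketched below under assumptions (the right fix depends on the model, precision, and notebook internals), is batching prompts so the GPU processes several per forward pass instead of one at a time. The `batched` helper is hypothetical and not part of `inference.ipynb`.

```python
# Hypothetical sketch: group prompts into fixed-size batches so a model
# can process several inputs per forward pass instead of one at a time.
def batched(items, batch_size):
    """Yield consecutive slices of at most batch_size items."""
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

prompts = ["p1", "p2", "p3", "p4", "p5"]
print(list(batched(prompts, 2)))  # [['p1', 'p2'], ['p3', 'p4'], ['p5']]
```

Other levers worth checking before restructuring code include running in half precision and disabling gradient tracking during generation.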
-
### Describe the issue
Hello,
Looking at the dataset list, which dataset do the prompts with an empty `model` field belong to? For example:

```
"id": "wgByO4Y_0",
"model": "",
```

Thanks
-
First of all, I apologize if this is a nonsense question. My question arises from the need to edit the same image, not a transformed one. Projects like the img2img alternative test (an automatic1111 script) have pointe…