-
`accelerate` supports running in multi-GPU and in multi-CPU scenarios.
Can I run `accelerate` in scenarios where both CPUs and GPUs exist at the same time?
For example, machine 1 uses th…
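For context, `accelerate` jobs are normally described by a per-machine config file; a homogeneous two-machine GPU job looks roughly like the sketch below (IPs, ports, and process counts are illustrative). Whether heterogeneous CPU-and-GPU machines can join the same job is exactly the open question here.

```yaml
# Illustrative accelerate config for machine 0 of a 2-machine GPU job
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
num_machines: 2
machine_rank: 0
num_processes: 4          # total processes across both machines
main_process_ip: 10.0.0.1
main_process_port: 29500
mixed_precision: "no"
```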
-
Yi 1.5 models are some of the [most capable](github.com/01-ai/yi-1.5) sub-10B LLMs out there. It would be amazing to get fine-tuning capabilities for these models via Unsloth.
-
Torchtune is a great project that explains such a complex fine-tuning process in such an elegant way.
I would think having a simple benchmark against other popular LLM fine-tuning approaches is valu…
-
- [ ] [LoRA Land: Fine-Tuned Open-Source LLMs that Outperform GPT-4 - Predibase](https://predibase.com/blog/lora-land-fine-tuned-open-source-llms-that-outperform-gpt-4)
# LoRA Land: Fine…
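The technique behind the post can be sketched concretely: LoRA freezes the pretrained weight `W` and trains only a low-rank pair `B·A`, so the adapted layer computes `W x + (alpha/r)·B A x`. A minimal NumPy sketch, with layer sizes, rank, and scaling chosen purely for illustration:

```python
import numpy as np

d_out, d_in, r = 512, 512, 8  # illustrative layer sizes and LoRA rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero init
alpha = 16.0                               # LoRA scaling factor

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

full = W.size           # parameters a full fine-tune would update
lora = A.size + B.size  # parameters LoRA actually trains
print(f"trainable fraction: {lora / full:.3%}")  # 3.125% at r=8, d=512
```

Because `B` is zero-initialized, the adapter is a no-op before training, which is what makes swapping many task-specific adapters over one base model cheap.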
-
Hello authors,
I have some questions to ask about your _general_dataset.json_.
1. Why didn't you include models other than GPT-4 and GPT-3.5?
2. What are the specific versions of GPT-4 and GPT-3.5 that you used?
…
-
# URL
- https://arxiv.org/abs/2405.05904
# Affiliations
- Zorik Gekhman, N/A
- Gal Yona, N/A
- Roee Aharoni, N/A
- Matan Eyal, N/A
- Amir Feder, N/A
- Roi Reichart, N/A
- Jonathan Herzig…
-
This is an LFX mentorship project intended to run in the Fall of 2024.
This is related to https://github.com/cncf/mentoring/pull/1287
# Description
Kai is a tool designed to leverage AI for a…
-
Considerations:
- What is the base layer / existing LLM we use as the foundation?
- license
- cost
- sustainability
- How to make it Epiverse-aware? In particular, fine-tuning vs RAG.
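To make the fine-tuning-vs-RAG consideration concrete: the RAG option keeps the base LLM frozen and injects Epiverse knowledge at query time by retrieving the most relevant documents. A toy retrieval step, with bag-of-words counts standing in for a real embedding model (the document strings are made up for illustration):

```python
from collections import Counter
import math

def embed(text):
    # Toy stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "epiverse packages for outbreak analytics",
    "installing R packages from CRAN",
    "fine tuning large language models",
]
query = "which epiverse packages help with outbreak analytics"
q = embed(query)
best = max(docs, key=lambda d: cosine(q, embed(d)))
print(best)  # the Epiverse outbreak-analytics document wins
```

The retrieved text is then prepended to the prompt, so updating knowledge means updating the document store rather than retraining, which bears directly on the license, cost, and sustainability bullets above.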
-
**Is your feature request related to a problem? Please describe.**
Currently we only support `retrieve_online_documents`. We should add support for vector search in historical retrieval so us…
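To illustrate the requested behavior, here is a brute-force sketch of what vector search over historically stored embeddings could look like; the function name, row layout, and data are hypothetical stand-ins, not Feast's actual API:

```python
import numpy as np

def retrieve_historical(query_vec, rows, k=2):
    # rows: list of (entity_id, embedding) pairs from the offline store.
    # Brute-force cosine similarity for clarity; a real implementation
    # would push this down to an indexed offline store.
    def score(row):
        _, emb = row
        return float(np.dot(query_vec, emb) /
                     (np.linalg.norm(query_vec) * np.linalg.norm(emb)))
    return sorted(rows, key=score, reverse=True)[:k]

rng = np.random.default_rng(1)
rows = [(f"item_{i}", rng.standard_normal(4)) for i in range(5)]
query = rows[3][1] + 0.01 * rng.standard_normal(4)  # query near item_3
top = retrieve_historical(query, rows, k=2)
print([eid for eid, _ in top])  # item_3 should rank first
```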
-
We can fine-tune GPT-3.5 (i.e. ChatGPT) according to the [official blog](https://openai.com/blog/gpt-3-5-turbo-fine-tuning-and-api-updates) and [docs](https://platform.openai.com/docs/guides/fine-tuning)
…
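Per those docs, the flow is: prepare a chat-format JSONL training file, upload it with purpose `fine-tune`, then create a fine-tuning job on `gpt-3.5-turbo`. A sketch of the data-prep step (file name and example content are illustrative; the API calls need a key, so they are shown commented out):

```python
import json

# Each training example is one JSON line holding a full chat exchange.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What is LoRA?"},
        {"role": "assistant", "content": "A low-rank adapter method for fine-tuning."},
    ]},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Then upload and start the job (requires OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
# client.fine_tuning.jobs.create(training_file=file.id, model="gpt-3.5-turbo")

lines = open("train.jsonl").read().splitlines()
print(len(lines))
```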