-
## Describe the bug
As part of our performance evaluation of the fms-hf-tuning fine-tuning stack, we observed a regression between the images we tested:
* quay.io/modh/fms-hf-tuning:01b3824c9…
-
It would be nice to have a language facility that allows users to add their own tuning parameters. A tuning parameter is essentially a hidden argument of type `i64` that you configure through the …
-
Dear developers,
I am observing quite different timings for a sample input 'casscf+xmspt2' when running BAGEL in parallel, either by invoking BAGEL directly or via `mpirun -np 1`. The node I am running on has two sockets …
-
### Summary
# Motivation
WasmEdge is a lightweight inference runtime for AI and LLM applications. The goal is to build specialized, fine-tuned models for the WasmEdge community. The model should be supported by Wa…
-
The WASI build needs several steps to reduce the final binary size.
- [x] Icall linking: https://github.com/dotnet/runtime/pull/91843
- [x] blocked by https://github.com/dotnet/runtime/pull/9209…
-
For self-hosted runners, I just need text inference, so I don't want it to load Mistral for fine-tuning or SDXL for images.
I tried setting:
`RUNTIME_OLLAMA_WARMUP_MODELS=llama3:instruct`
but…
-
# Motivation
In the llm-symptom-study experiments I'm running (and, I imagine, in many experimental setups) I have two sets of relevant labels for our experiment – those used in tuning, and those used i…
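The excerpt is cut off, but the tuning-vs-evaluation split it describes can be sketched minimally (the label values and `labels_for` helper below are hypothetical placeholders, not from the study):

```python
# Hypothetical sketch: keep tuning labels and evaluation labels as two
# disjoint sets, so hyperparameter tuning never touches held-out labels.
tuning_labels = {"fever", "cough"}
eval_labels = {"fatigue", "nausea"}

# Guard against accidental leakage between the two phases.
assert tuning_labels.isdisjoint(eval_labels), "label sets must not overlap"

def labels_for(phase: str) -> set[str]:
    """Return the label set relevant to a given experiment phase."""
    if phase == "tuning":
        return tuning_labels
    if phase == "eval":
        return eval_labels
    raise ValueError(f"unknown phase: {phase!r}")
```

Enforcing disjointness up front is the design choice worth noting: it turns a silent evaluation-leakage bug into an immediate failure.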
-
Hi,
I have tried running alpaca_finetuning_v1/finetuning.sh and encountered a runtime error.
```
Traceback (most recent call last):
  File "finetuning.py", line 294, in <module>
    main(args)
  File "finetun…
-
## 🐛 Bug Description
When running the `fine_tuning_tutorial_jax.ipynb` notebook on a CPU in Google Colab, I encountered the following error:
```
--------------------------------------------------…
-
This is an umbrella issue for implementing a tuning infrastructure. By tuning we mean a type of Profile Guided Optimization flow where we compile a program/model with extra instrumentation and use the…
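The instrument-profile-recompile flow described above can be sketched abstractly (every function name here, including `tune` and `fake_compile`, is a hypothetical placeholder rather than the actual infrastructure):

```python
import time

# Hypothetical PGO-style tuning loop: benchmark each candidate
# configuration of a compiled artifact and keep the fastest one.
def benchmark(run, repeats: int = 3) -> float:
    """Return the best-of-N wall-clock time of a runnable."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        run()
        best = min(best, time.perf_counter() - start)
    return best

def tune(compile_with, candidates):
    """compile_with(cfg) returns a runnable; return the fastest config."""
    return min(candidates, key=lambda cfg: benchmark(compile_with(cfg)))

# Toy stand-in for compilation: bind a work-size parameter into a closure.
def fake_compile(cfg):
    return lambda: sum(range(cfg["n"]))

best = tune(fake_compile, [{"n": 100_000}, {"n": 1_000}])
```

A real flow would additionally feed the collected profile back into the compiler, but the select-by-measurement core is the same.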