-
Supports SFT and RLHF pipelines for instruction datasets such as Alpaca: https://github.com/hiyouga/LLaMA-Efficient-Tuning
LoRA fine-tuning runs on a single RTX 3090 GPU; the QLoRA method is also supported (12 GB VRAM minimum).
LoRA weights of the fine-tuned model: https://huggingface.co/hiyouga/baichuan-7b-sft
Run the following command to …
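The memory savings that make single-3090 LoRA fine-tuning feasible come from training a low-rank pair of matrices instead of each full weight matrix. A minimal sketch of the parameter arithmetic (the layer dimensions and rank are illustrative, not taken from the repo):

```python
# LoRA replaces the dense update dW (d_out x d_in) with B @ A,
# where B is d_out x r and A is r x d_in, with r << min(d_out, d_in).
def lora_param_counts(d_out: int, d_in: int, r: int):
    full = d_out * d_in          # trainable params in a full fine-tune of this layer
    lora = r * (d_out + d_in)    # trainable params in the LoRA pair
    return full, lora

# One hypothetical 4096x4096 projection at rank 8:
full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora, full // lora)  # 16777216 65536 256 -> ~256x fewer trainable params
```

QLoRA pushes this further by also quantizing the frozen base weights, which is why the VRAM floor can drop to around 12 GB.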
-
We had a bug in our code that caused us to publish thousands of events in a single unit of work. It ended up triggering some Axon behavior that brought our application to its knees.
If there is a v…
-
You will see the problem in the text below. This occurs with gpt-4o and version 0.5 of agent zero, but I have similar issues with other models.
User message ('e' to leave):
> Write a college level …
-
- `orca.automl` is used to customize models for any domain, while most of the metrics in `orca.automl.metrics` are designed and optimized for time-series tasks.
- The built-in metrics in `orca.au…
-
## Is your feature request related to a problem? Please describe.
I'm always frustrated when I'm looking at the replication worker code. It takes an object from the storage, decompresses it, unmarsh…
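The per-object pipeline described here (fetch, decompress, unmarshal) can be sketched with stdlib `zlib` and `json`; the function names and payload shape are hypothetical, not from the actual worker code:

```python
import json
import zlib

def store(obj) -> bytes:
    # Write path: marshal the object, then compress it, as the storage side would.
    return zlib.compress(json.dumps(obj).encode("utf-8"))

def replicate(blob: bytes):
    # The worker's read path: decompress the blob, then unmarshal it.
    return json.loads(zlib.decompress(blob).decode("utf-8"))

event = {"id": 1, "payload": "hello"}
assert replicate(store(event)) == event  # round-trips intact
```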
-
With all the growing activity and focus on multimodal models, is this library restricted to tuning text-only LLMs?
Do we plan to have tuning support for vision or, more generally, multimodal models?
-
## Current Default
- `target_file_size_multiplier = 1`
- `block_size = 4096`
- `OptimizeLevelStyleCompaction(512M)` implies
- `target_file_size_base = 64M`
- snappy/lz4 compression types
…
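Under level-style compaction, the per-level target file size scales as `target_file_size_base * target_file_size_multiplier^(level - 1)`, so with the defaults above every level targets the same 64M. A quick check of that arithmetic:

```python
def target_file_size(level: int, base: int = 64 << 20, multiplier: int = 1) -> int:
    # RocksDB level-style compaction: target file size at level L (L >= 1)
    # is base * multiplier^(L - 1); multiplier = 1 keeps every level flat.
    return base * multiplier ** (level - 1)

# With the current defaults, levels 1-3 all target 64 MiB:
print([target_file_size(l) for l in (1, 2, 3)])
# [67108864, 67108864, 67108864]

# Raising the multiplier to 2 would give deeper levels larger files:
print([target_file_size(l, multiplier=2) for l in (1, 2, 3)])
# [67108864, 134217728, 268435456]
```

This is why changing `target_file_size_multiplier` alone affects only levels below L1; the base still sets the L1 target.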
-
**Link to the notebook**
[Notebook](https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker-triton/business_logic_scripting/stable_diffusion/sm-triton-bls-stablediff.ipynb)
**Describe…
-
Hi, I have some questions about fine-tuning the m3 model.
1. I don't understand the roles of `fix_encode=True` and `Unified_finetuning=True`. Can you explain?
2. Regarding small datasets and large datasets' …
-
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 56.50 GiB.
V100 32G
5B model; the `enable_model_cpu_offload()` option and the `pipe.vae.enable_tiling()` optimization were both enabled
using …
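One way to see why `enable_tiling()` helps with allocations like this: the VAE decode activation footprint grows with the number of pixels decoded at once, so tiling bounds the peak by the tile size rather than the full frame. A back-of-the-envelope sketch (the sizes and overhead factor are illustrative assumptions, not measurements from this 5B pipeline):

```python
def decode_peak_bytes(height: int, width: int, channels: int = 3,
                      bytes_per_elem: int = 2, overhead: int = 8) -> int:
    # Rough activation peak for decoding an h x w region: proportional to
    # pixel count; `overhead` is a stand-in for intermediate feature maps.
    return height * width * channels * bytes_per_elem * overhead

full = decode_peak_bytes(2048, 2048)   # decoding the whole frame at once
tiled = decode_peak_bytes(512, 512)    # peak when decoding 512x512 tiles
print(full // tiled)  # 16 -> tiling cuts the peak roughly 16x in this sketch
```

If the 56.50 GiB allocation happens inside the VAE decode despite tiling, it may be coming from a stage that runs before tiling applies, such as the transformer forward pass.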