-
In axolotl, there's a config parameter you can set:
`train_on_inputs: false`
It changes the way the loss is calculated when training a LoRA, i.e. it ignores the loss on input tokens and only tra…
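A minimal sketch of what masking the input tokens looks like, assuming the common Hugging Face / PyTorch convention that a label of `-100` is ignored by the cross-entropy loss (the token ids and prompt length below are made up for illustration, not taken from axolotl):

```python
# Toy sequence: prompt (input) tokens followed by response tokens.
input_ids = [101, 2054, 2003, 1037, 14924, 102, 3231, 999]
prompt_len = 6  # number of prompt tokens (hypothetical)

# Copy ids to labels, then mask the prompt so it contributes no loss.
labels = list(input_ids)
labels[:prompt_len] = [-100] * prompt_len

# Only the response tokens keep real labels and drive the gradient.
print(labels)
```

With `train_on_inputs: true`, by contrast, `labels` would simply equal `input_ids` and the prompt tokens would also be scored by the loss.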
-
I'm now training reward models using your code and I discovered that reward models are not logged during their creation.
https://github.com/tlc4418/llm_optimization/blob/main/configs/config_rm.yaml…
-
When training on one's own background-free dataset, there is noise in the output 2D GS points. How can I remove it?
![image](https://github.com/hbb1/2d-gaussian-splatting/assets/47420641/b1aa153c…
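One common heuristic for this kind of floater noise (not a function of the 2d-gaussian-splatting codebase itself, just an assumed post-processing step) is to prune points whose opacity falls below a threshold, since near-transparent Gaussians often correspond to noise. The arrays and the `0.05` threshold below are illustrative:

```python
import numpy as np

# Made-up stand-ins for exported 2D GS point positions and opacities.
rng = np.random.default_rng(0)
points = rng.normal(size=(1000, 2))   # per-point 2D positions
opacity = rng.uniform(size=1000)      # per-point opacity in [0, 1]

# Keep only points with opacity above a (tunable) threshold.
keep = opacity > 0.05
clean_points = points[keep]

print(clean_points.shape)
```

Other options, depending on the noise, include statistical outlier removal on the point cloud; the right threshold is dataset-dependent.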
-
### Search before asking
- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and fou…
-
```
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
==((====))== Unsloth: Fast Llama patching release 2024.6
\\ /| GPU: NVIDIA A100 80GB PCIe MIG 7g.80gb. Max memory: 7…
```
-
Hi, I encountered two issues during training: 1. Memory usage gradually increases during training, eventually resulting in an "out of memory" error; 2. There is a lack of data in the second phas…
-
Why?
I think it would be helpful to have an overview for both devs and IxD/CD showing which data points are affected at each user step in the manage-training journey. I believe this way it will be easie…
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
Hi, here is another issue about the reproducibility of the COIN dataset results.
I've also tried to reproduce your COIN dataset results using 8 A100 GPUs.
However, the evaluation result gives too low …
-
When training for about 10 epochs, an error occurred; batch size is 2.
```
CUDA out of memory. Tried to allocate 11.47 GiB (GPU 0; 11.99 GiB total capacity; 11.80 GiB already allocated; 0 bytes free; 23.3…
```