-
### Describe the issue
Hello,
see [also this discussion](https://github.com/microsoft/onnxruntime/discussions/22427). I'm opening this one because I think it's an issue; sifting through previous issues…
-
### Describe the issue
Env:
- RK3576: Debian 12 aarch64, CPU only
- PC: Ubuntu 20.04, x86
- RAM: 8GB
Description:
Hello,
1. I am currently working with the onnxruntime-training demo,…
-
### Describe the issue
Issue:
I wanted to run the pre-training script `https://github.com/haotian-liu/LLaVA/blob/main/scripts/v1_5/pretrain.sh`, but it ends in a device mismatch error. It seems that the…
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussi…
-
```
[ Info: Training machine(XGBoostClassifier(test = 1, …), …).
[ Info: XGBoost: starting training.
┌ Warning: [04:42:46] WARNING: [/workspace/srcdir/xgboost/src/common/error_msg.cc:27](https://ji…
-
## Summary
In order to train this model, the following are required:
- The `fwd`, `bwd`, `loss` and `opt` ops are supported e2e
- tenstorrent/tt-mlir#77
- tenstorrent/tt-mli…
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
Currently, on-device training is [out of scope](https://www.w3.org/2021/04/web-machine-learning-charter.html#out-of-scope):
>Training capabilities are out of scope due to limited availability of re…
-
### 🐛 Describe the bug
When using HYBRID_SHARD instead of FULL_SHARD on PyTorch 2.4.1, the loss of our model behaves as if it were being trained on one node (despite training on two). When u…
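For context, the two strategies differ in how ranks are grouped: FULL_SHARD shards parameters across all ranks, while HYBRID_SHARD shards only within each node and replicates across nodes, so gradients must still be all-reduced between the node-level replicas. A minimal sketch of selecting the strategy with PyTorch's FSDP API (the `wrap_model` helper and its arguments are illustrative, not from the original report):

```python
# Sketch: choosing an FSDP sharding strategy in PyTorch.
# FULL_SHARD   -> parameters sharded across ALL ranks.
# HYBRID_SHARD -> parameters sharded within a node, replicated across nodes;
#                 inter-node gradient all-reduce must still happen, which is
#                 the sync step that would make multi-node loss match
#                 single-node behavior if it were missing.
import torch.nn as nn
from torch.distributed.fsdp import (
    FullyShardedDataParallel as FSDP,
    ShardingStrategy,
)

def wrap_model(model: nn.Module, hybrid: bool) -> FSDP:
    """Wrap `model` with the chosen strategy.

    Assumes torch.distributed is already initialized (e.g. via torchrun)
    before this is called.
    """
    strategy = (
        ShardingStrategy.HYBRID_SHARD if hybrid else ShardingStrategy.FULL_SHARD
    )
    return FSDP(model, sharding_strategy=strategy)

# The two strategies are distinct enum members of ShardingStrategy:
print(ShardingStrategy.FULL_SHARD, ShardingStrategy.HYBRID_SHARD)
```

When debugging a loss discrepancy like the one above, comparing per-step gradient norms between a FULL_SHARD and a HYBRID_SHARD run is one way to check whether the inter-node reduction is actually taking place.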
-
I wanted to fine-tune the model based on my processed chat files, but when I ran this command: `tune run lora_finetune_single_device --config config/mistral/qlora_train_config.yaml`, I got this erro…