-
When running the command `tune run generate ./custom_quantization_generation_config.yaml`, I encountered the following error:
`AttributeError: module 'torchtune.utils' has no attribute 'gen…
-
If I use a projection layer with DDP, it causes:
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside …
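A minimal single-process sketch of a DDP-wrapped projection layer (the layer name and shapes are illustrative, not from the original report). A common trigger for this error is running the wrapped module's forward more than once before a single backward, so the sketch keeps one forward per backward:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process CPU process group, just to make the sketch runnable.
dist.init_process_group(
    backend="gloo", init_method="tcp://127.0.0.1:29500", rank=0, world_size=1
)

# Hypothetical projection layer standing in for the one in the report.
projection = DDP(torch.nn.Linear(16, 8))
optimizer = torch.optim.SGD(projection.parameters(), lr=0.1)

x = torch.randn(4, 16)

# One forward per backward: calling projection(x) a second time before
# backward() is a typical way to hit "Expected to mark a variable ready
# only once", since DDP's reducer sees the same parameter twice.
loss = projection(x).pow(2).mean()
loss.backward()
optimizer.step()

dist.destroy_process_group()
```

If the model genuinely needs two passes, combining them inside a single `forward` (or enabling `static_graph=True` on recent PyTorch versions) is the usual workaround.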
-
### Prerequisites
- [X] I have read the [documentation](https://hf.co/docs/autotrain).
- [X] I have checked other issues for similar problems.
### Backend
Local
### Interface Used
UI
### CLI Com…
-
Please make it compatible with Python 3.11
```
13:56:35-478694 INFO Configuring accelerate...
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line …
-
Hi,
I recently fine-tuned the phi-3.5-moe-instruct and phi-3.5-mini-instruct models using PEFT LoRA. The MoE model seems to perform much worse than 3.5 Mini. Are there any specific things …
-
I'm bringing my own PyTorch training script, and I'm interested in using SM Debugger to profile function calls in my training jobs. The [API Glossary](https://github.com/awslabs/sagemaker-debugger/blo…
-
DiffBIR: how to set up training without the restoration module
-
### System Info
- `transformers` version: 4.44.2
- Platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.…
-
No matter what I try, I can't run the training. I have tried compiling binop, and it compiles fine, but running doesn't work:
on Ubuntu 18.04 LTS (Python 3.6, PyTorch 4.0, no GPU):
```
python3 main.…
```
-
## 🐛 Bug
When using distributed training with a pretrained model, backprop seems to error out due to an in-place modification.
## To Reproduce
I have converted a repo: https://github.com/talreiss/Mean-Shift…
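The in-place autograd error is easy to reproduce outside of distributed training; a minimal sketch (unrelated to the Mean-Shift repo itself): `sigmoid` saves its output for the backward pass, so mutating that output in place breaks gradient computation:

```python
import torch

a = torch.ones(3, requires_grad=True)
b = torch.sigmoid(a)  # sigmoid saves its output for use in backward
b.add_(1)             # in-place edit of a tensor autograd still needs

try:
    b.sum().backward()
except RuntimeError as e:
    print("autograd error:", e)  # "...modified by an inplace operation..."

# Fix: use the out-of-place version so autograd's saved tensor is untouched.
a2 = torch.ones(3, requires_grad=True)
c = torch.sigmoid(a2) + 1
c.sum().backward()
print(a2.grad)
```

With DDP the same failure mode just surfaces during the synchronized backward; replacing `tensor.op_(...)` calls with their out-of-place counterparts (or `.clone()` before mutating) is the usual fix.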