-
After stage-1 tuning, calling safe_save_model_for_hf_trainer to save the model parameters fails with "AttributeError: 'Trainer' object has no attribute 'deepspeed'".
When trying to fix this, I hit the same problem: "Attrib…
-
-
Hi! Thanks for your contribution.
When I trained with config/train/single_tf.yaml, the following error occurred:
```python
Traceback (most recent call last):
File "/usr/local/bin/dlhpcstarter", lin…
-
Dear author, I have been following the source code you provided. I am fairly new to deep learning development.
Regarding the Trainer, it reports an unexpected argument 'lr'. So if this code execu…
-
Need a female version
![Image](https://github.com/untamed-team/project-untamed/assets/106409863/9dfbd097-c611-4b24-8a87-9f4629af098b)
-
I imported the code from GitHub, and while running
!python /kaggle/working/Swin-TransUper/train.py
I got the following error:
```python
Traceback (most recent call last):
File "/kaggle/working/Swin-…
```
-
In the PPOv2 trainer's train(), at step "# 4. compute rewards", sequence_lengths_p1 is used when computing the reward index:
`actual_end = torch.where(sequence_lengths_p1 < rewards.size(1), sequence_lengths…
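As a rough illustration of what that `torch.where` does elementwise: it keeps `sequence_lengths_p1` as the reward index only while it is still a valid column into `rewards`, otherwise it substitutes a fallback index. This is a pure-Python sketch, not TRL's actual code; since the snippet is truncated, the fallback value is an assumption:

```python
def clamp_end_index(seq_len_p1: int, reward_len: int, fallback: int) -> int:
    # keep seq_len_p1 while it is a valid column index into `rewards`
    # (i.e. strictly less than rewards.size(1)), else use the fallback;
    # `clamp_end_index` is an illustrative name, not a TRL function
    return seq_len_p1 if seq_len_p1 < reward_len else fallback

print(clamp_end_index(3, 8, 2))  # 3: index still in bounds, kept as-is
print(clamp_end_index(8, 8, 7))  # 7: out of bounds, fallback is used
```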
-
```
2: Traceback (most recent call last):
2: File "/workspace/NeMo/examples/nlp/language_modeling/tuning/megatron_gpt_finetuning.py", line 64, in main
2: model = MegatronGPTSFTModel.restore_from(cfg.mod…
```
-
**Description:**
If a Shaman is neutral with trainers and vendors in Dalaran…
-
**Describe the bug**
As described in the title, an error occurs when launching the fine-tuning script linked [here](https://github.com/NVIDIA/NeMo/blob/main/tutorials/llm/mamba/mamba.rst#run-fine-tuning).
**Steps/…