-
### 🐛 Describe the bug
I am training the `meta-llama/Llama-3.2-1B` model using **LLaMA-Factory** with the following YAML configuration:
```yaml
### model
model_name_or_path: meta-llama/Llama-3.2-1B
# …
```
-
# Reptile: A Scalable Meta-Learning Algorithm #
- Author: Alex Nichol, John Schulman, OpenAI
- Origin: https://blog.openai.com/reptile/
- Related:
  - https://d4mucfpksywv.cloudfront.net/researc…
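For quick reference, the core of Reptile is: sample a task, run a few steps of SGD on it, then move the initialization toward the adapted weights. Below is a minimal sketch of that outer loop on a toy sine-regression problem; the task distribution, feature map, step counts, and learning rates are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal Reptile sketch on toy sine-regression tasks (illustrative only).
rng = np.random.default_rng(0)

def sample_task():
    """A task is a sine wave with random amplitude and phase."""
    amp, phase = rng.uniform(0.5, 2.0), rng.uniform(0, np.pi)
    return lambda x: amp * np.sin(x + phase)

def sgd_on_task(weights, task, inner_steps=5, inner_lr=0.02):
    """Inner loop: a few SGD steps of least-squares regression on fixed features."""
    w = weights.copy()
    for _ in range(inner_steps):
        x = rng.uniform(-np.pi, np.pi, size=10)
        feats = np.stack([np.ones_like(x), x, x**2], axis=1)  # simple fixed features
        pred = feats @ w
        grad = feats.T @ (pred - task(x)) / len(x)            # MSE gradient
        w -= inner_lr * grad
    return w

# Reptile outer loop: theta <- theta + eps * (phi - theta)
theta = np.zeros(3)
outer_lr = 0.1
for _ in range(1000):
    phi = sgd_on_task(theta, sample_task())
    theta += outer_lr * (phi - theta)

print("meta-learned init:", theta)
```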
-
Thanks for including instructions on how to run this software on the Hoffman2 Cluster. I have a couple of suggestions for your instructions relative to Hoffman2.
**Suggestion no. 1**: Instead of mo…
-
```shell
# DeepSpeed launch across 2 nodes x 8 GPUs; command truncated in the original report.
RANK=8
deepspeed --num_gpus=8 --num_nodes=2 train.py \
  --base_model --micro_batch_size 4 \
  --wandb_run_name mora_math_r8 --lora_target_modules q_proj,k_proj,v_proj,o_proj,ga…
```
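For context, here is a minimal sketch of what those flags typically correspond to in a plain PEFT LoRA setup. This is not the training script above: the base model name is a placeholder, the alpha/dropout values are illustrative defaults, and the target-module list is reproduced only up to where the command is truncated.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; substitute whatever --base_model points at.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Mirrors RANK=8 and --lora_target_modules from the command above
# (target list shown truncated there; alpha/dropout are illustrative defaults).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity-check how many params actually train
```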
-
### System Info
```shell
accelerate 1.1.1
neuronx-cc 2.14.227.0+2d4f85be
neuronx-distributed 0.8.0
neuronx-distributed-training 1.0.0
optimum …
```
-
**Feature Request: LangGraph Integration for Adaptive Agent Workflows in PufferLib**
**Objective**: Expand PufferLib's capabilities by integrating LangChain, TRL (Transformers Reinforcement Learning)…
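To make the request a bit more concrete, below is a minimal sketch of the kind of graph-based agent workflow LangGraph provides. The state fields and node logic are invented placeholders for illustration; nothing here uses PufferLib's actual API.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

# Invented placeholder state for illustration only.
class AgentState(TypedDict):
    observation: str
    action: str

def policy_node(state: AgentState) -> dict:
    # Stand-in for an adaptive policy step (e.g. an LLM or RL policy call).
    return {"action": f"act-on:{state['observation']}"}

graph = StateGraph(AgentState)
graph.add_node("policy", policy_node)
graph.add_edge(START, "policy")
graph.add_edge("policy", END)
app = graph.compile()

print(app.invoke({"observation": "obs-0", "action": ""}))
```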
-
Thanks a lot for your work. I am kind of new to Xilinx development, and this repo has helped me a lot in understanding more. I work on a Zynq MP US+ on a Mercury XU5 on an ST1 dev board from Enclustra.
I am t…
-
## What problem does this address?
Training and Meta relaunched https://learn.wordpress.org/ in August this year. Training has since been discussing how we can increase traffic to the site, especially…
-
Hello, I want to do some benchmarking using OpenRLHF in a memory-constrained environment (1-2 nodes, each with a single A30 GPU, 24 GB). Thus, I have had to use other HF models than the ones used in the …
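For anyone reproducing this constraint, a rough back-of-the-envelope check of whether a model fits on a 24 GB A30 can be done from the parameter count alone. The sketch below is a generic estimate, not anything OpenRLHF-specific: the parameter counts, dtype sizes, and the Adam-state accounting are my assumptions, and activations and framework overhead are ignored.

```python
# Rough GPU-memory estimate for full fine-tuning with Adam
# (weights + gradients + optimizer states + fp32 master weights).

BYTES_PER_PARAM_BF16 = 2
BYTES_PER_PARAM_FP32 = 4

def training_footprint_gib(num_params: float) -> float:
    weights = num_params * BYTES_PER_PARAM_BF16          # bf16 weights
    grads = num_params * BYTES_PER_PARAM_BF16            # bf16 gradients
    adam_states = num_params * 2 * BYTES_PER_PARAM_FP32  # fp32 m and v
    master_weights = num_params * BYTES_PER_PARAM_FP32   # fp32 master copy
    return (weights + grads + adam_states + master_weights) / 1024**3

for name, n in [("1B", 1e9), ("3B", 3e9), ("7B", 7e9)]:
    print(f"{name}: ~{training_footprint_gib(n):.1f} GiB needed vs 24 GiB on an A30")
```

By this estimate, only models around 1B parameters leave headroom on a single A30 without offloading or parameter-efficient methods, which is why smaller HF models are needed here.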
-
This is a tracking issue for all work related to adding "guided learning paths" to the docs.
## Tasks
- [ ] Define the desired paths and goals for each
- [ ] Add front page links for the paths
…