Open xiaom233 opened 1 month ago
A related question is the actual batch size when using a standard PyTorch DataLoader with an Accelerate-wrapped model.
```
accelerate launch train.py
```

with the default setup: 1 machine, 8 GPUs

```python
loader = DataLoader(
    dataset=dataset,
    batch_size=cfg.train.batch_size,
    num_workers=cfg.train.num_workers,
    shuffle=False,
)
model, optimizer = accelerator.prepare(model, optimizer)
```
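A sketch of the batch-size arithmetic may help here (the numbers are hypothetical, and it assumes the DataLoader is also passed to `accelerator.prepare()`, which shards batches across processes; the snippet above prepares only the model and optimizer, in which case every process iterates the same, unsharded loader):

```python
# Batch-size arithmetic sketch (assumption: the DataLoader is also prepared,
# so batches are sharded across processes).
num_processes = 8            # `accelerate launch` on 1 machine with 8 GPUs
per_process_batch_size = 4   # hypothetical cfg.train.batch_size

# Effective (global) batch size per optimizer step when the loader is sharded:
effective_batch_size = per_process_batch_size * num_processes
print(effective_batch_size)  # 32
```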
Appreciate your reply!
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Appreciate a reply!
System Info
Information
Tasks
`no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)

Reproduction
My dataset sample looks like the following:
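A hypothetical sample of this general shape (field names invented for illustration) reproduces the problem, because a raw string field cannot be handled as a tensor:

```python
# Hypothetical dataset sample: numeric data alongside a raw text prompt.
# The str field is what Accelerate's broadcast utilities cannot handle.
sample = {
    "pixel_values": [0.1, 0.2, 0.3],   # stands in for an image tensor
    "prompt": "a photo of a cat",      # non-tensor str field
}
print(type(sample["prompt"]).__name__)  # str
```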
The `prepared_dataloader` will raise an error like the following:

```
TypeError: Unsupported types (<class 'str'>) passed to `_gpu_broadcast_one`...
```
I understand and respect the design. However, my research area sometimes involves text prompts, which are incompatible with Accelerate.
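One possible workaround, sketched here with invented names: keep raw strings out of the batches Accelerate handles and ship only integer ids (which travel fine as tensors), then look the text back up on the host side:

```python
# Workaround sketch (all names hypothetical): store prompts once per process
# and put only an integer id in each sample, so every field in the batch is
# tensor-friendly and survives Accelerate's broadcast/gather.
prompts = ["a cat", "a dog", "a bird"]  # host-side prompt table

def collate(samples):
    # Only the integer ids enter the batch; in real code these would be
    # collected into a torch.LongTensor.
    return {"prompt_id": [s["prompt_id"] for s in samples]}

batch = collate([{"prompt_id": 0}, {"prompt_id": 2}])
texts = [prompts[i] for i in batch["prompt_id"]]
print(texts)  # ['a cat', 'a bird']
```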
Expected behavior
I hope Accelerate can support non-tensor-type datasets, or at least raise a user warning instead of an error.