bghira opened 2 weeks ago
You can manually initialize each tracker and pass the instance in, similar to the custom trackers: https://huggingface.co/docs/accelerate/usage_guides/tracking#implementing-custom-trackers
Just pass the instance to `log_with`.
(Looking at it, we can/should expand the docs on this.)
You can then use the `get_tracker` API afterwards to run things yourself: https://huggingface.co/docs/accelerate/usage_guides/tracking#accessing-the-internal-tracker
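for reference, a minimal sketch of the custom-tracker pattern from the linked docs — to keep this standalone it uses a plain class exposing the same attributes and methods (`name`, `requires_logging_directory`, `tracker`, `store_init_configuration`, `log`) rather than importing accelerate; in real use you would subclass `accelerate.tracking.GeneralTracker` and pass the instance to `Accelerator(log_with=my_tracker)`:

```python
# sketch of a custom tracker mirroring the GeneralTracker interface;
# in practice, subclass accelerate.tracking.GeneralTracker and pass
# the instance to Accelerator(log_with=...)
class DictTracker:
    name = "dict_tracker"            # identifier used by get_tracker()
    requires_logging_directory = False

    def __init__(self, run_name):
        self.run_name = run_name
        self.config = {}
        self.history = []

    @property
    def tracker(self):
        # get_tracker("dict_tracker", unwrap=True) would return this
        return self.history

    def store_init_configuration(self, values):
        # called by accelerator.init_trackers(..., config=values)
        self.config.update(values)

    def log(self, values, step=None):
        # called by accelerator.log(values, step=step)
        self.history.append((step, values))
```

with this, after `accelerator = Accelerator(log_with=DictTracker("run1"))` and `accelerator.init_trackers("project")`, `accelerator.get_tracker("dict_tracker")` hands the instance back so you can log to it directly.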
System Info
Information
Tasks
no_trainer script in the examples folder of the transformers repo (such as run_no_trainer_glue.py)
Reproduction
Expected behavior
this isn't a minimal reproducer but it outlines what we're doing to trigger the problem and also sort of work around it.
the Accelerator init receives the `--report_to` value, which can be a CSV list like `wandb,tensorboard` or just `all`, which i guess forwards these configurations to all of the trackers. however, it fails to consider the limitations on the types each receiving backend can handle.
wandb cannot handle torch dtypes or accelerate's own configs / kwargs objects, as they do not serialise.
similarly, tensorboard only handles `int, float, str, bool, torch.Tensor`, but everything is passed through anyway. i'm not sure what the best way to handle this is — maybe an `ignore_unsupported_values` flag that we can set to True, which would then not pass unsupported types into a given backend. the reason i'm requesting this be supported directly in Accelerate is that we cannot manually initialise each tracker individually. if we could do that, or maybe i'm just missing how to do so, that would negate this request too.