This is an implementation of the AAAI'20 paper "Semantics-Aligned Representation Learning for Person Re-identification". We leverage dense semantics to address both the spatial misalignment and the semantics misalignment challenges in person re-identification.
How do you set up the training arguments to perform model training on a cluster with multiple GPUs?
I think it might be related to the following arguments. Could you give me more detailed suggestions?
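Since the repository's exact flag names are not shown here, the sketch below only illustrates how a typical PyTorch training script wires up multi-GPU arguments. The flag names (`--gpu-ids`, `--world-size`, `--dist-url`) and the `build_parser` helper are hypothetical; check the repo's actual train script for the real argument names.

```python
import argparse

def build_parser():
    # Hypothetical argument set, modeled on common PyTorch re-ID training
    # scripts; the real repository flags may differ.
    p = argparse.ArgumentParser(description="multi-GPU training sketch")
    p.add_argument("--gpu-ids", default="0",
                   help="comma-separated GPU ids, e.g. 0,1,2,3")
    p.add_argument("--batch-size", type=int, default=64,
                   help="total batch size across all GPUs")
    p.add_argument("--world-size", type=int, default=1,
                   help="number of distributed processes (for DDP)")
    p.add_argument("--dist-url", default="tcp://127.0.0.1:23456",
                   help="rendezvous URL for torch.distributed init")
    return p

def parse_gpu_ids(spec):
    # "0,1,2,3" -> [0, 1, 2, 3]
    return [int(i) for i in spec.split(",") if i.strip() != ""]

if __name__ == "__main__":
    args = build_parser().parse_args(["--gpu-ids", "0,1,2,3"])
    gpu_ids = parse_gpu_ids(args.gpu_ids)
    print(gpu_ids)
    # With these ids a script would typically either wrap the model in
    # torch.nn.DataParallel(model, device_ids=gpu_ids), or launch one
    # process per GPU via torchrun and use DistributedDataParallel.
```

For a single multi-GPU node, `DataParallel` needs no extra launch machinery; for a multi-node cluster, the usual pattern is one process per GPU started with `torchrun`, with `--world-size` and `--dist-url` passed to `torch.distributed.init_process_group`.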