Firstly, I would like to express my appreciation for the insightful research presented in the paper.
The paper reports results for two settings:
baseline: training the model without pretraining on MALS
APTM: training the model with pretraining on MALS
I am interested in understanding the zero-shot performance, where no further training is performed after pretraining on MALS. Specifically, I would like to see evaluation results on the downstream datasets (CUHK-PEDES, ICFG-PEDES, and RSTPReid) for the following setting:
zero-shot: no fine-tuning; the model is only pretrained on MALS and then tested directly on the downstream datasets