Hi, thanks for publishing this very convenient tool for running experiments on disentanglement learning.
When I trained a model, I ran into the following problem:
I used --evaluate_metric mig sap_score irs factor_vae_metric dci while training a BetaVAE on the CelebA dataset.
However, I get:
anaconda3/lib/python3.7/site-packages/disentanglement_lib/data/ground_truth/named_data.py", line 65, in get_named_ground_truth_data
raise ValueError("Invalid data set name.")
ValueError: Invalid data set name.
In call to configurable 'dataset' (<function get_named_ground_truth_data at 0x7f8eed2a13b0>)
In call to configurable 'evaluation' (<function evaluate at 0x7f8e6e5558c0>)
I checked the 'named_data.py' file and found that 'celebA' is not in the named-data list (which contains dSprites, 3dshapes, mpi3d, cars3d, and smallnorb).
Is there any way to evaluate a model trained on 'celebA' with the disentanglement metrics?
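In case it helps clarify what I'm asking: my understanding is that the metrics need a GroundTruthData object with known factors, so evaluating on CelebA would apparently require a wrapper along these lines. This is only a rough, untested sketch that treats the 40 binary attributes as stand-in factors; the image/attribute loading and the 64x64 shape are placeholder assumptions of mine, not part of disentanglement_lib:

```python
import numpy as np
from disentanglement_lib.data.ground_truth import ground_truth_data


class CelebA(ground_truth_data.GroundTruthData):
  """Hypothetical CelebA wrapper: treats the 40 binary attributes as factors.

  Note: CelebA has no true generative factors, so
  sample_observations_from_factors can only return *an* image whose
  attributes match, which may already break the assumptions behind the metrics.
  """

  def __init__(self, images, attributes):
    # images: [N, 64, 64, 3] uint8 array; attributes: [N, 40] binary array.
    # (Loading and cropping CelebA to 64x64 is left out of this sketch.)
    self.images = images
    self.attributes = attributes

  @property
  def num_factors(self):
    return self.attributes.shape[1]

  @property
  def factors_num_values(self):
    return [2] * self.num_factors

  @property
  def observation_shape(self):
    return [64, 64, 3]

  def sample_factors(self, num, random_state):
    # Sample attribute vectors from real images so the combinations exist.
    indices = random_state.choice(self.images.shape[0], num)
    return self.attributes[indices]

  def sample_observations_from_factors(self, factors, random_state):
    # Return one image whose attributes match each requested factor vector.
    observations = []
    for factor in factors:
      matches = np.where((self.attributes == factor).all(axis=1))[0]
      index = random_state.choice(matches)
      observations.append(self.images[index])
    return np.stack(observations).astype(np.float32) / 255.
```

Is something like this the intended route, or is evaluating CelebA simply out of scope because it has no ground-truth factors?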
Thanks