It seems like the CelebA datasets don't filter out non-matching image sizes, which causes problems when running the notebooks provided in `/reproducability`: the default behaviour downloads both (3, 218, 178) and (3, 32, 32) images into the `celeba32` folder, which then breaks the models (for instance, `CNP.ipynb` fails when everything is run from scratch). Adding a `transforms.Resize()` to the `__init__()` method of `CelebA64` fixes this; alternatively, a setup script that filters images into separate directories could also be useful.
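For the setup-script alternative, a minimal sketch could look like the following. This is not code from the repo: the function name, the `mismatched/` directory, and the expected size are all hypothetical, and it assumes the images are PNGs readable by Pillow.

```python
from pathlib import Path

from PIL import Image


def sort_by_size(src_dir, expected=(178, 218)):
    """Move images whose (width, height) differs from `expected` into a
    sibling `mismatched/` directory, so the dataset folder ends up
    containing a single resolution. All names here are hypothetical."""
    src = Path(src_dir)
    dest = src / "mismatched"
    moved = []
    for path in sorted(src.glob("*.png")):
        # Check the size with the file closed before renaming it.
        with Image.open(path) as img:
            mismatched = img.size != expected
        if mismatched:
            dest.mkdir(exist_ok=True)
            path.rename(dest / path.name)
            moved.append(path.name)
    return moved
```

Run once against the download folder, this would leave only 178×218 images in place and quarantine everything else, which avoids touching the dataset classes at all.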
Additionally, the `eval()` call in `imgs.py` seems like an easy source of bugs and confusion. I think it would be better to split the datasets into a separate `datasets.py` file and have `imgs.py` provide the dict with direct class references, avoiding `eval()` entirely.
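The dict-of-class-references pattern could be sketched like this; the class names and the `get_dataset` helper are illustrative, not the repo's actual API:

```python
# datasets.py (hypothetical split): dataset classes live here.
class CelebA32:
    shape = (3, 32, 32)


class CelebA64:
    shape = (3, 64, 64)


# imgs.py: map names directly to class objects instead of eval()-ing
# a string, so typos fail loudly and no arbitrary code can run.
DATASETS = {
    "celeba32": CelebA32,
    "celeba64": CelebA64,
}


def get_dataset(name):
    """Look up a dataset class by name, with a helpful error message."""
    try:
        return DATASETS[name.lower()]
    except KeyError:
        raise ValueError(
            f"Unknown dataset {name!r}; valid options: {sorted(DATASETS)}"
        )
```

Besides being safer than `eval()`, this keeps the set of valid dataset names discoverable in one place and makes the error case explicit.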