Open aredelmeier opened 1 year ago
Hi,
1) If I remember correctly, the dimension of the input layer of the VAE is a hyperparameter. For example, when testing REVISE there is the following:

```Python
vae_params = {
    "layers": [sum(model.get_mutable_mask()), 512, 256, 8],
    "epochs": 1,
}
```

but I'm not sure if that's the problem you mean.
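For illustration only (assuming `get_mutable_mask` returns a boolean array with one entry per feature, `True` meaning mutable, as the snippet above suggests), summing the mask counts the mutable features, which is exactly the VAE's input dimension:

```Python
import numpy as np

# Hypothetical mask for a dataset with 5 features, 2 of them immutable
# (True = mutable, False = immutable), mirroring model.get_mutable_mask().
mutable_mask = np.array([True, True, False, True, False])

# Summing a boolean array counts its True entries, i.e. the number of
# mutable features the VAE actually sees.
input_dim = int(np.sum(mutable_mask))
print(input_dim)  # 3

# The layer spec from the snippet above would then be:
layers = [input_dim, 512, 256, 8]
```

This is why hard-coding the total feature count breaks as soon as any feature is marked immutable.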
2) The reason the code doesn't work on GPU is mainly that my laptop doesn't have one, so I never tested that. The automated testing on GitHub also runs on CPU. It should be easy to fix, though; a pull request fixing that would be great!
Is what is described for 1. a good solution for this issue?
And 2. is being fixed in PR 187.
Hi, yes, for 1. that solution works.
It might be nice to update `experimental_setup.yaml` to reflect this.
I also noticed that the Quickstart in the README no longer works. I think it should be changed to something like

```Python
from carla.data.catalog.online_catalog import OnlineCatalog
from carla.models.catalog import MLModelCatalog
from carla.models.negative_instances import predict_negative_instances
from carla.recourse_methods.catalog import GrowingSpheres

data_name = "adult"
dataset = OnlineCatalog(data_name)

model = MLModelCatalog(dataset, "ann", "tensorflow")

factuals = predict_negative_instances(model, dataset.df)
test_factual = factuals.iloc[:5]

gs = GrowingSpheres(model)
counterfactuals = gs.get_counterfactuals(test_factual)
```
5. In the `feature/tutorial-notebook` branch, in `notebooks/how_to_use_carla.ipynb`, under CCHVAE,
```Python
hyperparams = {
    "data_name": dataset.name,
    "n_search_samples": 100,
    "p_norm": 1,
    "step": 0.1,
    "max_iter": 1000,
    "clamp": True,
    "binary_cat_features": False,
    "vae_params": {
        "layers": [len(ml_model.feature_input_order), 512, 256, 8],
        "train": True,
        "lambda_reg": 1e-6,
        "epochs": 5,
        "lr": 1e-3,
        "batch_size": 32,
    },
}
```
should be changed to
```Python
hyperparams = {
    "data_name": dataset.name,
    "n_search_samples": 100,
    "p_norm": 1,
    "step": 0.1,
    "max_iter": 1000,
    "clamp": True,
    "binary_cat_features": False,
    "vae_params": {
        "layers": [sum(model.get_mutable_mask()), 512, 256, 8],
        "train": True,
        "lambda_reg": 1e-6,
        "epochs": 5,
        "lr": 1e-3,
        "batch_size": 32,
    },
}
```
Great! Looking forward to seeing all the changes :)
Hi,
Since the latest release of CARLA, I think some errors have popped up.
The biggest problem is that (I believe) the dimension of the input layer of the VAE has to be adjusted for REViSE, CCHVAE, and CRUD if the immutable mask contains at least one `True`.
In addition, when running the methods on a GPU, there is some code that has to be adjusted (so far I've only found problems in these three methods, but I haven't tested all of them).
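The GPU adjustment is typically the standard PyTorch device pattern; a sketch only (the model and tensor here are placeholders, not CARLA's actual classes): pick the device once, then move both the model and every input tensor onto it so the forward pass works on CPU-only and GPU machines alike.

```Python
import torch

# Select GPU when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model and input; in CARLA these would be e.g. the VAE
# and the factual instances being explained.
model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(8, 4, device=device)

# Model and input share a device, so this runs in either setting.
out = model(x)
print(out.shape)  # torch.Size([8, 2])
```

Code that hard-codes `.cpu()` or creates tensors without a `device` argument is what usually breaks when a GPU is present.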
I'm not an expert on these methods, but may I open a pull request that changes these methods so that they run (on GPU) again? Thanks,
Annabelle