vanderschaarlab / synthcity

A library for generating and evaluating synthetic tabular data for privacy, fairness and data augmentation.
https://www.vanderschaar-lab.com/
Apache License 2.0

Metrics.Evaluate only returns a few metrics #293

Closed: LinoMMV closed this issue 1 month ago

LinoMMV commented 1 month ago

Description

The `Metrics.evaluate()` function only returns a subset of the requested metrics, without raising any errors. When passing the full metrics dictionary from the docs, I only get the following in return:

```
sanity.data_mismatch.score
sanity.common_rows_proportion.score
sanity.nearest_syn_neighbor_distance.mean
sanity.close_values_probability.score
sanity.distant_values_probability.score
stats.jensenshannon_dist.marginal
stats.ks_test.marginal
performance.linear_model.gt
performance.linear_model.syn_id
performance.linear_model.syn_ood
performance.mlp.gt
performance.mlp.syn_id
performance.mlp.syn_ood
performance.xgb.gt
performance.xgb.syn_id
performance.xgb.syn_ood
performance.feat_rank_distance.corr
performance.feat_rank_distance.pvalue
detection.detection_xgb.mean
```

Changing the dict or using the default one only makes the removed metrics disappear; no additional metrics ever show up. I tried the Iris and diabetes sklearn datasets as well as some others.

System Information

robsdavis commented 1 month ago

Hi @LinoMMV,

Could you post a minimal code example that reproduces your error, so I can assess what might be causing your issue? Thanks!

LinoMMV commented 1 month ago

Thank you for the quick response, @robsdavis.

I figured out the issue. There were some missing values in the synthetic data (it was generated externally using ctab-gan). Running hyperimpute on it first makes all the metrics show up. Still, this should probably throw an error.
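Roughly, the imputation step looks like the sketch below (a minimal sketch using hyperimpute's `Imputers` plugin registry; the specific plugin choice is not important, anything that fills the NaNs should work):

```python
import pandas as pd
from hyperimpute.plugins.imputers import Imputers

# Synthetic data generated externally with ctab-gan (same file as in the snippet below).
Y = pd.read_csv('./ctab/Support2_fake_2.csv', index_col=0)

# Show which columns contain missing values before imputing.
print(Y.isna().sum())

# Impute before building the GenericDataLoader; "hyperimpute" is one of the
# plugins exposed by hyperimpute's Imputers registry.
imputer = Imputers().get("hyperimpute")
Y_imputed = imputer.fit_transform(Y)
Y_imputed.columns = Y.columns  # restore the original column names, just in case
```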

Here is the code anyway:

```python
import pandas as pd
from synthcity.metrics import Metrics
from synthcity.plugins.core.dataloader import GenericDataLoader

X = pd.read_csv('./Real Data/support2.csv', index_col=0, dtype={})
Y = pd.read_csv('./ctab/Support2_fake_2.csv', index_col=0, dtype={})

real = GenericDataLoader(X, target_column='death')
syn = GenericDataLoader(Y, target_column='death')

score = Metrics.evaluate(
    X_gt=real,
    X_syn=syn,
    metrics={
        'sanity': ['data_mismatch', 'common_rows_proportion', 'nearest_syn_neighbor_distance', 'close_values_probability', 'distant_values_probability'],
        'stats': ['jensenshannon_dist', 'chi_squared_test', 'feature_corr', 'inv_kl_divergence', 'ks_test', 'max_mean_discrepancy', 'wasserstein_dist', 'prdc', 'alpha_precision', 'survival_km_distance'],
        'performance': ['linear_model', 'mlp', 'xgb', 'feat_rank_distance'],
        'detection': ['detection_xgb', 'detection_mlp', 'detection_gmm', 'detection_linear'],
        'privacy': ['delta-presence', 'k-anonymization', 'k-map', 'distinct l-diversity', 'identifiability_score'],
    },
    task_type='classification',
)
print(score)
```
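As a quick check of the original symptom, comparing the requested metric groups against what actually comes back makes any silently dropped groups visible. This is just a sketch that assumes `score` from the snippet above is a DataFrame indexed by `category.metric.submetric` names, which is what the output listed in the issue suggests:

```python
# Assumes `score` is the DataFrame returned by Metrics.evaluate above, with an
# index of entries such as "sanity.data_mismatch.score".
requested_groups = {'sanity', 'stats', 'performance', 'detection', 'privacy'}
returned_groups = {str(name).split('.')[0] for name in score.index}

print("Requested groups with no results:", sorted(requested_groups - returned_groups))
```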