usc-isi-i2 / dsbox-ta2

The DSBox TA2 component
MIT License

Some datasets failed in last LL0 run #182

Closed RqS closed 6 years ago

RqS commented 6 years ago

Failed datasets:

"pass_fail": {"n_passing": 284, "n_failing": 10, "n_datasets": 294, "success_rate": 0.9659863945578231},
"failing_datasets": ["LL0_1485_madelon", "LL0_1468_cnae_9", "LL0_1038_gina_agnostic", "LL0_1041_gina_prior2", "LL0_1122_ap_breast_prostate", "LL0_300_isolet", "LL0_1176_internet_advertisements", "LL0_1457_amazon_commerce_reviews", "LL0_4134_bioresponse", "LL0_1515_micro_mass"]
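As a quick sanity check, the reported success rate is consistent with the pass/fail counts above:

```python
# Numbers taken from the pass_fail summary in this issue.
n_passing = 284
n_failing = 10
n_datasets = 294

# The counts should add up, and success_rate should be n_passing / n_datasets.
assert n_passing + n_failing == n_datasets
success_rate = n_passing / n_datasets
assert abs(success_rate - 0.9659863945578231) < 1e-15
```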
Traceback (most recent call last):
  File "/user_opt/dsbox/dsbox-ta2/python/dsbox/template/search.py", line 891, in _calculate_score
    assert (truth_amount_train == target_amount_train), "[ERROR] Truth and prediction does not match"
AssertionError: [ERROR] Truth and prediction does not match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/user_opt/dsbox/dsbox-ta2/python/dsbox/template/search.py", line 564, in evaluate_pipeline
    evaluation_result = self._evaluate(configuration, cache, dump2disk)
  File "/user_opt/dsbox/dsbox-ta2/python/dsbox/template/search.py", line 637, in _evaluate
    training_ground_truth, training_prediction, test_ground_truth, test_prediction)
  File "/user_opt/dsbox/dsbox-ta2/python/dsbox/template/search.py", line 951, in _calculate_score
    raise NotSupportedError('[ERROR] metric calculation failed')
d3m.exceptions.NotSupportedError: [ERROR] metric calculation failed
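The traceback shows `_calculate_score` asserting that the ground-truth and prediction row counts match before computing a metric; when they differ, the assertion fails and the outer handler re-raises as a metric-calculation error. A minimal sketch of that guard (function and metric names here are illustrative, not the actual dsbox-ta2 API):

```python
def calculate_score(ground_truth, prediction, metric):
    """Compute a metric, but fail fast if truth and prediction sizes differ."""
    if len(ground_truth) != len(prediction):
        # Mirrors the "[ERROR] Truth and prediction does not match" assertion
        # in search.py: a metric over misaligned rows would be meaningless.
        raise ValueError(
            "[ERROR] Truth and prediction does not match: "
            f"{len(ground_truth)} truth rows vs {len(prediction)} predicted rows"
        )
    return metric(ground_truth, prediction)


def accuracy(truth, pred):
    """Toy accuracy metric: fraction of positions where truth equals prediction."""
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)


# Matching lengths: the metric is computed normally (2 of 3 correct here).
score = calculate_score([1, 0, 1], [1, 0, 0], accuracy)

# Mismatched lengths (e.g. rows dropped before predicting) raise immediately,
# which is the failure mode reported for the 10 datasets above.
```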
liangmuxin commented 6 years ago

I am guessing the failure is because we randomly delete dataset columns before predicting.

ckxz105 commented 6 years ago

Fixed after commit cd8a647c4610ac54beef891a191d8a4497761a2e.