| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| tests/benchmarks/scenarios/test_task_aware.py | 0 | 1 | 0.0% |
| Total: | 2 | 3 | 66.67% |

| Files with Coverage Reduction | New Missed Lines | % |
|---|---|---|
| tests/benchmarks/scenarios/test_task_aware.py | 1 | 34.21% |
| Total: | 1 | |

| Totals | |
|---|---|
| Change from base Build 9014120340: | 0.0% |
| Covered Lines: | 15082 |
| Relevant Lines: | 29114 |
Hello, I found a bug in the `task_incremental_benchmark` generator. Even with the MNIST example in the notebook "_03benchmarks.ipynb", `exp.task_labels` is a set, not the list that the `train` method expects.

I think the problem is here, where we convert the set into a list only for the `task_label` attribute: https://github.com/ContinualAI/avalanche/blob/0d6f7155446b2b4b25b33b8e7d21284ca8d7d06a/avalanche/benchmarks/scenarios/task_aware.py#L89-L105

The set is never converted into a list for the `task_labels` attribute, and in this method we use indexing, which a set does not support: https://github.com/ContinualAI/avalanche/blob/0d6f7155446b2b4b25b33b8e7d21284ca8d7d06a/avalanche/evaluation/metric_utils.py#L249-L275
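For reference, the failure boils down to plain Python behaviour (this is just an illustration, not Avalanche code): indexing works on a list but raises a `TypeError` on a set.

```python
# Illustration only: a Python set does not support indexing, which is what
# the metric helper linked above relies on.
task_labels_as_set = {0}       # what exp.task_labels currently is
task_labels_as_list = [0]      # what downstream code expects

print(task_labels_as_list[0])  # 0
print(task_labels_as_set[0])   # TypeError: 'set' object is not subscriptable
```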
Below you can find the small fix + an assert I added in the TaskAware tests.
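The actual patch is not reproduced here; the following is only a rough, self-contained sketch of the kind of change and test assert the report describes. The `Experience` class below is a hypothetical stand-in, not the real Avalanche code from `task_aware.py`.

```python
# Hypothetical sketch of the fix: store the task labels as a sorted list
# instead of a set, mirroring what is already done for the singular
# `task_label` attribute. `Experience` is a stand-in class, not Avalanche's.

class Experience:
    def __init__(self, task_labels):
        unique = set(task_labels)              # labels may arrive as a set
        self.task_labels = sorted(unique)      # fix: expose a list, not a set
        self.task_label = self.task_labels[0]  # singular attribute, unchanged


# Hypothetical assert in the spirit of the TaskAware tests: every experience
# must expose task_labels as an indexable list.
exp = Experience({0})
assert isinstance(exp.task_labels, list)
assert exp.task_labels[0] == 0
```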