Closed — zhaoxiongjun closed this issue 1 year ago
Hi @zhaoxiongjun !
I tested your code with the master branch, and it works fine. It seems like this was an issue in version 0.2.0.
Which version are you using?
Oh yes, it is indeed a version issue. I am experimenting with it on Google Colaboratory, following the official documentation, and the version installed there is 0.2.0.
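When a bug turns out to be version-dependent like this, it helps to confirm exactly which version the environment installed. A minimal sketch using only the standard library (assuming the PyPI distribution name is `avalanche-lib`; adjust if your install used a different name):

```python
from importlib import metadata

def installed_version(pkg: str) -> str:
    """Return the installed version string for a package, or 'not installed'."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return "not installed"

# Assumption: Avalanche is published on PyPI as "avalanche-lib".
print(installed_version("avalanche-lib"))
```

On Colab, upgrading to a release newer than 0.2.0 (or installing from the master branch) should then pick up the fix mentioned above.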
Hi @HamedHemati, sorry, I have another question: in the task-incremental scenario (where each task has a different set of label categories), is it possible to use the "JointTraining" strategy together with the `as_multitask` function as the upper bound on performance?
@zhaoxiongjun it's common practice to use JointTraining as an upper bound in task-incremental setups. However, some researchers consider it only a "soft" upper bound.
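To make the upper-bound intuition concrete, here is a toy sketch (not Avalanche's actual implementation): joint training merges all experiences and trains once on the union, while a continual strategy sees each experience only once, in sequence. The function names below are illustrative, not Avalanche API:

```python
def joint_training(experiences, train_step):
    """Upper bound: merge all experiences and train once on their union."""
    merged = [sample for exp in experiences for sample in exp]
    train_step(merged)

def incremental_training(experiences, train_step):
    """Continual setting: train on each experience in order, one at a time."""
    for exp in experiences:
        train_step(exp)

# Record what data each regime feeds to the training step.
seen_joint, seen_incr = [], []
joint_training([[1, 2], [3, 4]], lambda d: seen_joint.append(list(d)))
incremental_training([[1, 2], [3, 4]], lambda d: seen_incr.append(list(d)))
print(seen_joint)  # [[1, 2, 3, 4]]
print(seen_incr)   # [[1, 2], [3, 4]]
```

Because the joint learner never has to avoid forgetting earlier tasks, its accuracy is the natural reference point that continual strategies are compared against.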
🐛 Describe the bug
I used the avalanche/examples/task_incremental.py code and changed the strategy to "JointTraining"; the following error occurs:
AttributeError: 'AvalancheSubset' object has no attribute 'dataset'
Expected behavior
When I commented out the line
`model = as_multitask(model, 'classifier')`
the problem disappeared, but I don't think that is the result I expected. How should the upper bound for a task-incremental scenario be defined in Avalanche?
Additional context
This is my code:

```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import torch
from torch.nn import CrossEntropyLoss
from torch.optim import Adam

from avalanche.benchmarks.classic import SplitCIFAR10
from avalanche.models import SimpleMLP, as_multitask
from avalanche.training.supervised import JointTraining, Cumulative

device = "cpu"

model = SimpleMLP(input_size=32 * 32 * 3, num_classes=10)
model = as_multitask(model, 'classifier')

scenario = SplitCIFAR10(n_experiences=5, return_task_id=True)
train_stream = scenario.train_stream
test_stream = scenario.test_stream

optimizer = Adam(model.parameters(), lr=0.01)
criterion = CrossEntropyLoss()

strategy = JointTraining(
    model=model,
    optimizer=optimizer,
    criterion=criterion,
    train_mb_size=128,
    train_epochs=3,
    eval_mb_size=128,
    device=device,
)

strategy.train(train_stream)
strategy.eval(test_stream)
```