Closed: PeYceBall closed this issue 3 years ago
Hi! Thank you for your contribution! Please re-check all issue template checklists - unfilled issues would be closed automatically. And do not forget to join our slack for collaboration.
hi,
yeah, that's correct, Catalyst is supposed to have top-1 accuracy as the default alias :)
As a possible solution, we could always add (1,) to the topk_args, but we have not implemented it yet.
Nevertheless, it would be a valuable contribution if you could do so!
The AccuracyMetric code is very straightforward.
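For reference, here is a minimal sketch of that idea (the helper name is hypothetical, not Catalyst's actual API): normalize the user-supplied topk_args before use, so that a top-1 value is always computed and the default accuracy alias always has something to point at.

```python
# Hypothetical sketch, not Catalyst's actual code: guarantee that top-1
# accuracy is always computed by forcing 1 into the topk_args tuple,
# deduplicated and sorted.
from typing import Iterable, Tuple


def normalize_topk_args(topk_args: Iterable[int]) -> Tuple[int, ...]:
    """Return topk_args with 1 always included, sorted and without duplicates."""
    return tuple(sorted(set(topk_args) | {1}))


assert normalize_topk_args((2, 3)) == (1, 2, 3)
assert normalize_topk_args((1, 5)) == (1, 5)
```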
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
🐛 Bug Report
Apparently, AccuracyCallback raises an error when 1 is not in the tuple passed to it as topk_args.
How To Reproduce
I ran the code snippet from https://catalyst-team.github.io/catalyst/#getting-started on Google Colab, changing only the topk_args in AccuracyCallback.
#### Code sample
```python
import os
from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl, utils
from catalyst.data.transforms import ToTensor
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.02)

loaders = {
    "train": DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=ToTensor()), batch_size=32),
    "valid": DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=ToTensor()), batch_size=32),
}

runner = dl.SupervisedRunner(input_key="features", output_key="logits", target_key="targets", loss_key="loss")

# model training
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    num_epochs=1,
    callbacks=[
        dl.AccuracyCallback(input_key="logits", target_key="targets", topk_args=(2, 3)),
        # catalyst[ml] required
        dl.ConfusionMatrixCallback(input_key="logits", target_key="targets", num_classes=10),
    ],
    logdir="./logs",
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
    verbose=True,
    load_best_on_end=True,
)

# model inference
for prediction in runner.predict_loader(loader=loaders["valid"]):
    assert prediction["logits"].detach().cpu().numpy().shape[-1] == 10

features_batch = next(iter(loaders["valid"]))[0]
# model stochastic weight averaging
model.load_state_dict(utils.get_averaged_weights_by_path_mask(logdir="./logs", path_mask="*.pth"))
# model tracing
utils.trace_model(model=runner.model, batch=features_batch)
# model quantization
utils.quantize_model(model=runner.model)
# model pruning
utils.prune_model(model=runner.model, pruning_fn="l1_unstructured", amount=0.8)
# onnx export
utils.onnx_export(model=runner.model, batch=features_batch, file="./logs/mnist.onnx", verbose=True)
```
Expected behavior
AccuracyCallback should work with any tuple of positive integers.
Environment
Additional context
From what I understand, the bug appears because of this line: https://github.com/catalyst-team/catalyst/blob/2ff687e802250772f8614583af933d6613f87788/catalyst/metrics/_accuracy.py#L82 (the "01" after "accuracy" in the metric-name string).
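To illustrate the suspected failure mode (a simplified sketch, not Catalyst's actual code): if per-k results are stored under zero-padded keys like "accuracy01", "accuracy02", etc., and the default "accuracy" alias is hard-wired to "accuracy01", then any topk_args without 1 leaves that key missing and lookup fails.

```python
# Simplified sketch of the suspected failure mode, not Catalyst's actual code.
metrics = {f"accuracy{k:02d}": 0.0 for k in (2, 3)}  # topk_args=(2, 3)
# The default alias assumes top-1 was computed, i.e. that "accuracy01" exists:
metrics["accuracy"] = metrics["accuracy01"]  # raises KeyError: 'accuracy01'
```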
Checklist