Open sakaia opened 4 years ago
It seems the acceleration doesn't come from this code base, since there is no sparse kernel implemented here: https://github.com/NVIDIA/apex/blob/master/apex/contrib/sparsity/sparse_masklib.py#L57
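To illustrate the point above: the mask library only selects which weights to zero following the 2:4 ("two out of four") pattern; it does not change the math kernel. A minimal sketch of that pattern, in plain Python (the function name `mask_2_to_4` is illustrative, not apex's API):

```python
# Hypothetical sketch of 2:4 structured pruning: in every group of 4
# weights, zero the 2 with the smallest magnitude. Applying this mask
# alone gives no speedup, because a dense GEMM over the masked weights
# still multiplies the zeros; a sparse tensor-core kernel is needed.

def mask_2_to_4(weights):
    """Zero the 2 smallest-magnitude entries in each group of 4."""
    out = list(weights)
    for i in range(0, len(out) - len(out) % 4, 4):
        group = out[i:i + 4]
        # Indices of the two smallest magnitudes within this group.
        drop = sorted(range(4), key=lambda j: abs(group[j]))[:2]
        for j in drop:
            out[i + j] = 0.0
    return out

print(mask_2_to_4([0.9, -0.1, 0.5, 0.05]))  # → [0.9, 0.0, 0.5, 0.0]
```

The 50% structural sparsity is why speedups are only realized on hardware and kernels (Ampere sparse tensor cores) that can skip the zeroed half.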
Are you using the Ampere architecture? I believe sparsity only works on Ampere architectures for now.
I am trying to run ASP's toy_problem.py, but nothing seems to change. Is there a way to observe the performance gain?
I am comparing
train_loop/arg.num_xxx_steps
for the dense and sparse runs, and I see only a few percent difference. Another document on sparsity reports a 50% performance gain on BERT (in MLPerf), but toy_problem.py shows no effect from sparsity. Of course, BERT uses TensorRT for MLPerf, so I understand the software interface is different.
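For what it's worth, the comparison described above reduces to simple arithmetic on the measured step times. A hedged sketch (the function name and sample numbers are illustrative, not from toy_problem.py):

```python
# Hypothetical helper for comparing dense vs. sparse training throughput:
# given the measured wall-clock time per step for each run, report the
# percent reduction in step time.

def speedup_percent(dense_step_s, sparse_step_s):
    """Percent reduction in per-step time going from dense to sparse."""
    return 100.0 * (dense_step_s - sparse_step_s) / dense_step_s

# A few percent, as described above, would look like:
print(round(speedup_percent(1.00, 0.97), 1))  # → 3.0
```

A large gain (e.g. the 50% figure cited for BERT) would only appear when the masked weights are actually executed by sparse kernels, as with TensorRT on Ampere.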