sekhaish closed this issue 2 years ago
Pasting the entire thing here!
```
warnings.warn( WARNING:torch.distributed.run:
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
fused_weight_gradient_mlp_cuda module not found. gradient accumulation fusion with weight gradient computation disabled.
fused_weight_gradient_mlp_cuda module not found. gradient accumulation fusion with weight gradient computation disabled.
Traceback (most recent call last):
  File "train.py", line 16, in Failures:
```
Hi @sekhaish
It seems to be an import error. What is your timm version? We use 0.3.2:

```
pip install timm==0.3.2
```
Hi @changlin31 , thank you for your response. Yes, I made sure the version is 0.3.2, and it did not work; I got the same error as above. I tried playing around with a few other versions and the error did not change.
It's weird. Could you try starting a Python console in your environment and running:

```python
import timm
timm.__version__
from timm.models.efficientnet_blocks import make_divisible, SqueezeExcite, resolve_se_args
```

For my environment, this outputs the following and raises no error:

```
'0.3.2'
```
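The manual check above can be wrapped in a small diagnostic helper. This is only a sketch (the function name `check_timm_compat` is hypothetical, not part of the repo); it relies on the hedged assumption that `resolve_se_args` was removed from `timm.models.efficientnet_blocks` in later timm releases, which is what makes newer versions fail here:

```python
import importlib.util


def check_timm_compat():
    """Return a one-line diagnostic for the installed timm version (sketch)."""
    # Probe for timm without importing it, so this also works when it is absent.
    if importlib.util.find_spec("timm") is None:
        return "timm not installed; run: pip install timm==0.3.2"
    import timm
    try:
        # The symbol this repo imports; assumed removed in newer timm releases.
        from timm.models.efficientnet_blocks import resolve_se_args  # noqa: F401
    except ImportError:
        return (f"timm {timm.__version__} lacks resolve_se_args; "
                f"run: pip install timm==0.3.2")
    return f"timm {timm.__version__} looks compatible"


print(check_timm_compat())
```

Running this in the training environment should point directly at a version mismatch instead of failing deep inside `train.py`.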
Hi @changlin31 , upon downgrading the torch version to 1.7.0 with timm 0.3.2, I was able to run it without any errors. Thanks for your time, much appreciated.
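For reproducibility, the combination that worked here could be pinned in a requirements file. This is only a sketch of the two versions mentioned in this thread; the exact `torch` 1.7.0 build (CPU vs. a specific CUDA version) will vary by platform:

```
torch==1.7.0
timm==0.3.2
```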
Hi there, I am trying to run the search on NATS-Bench with the CIFAR-10 dataset and encountered this same error. Could you kindly help me with this?