Closed giorgiopiras closed 1 year ago
Hi,
Thank you for raising this issue. The warnings are most likely due to different package versions. I have committed the requirements.txt, which should fix the issue.
Hi, thanks for your reply. Just for clarity, in case anyone else needs to reproduce the experiments: you also need cudatoolkit=9.0 and matplotlib. It works fine now, thanks.
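For anyone else setting up the environment, the versions mentioned in this thread could be captured along these lines. This is a hypothetical sketch, not the authors' setup: the environment name and the exact pins are assumptions, and the committed requirements.txt should be treated as authoritative.

```shell
# Hypothetical setup based only on versions mentioned in this thread.
conda create -n anpvs python=3.6
conda activate anpvs
# cudatoolkit and matplotlib are the extras reported above as also needed.
conda install cudatoolkit=9.0 matplotlib
# The repo's committed requirements.txt is the authoritative pin list.
pip install -r requirements.txt
```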
Problem Introduction
Hi @divyam3897, I am trying to reproduce the network you used in the experiments of the paper "Adversarial Neural Pruning with Latent Vulnerability Suppression". Specifically, I am trying to train a VGG-16 on CIFAR-10 with your exact parameters and specifications. When running with TensorFlow 1.14, cuDNN 7.6.5, and Python 3.5, I still face several warnings.
Warnings
Numpy:
/home/gpiras/anaconda3/envs/anpvs_3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)])
Tensorflow:
WARNING:tensorflow:From /home/gpiras/anaconda3/envs/anpvs_3/lib/python3.6/site-packages/tensorflow/python/ops/image_ops_impl.py:1514: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Deprecated in favor of operator or tf.math.divide.
WARNING:tensorflow:Entity <bound method Pooling2D.call of <tensorflow.python.layers.pooling.MaxPooling2D object at 0x7f397ea4ad30>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: converting <bound method Pooling2D.call of <tensorflow.python.layers.pooling.MaxPooling2D object at 0x7f397ea4ad30>>: AttributeError: module 'gast' has no attribute 'Str'
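For context, the NumPy FutureWarning above refers to the old `(type, 1)` field-shape spelling in structured dtypes, which older TF releases use when registering quantized dtypes. A minimal illustration of the deprecated versus forward-compatible forms, outside of TensorFlow:

```python
import numpy as np

# Older TF builds quantized dtypes with the (type, 1) spelling,
# e.g. np.dtype([("qint8", np.int8, 1)]), which newer NumPy deprecates.
# The forward-compatible forms are either no shape or an explicit tuple:
scalar_field = np.dtype([("qint8", np.int8)])        # scalar field
shaped_field = np.dtype([("qint8", np.int8, (1,))])  # explicit (1,) shape

print(scalar_field.names)           # field names of the structured dtype
print(shaped_field["qint8"].shape)  # the field's explicit sub-array shape
```

This is why the warning is version-dependent: it appears only when a newer NumPy is paired with an older TensorFlow, which is what pinning via requirements.txt avoids.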
Comments
I am only reporting some of the warnings here; messages of this kind are repeated for many operations in the log. In any case, these issues likely culminate in messages like the following, which I believe describe the problem most accurately:
2022-11-22 07:55:42.072517: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA. To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
Eventually, this leads to the training running on the CPU instead of the GPU, which of course takes ages. May I kindly ask for the pretrained model used in the paper, or at least the exact list of requirements? I am running on an NVIDIA RTX A600 with CUDA 11.4. Attached you will find my installed libraries and the complete log.
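As a quick way to confirm the CPU fallback, one could check device visibility directly. This is a diagnostic sketch assuming the TF 1.x API (`tf.test.is_gpu_available` exists in 1.14); output will depend on the local install:

```shell
# Returns False when TF cannot see any GPU, confirming the CPU fallback.
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
# Cross-check driver and CUDA visibility at the system level.
nvidia-smi
```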
Attachments
FULL LOG.txt anpvs_3.txt