kaist-dmlab / SELFIE


ValueError: Cannot feed value of shape (128, 64, 64, 3) for Tensor 'DenseNet/train_images:0', which has shape '(None, 32, 32, 3)' #2

Open jocelynbaduria opened 3 years ago

jocelynbaduria commented 3 years ago

While running the code in Colab with the ANIMAL-10N dataset, I got the following error:

Cannot feed value of shape (128, 64, 64, 3) for Tensor 'DenseNet/train_images:0', which has shape '(None, 32, 32, 3)'

['/content/drive/Shareddrives/Eranti-Vijay-Su21-2/code/Updated_SELFIE/SELFIE/SELFIE/main.py', '0', 'ANIMAL-10N', 'DenseNet-25-12', 'SELFIE', 'none', '0.08', 'log/ANIMAL-10N/SELFIE']
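The printed list above is the sys.argv that main.py received. As a reading aid, here is a hedged sketch of how those positional arguments appear to map to settings; the key names are borrowed from the selfie() call visible in the traceback at the bottom, and the mapping itself is my assumption rather than the repository's documented interface.

```python
# sys.argv as printed in the log above (script path shortened)
argv = ['main.py', '0', 'ANIMAL-10N', 'DenseNet-25-12', 'SELFIE',
        'none', '0.08', 'log/ANIMAL-10N/SELFIE']

# Inferred positional meaning -- names taken from the selfie() call in the
# traceback below; the order/mapping is an assumption, not documented here.
args = {
    'gpu_id':     int(argv[1]),    # 0
    'dataset':    argv[2],         # ANIMAL-10N (64x64 RGB images)
    'model_name': argv[3],         # DenseNet-25-12
    'method':     argv[4],         # SELFIE
    'noise_type': argv[5],         # none
    'noise_rate': float(argv[6]),  # 0.08
    'log_dir':    argv[7],         # log/ANIMAL-10N/SELFIE
}
print(args)
```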

This code trains DenseNet (L = {10, 25, 40}, k = 12) using SELFIE in a tensorflow-gpu environment.
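For context before the full log below: the failure is a plain feed_dict shape mismatch. The graph's input placeholder is built for 32x32 images (see the 'DenseNet/conv0': [?, 32, 32, 3] line in the log), while ANIMAL-10N batches are 64x64. The following is a minimal, self-contained sketch that reproduces the same class of error; it is not the repository's code, and it is written against tf.compat.v1 so it also runs on TF 2.x.

```python
# Minimal sketch (NOT the repository's code) reproducing the same kind of
# ValueError: a placeholder built for 32x32 inputs cannot accept a 64x64 batch.
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

with tf.variable_scope("DenseNet"):
    # Placeholder built for 32x32x3 inputs, mirroring 'DenseNet/train_images:0'
    train_images = tf.placeholder(tf.float32, [None, 32, 32, 3], name="train_images")

# ANIMAL-10N images are 64x64, so a batch of 128 has this shape
batch = np.zeros((128, 64, 64, 3), dtype=np.float32)

with tf.Session() as sess:
    try:
        sess.run(train_images, feed_dict={train_images: batch})
    except ValueError as err:
        # ValueError: Cannot feed value of shape (128, 64, 64, 3) for Tensor
        # 'DenseNet/train_images:0', which has shape '(None, 32, 32, 3)'
        print(err)
```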

Description -----------------------------------------------------------
Please download datasets from our github before running command.
For SELFIE, the hyperparameter was set to be uncertainty threshold = 0.05 and history length = 15.
For Training, we follow the same configuration in our paper.
For Training, training_epoch = 100, batch = 128, initial_learning rate = 0.1 (decayed 50% and 75% of total number of epochs), use momentum of 0.9, warm_up = 25, restart = 2, ...
You can easily change the value in main.py
Dataset exists in /content/drive/Shareddrives/Eranti-Vijay-Su21-2/code/Updated_SELFIE/SELFIE/SELFIE/dataset/ANIMAL-10N
2021-07-18 22:17:19.850235: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2021-07-18 22:17:19.854243: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2199995000 Hz
2021-07-18 22:17:19.854433: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x562c3bb7cbc0 executing computations on platform Host. Devices:
2021-07-18 22:17:19.854463: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
2021-07-18 22:17:19.856103: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2021-07-18 22:17:20.035986: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-18 22:17:20.036711: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x562c3bb7cf40 executing computations on platform CUDA. Devices:
2021-07-18 22:17:20.036744: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Tesla T4, Compute Capability 7.5
2021-07-18 22:17:20.036910: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-18 22:17:20.037563: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59 pciBusID: 0000:00:04.0
2021-07-18 22:17:20.037945: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2021-07-18 22:17:20.039393: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2021-07-18 22:17:20.040562: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2021-07-18 22:17:20.040877: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2021-07-18 22:17:20.042131: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2021-07-18 22:17:20.042997: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2021-07-18 22:17:20.045939: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-07-18 22:17:20.046061: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-18 22:17:20.046661: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-18 22:17:20.047171: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2021-07-18 22:17:20.047237: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2021-07-18 22:17:20.048267: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-18 22:17:20.048294: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2021-07-18 22:17:20.048308: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2021-07-18 22:17:20.048418: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-18 22:17:20.049041: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-18 22:17:20.049672: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14161 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)
Now read following files.
['/content/drive/Shareddrives/Eranti-Vijay-Su21-2/code/Updated_SELFIE/SELFIE/SELFIE/dataset/ANIMAL-10N/data_batch_1.bin']
Filling queue with 20000 data before starting to train. This will take a few minutes.
Now read following files.
['/content/drive/Shareddrives/Eranti-Vijay-Su21-2/code/Updated_SELFIE/SELFIE/SELFIE/dataset/ANIMAL-10N/test_batch.bin']
Filling queue with 2000 data before starting to train. This will take a few minutes.
[0718 22:17:20 @registry.py:90] 'DenseNet/conv0': [?, 32, 32, 3] --> [?, 32, 32, 16]
[0718 22:17:20 @registry.py:90] 'DenseNet/block1/dense_layer.0/conv1': [?, 32, 32, 16] --> [?, 32, 32, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block1/dense_layer.1/conv1': [?, 32, 32, 28] --> [?, 32, 32, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block1/dense_layer.2/conv1': [?, 32, 32, 40] --> [?, 32, 32, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block1/dense_layer.3/conv1': [?, 32, 32, 52] --> [?, 32, 32, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block1/dense_layer.4/conv1': [?, 32, 32, 64] --> [?, 32, 32, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block1/dense_layer.5/conv1': [?, 32, 32, 76] --> [?, 32, 32, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block1/dense_layer.6/conv1': [?, 32, 32, 88] --> [?, 32, 32, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block1/transition1/conv1': [?, 32, 32, 100] --> [?, 32, 32, 100]
[0718 22:17:20 @registry.py:90] 'DenseNet/block1/transition1/pool': [?, 32, 32, 100] --> [?, 16, 16, 100]
[0718 22:17:20 @registry.py:90] 'DenseNet/block2/dense_layer.0/conv1': [?, 16, 16, 100] --> [?, 16, 16, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block2/dense_layer.1/conv1': [?, 16, 16, 112] --> [?, 16, 16, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block2/dense_layer.2/conv1': [?, 16, 16, 124] --> [?, 16, 16, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block2/dense_layer.3/conv1': [?, 16, 16, 136] --> [?, 16, 16, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block2/dense_layer.4/conv1': [?, 16, 16, 148] --> [?, 16, 16, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block2/dense_layer.5/conv1': [?, 16, 16, 160] --> [?, 16, 16, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block2/dense_layer.6/conv1': [?, 16, 16, 172] --> [?, 16, 16, 12]
[0718 22:17:20 @registry.py:90] 'DenseNet/block2/transition2/conv1': [?, 16, 16, 184] --> [?, 16, 16, 184]
[0718 22:17:20 @registry.py:90] 'DenseNet/block2/transition2/pool': [?, 16, 16, 184] --> [?, 8, 8, 184]
[0718 22:17:20 @registry.py:90] 'DenseNet/block3/dense_layer.0/conv1': [?, 8, 8, 184] --> [?, 8, 8, 12]
[0718 22:17:21 @registry.py:90] 'DenseNet/block3/dense_layer.1/conv1': [?, 8, 8, 196] --> [?, 8, 8, 12]
[0718 22:17:21 @registry.py:90] 'DenseNet/block3/dense_layer.2/conv1': [?, 8, 8, 208] --> [?, 8, 8, 12]
[0718 22:17:21 @registry.py:90] 'DenseNet/block3/dense_layer.3/conv1': [?, 8, 8, 220] --> [?, 8, 8, 12]
[0718 22:17:21 @registry.py:90] 'DenseNet/block3/dense_layer.4/conv1': [?, 8, 8, 232] --> [?, 8, 8, 12]
[0718 22:17:21 @registry.py:90] 'DenseNet/block3/dense_layer.5/conv1': [?, 8, 8, 244] --> [?, 8, 8, 12]
[0718 22:17:21 @registry.py:90] 'DenseNet/block3/dense_layer.6/conv1': [?, 8, 8, 256] --> [?, 8, 8, 12]
[0718 22:17:21 @registry.py:90] 'DenseNet/gap': [?, 8, 8, 268] --> [?, 268]
[0718 22:17:21 @registry.py:90] 'DenseNet/linear': [?, 268] --> [?, 10]
2021-07-18 22:17:21.457280: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-18 22:17:21.457843: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59 pciBusID: 0000:00:04.0
2021-07-18 22:17:21.457931: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2021-07-18 22:17:21.457958: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2021-07-18 22:17:21.457980: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2021-07-18 22:17:21.458010: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2021-07-18 22:17:21.458031: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2021-07-18 22:17:21.458051: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2021-07-18 22:17:21.458071: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-07-18 22:17:21.458152: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-18 22:17:21.458734: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-07-18 22:17:21.459313: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2021-07-18 22:17:23.501927: W tensorflow/core/common_runtime/colocation_graph.cc:960] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0]. See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_nameindex=-1 requested_devicename='/device:GPU:0' assigned_devicename='' resource_devicename='/device:GPU:0' supported_devicetypes=[CPU] possibledevices=[]
ReaderReadV2: CPU
FixedLengthRecordReaderV2: CPU
QueueSizeV2: GPU CPU XLA_CPU XLA_GPU
QueueCloseV2: GPU CPU XLA_CPU XLA_GPU
FIFOQueueV2: CPU XLA_CPU XLA_GPU
QueueEnqueueManyV2: CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  input_producer (FIFOQueueV2) /device:GPU:0
  input_producer/input_producer_EnqueueMany (QueueEnqueueManyV2) /device:GPU:0
  input_producer/input_producer_Close (QueueCloseV2) /device:GPU:0
  input_producer/input_producer_Close_1 (QueueCloseV2) /device:GPU:0
  input_producer/input_producer_Size (QueueSizeV2) /device:GPU:0
  FixedLengthRecordReaderV2 (FixedLengthRecordReaderV2) /device:GPU:0
  ReaderReadV2 (ReaderReadV2) /device:GPU:0

2021-07-18 22:17:23.502153: W tensorflow/core/common_runtime/colocation_graph.cc:960] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0]. See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_nameindex=-1 requested_devicename='/device:GPU:0' assigned_devicename='' resource_devicename='/device:GPU:0' supported_devicetypes=[CPU] possibledevices=[]
QueueDequeueManyV2: CPU
QueueCloseV2: GPU CPU XLA_CPU XLA_GPU
FIFOQueueV2: CPU XLA_CPU XLA_GPU
QueueSizeV2: GPU CPU XLA_CPU XLA_GPU
QueueEnqueueV2: GPU CPU XLA_CPU XLA_GPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  shuffle_batch/fifo_queue (FIFOQueueV2) /device:GPU:0
  shuffle_batch/fifo_queue_enqueue (QueueEnqueueV2) /device:GPU:0
  shuffle_batch/fifo_queue_Close (QueueCloseV2) /device:GPU:0
  shuffle_batch/fifo_queue_Close_1 (QueueCloseV2) /device:GPU:0
  shuffle_batch/fifo_queue_Size (QueueSizeV2) /device:GPU:0
  shuffle_batch (QueueDequeueManyV2) /device:GPU:0

2021-07-18 22:17:23.502312: W tensorflow/core/common_runtime/colocation_graph.cc:960] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0]. See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_nameindex=-1 requested_devicename='/device:GPU:0' assigned_devicename='' resource_devicename='/device:GPU:0' supported_devicetypes=[CPU] possibledevices=[]
ReaderReadV2: CPU
FixedLengthRecordReaderV2: CPU
QueueSizeV2: GPU CPU XLA_CPU XLA_GPU
QueueCloseV2: GPU CPU XLA_CPU XLA_GPU
FIFOQueueV2: CPU XLA_CPU XLA_GPU
QueueEnqueueManyV2: CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  input_producer_1 (FIFOQueueV2) /device:GPU:0
  input_producer_1/input_producer_1_EnqueueMany (QueueEnqueueManyV2) /device:GPU:0
  input_producer_1/input_producer_1_Close (QueueCloseV2) /device:GPU:0
  input_producer_1/input_producer_1_Close_1 (QueueCloseV2) /device:GPU:0
  input_producer_1/input_producer_1_Size (QueueSizeV2) /device:GPU:0
  FixedLengthRecordReaderV2_1 (FixedLengthRecordReaderV2) /device:GPU:0
  ReaderReadV2_1 (ReaderReadV2) /device:GPU:0

2021-07-18 22:17:23.502470: W tensorflow/core/common_runtime/colocation_graph.cc:960] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0]. See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_nameindex=-1 requested_devicename='/device:GPU:0' assigned_devicename='' resource_devicename='/device:GPU:0' supported_devicetypes=[CPU] possibledevices=[]
QueueDequeueManyV2: CPU
QueueCloseV2: GPU CPU XLA_CPU XLA_GPU
FIFOQueueV2: CPU XLA_CPU XLA_GPU
QueueSizeV2: GPU CPU XLA_CPU XLA_GPU
QueueEnqueueV2: GPU CPU XLA_CPU XLA_GPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  shuffle_batch_1/fifo_queue (FIFOQueueV2) /device:GPU:0
  shuffle_batch_1/fifo_queue_enqueue (QueueEnqueueV2) /device:GPU:0
  shuffle_batch_1/fifo_queue_Close (QueueCloseV2) /device:GPU:0
  shuffle_batch_1/fifo_queue_Close_1 (QueueCloseV2) /device:GPU:0
  shuffle_batch_1/fifo_queue_Size (QueueSizeV2) /device:GPU:0
  shuffle_batch_1 (QueueDequeueManyV2) /device:GPU:0

2021-07-18 22:17:23.502634: W tensorflow/core/common_runtime/colocation_graph.cc:960] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0 /job:localhost/replica:0/task:0/device:XLA_CPU:0 /job:localhost/replica:0/task:0/device:XLA_GPU:0]. See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_nameindex=-1 requested_devicename='/device:GPU:0' assigned_devicename='' resource_devicename='/device:GPU:0' supported_devicetypes=[CPU, XLA_CPU, XLA_GPU] possibledevices=[]
AssignAddVariableOp: CPU XLA_CPU XLA_GPU
ReadVariableOp: GPU CPU XLA_CPU XLA_GPU
AssignVariableOp: CPU XLA_CPU XLA_GPU
VarIsInitializedOp: GPU CPU XLA_CPU XLA_GPU
Const: GPU CPU XLA_CPU XLA_GPU
VarHandleOp: CPU XLA_CPU XLA_GPU

Colocation members, user-requested devices, and framework assigned devices, if any:
  Variable/Initializer/initial_value (Const)
  Variable (VarHandleOp) /device:GPU:0
  Variable/IsInitialized/VarIsInitializedOp (VarIsInitializedOp) /device:GPU:0
  Variable/Assign (AssignVariableOp) /device:GPU:0
  Variable/Read/ReadVariableOp (ReadVariableOp) /device:GPU:0
  ReadVariableOp (ReadVariableOp) /device:GPU:0
  PiecewiseConstant/ReadVariableOp (ReadVariableOp) /device:GPU:0
  Momentum/Const (Const) /device:GPU:0
  Momentum (AssignAddVariableOp) /device:GPU:0

# of samples: 50000
# of samples: 5000

Noise Injection: none
5466 ,0 ,0 ,0 ,0 ,0 ,0 ,0 ,0 ,0 ,

0 ,4608 ,0 ,0 ,0 ,0 ,0 ,0 ,0 ,0 ,

0 ,0 ,5091 ,0 ,0 ,0 ,0 ,0 ,0 ,0 ,

0 ,0 ,0 ,4841 ,0 ,0 ,0 ,0 ,0 ,0 ,

0 ,0 ,0 ,0 ,4981 ,0 ,0 ,0 ,0 ,0 ,

0 ,0 ,0 ,0 ,0 ,4913 ,0 ,0 ,0 ,0 ,

0 ,0 ,0 ,0 ,0 ,0 ,5322 ,0 ,0 ,0 ,

0 ,0 ,0 ,0 ,0 ,0 ,0 ,4999 ,0 ,0 ,

0 ,0 ,0 ,0 ,0 ,0 ,0 ,0 ,4970 ,0 ,

0 ,0 ,0 ,0 ,0 ,0 ,0 ,0 ,0 ,4809 ,
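The 10x10 block above appears to be the per-class label count (noise transition) matrix printed after noise injection; with noise type "none" only the diagonal is populated. A quick check in plain Python (not the repository's code) that the diagonal accounts for all 50,000 training samples:

```python
# Diagonal of the matrix printed above (noise type "none": no off-diagonal entries).
diag = [5466, 4608, 5091, 4841, 4981, 4913, 5322, 4999, 4970, 4809]

# With no injected noise, the per-class counts should add up to the training set size.
assert sum(diag) == 50000
print(sum(diag))  # 50000, matching "# of samples: 50000" above
```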

run: 1

2021-07-18 22:18:05.634972: W tensorflow/core/kernels/queue_base.cc:277] _3_input_producer_1: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.635268: W tensorflow/core/kernels/queue_base.cc:277] _2_input_producer: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.635857: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.635907: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.635936: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.635952: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.635968: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.635982: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.635998: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.636012: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.636039: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.636055: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.636070: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.636086: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.636104: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.636143: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.636169: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
2021-07-18 22:18:05.636184: W tensorflow/core/kernels/queue_base.cc:277] _4_shuffle_batch_1/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
Traceback (most recent call last):
  File "/content/drive/Shareddrives/Eranti-Vijay-Su21-2/code/Updated_SELFIE/SELFIE/SELFIE/main.py", line 108, in <module>
    main()
  File "/content/drive/Shareddrives/Eranti-Vijay-Su21-2/code/Updated_SELFIE/SELFIE/SELFIE/main.py", line 104, in main
    selfie(gpu_id, input_reader, model_name, total_epochs, batch_size, lr_boundaries, lr_values, optimizer, noise_rate, noise_type, warm_up, threshold, queue_size, restart=restart, log_dir=log_dir)
  File "/content/drive/Shareddrives/Eranti-Vijay-Su21-2/code/Updated_SELFIE/SELFIE/SELFIE/algorithm/selfie.py", line 189, in selfie
    training(sess, warm_up, batch_size, train_batch_patcher, test_batch_patcher, trainer, 0, method="warm-up", correcter=correcter, training_log=training_log)
  File "/content/drive/Shareddrives/Eranti-Vijay-Su21-2/code/Updated_SELFIE/SELFIE/SELFIE/algorithm/selfie.py", line 48, in training
    train_loss, train_acc, _, softmax_matrix = sess.run([trainer.train_loss_op, trainer.train_accuracy_op, trainer.train_op, trainer.train_prob_op], feed_dict={trainer.model.train_image_placeholder: images, trainer.model.train_label_placeholder: new_labels})
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/client/session.py", line 1156, in _run
    (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (128, 64, 64, 3) for Tensor 'DenseNet/train_images:0', which has shape '(None, 32, 32, 3)'
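For completeness, the lr_boundaries and lr_values passed to selfie() in the traceback correspond to the schedule printed at the top of the log (initial learning rate 0.1, decayed at 50% and 75% of 100 epochs, momentum 0.9), and the PiecewiseConstant op in the colocation warnings suggests a piecewise-constant schedule. Below is a hedged sketch of how such a schedule could be built; the 10x decay factor and the step-based boundaries are my assumptions, not something the log confirms.

```python
# Hedged sketch of the learning-rate schedule described in the log above.
# Assumptions: decay factor of 10x at each boundary, boundaries expressed in steps.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

total_epochs = 100
batch_size = 128
num_train_samples = 50000                       # "# of samples: 50000" above
steps_per_epoch = num_train_samples // batch_size

lr_boundaries = [int(0.50 * total_epochs * steps_per_epoch),   # decay at 50% of epochs
                 int(0.75 * total_epochs * steps_per_epoch)]   # decay at 75% of epochs
lr_values = [0.1, 0.01, 0.001]                  # assumed 10x decay (factor not stated in the log)

global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.piecewise_constant(global_step, lr_boundaries, lr_values)
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)  # momentum 0.9 per the log
```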