I'm also having this issue - TF 2.4 / CUDA 11.1
Did you manage to solve the issue?
I'm getting this error too. I'm on macOS with M1, and TensorFlow is running on the GPU as a Pluggable Device. Given that @RhynoTime had this issue with CUDA, perhaps it has to do with the GPU? It doesn't seem likely, but it's not something to rule out.
I've traced this back to the GAN.GMA model. The input shapes match all of those shown in the error message, but the final input (<tf.Tensor 'IteratorGetNext:8' shape=(16, 1) dtype=float32>) does not. For reference, here's the model summary for GAN.GMA.
Model: "model_3"
__________________________________________________________________________________________________
Layer (type)                    Output Shape           Param #     Connected to
==================================================================================================
input_18 (InputLayer)           [(None, 512)]          0
__________________________________________________________________________________________________
input_19 (InputLayer)           [(None, 512)]          0
__________________________________________________________________________________________________
input_20 (InputLayer)           [(None, 512)]          0
__________________________________________________________________________________________________
input_21 (InputLayer)           [(None, 512)]          0
__________________________________________________________________________________________________
input_22 (InputLayer)           [(None, 512)]          0
__________________________________________________________________________________________________
input_23 (InputLayer)           [(None, 512)]          0
__________________________________________________________________________________________________
input_24 (InputLayer)           [(None, 512)]          0
__________________________________________________________________________________________________
sequential (Sequential)         (None, 512)            1050624     input_18[0][0]
                                                                   input_19[0][0]
                                                                   input_20[0][0]
                                                                   input_21[0][0]
                                                                   input_22[0][0]
                                                                   input_23[0][0]
                                                                   input_24[0][0]
__________________________________________________________________________________________________
input_25 (InputLayer)           [(None, 256, 256, 1)]  0
__________________________________________________________________________________________________
model_1 (Functional)            (None, 256, 256, 3)    14269368    sequential[0][0]
                                                                   sequential[1][0]
                                                                   sequential[2][0]
                                                                   sequential[3][0]
                                                                   sequential[4][0]
                                                                   sequential[5][0]
                                                                   sequential[6][0]
                                                                   input_25[0][0]
==================================================================================================
Total params: 15,319,992
Trainable params: 15,319,992
Non-trainable params: 0
__________________________________________________________________________________________________
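For anyone else debugging this, the model's declared inputs can also be listed directly, which makes any count or shape mismatch obvious (a minimal sketch; GAN.GMA is the model summarized above):

for t in self.GAN.GMA.inputs:
    print(t.name, t.shape)
# Per the summary: 7 latent inputs of shape (None, 512) plus one
# (None, 256, 256, 1) noise-image input -- 8 declared inputs in total.
# The stray (16, 1) tensor from the error message matches none of them.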
As stated in the error message, the problem is at line 519, but a similar call also occurs at line 546. The tensor in question, of dimension (16, 1), is called trunc, and it's passed in a list with another tensor that's then added to another. Upon removing trunc from both calls, the error goes away, but I'm monitoring my system resources in Activity Monitor and it doesn't seem like anything is going on. I did get two new additions to the Results folder, though, so maybe that's a good sign?
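For clarity, the removal looks roughly like this (a sketch only -- the exact code at lines 519/546 isn't quoted in this thread, and the variable names follow the later comments):

# Before -- trunc rides along as a ninth input and triggers the mismatch:
# generated_images = self.GAN.GMA.predict(n1 + [n2, trunc], batch_size = BATCH_SIZE)

# After -- dropping trunc leaves exactly the 8 inputs the model declares:
generated_images = self.GAN.GMA.predict(n1 + [n2], batch_size = BATCH_SIZE)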
Bug!

trunc = 0.5
n1 = noiseList(64)
n2 = nImage(64)
trunc = np.ones([64, 1]) * trunc

trunc.shape            # (64, 1)
len(n1)                # 7
n2.shape               # (64, 256, 256, 1)
len(n1 + [n2, trunc])  # 9

==> This passes 9 inputs to the generator while the generator accepts only 8. Must be a bug; was this code not tested at all before release?
You can do this (I am NOT sure if multiply is the correct op though...):

trunc = tf.reshape(trunc, (trunc.shape[0], 1, 1, 1))
n2_trunc = tf.math.multiply(n2, trunc)

...and then

generated_images = self.GAN.GMA.predict(n1 + [n2_trunc], batch_size = BATCH_SIZE)
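The reshape-and-multiply is equivalent to plain broadcasting, if that reads clearer (same assumptions as above: trunc has shape (64, 1) and n2 has shape (64, 256, 256, 1)):

# (64, 256, 256, 1) * (64, 1, 1, 1) broadcasts the scale per sample
n2_trunc = n2 * tf.reshape(trunc, (-1, 1, 1, 1))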
Hello everyone! Thank you for your comments. It appears that when uploading the code after some updates, I forgot to remove the input variable 'trunc' on lines 493, 508 and 534. Sorry about that! Updated code, compatible with TF 2.5.x, should be uploaded tonight. Thanks again.
I got this error when loading the images. The full error is:
Is it because the blocks are written in functional form while self.generator is a sequential model? When I change the def generator(self) to ..., it says self.S can't be converted to a tensor.
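Mixing the two styles is valid in itself: Keras lets a Sequential model be called like a layer inside a functional graph. A minimal sketch of that pattern (illustrative shapes and names, not the repo's actual generator):

import tensorflow as tf

# A Sequential block used inside a functional model -- this combination works.
mapping = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(512),
])

latent_in = tf.keras.Input(shape=(512,))
w = mapping(latent_in)                      # call the Sequential on a tensor
generator = tf.keras.Model(latent_in, w)

# A "can't be converted to a tensor" error usually means a Model object
# (e.g. self.S itself) was passed where its output tensor was expected:
# pass self.S(x), not self.S.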