llSourcell / deep_dream_challenge

Deep Dream Challenge code by @SIrajology on YouTube (Learn Python for Data Science #5)

input and filter must have the same depth: 4 vs 3 #5

Closed. TD-101 closed this issue 7 years ago.

TD-101 commented 7 years ago

Hi,

I get this error/exception, and while it is being handled, the same error/exception occurs. I am fairly new to this, so it could be one of many problems, from the way I have set up Python, TensorFlow, etc. to improper hardware, but I thought I would put it up here in case someone has an easy fix!

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 965, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 947, in _run_fn
    status, run_metadata)
  File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 66, in __exit__
    next(self.gen)
  File "/usr/local/lib/python3.5/site-packages/tensorflow/python/framework/errors.py", line 450, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.InvalidArgumentError: input and filter must have the same depth: 4 vs 3
  [[Node: import/conv2d0_pre_relu/conv = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/cpu:0"](ExpandDims, import/conv2d0_w)]]

ghost commented 7 years ago

@tomdawson91 have you solved it? I had the same problem.

gonzalolc commented 7 years ago

Same issue here! Has anyone solved it?

ghost commented 7 years ago

img0 = np.float32(img0)[:,:,:3]  # keep only the first three channels, dropping alpha

gonzalolc commented 7 years ago

It works! Thanks @dattranx

TD-101 commented 7 years ago

@dattranx Thanks - as I understand it, this just cuts out the alpha channel.
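
For anyone landing here later, a minimal sketch of that fix in context (assuming the image is loaded with PIL into a NumPy array, as in the notebook; the file name is just a placeholder for whatever input image you use):

```python
import numpy as np
import PIL.Image

# PNGs often load as RGBA, i.e. shape (H, W, 4); the Inception graph expects depth 3.
img0 = PIL.Image.open('input.png')   # placeholder file name
img0 = np.float32(img0)[:, :, :3]    # keep only R, G, B -- drop the alpha channel

print(img0.shape)  # last dimension should now be 3
```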

R-Miner commented 6 years ago

I get an error like:

Status(StatusCode=InvalidArgument, Detail="input and filter must have the same depth: 1 vs 3
  [[Node: conv2d_1/convolution = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_input_1_0_0, conv2d_1/kernel/read)]]")

Any thoughts on this? For me it is 1 vs 3.
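
(1 vs 3 usually indicates the opposite problem: a single-channel grayscale image going into a model that expects RGB, as a later comment in this thread also notes. A minimal sketch of one workaround, assuming the input is a NumPy array; the variable names are just for illustration:)

```python
import numpy as np

img = np.zeros((224, 224), dtype=np.float32)  # stand-in for a grayscale image, shape (H, W)

if img.ndim == 2:
    img = img[:, :, np.newaxis]               # (H, W) -> (H, W, 1)
if img.shape[-1] == 1:
    img = np.repeat(img, 3, axis=-1)          # (H, W, 1) -> (H, W, 3), all channels identical

print(img.shape)  # (224, 224, 3)
```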

CyLouisKoo commented 6 years ago

I have also met this problem. How did you resolve it? Thank you for your reply. @TD-101 [screenshot attached]

vishalvanpariya commented 5 years ago

@gonzalolc Please explain in depth, I cannot understand it.

khanfarhan10 commented 4 years ago

I am coding Grad-CAM with keras-vis. I tried:

seed_input = tf.convert_to_tensor(img[:,:,:3])
seed_input = np.float32(img)[:,:,:3]

But neither of them worked for me. I get the following error:

InvalidArgumentError                      Traceback (most recent call last)

in ()
     20     penultimate_layer_idx = penultimate_layer_idx,  # None,
     21     backprop_modifier = None,
---> 22     grad_modifier = None)

8 frames
/usr/local/lib/python3.6/dist-packages/vis/visualization/saliency.py in visualize_cam(model, layer_idx, filter_indices, seed_input, penultimate_layer_idx, backprop_modifier, grad_modifier)
    237         (ActivationMaximization(model.layers[layer_idx], filter_indices), -1)
    238     ]
--> 239     return visualize_cam_with_losses(model.input, losses, seed_input, penultimate_layer, grad_modifier)

/usr/local/lib/python3.6/dist-packages/vis/visualization/saliency.py in visualize_cam_with_losses(input_tensor, losses, seed_input, penultimate_layer, grad_modifier)
    158     penultimate_output = penultimate_layer.output
    159     opt = Optimizer(input_tensor, losses, wrt_tensor=penultimate_output, norm_grads=False)
--> 160     _, grads, penultimate_output_value = opt.minimize(seed_input, max_iter=1, grad_modifier=grad_modifier, verbose=False)
    161
    162     # For numerical stability. Very small grad values along with small penultimate_output_value can cause

/usr/local/lib/python3.6/dist-packages/vis/optimizer.py in minimize(self, seed_input, max_iter, input_modifiers, grad_modifier, callbacks, verbose)
    141
    142     # 0 learning phase for 'test'
--> 143     computed_values = self.compute_fn([seed_input, 0])
    144     losses = computed_values[:len(self.loss_names)]
    145     named_losses = zip(self.loss_names, losses)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py in __call__(self, inputs)
   3790           value = math_ops.cast(value, tensor.dtype)
   3791         converted_inputs.append(value)
-> 3792     outputs = self._graph_fn(*converted_inputs)
   3793
   3794     # EagerTensor.numpy() will often make a copy to ensure memory safety.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
   1603       TypeError: For invalid positional/keyword argument combinations.
   1604     """
-> 1605     return self._call_impl(args, kwargs)
   1606
   1607   def _call_impl(self, args, kwargs, cancellation_manager=None):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _call_impl(self, args, kwargs, cancellation_manager)
   1643       raise TypeError("Keyword arguments {} unknown. Expected {}.".format(
   1644           list(kwargs.keys()), list(self._arg_keywords)))
-> 1645     return self._call_flat(args, self.captured_inputs, cancellation_manager)
   1646
   1647   def _filtered_call(self, args, kwargs):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
   1744       # No tape is watching; skip to running the function.
   1745       return self._build_call_outputs(self._inference_function.call(
-> 1746           ctx, args, cancellation_manager=cancellation_manager))
   1747     forward_backward = self._select_forward_and_backward_functions(
   1748         args,

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in call(self, ctx, args, cancellation_manager)
    596           inputs=args,
    597           attrs=attrs,
--> 598           ctx=ctx)
    599     else:
    600       outputs = execute.execute_with_cancellation(

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     58     ctx.ensure_initialized()
     59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:
     62     if name is not None:

InvalidArgumentError: input depth must be evenly divisible by filter depth: 443 vs 3
  [[node conv2d_1_1/convolution (defined at /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3009) ]] [Op:__inference_keras_scratch_graph_10171]
Function call stack:
keras_scratch_graph
asr-aditya commented 4 years ago

The error is caused by a mismatch in the dimensions of the input. The model requires a depth of 3 for the input but is being given 4.

AnjanaChankya commented 4 years ago

What does a depth of 3 mean?

asr-aditya commented 4 years ago

"What does a depth of 3 mean?"

It means that if you are passing in an image, it must have 3 channels, i.e. the size of the image is (256, 256, 3).
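
For example, a quick sanity check (a sketch using PIL and NumPy; 'photo.png' is a placeholder file name):

```python
import numpy as np
import PIL.Image

img = np.float32(PIL.Image.open('photo.png'))
print(img.shape)  # (256, 256, 3) is what the model wants; (256, 256, 4) means an alpha channel snuck in
```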

LucasColas commented 3 years ago

Yes, you have to change the depth.

Umraz-Hussain-MyWorld commented 3 years ago

[screenshot attached: Screenshot from 2021-06-16 21-40-13]

Umraz-Hussain-MyWorld commented 3 years ago

Someone please help.

Zapbbx commented 2 years ago

I'm not an expert, but it seems that the two source images I'm using don't have the same number of channels. I'm using PNG files, and one set of them has a transparent background (alpha 0), whereas the other set has colored backgrounds. Saving both sets of files as JPG images works around the error, I think because it gets rid of the transparent (alpha) channel. In other words, the "shape" of your data needs to be the same.

The same thing happens if you feed a grayscale image into a CNN that expects a color image. Find the shape of the input, e.g. print(model.input.shape) in Keras; if you get (None, 224, 224, 3), your input blob must have a corresponding shape, so a grayscale image has to be converted into a color image where all three channels are the same.
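
Putting that together, a hedged sketch of a helper that coerces an input image to the (H, W, 3) shape such a model expects (the function name to_three_channels is my own, not from the repo):

```python
import numpy as np

def to_three_channels(img):
    """Coerce an image array to shape (H, W, 3) so its depth matches the model input."""
    img = np.asarray(img, dtype=np.float32)
    if img.ndim == 2:                     # grayscale (H, W): add a channel axis
        img = img[:, :, np.newaxis]
    if img.shape[-1] == 1:                # one channel: repeat it so all three are the same
        img = np.repeat(img, 3, axis=-1)
    elif img.shape[-1] == 4:              # RGBA: drop the transparent (alpha) channel
        img = img[:, :, :3]
    return img

# If print(model.input.shape) gives (None, 224, 224, 3), then
# to_three_channels(img) yields an array whose last dimension matches that 3.
```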