Closed mhsacampos closed 3 years ago
Can you provide the code you used? Did you use the complex_input or ComplexInput layer?
Below is the code :
```python
init = tf.keras.initializers.GlorotUniform(seed=117)
model = Sequential()
model.add(complex_layers.ComplexInput(input_shape=input_shape, name='main_input'))
model.add(complex_layers.ComplexConv1D(30, (3), activation='cart_relu'))
model.add(complex_layers.ComplexFlatten())
model.add(complex_layers.ComplexDense(64, activation='cart_relu', kernel_initializer=init))
model.add(complex_layers.ComplexDense(7, kernel_initializer=init))
```
but the code doesn't go beyond the ComplexConv1D layer
I tried your code and it worked. Here is the code.
What version are you using? Try updating to the latest version of the library.
Or maybe it is breaking somewhere else, in which case I would need to see more of your code.
Dear Barrachina, thank you so much. Thanks to your kind reply I discovered that the error happens when `kernel_initializer=init` is passed to `ComplexConv1D` as an argument. Maybe initializing the kernel with a complex dtype causes that; I'm not sure.
By the way, have you created `ComplexMaxPooling1D`? Or do you have any idea how to create such a layer from the 2D case?
Yes indeed, initializations are done like this:
```python
if self.my_dtype.is_complex:
    self.w_r = tf.Variable(
        name='kernel_r',
        initial_value=self.kernel_initializer(shape=(input_shape[-1], self.units), dtype=self.my_dtype),
        trainable=True
    )
    self.w_i = tf.Variable(
        name='kernel_i',
        initial_value=self.kernel_initializer(shape=(input_shape[-1], self.units), dtype=self.my_dtype),
        trainable=True
    )
```
So as you can see, they are used as real-valued. Now the error message makes sense (and why it was expecting a float and not a complex). I did it like this because TensorFlow does not let me have complex weights and throws an error. I implemented 4 initializations. You can do your own if you wish; here is the link to my implementations.
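For anyone wanting to "do their own": below is a minimal sketch of what such a complex-aware Glorot initializer could look like. This is my own illustration, not the library's actual implementation — it simply draws the real and imaginary parts separately with TensorFlow's real-valued Glorot initializer when a complex dtype is requested:

```python
import tensorflow as tf


class ComplexGlorotUniform(tf.keras.initializers.Initializer):
    """Sketch of a Glorot-uniform initializer that also accepts complex
    dtypes, by drawing real and imaginary parts independently.
    (Illustrative only; not cvnn's implementation.)"""

    def __init__(self, seed=None):
        self.seed = seed

    def __call__(self, shape, dtype=tf.float32, **kwargs):
        dtype = tf.as_dtype(dtype)
        if dtype.is_complex:
            # Draw each part with the underlying real dtype; offset the
            # second seed so the two parts differ when a seed is given.
            real = tf.keras.initializers.GlorotUniform(seed=self.seed)
            imag = tf.keras.initializers.GlorotUniform(
                seed=None if self.seed is None else self.seed + 1)
            return tf.complex(real(shape, dtype=dtype.real_dtype),
                              imag(shape, dtype=dtype.real_dtype))
        return tf.keras.initializers.GlorotUniform(seed=self.seed)(shape, dtype=dtype)
```

Unlike `tf.keras.initializers.GlorotUniform`, this wrapper does not raise when handed `tf.complex64`, which is exactly the failure reported in this issue.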
For the 1DPooling, I would like it if you can create a new issue so I can close this topic. I can already tell you they are not implemented but if you create a new issue I can label it as a feature request and see to it someday (hopefully this weekend).
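In case it helps with that feature request, here is one possible sketch of a complex 1D max pooling (my assumption of how it could be done, not anything implemented in the library): select, in each window, the element with the largest modulus, reusing TensorFlow's 2D argmax pooling on `|x|`:

```python
import tensorflow as tf


def complex_max_pool1d(x, pool_size=2):
    """Sketch (not cvnn's implementation): max-pool a complex tensor of
    shape [batch, steps, channels] by keeping, per window, the complex
    value with the largest modulus."""
    # Pool the (real-valued) magnitudes on a 4D view to get argmax indices.
    mag = tf.expand_dims(tf.abs(x), axis=1)            # [batch, 1, steps, ch]
    _, argmax = tf.nn.max_pool_with_argmax(
        mag,
        ksize=[1, 1, pool_size, 1],
        strides=[1, 1, pool_size, 1],
        padding='VALID',
        include_batch_in_index=True)
    # Gather the corresponding complex values and drop the dummy height dim.
    x4 = tf.expand_dims(x, axis=1)
    pooled = tf.gather(tf.reshape(x4, [-1]), argmax)   # [batch, 1, steps//p, ch]
    return tf.squeeze(pooled, axis=1)                  # [batch, steps//p, ch]
```

The design choice (pooling by modulus, then gathering the original complex entries) keeps phase information intact, which pooling real and imaginary parts separately would not.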
Hi, congrats on your work!
I am trying to make the code (in the examples) work with the `ComplexConv1D` convolution layer, without success at all in spite of several different attempts. The complex inputs were generated in NumPy. The output has been:
```
File "/home/mhsc/anaconda3/envs/cvnn/lib/python3.6/site-packages/tensorflow/python/ops/init_ops_v2.py", line 1051, in _assert_float_dtype
    raise ValueError("Expected floating point type, got %s." % dtype)
ValueError: Expected floating point type, got <dtype: 'complex64'>.
```
Any help would be immensely appreciated