Hi all,

My setup is: Arch Linux 5.15.78-1-lts, Python 3.10.8, TensorFlow 2.11.0, NumPy 1.23.0, qkeras 0.9.0

I am running the following example code:

import tensorflow as tf
import numpy as np
from qkeras import QActivation
# build the model
l_0 = tf.keras.layers.Input(shape=2)
l_1 = QActivation("bernoulli")(l_0)
l_2 = tf.keras.layers.Dense(units=10, activation="sigmoid")(l_1)
l_3 = QActivation("bernoulli")(l_2)
out = tf.keras.layers.Dense(units=1, activation="sigmoid")(l_3)
# create the model
model = tf.keras.models.Model(inputs=l_0, outputs=out)
model.compile(loss='binary_crossentropy')
# create some data
x = np.array([[1,2],[3,4],[5,6]])
y = np.array([[0],[1],[1]])
# fit the model
model.fit(x, y)
# eval the model layers
layer_out = None
for layer in model.layers:
    if "input" in layer.name:
        layer_out = layer(x)
    else:
        layer_out = layer(layer_out)
Up to and including fitting, everything works well, but in the evaluation step of my model layers I encounter the following error:
Traceback (most recent call last):
  File "test.py", line 30, in <module>
    layer_out = layer(layer_out)
  File "keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "qkeras/qlayers.py", line 177, in call
    return self.quantizer(inputs)
  File "qkeras/quantizers.py", line 796, in __call__
    p = tf.keras.backend.sigmoid(self.temperature * x / std)
TypeError: Exception encountered when calling layer 'q_activation' (type QActivation).

Cannot convert 6.0 to EagerTensor of dtype int64

Call arguments received by layer 'q_activation' (type QActivation):
  • inputs=tf.Tensor(shape=(3, 2), dtype=int64)
I think the problem is that in quantizers.py the dtypes of the variables std and temperature do not match the input data type of x. One way to fix it is to change the code starting at line 790 to:
std = tf.constant(1.0, dtype=tf.float32)
if self.use_real_sigmoid:
    self.temperature = tf.constant(self.temperature, dtype=std.dtype)
    x = tf.cast(x, std.dtype)
    p = tf.keras.backend.sigmoid(self.temperature * x / std)
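A user-side workaround that avoids patching qkeras (just a sketch of the idea, not tested against the library internals) is to cast the input data to float32 before feeding it to the layers:

```python
import numpy as np

# Python int literals produce an int64 array on most platforms,
# which is what trips the bernoulli quantizer's sigmoid call
x = np.array([[1, 2], [3, 4], [5, 6]])

# casting up front keeps every layer call in floating point
x = x.astype(np.float32)
print(x.dtype)  # float32
```

With x as float32, self.temperature * x / std stays in floating point, so the quantizer should no longer attempt the int64 conversion.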
This forces the type to be tf.float32.

Cheers,
Marius