teaglin opened this issue 3 years ago
Thanks for the reproduction code. However, I'm not able to successfully execute it. Even after `pip install mnist`, I get the following error:
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-1-6e88d609b1ae> in <module>
     42
     43 batch_size = 64
---> 44 single_worker_dataset = mnist.mnist_dataset(batch_size)
     45 single_worker_model = mnist.build_and_compile_cnn_model()
     46 single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)

AttributeError: module 'mnist' has no attribute 'mnist_dataset'
```
Perhaps you are using a version of `mnist` other than the latest version?
Reading the documentation for `mixed_precision`, it sounds like this is really used for training. Do you actually want the converted Core ML model to use mixed precision? Or do you just want to be able to convert this to Core ML, without caring whether the precision is mixed?
I updated the original code. It wasn't correctly copied over. It should work as is now. For your question: yes, the goal is to train with mixed precision and then directly export the trained model to Core ML. I just want the benefits of mixed precision for training and don't care about mixed precision in the exported Core ML model.
Thanks for updating the code. I can now reproduce this issue.
I'm going to leave this issue open. However, if you need a quick resolution, I suggest you work around it by removing mixed precision from your Keras/TF 2.x model: after training, convert the model back to plain float32, then convert that float32 model to Core ML.
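If it helps, here is a minimal sketch of that workaround. It assumes the reproduction script below has already been run, so that `build_and_compile_cnn_model` is defined and `single_worker_model` holds the trained mixed-precision weights; rebuilding the architecture and copying the weights is just one way to drop the `mixed_float16` policy, and the save path here is only illustrative.

```python
import coremltools as ct
from tensorflow.keras import mixed_precision

# Switch the global policy back to plain float32 before rebuilding the model.
mixed_precision.set_global_policy('float32')

# Rebuild the same architecture under the float32 policy and copy over the
# trained weights.
float32_model = build_and_compile_cnn_model()
float32_model.set_weights(single_worker_model.get_weights())

# Save and convert the float32 copy instead of the mixed-precision original.
float32_model.save('tf_keras_model_fp32')  # hypothetical path for the float32 copy
mlmodel = ct.convert('tf_keras_model_fp32')
mlmodel.save('test.mlmodel')
```

Since Keras mixed precision keeps the variables themselves in float32, copying the weights this way shouldn't lose any precision.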
🐞Describe the bug
```
NotImplementedError: Cast: Provided destination type fp16 not supported.
```
To Reproduce
```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import mixed_precision
import coremltools as ct


def mnist_dataset(batch_size):
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    # The `x` arrays are in uint8 and have values in the range [0, 255].
    # You need to convert them to float32 with values in the range [0, 1].
    x_train = x_train / np.float32(255)
    y_train = y_train.astype(np.int64)
    train_dataset = tf.data.Dataset.from_tensor_slices(
        (x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
    return train_dataset


def build_and_compile_cnn_model():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28)),
        tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation='relu'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        # The final layer stays in float32 under the mixed_float16 policy.
        tf.keras.layers.Dense(10, dtype='float32')
    ])
    model.compile(
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
        metrics=['accuracy'])
    print(model.output)
    return model


# Enable mixed precision for training.
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_global_policy(policy)
print('Compute dtype: %s' % policy.compute_dtype)
print('Variable dtype: %s' % policy.variable_dtype)

batch_size = 64
single_worker_dataset = mnist_dataset(batch_size)
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)

# Save the trained model and convert it; the ct.convert call raises the error above.
single_worker_model.save('tf_keras_model')
mlmodel = ct.convert('tf_keras_model')
mlmodel.save("test.mlmodel")
```