I'm trying to add a new operation without retraining the models on the WikiSQL data. However, it seems the model's graph has some TPU constraints that prevent it from loading.

When I try:

```python
tf.train.import_meta_graph(source + ".meta", True)
```

I get:

```
No OpKernel was registered to support Op 'InfeedEnqueueTuple' used by node input_pipeline_task0/while/InfeedQueue/enqueue/0 (defined at <ipython-input-9-ce32c8268294>:8) with these attrs: [dtypes=[DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32, DT_INT32], layouts=[], _class=["loc:@input_pipeline_task0/while/IteratorGetNext"], shapes=[[16,1], [16,512], [16,512], [16,512], [16,512], [16,512], [16,512], [16,512], [16,512], [16,512], [16,64], [16,512], [16,512]], device_ordinal=0]
```

This seems to be a known issue, and the workaround is to save the model with use_tpu=False and export_to_tpu=False. Could you please release the models without the TPU constraints in the graph meta?
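To illustrate why the import fails and what removing the TPU constraints would amount to: the graph contains nodes built from TPU-only ops (like `InfeedEnqueueTuple`) that have no CPU/GPU kernel registered, so any graph that still references them cannot load outside a TPU session. The sketch below is a toy, dict-based stand-in for the graph (all names and the data structure are hypothetical, not the real TensorFlow `GraphDef` API) showing the filtering idea; in practice the clean fix is to re-export the model with `use_tpu=False` and `export_to_tpu=False` rather than to edit the graph by hand.

```python
# Toy illustration only: a simplified dict-based "graph" (hypothetical
# structure, NOT the real TensorFlow GraphDef API) showing how nodes built
# from TPU-only ops could be filtered out of a graph.

# Ops with no CPU/GPU kernel registered -- the cause of the error above.
TPU_ONLY_OPS = {"InfeedEnqueueTuple", "OutfeedDequeueTuple"}

def strip_tpu_nodes(nodes):
    """Drop TPU-only nodes and remove any input edges that pointed at them."""
    kept = [n for n in nodes if n["op"] not in TPU_ONLY_OPS]
    removed = {n["name"] for n in nodes} - {n["name"] for n in kept}
    for n in kept:
        n["inputs"] = [i for i in n["inputs"] if i not in removed]
    return kept

# Miniature stand-in for the failing input pipeline.
graph = [
    {"name": "iterator_get_next", "op": "IteratorGetNext", "inputs": []},
    {"name": "infeed_enqueue", "op": "InfeedEnqueueTuple",
     "inputs": ["iterator_get_next"]},
    {"name": "dense", "op": "MatMul", "inputs": ["iterator_get_next"]},
]

cpu_graph = strip_tpu_nodes(graph)
print([n["name"] for n in cpu_graph])  # → ['iterator_get_next', 'dense']
```

This only sketches the shape of the problem; a released checkpoint exported without the TPU rewrite would avoid it entirely.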
Full code example: https://colab.research.google.com/drive/1yoyZ-45So5pEIGmZp85ut38lW653KHXL?usp=sharing