Closed: HugoPu closed this issue 4 years ago.
Hi @HugoPu, can you confirm the TFX version you're using (especially the TFMA)? Thanks
Hi @numerology, the version info is as follows:
tensorboard 1.15.0
tensorflow 1.15.2
tensorflow-data-validation 0.15.0
tensorflow-estimator 1.15.1
tensorflow-metadata 0.15.2
tensorflow-model-analysis 0.15.4
tensorflow-serving-api 1.15.0
tensorflow-text 1.15.0
tensorflow-transform 0.15.0
tfx 0.15.0
It is the same issue as the one mentioned in #236, and 1000 is the batch_size. Rewriting the executors of Evaluator and ModelValidator can fix this issue.
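For anyone looking for the shape of that fix, the sketch below shows how a rewritten executor can be attached to the Evaluator component without patching the installed package. This is only an outline under assumptions on my side: `PatchedEvaluatorExecutor` and `build_evaluator` are hypothetical names, `custom_executor_spec` is the TFX 0.15-era way to swap in a custom executor, and `desired_batch_size` is the TFMA knob I believe controls the 1000-example batching. The actual rewrite means copying the stock `Do()` body and adjusting the TFMA call inside it; the same pattern applies to ModelValidator via its own executor module.

```python
# Sketch only: wiring a rewritten Evaluator executor into a TFX 0.15 pipeline.
# PatchedEvaluatorExecutor and build_evaluator are hypothetical names.
from tfx.components import Evaluator
from tfx.components.base import executor_spec
from tfx.components.evaluator import executor as evaluator_executor


class PatchedEvaluatorExecutor(evaluator_executor.Executor):
  """Evaluator executor intended to run TFMA with a smaller batch size."""

  def Do(self, input_dict, output_dict, exec_properties):
    # A real rewrite would duplicate the parent Do() here and pass a small
    # desired_batch_size (e.g. 32) to the TFMA transform it builds, instead
    # of letting TFMA fall back to its default of ~1000 examples per batch.
    super(PatchedEvaluatorExecutor, self).Do(
        input_dict, output_dict, exec_properties)


def build_evaluator(example_gen, trainer):
  """Builds an Evaluator component that uses the patched executor."""
  return Evaluator(
      examples=example_gen.outputs['examples'],
      model_exports=trainer.outputs['model'],
      custom_executor_spec=executor_spec.ExecutorClassSpec(
          PatchedEvaluatorExecutor))
```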
I'm running into this problem too.

I created a BERT TFX pipeline project based on this tutorial, and it worked normally until the Evaluator step.

Environment:

The code of the Evaluator is as follows:

The error log is as follows:
Does this mean the GPU doesn't have enough RAM? But I set both `train_batch_size` and `eval_batch_size` to `1`, and set `session_config.gpu_options.per_process_gpu_memory_fraction = 0.5`, and it still didn't work. I also found that the shape of the variables is [1000, 12, 256, 256], which is strange. Where do the `1000` and `12` come from? Did I make a mistake somewhere? Could you give me some advice?
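Regarding the `per_process_gpu_memory_fraction` attempt, here is a minimal sketch of how that setting is normally threaded into a TF 1.15 Estimator via `RunConfig` (the `model_dir` is a placeholder). A `RunConfig` set up this way only governs the Estimator's own sessions in the Trainer; it presumably does not reach the Evaluator step, where TFMA builds its own sessions, which would be consistent with the 1000-example TFMA batch being the actual culprit here.

```python
# Minimal sketch (TF 1.15): limiting GPU memory for an Estimator's sessions.
# '/tmp/bert_model' is a placeholder model_dir.
import tensorflow as tf

session_config = tf.compat.v1.ConfigProto()
session_config.gpu_options.per_process_gpu_memory_fraction = 0.5
# Alternative: grow GPU memory on demand instead of reserving a fixed share.
session_config.gpu_options.allow_growth = True

run_config = tf.estimator.RunConfig(
    model_dir='/tmp/bert_model',
    session_config=session_config)
```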