The TensorFlow Cloud repository provides APIs that allow you to easily go from debugging and training your Keras and TensorFlow code in a local environment to distributed training in the cloud.
I believe this error happened while TensorFlow Cloud was preparing the environment to run the actual Python file, not while running your training code itself.
As a temporary fix, downgrading the pyparsing module from the current version (presumably the latest, 3.0.0) worked for me: pin
pyparsing==2.4.7
in requirements.txt.
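For reference, a minimal sketch of how the pinned requirements file can be handed to TensorFlow Cloud when submitting the job. The entry-point filename is a placeholder for your own script, and all other run() arguments (machine configuration, bucket, etc.) are omitted here:

# requirements.txt shipped with the job -- pin pyparsing until the incompatibility is fixed upstream
# pyparsing==2.4.7

import tensorflow_cloud as tfc

# Sketch only: "train.py" stands in for your own entry point; remaining
# run() arguments are left at their defaults for brevity.
tfc.run(
    entry_point="train.py",
    requirements_txt="requirements.txt",  # installed into the training image at build time
)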
First lines from logs:
E 2021-10-25T17:36:36.045058622Z master-replica-0 AttributeError: module 'pyparsing' has no attribute 'downcaseTokens'
E 2021-10-25T17:36:36.045054622Z master-replica-0 auth_param_name = token.copy().setName("auth-param-name").addParseAction(pp.downcaseTokens)
E 2021-10-25T17:36:36.045051112Z master-replica-0 File "/usr/local/lib/python3.6/dist-packages/httplib2/auth.py", line 20, in <module>
E 2021-10-25T17:36:36.045047843Z master-replica-0 from . import auth
E 2021-10-25T17:36:36.045044283Z master-replica-0 File "/usr/local/lib/python3.6/dist-packages/httplib2/__init__.py", line 52, in <module>
E 2021-10-25T17:36:36.045041016Z master-replica-0 import httplib2
E 2021-10-25T17:36:36.045037449Z master-replica-0 File "/usr/local/lib/python3.6/dist-packages/googleapiclient/discovery.py", line 42, in <module>
E 2021-10-25T17:36:36.045034064Z master-replica-0 from googleapiclient import discovery # pylint: disable=g-import-not-at-top