juglab / cryoCARE_pip

PIP package of cryoCARE
BSD 3-Clause "New" or "Revised" License

AttributeError: 'DatasetV1Adapter' object has no attribute 'element_spec' #13

Closed: mcianfrocco closed this issue 2 years ago

mcianfrocco commented 2 years ago

Hello -

This is my first time running cryoCARE on tomograms, and I ran into this error during training:

Traceback (most recent call last):
  File "/lsi/groups/mcianfroccolab/mcianfro/conda/cryocare/bin/cryoCARE_train.py", line 43, in <module>
    main()
  File "/lsi/groups/mcianfroccolab/mcianfro/conda/cryocare/bin/cryoCARE_train.py", line 36, in main
    history = model.train(dm.get_train_dataset(), dm.get_val_dataset())
  File "/lsi/groups/mcianfroccolab/mcianfro/conda/cryocare/lib/python3.7/site-packages/cryocare/internals/CryoCARE.py", line 33, in train
    axes = axes_check_and_normalize('S' + self.config.axes, len(train_dataset.element_spec[0].shape) + 1)
AttributeError: 'DatasetV1Adapter' object has no attribute 'element_spec'

Do you have any suggestions on where this could be coming from? I'm providing two tomograms as inputs, each with dimensions (x, y, z, n) = 682 x 960 x 300 x 1.

tibuch commented 2 years ago

I think the issue is that cryoCARE expects 3D data, i.e. XYZ. Could you try removing the last singleton dimension?
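For example, a minimal sketch of dropping the singleton axis with mrcfile and numpy (assuming the tomograms are MRC files; the file names below are placeholders):

import numpy as np
import mrcfile

# Placeholder file names; point these at your own even/odd tomograms.
src = 'tomo_even.mrc'
dst = 'tomo_even_3d.mrc'

with mrcfile.open(src, permissive=True) as mrc:
    data = np.asarray(mrc.data)
    voxel_size = mrc.voxel_size

# np.squeeze removes any length-1 axis, e.g. (300, 960, 682, 1) -> (300, 960, 682).
data = np.squeeze(data)
assert data.ndim == 3, f'expected a 3D volume after squeezing, got shape {data.shape}'

with mrcfile.new(dst, overwrite=True) as out:
    out.set_data(data.astype(np.float32))
    out.voxel_size = voxel_size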

nadavelad commented 2 years ago

Hi, I am getting the same error, was there any solution found?

tibuch commented 2 years ago

How many dimensions does your data have?

nadavelad commented 2 years ago

I have a single tomogram as input, which was split into two 'even' and 'odd' tomograms during movie frame alignment.

tibuch commented 2 years ago

Hi @mcianfrocco,

I had another look and I think the issue is related to the TensorFlow version. Which version are you using?

Could you try switching TensorFlow to 2.3? If this is the issue, I will have to pin it and update the install instructions.
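For reference, a quick way to change and then verify the TensorFlow version inside the cryoCARE environment (a sketch; pick the GPU build matching your CUDA setup if applicable):

pip install tensorflow==2.3.0
python -c "import tensorflow as tf; print(tf.__version__)"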

Thank you!

mcianfrocco commented 2 years ago

Thanks for responding! I'm using TensorFlow 1.14.0 - what version should I be using?

tibuch commented 2 years ago

Can you try 2.3?

nadavelad commented 2 years ago

Hi, I was also using TensorFlow 1.14.0. Upgrading to 2.3.0 solved the problem and cryoCARE_train is now running. Thanks! I did get the warnings below; are they critical?

tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-07-31 15:32:25.189891: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3200000000 Hz
2022-07-31 15:32:25.192290: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x562323792650 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2022-07-31 15:32:25.192325: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
WARNING:tensorflow:AutoGraph could not transform <function _mean_or_not.<locals>.<lambda> at 0x7f314e7fe710> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Unable to identify source code of lambda function <function _mean_or_not.<locals>.<lambda> at 0x7f314e7fe710>. It was defined in this code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
return (lambda x: K.mean(x,axis=-1)) if mean else (lambda x: x)

This code must contain a single distinguishable lambda. To avoid this problem, define each lambda in a separate expression.
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _mean_or_not.<locals>.<lambda> at 0x7f314e7feef0> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Unable to identify source code of lambda function <function _mean_or_not.<locals>.<lambda> at 0x7f314e7feef0>. It was defined in this code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
return (lambda x: K.mean(x,axis=-1)) if mean else (lambda x: x)

This code must contain a single distinguishable lambda. To avoid this problem, define each lambda in a separate expression.
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _mean_or_not.<locals>.<lambda> at 0x7f314e818830> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Unable to identify source code of lambda function <function _mean_or_not.<locals>.<lambda> at 0x7f314e818830>. It was defined in this code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
return (lambda x: K.mean(x,axis=-1)) if mean else (lambda x: x)

This code must contain a single distinguishable lambda. To avoid this problem, define each lambda in a separate expression.
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
mcianfrocco commented 2 years ago

Thank you @tibuch - that fixed it for me as well.

thorstenwagner commented 2 years ago

May I ask @nadavelad and @mcianfrocco if you followed the CUDA installation instructions? I wonder where TensorFlow 1.14 comes from...

mcianfrocco commented 2 years ago

Hi @thorstenwagner - I'm not sure where it came from; I followed the instructions...

nadavelad commented 2 years ago

Same here

thorstenwagner commented 2 years ago

Thanks for your replies.

As I've updated the instructions multiple times, I'm not sure which one you actually used :-D Anyway, I consider this fixed.