tech-srl / code2seq

Code for the model presented in the paper: "code2seq: Generating Sequences from Structured Representations of Code"
http://code2seq.org
MIT License

Error processing property '_dropout_mask_cache' of <ContextValueCache> #117

Open gbaulard opened 2 years ago

gbaulard commented 2 years ago

Hello,

I am getting this error when running code2seq.py on the preprocessed java-large dataset.

Traceback (most recent call last):
  File "C:\Users\gbaulard\Anaconda3\lib\site-packages\tensorflow\python\module\module.py", line 407, in _flatten_module
    leaves = nest.flatten_with_tuple_paths(
  File "C:\Users\gbaulard\Anaconda3\lib\site-packages\tensorflow\python\util\nest.py", line 1698, in flatten_with_tuple_paths
    flatten(structure, expand_composites=expand_composites)))
  File "C:\Users\gbaulard\Anaconda3\lib\site-packages\tensorflow\python\util\nest.py", line 451, in flatten
    return _pywrap_utils.Flatten(structure, expand_composites)
TypeError: '<' not supported between instances of 'WhileBodyFuncGraph' and 'FuncGraph'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\gbaulard\Documents\GitHub\code2seq\code2seq.py", line 29, in <module>
    model.train()
  File "C:\Users\gbaulard\Documents\GitHub\code2seq\modelrunner.py", line 129, in train
    gradients = tape.gradient(loss, self.model.trainable_variables)
  File "C:\Users\gbaulard\Anaconda3\lib\site-packages\tensorflow\python\module\module.py", line 171, in trainable_variables
    return tuple(
  File "C:\Users\gbaulard\Anaconda3\lib\site-packages\tensorflow\python\module\module.py", line 449, in _flatten_module
    for subvalue in subvalues:
  File "C:\Users\gbaulard\Anaconda3\lib\site-packages\tensorflow\python\module\module.py", line 449, in _flatten_module
    for subvalue in subvalues:
  File "C:\Users\gbaulard\Anaconda3\lib\site-packages\tensorflow\python\module\module.py", line 449, in _flatten_module
    for subvalue in subvalues:
  File "C:\Users\gbaulard\Anaconda3\lib\site-packages\tensorflow\python\module\module.py", line 410, in _flatten_module
    six.raise_from(
  File "<string>", line 3, in raise_from
ValueError: Error processing property '_dropout_mask_cache' of <ContextValueCache at 0x17309fd0df0>

This is the code in TensorFlow's module.py that is being triggered:

    try:
      leaves = nest.flatten_with_tuple_paths(
          prop, expand_composites=expand_composites)
    except Exception as cause:  # pylint: disable=broad-except
      six.raise_from(
          ValueError(
              "Error processing property {!r} of {!r}".format(key, prop)),
          cause)
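
For what it's worth, the TypeError at the bottom of the chain looks like a key-sorting problem: nest.flatten sorts dictionary keys, and the dropout-mask cache appears to be keyed by graph objects, which do not define an ordering. A minimal illustration of that failure mode (FuncGraphLike is a hypothetical stand-in, not TensorFlow code):

class FuncGraphLike:  # hypothetical stand-in for TF's internal graph objects
    pass

try:
    # flattening a dict sorts its keys; unorderable keys raise the same TypeError
    sorted([FuncGraphLike(), FuncGraphLike()])
except TypeError as err:
    print(err)  # "'<' not supported between instances of ..."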

Any thoughts appreciated.

Regards

urialon commented 2 years ago

Hi @gbaulard , Thank you for your interest in our work!

What python version and tensorflow version are you using?
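
For reference, a quick way to report both versions (a trivial snippet, independent of code2seq):

import sys
import tensorflow as tf

print("Python:", sys.version)
print("TensorFlow:", tf.__version__)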

Best, Uri

gbaulard commented 2 years ago

Hi,

We tried:

With your official version, it was Python 3.6.5 and TensorFlow 1.12.0.

All three configurations give this same error, whether running train.sh in bash under WSL or code2seq.py in conda.

Best

ethansaurusrex commented 1 year ago

Hi @gbaulard,

Did you ever find a solution to this issue? I have run into the same problem in the official version and in the TensorFlow 2.1+ fork by Kolkir. Thanks!

urialon commented 1 year ago

Hi all, I think that the code supports only TensorFlow 1.*.
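
If it helps anyone to fail fast on an incompatible install, a guard like this near the top of code2seq.py would do it (a hypothetical sketch, not part of the repo):

import tensorflow as tf

# the original code targets TF 1.x; abort early on TF 2.x installs
if int(tf.__version__.split(".")[0]) >= 2:
    raise RuntimeError("code2seq expects TensorFlow 1.x, found " + tf.__version__)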

If you are interested in NL<->code, I recommend checking our newer projects, which are all publicly available using the Huggingface library:

https://github.com/VHellendoorn/Code-LMs
https://github.com/neulab/code-bert-score
https://github.com/shuyanzhou/docprompting

Best, Uri

ethansaurusrex commented 1 year ago

Hi,

Thank you for the quick reply! I will definitely take a look at those papers.

Best

lapplislazuli commented 1 year ago

For other people coming here from the TF 2.1 fork:

The following requirements.txt worked for me:

clang
joblib
libclang
lmdb
networkx
numpy==1.19.5
rouge==0.3.2
tensorflow==2.1
tensorflow-gpu==2.1
keras==2.1
tensorflow_addons==0.9.1
scikit_learn
flatbuffers

However, whether your GPU is recognized depends on the model: newer GPUs do not support the older CUDA versions required by TF 2.1. A 3080 Ti, for example, is not compatible.
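
A quick way to check whether TensorFlow actually sees the GPU (a minimal snippet; on TF 2.1 the experimental alias is the one guaranteed to exist):

import tensorflow as tf

# lists the GPUs visible to TensorFlow; an empty list means CPU-only execution
print(tf.config.experimental.list_physical_devices("GPU"))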