huggingface / tflite-android-transformers

DistilBERT / GPT-2 for on-device inference thanks to TensorFlow Lite with Android demo apps
Apache License 2.0

model generation got error #2

Closed gitathrun closed 4 years ago

gitathrun commented 4 years ago

I encountered an error when trying to use the model_generation.py script.

Problem 1: If the TensorFlow version is 1.15, the following statement raises an error:

from transformers import TFDistilBertForQuestionAnswering

Problem 2: If I install tf-nightly instead of TF 1.15 (version shown in the attached screenshot), the import above goes through, but the following error shows up:

TypeError                                 Traceback (most recent call last)
<ipython-input-3-77fb6947e6cd> in <module>()
      2 
      3 input_spec = tf.TensorSpec([1, 384], tf.int32)
----> 4 model._set_inputs(input_spec, training=False)
      5 
      6 print(model.inputs)

2 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
    261       except Exception as e:  # pylint:disable=broad-except
    262         if hasattr(e, 'ag_error_metadata'):
--> 263           raise e.ag_error_metadata.to_exception(e)
    264         else:
    265           raise

TypeError: in converted code:

    /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_distilbert.py:735 call  *
        distilbert_output = self.distilbert(inputs, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py:773 __call__
        outputs = call_fn(cast_inputs, *args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_distilbert.py:447 call  *
        tfmr_output = self.transformer([embedding_output, attention_mask, head_mask], training=training)
    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py:822 __call__
        outputs = self.call(cast_inputs, *args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_distilbert.py:382 call
        layer_outputs = layer_module([hidden_state, attn_mask, head_mask[i]], training=training)
    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py:822 __call__
        outputs = self.call(cast_inputs, *args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_distilbert.py:324 call
        sa_output = self.attention([x, x, x, attn_mask, head_mask], training=training)
    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py:822 __call__
        outputs = self.call(cast_inputs, *args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_distilbert.py:229 call
        assert 2 <= len(tf.shape(mask)) <= 3
    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:733 __len__
        "shape information.".format(self.name))

    TypeError: len is not well defined for symbolic Tensors. (tf_distil_bert_for_question_answering/distilbert/transformer/layer_._0/attention/Shape_2:0) Please call `x.shape` rather than `len(x)` for shape information.
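For context, the cell that triggers this is roughly the following (a minimal sketch reconstructed from the traceback; the checkpoint name is my assumption based on the repo's QA demo, while the [1, 384] input spec is taken directly from the traceback):

import tensorflow as tf
from transformers import TFDistilBertForQuestionAnswering

# Checkpoint name assumed; any TFDistilBertForQuestionAnswering checkpoint should reproduce it
model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased-distilled-squad")

# Trace the model with a fixed [1, 384] int32 input so Keras records its inputs/outputs
input_spec = tf.TensorSpec([1, 384], tf.int32)
model._set_inputs(input_spec, training=False)  # <- the TypeError above is raised here

print(model.inputs)
print(model.outputs)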

Would you mind updating requirements.txt to pin the versions of tf-nightly and transformers, so that the script can execute correctly?

Many thanks.

Pierrci commented 4 years ago

Hi, what is your environment? I just tried using the mentioned tf-nightly build 2.1.0-dev20191128 and the latest transformers release, 2.2.0, and it works perfectly on my machine. I also tried in a notebook and everything works there too: https://colab.research.google.com/drive/1VPK7OcYZRQGVgC4b4RpIl0nvYJPW7o5C

It also works with the latest tf-nightly build from today.

gitathrun commented 4 years ago

@Pierrci I tried your code in the Colab notebook and it works perfectly. The error was probably caused by the Colab kernel: after running the pip installation, restarting the notebook kernel makes the code work.
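For anyone hitting the same thing in Colab, this is roughly what I ran before restarting (a sketch; the pinned versions are the ones Pierrci mentioned, and killing the process with os.kill is just one way to force a runtime restart from a cell, the Runtime > Restart runtime menu works as well):

!pip install tf-nightly==2.1.0-dev20191128 transformers==2.2.0

# Restart the Colab runtime so the freshly installed TensorFlow build is picked up
import os
os.kill(os.getpid(), 9)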

Many thanks.

stargazing-dino commented 4 years ago

@Pierrci It looks like that version of tf-nightly is no longer available.

ERROR: Could not find a version that satisfies the requirement tf-nightly==2.1.0-dev20191128 (from versions: 2.2.0.dev20200301, 2.2.0.dev20200302, 2.2.0.dev20200303, 2.2.0.dev20200304, 2.2.0.dev20200305, 2.2.0.dev20200306, 2.2.0.dev20200307, 2.2.0.dev20200308, 2.2.0.dev20200309, 2.2.0.dev20200310, 2.2.0.dev20200311, 2.2.0.dev20200312, 2.2.0.dev20200313, 2.2.0.dev20200314, 2.2.0.dev20200315, 2.2.0.dev20200316, 2.2.0.dev20200317, 2.2.0.dev20200318, 2.2.0.dev20200319, 2.2.0.dev20200323, 2.2.0.dev20200324, 2.2.0.dev20200325, 2.2.0.dev20200327, 2.2.0.dev20200328, 2.2.0.dev20200329, 2.2.0.dev20200330, 2.2.0.dev20200331, 2.2.0.dev20200401, 2.2.0.dev20200402, 2.2.0.dev20200403, 2.2.0.dev20200404, 2.2.0.dev20200405, 2.2.0.dev20200406, 2.2.0.dev20200407, 2.2.0.dev20200408, 2.2.0.dev20200409, 2.2.0.dev20200410, 2.2.0.dev20200411, 2.2.0.dev20200412, 2.2.0.dev20200413, 2.2.0.dev20200414, 2.2.0.dev20200415, 2.2.0.dev20200416, 2.2.0.dev20200417, 2.2.0.dev20200418, 2.2.0.dev20200419, 2.2.0.dev20200420, 2.2.0.dev20200421, 2.2.0.dev20200422, 2.2.0.dev20200423, 2.2.0.dev20200424, 2.2.0.dev20200425, 2.2.0.dev20200426, 2.2.0.dev20200427, 2.2.0.dev20200428, 2.2.0.dev20200429, 2.2.0.dev20200430, 2.2.0.dev20200501, 2.2.0.dev20200502, 2.2.0.dev20200503, 2.2.0.dev20200504, 2.2.0.dev20200505, 2.2.0.dev20200506, 2.2.0.dev20200507, 2.2.0.dev20200508, 2.3.0.dev20200512, 2.3.0.dev20200513, 2.3.0.dev20200514, 2.3.0.dev20200515)
ERROR: No matching distribution found for tf-nightly==2.1.0-dev20191128

And building with a TensorFlow 2.2.0 nightly leads me to an issue where

print(model.inputs)
print(model.outputs)

gives

None
None

Edit

!pip install tensorflow==2.1.0

works fine
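For completeness, with tensorflow==2.1.0 pinned the whole conversion goes through for me, roughly like this (a sketch; the checkpoint name, optimization flag and output filename are my own choices, not necessarily what the repo's model_generation.py does):

import tensorflow as tf
from transformers import TFDistilBertForQuestionAnswering

model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased-distilled-squad")

# Fix the input signature to [1, 384] int32 token ids so model.inputs / model.outputs are defined
input_spec = tf.TensorSpec([1, 384], tf.int32)
model._set_inputs(input_spec, training=False)

print(model.inputs)   # no longer None on 2.1.0
print(model.outputs)

# Convert the traced Keras model to TFLite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
tflite_model = converter.convert()

with open("distilbert_squad_384.tflite", "wb") as f:
    f.write(tflite_model)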