smousavi05 / EQTransformer

EQTransformer, a Python package for earthquake signal detection and phase picking using AI.
https://rebrand.ly/EQT-documentations

Problem with the "Detection and Picking" chapter in Colab #151

Closed cxf213 closed 1 year ago

cxf213 commented 1 year ago

Hello Dr. Mousavi, I tried to use EQTransformer on Colab with the EQTransformerviaGoogleColab.ipynb file, but there was an error while running this cell:

```python
from EQTransformer.core.predictor import predictor

predictor(input_dir='downloads_mseeds_processed_hdfs',
          input_model='/content/EQTransformer/ModelsAndSampleData/EqT_original_model.h5',
          output_dir='detections',
          detection_threshold=0.3, P_threshold=0.1, S_threshold=0.1,
          number_of_plots=100, plot_mode='time')
```
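For reference, a quick sanity check of the input directory before calling `predictor()` (a sketch of my own using only the standard library; the `.hdf5`/`.csv` file extensions are assumptions about the preprocessor's output, not confirmed by the notebook):

```python
# Hypothetical sanity check (not part of EQTransformer): confirm that the
# preprocessed input directory contains per-station files before predicting.
import glob
import os

input_dir = 'downloads_mseeds_processed_hdfs'
hdf5_files = glob.glob(os.path.join(input_dir, '*.hdf5'))
csv_files = glob.glob(os.path.join(input_dir, '*.csv'))
print(f'Found {len(hdf5_files)} .hdf5 and {len(csv_files)} .csv files in {input_dir}')
```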

The log is here:

```
============================================================================
Running EqTransformer 0.1.61
 *** Loading the model ...
 11%|██████▉   | 1/9 [13:51<1:50:48, 831.12s/it]
WARNING:absl:`lr` is deprecated in Keras optimizer, please use `learning_rate` or use the legacy optimizer, e.g., tf.keras.optimizers.legacy.Adam.
*** Loading is complete!

 *** /content/EQTransformer/detections already exists!
 --> Type (Yes or y) to create a new empty directory! otherwise it will overwrite!
y
######### There are files for 3 stations in downloads_mseeds_processed_hdfs directory. #########
========= Started working on B921, 1 out of 3 ...
  0%|          | 0/9 [00:00<?, ?it/s]

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
in <cell line: 3>()
      1 from EQTransformer.core.predictor import predictor
      2
----> 3 predictor(input_dir= 'downloads_mseeds_processed_hdfs', input_model='/content/EQTransformer/ModelsAndSampleData/EqT_original_model.h5', output_dir='detections', detection_threshold=0.3, P_threshold=0.1, S_threshold=0.1, number_of_plots=100, plot_mode='time')

8 frames
/content/EQTransformer/EQTransformer/core/EqT_utils.py in if_body_1()
     41     def if_body_1():
     42         nonlocal e
---> 43         e = ag__.converted_call(ag__.ld(K).reshape, (ag__.converted_call(ag__.ld(K).dot, (ag__.ld(h), ag__.ld(self).Wa), None, fscope) + ag__.ld(self).ba, (ag__.ld(batch_size), ag__.ld(input_len), ag__.ld(input_len))), None, fscope)
     44
     45     def else_body_1():

ValueError: in user code:

File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 2169, in predict_function  *
    return step_function(self, iterator)
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 2155, in step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 2143, in run_step  **
    outputs = model.predict_step(data)
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 2111, in predict_step
    return self(x, training=False)
File "/usr/local/lib/python3.10/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
File "/tmp/__autograph_generated_files4wd_c3v.py", line 44, in tf__call
    ag__.if_stmt(ag__.ld(self).attention_type == ag__.ld(SeqSelfAttention).ATTENTION_TYPE_ADD, if_body_1, else_body_1, get_state_1, set_state_1, ('e',), 1)
File "/tmp/__autograph_generated_files4wd_c3v.py", line 22, in if_body_1
    e = ag__.converted_call(ag__.ld(self)._call_additive_emission, (ag__.ld(inputs),), None, fscope)
File "/tmp/__autograph_generated_fileb86r9cs8.py", line 49, in tf___call_additive_emission
    ag__.if_stmt(ag__.ld(self).use_attention_bias, if_body_1, else_body_1, get_state_1, set_state_1, ('e',), 1)
File "/tmp/__autograph_generated_fileb86r9cs8.py", line 43, in if_body_1
    e = ag__.converted_call(ag__.ld(K).reshape, (ag__.converted_call(ag__.ld(K).dot, (ag__.ld(h), ag__.ld(self).Wa), None, fscope) + ag__.ld(self).ba, (ag__.ld(batch_size), ag__.ld(input_len), ag__.ld(input_len))), None, fscope)

ValueError: Exception encountered when calling layer 'attentionD0' (type SeqSelfAttention).

in user code:

    File "/content/EQTransformer/EQTransformer/core/EqT_utils.py", line 2506, in call  *
        e = self._call_additive_emission(inputs)
    File "/content/EQTransformer/EQTransformer/core/EqT_utils.py", line 2555, in _call_additive_emission  *
        e = K.reshape(K.dot(h, self.Wa) + self.ba, (batch_size, input_len, input_len))
    File "/usr/local/lib/python3.10/dist-packages/keras/backend.py", line 3612, in reshape
        return tf.reshape(x, shape)

    ValueError: Tried to convert 'shape' to a tensor and failed. Error: None values not supported.

Call arguments received by layer 'attentionD0' (type SeqSelfAttention):
  • inputs=tf.Tensor(shape=(None, None, 16), dtype=float32)
  • mask=None
  • kwargs={'training': 'False'}
```
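The failing line is `K.reshape(K.dot(h, self.Wa) + self.ba, (batch_size, input_len, input_len))`, where `batch_size` is evidently a Python `None` (a static, unknown batch dimension). A minimal sketch (my own reproduction under TensorFlow 2.x, not EQTransformer code) showing the same error and the usual `-1` workaround:

```python
# Minimal reproduction (assumption: TF 2.x, as in Colab). Passing a Python
# None as a reshape target dimension raises the same ValueError seen above.
import tensorflow as tf
from tensorflow.keras import backend as K

x = tf.random.normal((4, 10, 16))
batch_size = None                     # what a static input_shape[0] yields for a dynamic batch
try:
    K.reshape(x, (batch_size, 10, 16))
except ValueError as err:
    print(err)                        # Tried to convert 'shape' ... None values not supported.

# Using -1 (or tf.shape(x)[0]) for the dynamic dimension works instead:
y = K.reshape(x, (-1, 10, 16))
print(y.shape)                        # (4, 10, 16)
```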
smousavi05 commented 1 year ago

@cxf213 Are you using the version on the GitHub page? How did you install EqT?

cxf213 commented 1 year ago

Yes, I used the method mentioned in the Colab notebook, but I didn't run into this problem when I installed it on my own computer. Thanks.
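Since the error appears only in Colab, comparing the library versions in the two environments would narrow down the mismatch; a quick check (my suggestion, not from the thread):

```python
# Hypothetical diagnostic: run in both the Colab and the local environment
# and compare the output to find where the two installs differ.
import sys
import tensorflow as tf

print('Python    :', sys.version.split()[0])
print('TensorFlow:', tf.__version__)
print('Keras     :', tf.keras.__version__)
```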