calico / basenji

Sequential regulatory activity predictions with deep convolutional neural networks.
Apache License 2.0

Error when running akita_train.py #144

Open fokxon opened 1 year ago

fokxon commented 1 year ago

Hi, I'm trying to train a new Akita model, but when I followed the tutorial with exactly the same parameters, I got the following error when running akita_train.py:

2023-01-20 22:44:27.708862: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-01-20 22:44:28.120731: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3083 MB memory:  -> device: 0, name: NVIDIA A100-SXM4-40GB MIG 1g.5gb, pci bus id: 0000:bd:00.0, compute capability: 8.0
Model: "model_1"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 sequence (InputLayer)          [(None, 1048576, 4)  0           []                               
                                ]                                                                 

 stochastic_reverse_complement   ((None, 1048576, 4)  0          ['sequence[0][0]']               
 (StochasticReverseComplement)  , ())                                                             

 stochastic_shift (StochasticSh  (None, 1048576, 4)  0           ['stochastic_reverse_complement[0
 ift)                                                            ][0]']                           

 re_lu (ReLU)                   (None, 1048576, 4)   0           ['stochastic_shift[0][0]']       

 conv1d (Conv1D)                (None, 1048576, 96)  4224        ['re_lu[0][0]']                  

 batch_normalization (BatchNorm  (None, 1048576, 96)  384        ['conv1d[0][0]']                 
 alization)                                                                                       

 max_pooling1d (MaxPooling1D)   (None, 524288, 96)   0           ['batch_normalization[0][0]']    

 re_lu_1 (ReLU)                 (None, 524288, 96)   0           ['max_pooling1d[0][0]']          

 conv1d_1 (Conv1D)              (None, 524288, 96)   46080       ['re_lu_1[0][0]']                

 batch_normalization_1 (BatchNo  (None, 524288, 96)  384         ['conv1d_1[0][0]']               
 rmalization)                                                                                     

 max_pooling1d_1 (MaxPooling1D)  (None, 262144, 96)  0           ['batch_normalization_1[0][0]']  

 re_lu_2 (ReLU)                 (None, 262144, 96)   0           ['max_pooling1d_1[0][0]']        

 conv1d_2 (Conv1D)              (None, 262144, 96)   46080       ['re_lu_2[0][0]']                

 batch_normalization_2 (BatchNo  (None, 262144, 96)  384         ['conv1d_2[0][0]']               
 rmalization)                                                                                     

 max_pooling1d_2 (MaxPooling1D)  (None, 131072, 96)  0           ['batch_normalization_2[0][0]']  

 re_lu_3 (ReLU)                 (None, 131072, 96)   0           ['max_pooling1d_2[0][0]']        

 conv1d_3 (Conv1D)              (None, 131072, 96)   46080       ['re_lu_3[0][0]']                

 batch_normalization_3 (BatchNo  (None, 131072, 96)  384         ['conv1d_3[0][0]']               
 rmalization)                                                                                     

 max_pooling1d_3 (MaxPooling1D)  (None, 65536, 96)   0           ['batch_normalization_3[0][0]']  

 re_lu_4 (ReLU)                 (None, 65536, 96)    0           ['max_pooling1d_3[0][0]']        

 conv1d_4 (Conv1D)              (None, 65536, 96)    46080       ['re_lu_4[0][0]']                

 batch_normalization_4 (BatchNo  (None, 65536, 96)   384         ['conv1d_4[0][0]']               
 rmalization)                                                                                     

 max_pooling1d_4 (MaxPooling1D)  (None, 32768, 96)   0           ['batch_normalization_4[0][0]']  

 re_lu_5 (ReLU)                 (None, 32768, 96)    0           ['max_pooling1d_4[0][0]']        

 conv1d_5 (Conv1D)              (None, 32768, 96)    46080       ['re_lu_5[0][0]']                

 batch_normalization_5 (BatchNo  (None, 32768, 96)   384         ['conv1d_5[0][0]']               
 rmalization)                                                                                     

 max_pooling1d_5 (MaxPooling1D)  (None, 16384, 96)   0           ['batch_normalization_5[0][0]']  

 re_lu_6 (ReLU)                 (None, 16384, 96)    0           ['max_pooling1d_5[0][0]']        

 conv1d_6 (Conv1D)              (None, 16384, 96)    46080       ['re_lu_6[0][0]']                

 batch_normalization_6 (BatchNo  (None, 16384, 96)   384         ['conv1d_6[0][0]']               
 rmalization)                                                                                     

 max_pooling1d_6 (MaxPooling1D)  (None, 8192, 96)    0           ['batch_normalization_6[0][0]']  

 re_lu_7 (ReLU)                 (None, 8192, 96)     0           ['max_pooling1d_6[0][0]']        

 conv1d_7 (Conv1D)              (None, 8192, 96)     46080       ['re_lu_7[0][0]']                

 batch_normalization_7 (BatchNo  (None, 8192, 96)    384         ['conv1d_7[0][0]']               
 rmalization)                                                                                     

 max_pooling1d_7 (MaxPooling1D)  (None, 4096, 96)    0           ['batch_normalization_7[0][0]']  

 re_lu_8 (ReLU)                 (None, 4096, 96)     0           ['max_pooling1d_7[0][0]']        

 conv1d_8 (Conv1D)              (None, 4096, 96)     46080       ['re_lu_8[0][0]']                

 batch_normalization_8 (BatchNo  (None, 4096, 96)    384         ['conv1d_8[0][0]']               
 rmalization)                                                                                     

 max_pooling1d_8 (MaxPooling1D)  (None, 2048, 96)    0           ['batch_normalization_8[0][0]']  

 re_lu_9 (ReLU)                 (None, 2048, 96)     0           ['max_pooling1d_8[0][0]']        

 conv1d_9 (Conv1D)              (None, 2048, 96)     46080       ['re_lu_9[0][0]']                

 batch_normalization_9 (BatchNo  (None, 2048, 96)    384         ['conv1d_9[0][0]']               
 rmalization)                                                                                     

 max_pooling1d_9 (MaxPooling1D)  (None, 1024, 96)    0           ['batch_normalization_9[0][0]']  

 re_lu_10 (ReLU)                (None, 1024, 96)     0           ['max_pooling1d_9[0][0]']        

 conv1d_10 (Conv1D)             (None, 1024, 96)     46080       ['re_lu_10[0][0]']               

 batch_normalization_10 (BatchN  (None, 1024, 96)    384         ['conv1d_10[0][0]']              
 ormalization)                                                                                    

 max_pooling1d_10 (MaxPooling1D  (None, 512, 96)     0           ['batch_normalization_10[0][0]'] 
 )                                                                                                

 re_lu_11 (ReLU)                (None, 512, 96)      0           ['max_pooling1d_10[0][0]']       

 conv1d_11 (Conv1D)             (None, 512, 48)      13824       ['re_lu_11[0][0]']               

 batch_normalization_11 (BatchN  (None, 512, 48)     192         ['conv1d_11[0][0]']              
 ormalization)                                                                                    

 re_lu_12 (ReLU)                (None, 512, 48)      0           ['batch_normalization_11[0][0]'] 

 conv1d_12 (Conv1D)             (None, 512, 96)      4608        ['re_lu_12[0][0]']               

 batch_normalization_12 (BatchN  (None, 512, 96)     384         ['conv1d_12[0][0]']              
 ormalization)                                                                                    

 dropout (Dropout)              (None, 512, 96)      0           ['batch_normalization_12[0][0]'] 

 add (Add)                      (None, 512, 96)      0           ['max_pooling1d_10[0][0]',       
                                                                  'dropout[0][0]']                

 re_lu_13 (ReLU)                (None, 512, 96)      0           ['add[0][0]']                    

 conv1d_13 (Conv1D)             (None, 512, 48)      13824       ['re_lu_13[0][0]']               

 batch_normalization_13 (BatchN  (None, 512, 48)     192         ['conv1d_13[0][0]']              
 ormalization)                                                                                    

 re_lu_14 (ReLU)                (None, 512, 48)      0           ['batch_normalization_13[0][0]'] 

 conv1d_14 (Conv1D)             (None, 512, 96)      4608        ['re_lu_14[0][0]']               

 batch_normalization_14 (BatchN  (None, 512, 96)     384         ['conv1d_14[0][0]']              
 ormalization)                                                                                    

 dropout_1 (Dropout)            (None, 512, 96)      0           ['batch_normalization_14[0][0]'] 

 add_1 (Add)                    (None, 512, 96)      0           ['add[0][0]',                    
                                                                  'dropout_1[0][0]']              

 re_lu_15 (ReLU)                (None, 512, 96)      0           ['add_1[0][0]']                  

 conv1d_15 (Conv1D)             (None, 512, 48)      13824       ['re_lu_15[0][0]']               

 batch_normalization_15 (BatchN  (None, 512, 48)     192         ['conv1d_15[0][0]']              
 ormalization)                                                                                    

 re_lu_16 (ReLU)                (None, 512, 48)      0           ['batch_normalization_15[0][0]'] 

 conv1d_16 (Conv1D)             (None, 512, 96)      4608        ['re_lu_16[0][0]']               

 batch_normalization_16 (BatchN  (None, 512, 96)     384         ['conv1d_16[0][0]']              
 ormalization)                                                                                    

 dropout_2 (Dropout)            (None, 512, 96)      0           ['batch_normalization_16[0][0]'] 

 add_2 (Add)                    (None, 512, 96)      0           ['add_1[0][0]',                  
                                                                  'dropout_2[0][0]']              

 re_lu_17 (ReLU)                (None, 512, 96)      0           ['add_2[0][0]']                  

 conv1d_17 (Conv1D)             (None, 512, 48)      13824       ['re_lu_17[0][0]']               

 batch_normalization_17 (BatchN  (None, 512, 48)     192         ['conv1d_17[0][0]']              
 ormalization)                                                                                    

 re_lu_18 (ReLU)                (None, 512, 48)      0           ['batch_normalization_17[0][0]'] 

 conv1d_18 (Conv1D)             (None, 512, 96)      4608        ['re_lu_18[0][0]']               

 batch_normalization_18 (BatchN  (None, 512, 96)     384         ['conv1d_18[0][0]']              
 ormalization)                                                                                    

 dropout_3 (Dropout)            (None, 512, 96)      0           ['batch_normalization_18[0][0]'] 

 add_3 (Add)                    (None, 512, 96)      0           ['add_2[0][0]',                  
                                                                  'dropout_3[0][0]']              

 re_lu_19 (ReLU)                (None, 512, 96)      0           ['add_3[0][0]']                  

 conv1d_19 (Conv1D)             (None, 512, 48)      13824       ['re_lu_19[0][0]']               

 batch_normalization_19 (BatchN  (None, 512, 48)     192         ['conv1d_19[0][0]']              
 ormalization)                                                                                    

 re_lu_20 (ReLU)                (None, 512, 48)      0           ['batch_normalization_19[0][0]'] 

 conv1d_20 (Conv1D)             (None, 512, 96)      4608        ['re_lu_20[0][0]']               

 batch_normalization_20 (BatchN  (None, 512, 96)     384         ['conv1d_20[0][0]']              
 ormalization)                                                                                    

 dropout_4 (Dropout)            (None, 512, 96)      0           ['batch_normalization_20[0][0]'] 

 add_4 (Add)                    (None, 512, 96)      0           ['add_3[0][0]',                  
                                                                  'dropout_4[0][0]']              

 re_lu_21 (ReLU)                (None, 512, 96)      0           ['add_4[0][0]']                  

 conv1d_21 (Conv1D)             (None, 512, 48)      13824       ['re_lu_21[0][0]']               

 batch_normalization_21 (BatchN  (None, 512, 48)     192         ['conv1d_21[0][0]']              
 ormalization)                                                                                    

 re_lu_22 (ReLU)                (None, 512, 48)      0           ['batch_normalization_21[0][0]'] 

 conv1d_22 (Conv1D)             (None, 512, 96)      4608        ['re_lu_22[0][0]']               

 batch_normalization_22 (BatchN  (None, 512, 96)     384         ['conv1d_22[0][0]']              
 ormalization)                                                                                    

 dropout_5 (Dropout)            (None, 512, 96)      0           ['batch_normalization_22[0][0]'] 

 add_5 (Add)                    (None, 512, 96)      0           ['add_4[0][0]',                  
                                                                  'dropout_5[0][0]']              

 re_lu_23 (ReLU)                (None, 512, 96)      0           ['add_5[0][0]']                  

 conv1d_23 (Conv1D)             (None, 512, 48)      13824       ['re_lu_23[0][0]']               

 batch_normalization_23 (BatchN  (None, 512, 48)     192         ['conv1d_23[0][0]']              
 ormalization)                                                                                    

 re_lu_24 (ReLU)                (None, 512, 48)      0           ['batch_normalization_23[0][0]'] 

 conv1d_24 (Conv1D)             (None, 512, 96)      4608        ['re_lu_24[0][0]']               

 batch_normalization_24 (BatchN  (None, 512, 96)     384         ['conv1d_24[0][0]']              
 ormalization)                                                                                    

 dropout_6 (Dropout)            (None, 512, 96)      0           ['batch_normalization_24[0][0]'] 

 add_6 (Add)                    (None, 512, 96)      0           ['add_5[0][0]',                  
                                                                  'dropout_6[0][0]']              

 re_lu_25 (ReLU)                (None, 512, 96)      0           ['add_6[0][0]']                  

 conv1d_25 (Conv1D)             (None, 512, 48)      13824       ['re_lu_25[0][0]']               

 batch_normalization_25 (BatchN  (None, 512, 48)     192         ['conv1d_25[0][0]']              
 ormalization)                                                                                    

 re_lu_26 (ReLU)                (None, 512, 48)      0           ['batch_normalization_25[0][0]'] 

 conv1d_26 (Conv1D)             (None, 512, 96)      4608        ['re_lu_26[0][0]']               

 batch_normalization_26 (BatchN  (None, 512, 96)     384         ['conv1d_26[0][0]']              
 ormalization)                                                                                    

 dropout_7 (Dropout)            (None, 512, 96)      0           ['batch_normalization_26[0][0]'] 

 add_7 (Add)                    (None, 512, 96)      0           ['add_6[0][0]',                  
                                                                  'dropout_7[0][0]']              

 re_lu_27 (ReLU)                (None, 512, 96)      0           ['add_7[0][0]']                  

 conv1d_27 (Conv1D)             (None, 512, 64)      30720       ['re_lu_27[0][0]']               

 batch_normalization_27 (BatchN  (None, 512, 64)     256         ['conv1d_27[0][0]']              
 ormalization)                                                                                    

 re_lu_28 (ReLU)                (None, 512, 64)      0           ['batch_normalization_27[0][0]'] 

 one_to_two (OneToTwo)          (None, 512, 512, 64  0           ['re_lu_28[0][0]']               
                                )                                                                 

 concat_dist2d (ConcatDist2D)   (None, 512, 512, 65  0           ['one_to_two[0][0]']             
                                )                                                                 

 re_lu_29 (ReLU)                (None, 512, 512, 65  0           ['concat_dist2d[0][0]']          
                                )                                                                 

 conv2d (Conv2D)                (None, 512, 512, 48  28080       ['re_lu_29[0][0]']               
                                )                                                                 

 batch_normalization_28 (BatchN  (None, 512, 512, 48  192        ['conv2d[0][0]']                 
 ormalization)                  )                                                                 

 symmetrize2d (Symmetrize2D)    (None, 512, 512, 48  0           ['batch_normalization_28[0][0]'] 
                                )                                                                 

 re_lu_30 (ReLU)                (None, 512, 512, 48  0           ['symmetrize2d[0][0]']           
                                )                                                                 

 conv2d_1 (Conv2D)              (None, 512, 512, 24  10368       ['re_lu_30[0][0]']               
                                )                                                                 

 batch_normalization_29 (BatchN  (None, 512, 512, 24  96         ['conv2d_1[0][0]']               
 ormalization)                  )                                                                 

 re_lu_31 (ReLU)                (None, 512, 512, 24  0           ['batch_normalization_29[0][0]'] 
                                )                                                                 

 conv2d_2 (Conv2D)              (None, 512, 512, 48  1152        ['re_lu_31[0][0]']               
                                )                                                                 

 batch_normalization_30 (BatchN  (None, 512, 512, 48  192        ['conv2d_2[0][0]']               
 ormalization)                  )                                                                 

 dropout_8 (Dropout)            (None, 512, 512, 48  0           ['batch_normalization_30[0][0]'] 
                                )                                                                 

 add_8 (Add)                    (None, 512, 512, 48  0           ['symmetrize2d[0][0]',           
                                )                                 'dropout_8[0][0]']              

 symmetrize2d_1 (Symmetrize2D)  (None, 512, 512, 48  0           ['add_8[0][0]']                  
                                )                                                                 

 re_lu_32 (ReLU)                (None, 512, 512, 48  0           ['symmetrize2d_1[0][0]']         
                                )                                                                 

 conv2d_3 (Conv2D)              (None, 512, 512, 24  10368       ['re_lu_32[0][0]']               
                                )                                                                 

 batch_normalization_31 (BatchN  (None, 512, 512, 24  96         ['conv2d_3[0][0]']               
 ormalization)                  )                                                                 

 re_lu_33 (ReLU)                (None, 512, 512, 24  0           ['batch_normalization_31[0][0]'] 
                                )                                                                 

 conv2d_4 (Conv2D)              (None, 512, 512, 48  1152        ['re_lu_33[0][0]']               
                                )                                                                 

 batch_normalization_32 (BatchN  (None, 512, 512, 48  192        ['conv2d_4[0][0]']               
 ormalization)                  )                                                                 

 dropout_9 (Dropout)            (None, 512, 512, 48  0           ['batch_normalization_32[0][0]'] 
                                )                                                                 

 add_9 (Add)                    (None, 512, 512, 48  0           ['symmetrize2d_1[0][0]',         
                                )                                 'dropout_9[0][0]']              

 symmetrize2d_2 (Symmetrize2D)  (None, 512, 512, 48  0           ['add_9[0][0]']                  
                                )                                                                 

 re_lu_34 (ReLU)                (None, 512, 512, 48  0           ['symmetrize2d_2[0][0]']         
                                )                                                                 

 conv2d_5 (Conv2D)              (None, 512, 512, 24  10368       ['re_lu_34[0][0]']               
                                )                                                                 

 batch_normalization_33 (BatchN  (None, 512, 512, 24  96         ['conv2d_5[0][0]']               
 ormalization)                  )                                                                 

 re_lu_35 (ReLU)                (None, 512, 512, 24  0           ['batch_normalization_33[0][0]'] 
                                )                                                                 

 conv2d_6 (Conv2D)              (None, 512, 512, 48  1152        ['re_lu_35[0][0]']               
                                )                                                                 

 batch_normalization_34 (BatchN  (None, 512, 512, 48  192        ['conv2d_6[0][0]']               
 ormalization)                  )                                                                 

 dropout_10 (Dropout)           (None, 512, 512, 48  0           ['batch_normalization_34[0][0]'] 
                                )                                                                 

 add_10 (Add)                   (None, 512, 512, 48  0           ['symmetrize2d_2[0][0]',         
                                )                                 'dropout_10[0][0]']             

 symmetrize2d_3 (Symmetrize2D)  (None, 512, 512, 48  0           ['add_10[0][0]']                 
                                )                                                                 

 re_lu_36 (ReLU)                (None, 512, 512, 48  0           ['symmetrize2d_3[0][0]']         
                                )                                                                 

 conv2d_7 (Conv2D)              (None, 512, 512, 24  10368       ['re_lu_36[0][0]']               
                                )                                                                 

 batch_normalization_35 (BatchN  (None, 512, 512, 24  96         ['conv2d_7[0][0]']               
 ormalization)                  )                                                                 

 re_lu_37 (ReLU)                (None, 512, 512, 24  0           ['batch_normalization_35[0][0]'] 
                                )                                                                 

 conv2d_8 (Conv2D)              (None, 512, 512, 48  1152        ['re_lu_37[0][0]']               
                                )                                                                 

 batch_normalization_36 (BatchN  (None, 512, 512, 48  192        ['conv2d_8[0][0]']               
 ormalization)                  )                                                                 

 dropout_11 (Dropout)           (None, 512, 512, 48  0           ['batch_normalization_36[0][0]'] 
                                )                                                                 

 add_11 (Add)                   (None, 512, 512, 48  0           ['symmetrize2d_3[0][0]',         
                                )                                 'dropout_11[0][0]']             

 symmetrize2d_4 (Symmetrize2D)  (None, 512, 512, 48  0           ['add_11[0][0]']                 
                                )                                                                 

 re_lu_38 (ReLU)                (None, 512, 512, 48  0           ['symmetrize2d_4[0][0]']         
                                )                                                                 

 conv2d_9 (Conv2D)              (None, 512, 512, 24  10368       ['re_lu_38[0][0]']               
                                )                                                                 

 batch_normalization_37 (BatchN  (None, 512, 512, 24  96         ['conv2d_9[0][0]']               
 ormalization)                  )                                                                 

 re_lu_39 (ReLU)                (None, 512, 512, 24  0           ['batch_normalization_37[0][0]'] 
                                )                                                                 

 conv2d_10 (Conv2D)             (None, 512, 512, 48  1152        ['re_lu_39[0][0]']               
                                )                                                                 

 batch_normalization_38 (BatchN  (None, 512, 512, 48  192        ['conv2d_10[0][0]']              
 ormalization)                  )                                                                 

 dropout_12 (Dropout)           (None, 512, 512, 48  0           ['batch_normalization_38[0][0]'] 
                                )                                                                 

 add_12 (Add)                   (None, 512, 512, 48  0           ['symmetrize2d_4[0][0]',         
                                )                                 'dropout_12[0][0]']             

 symmetrize2d_5 (Symmetrize2D)  (None, 512, 512, 48  0           ['add_12[0][0]']                 
                                )                                                                 

 re_lu_40 (ReLU)                (None, 512, 512, 48  0           ['symmetrize2d_5[0][0]']         
                                )                                                                 

 conv2d_11 (Conv2D)             (None, 512, 512, 24  10368       ['re_lu_40[0][0]']               
                                )                                                                 

 batch_normalization_39 (BatchN  (None, 512, 512, 24  96         ['conv2d_11[0][0]']              
 ormalization)                  )                                                                 

 re_lu_41 (ReLU)                (None, 512, 512, 24  0           ['batch_normalization_39[0][0]'] 
                                )                                                                 

 conv2d_12 (Conv2D)             (None, 512, 512, 48  1152        ['re_lu_41[0][0]']               
                                )                                                                 

 batch_normalization_40 (BatchN  (None, 512, 512, 48  192        ['conv2d_12[0][0]']              
 ormalization)                  )                                                                 

 dropout_13 (Dropout)           (None, 512, 512, 48  0           ['batch_normalization_40[0][0]'] 
                                )                                                                 

 add_13 (Add)                   (None, 512, 512, 48  0           ['symmetrize2d_5[0][0]',         
                                )                                 'dropout_13[0][0]']             

 symmetrize2d_6 (Symmetrize2D)  (None, 512, 512, 48  0           ['add_13[0][0]']                 
                                )                                                                 

 cropping2d (Cropping2D)        (None, 448, 448, 48  0           ['symmetrize2d_6[0][0]']         
                                )                                                                 

 upper_tri (UpperTri)           (None, 99681, 48)    0           ['cropping2d[0][0]']             

 dense (Dense)                  (None, 99681, 2)     98          ['upper_tri[0][0]']              

 switch_reverse_triu (SwitchRev  (None, 99681, 2)    0           ['dense[0][0]',                  
 erseTriu)                                                        'stochastic_reverse_complement[0
                                                                 ][1]']                           

==================================================================================================
Total params: 751,506
Trainable params: 746,002
Non-trainable params: 5,504
__________________________________________________________________________________________________
None
model_strides [2048]
target_lengths [99681]
target_crops [-49585]
Epoch 1/10000
Traceback (most recent call last):
  File "/home/labadmiin/basenji-master/bin/akita_train.py", line 182, in <module>
    main()
  File "/home/labadmiin/basenji-master/bin/akita_train.py", line 171, in main
    seqnn_trainer.fit_keras(seqnn_model)
  File "/home/labadmiin/basenji-master/basenji/trainer.py", line 139, in fit_keras
    seqnn_model.model.fit(
  File "/miniconda3/envs/basenji/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/miniconda3/envs/basenji/lib/python3.9/site-packages/tensorflow/python/eager/execute.py", line 54, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:

2 root error(s) found.
  (0) INVALID_ARGUMENT:  Input to reshape is a tensor with 1048576 values, but the requested shape has 4194304
     [[{{node Reshape}}]]
     [[IteratorGetNext]]
     [[IteratorGetNext/_8]]
  (1) INVALID_ARGUMENT:  Input to reshape is a tensor with 1048576 values, but the requested shape has 4194304
     [[{{node Reshape}}]]
     [[IteratorGetNext]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_20821]

I built the environment with conda and prespecified.yml on Ubuntu 20.04, with CUDA 11.4 and cuDNN 8.4.0. How can I deal with this? Thank you.
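For anyone comparing notes on the same error: the requested reshape size (4,194,304 = 1,048,576 x 4) is exactly four times the number of values actually present in the record, which would be consistent with the TFRecords holding shorter sequences than the seq_length the params file expects. Below is a minimal sketch for inspecting what the records really contain; the path pattern and the ZLIB compression setting are assumptions about the tutorial's output, so adjust them to your data directory.

# Dump the feature names and byte sizes of the first training record, to see
# what sequence/target dimensions the data writer actually used.
# Path pattern and ZLIB compression are assumptions; if reading raises a
# DataLossError, drop the compression_type argument.
import glob

import tensorflow as tf

tfr_files = sorted(glob.glob("data/1m/tfrecords/train-*.tfr"))   # hypothetical location
dataset = tf.data.TFRecordDataset(tfr_files[:1], compression_type="ZLIB")

for raw in dataset.take(1):
    example = tf.train.Example()
    example.ParseFromString(raw.numpy())
    for name, feature in example.features.feature.items():
        if feature.bytes_list.value:
            print(name, len(feature.bytes_list.value[0]), "bytes")
        else:
            print(name, len(feature.float_list.value) or len(feature.int64_list.value), "values")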

davek44 commented 1 year ago

Hi, this indicates a mismatch between the sequence lengths, data resolution, and model pooling. If you send your dataset's statistics.json file and your model's parameters json file, I can help debug.
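In the same spirit, here is a rough local consistency check between the two files. The JSON field names (seq_length, pool_width, crop_bp, diagonal_offset, target_length) and the paths are assumptions about what the Akita tutorial writes, so adjust them to match your files.

# Rough consistency check between the dataset statistics and the model params.
# Field names and paths are assumptions about the Akita tutorial's output.
import json

with open("data/1m/statistics.json") as f:        # hypothetical path
    stats = json.load(f)
with open("data/1m/params.json") as f:            # hypothetical path
    params = json.load(f)

seq_length = stats["seq_length"]
pool_width = stats["pool_width"]                  # 2048 in the summary above
crop_bp = stats.get("crop_bp", 0)
diag_offset = stats.get("diagonal_offset", 2)

bins = seq_length // pool_width                   # 1D bins after pooling (512 above)
cropped = bins - 2 * (crop_bp // pool_width)      # bins left after cropping (448 above)
expected_triu = (cropped - diag_offset) * (cropped - diag_offset + 1) // 2

print("seq_length in params.json  :", params["model"]["seq_length"])
print("seq_length in statistics   :", seq_length)
print("target_length in statistics:", stats.get("target_length"))
print("expected upper-tri length  :", expected_triu)   # 99681 for the model above

If the data-side seq_length or the computed upper-triangle length disagrees with what the model summary reports, that points at the mismatch described here.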

fokxon commented 1 year ago

Thank you for your reply. I can't access the files right now, but I think they are the same as what the tutorial generates, because I didn't change anything in it. I can send you mine in a few days if you want.

davek44 commented 1 year ago

Hi, I tracked down the bug. Pull the latest from master branch and rerun the notebook from the beginning, including the dataset generation.

lbw124765283 commented 1 year ago

Hello, I also encountered this problem. How should I debug it?

davek44 commented 1 year ago

Hi, can you share some details about how you're running the script and the error output that you see? I thought I fixed this bug.

lbw124765283 commented 1 year ago

Thank you. Here is the command I ran, followed by the error log. Sincerely, Wen

python basenji_train.py -k -o ./data/1m/train_out/ ./data/1m/params_tutorial.json ./data/1m/

2023-03-25 12:57:32.355071: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-25 12:57:32.571997: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-03-25 12:57:32.575328: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2023-03-25 12:57:32.575342: I tensorflow/compiler/xla/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2023-03-25 12:57:33.961203: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-03-25 12:57:33.961325: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-03-25 12:57:33.961344: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2023-03-25 12:57:35.935947: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2023-03-25 12:57:35.936030: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublas.so.11'; dlerror: libcublas.so.11: cannot open shared object file: No such file or directory
2023-03-25 12:57:35.936081: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublasLt.so.11'; dlerror: libcublasLt.so.11: cannot open shared object file: No such file or directory
2023-03-25 12:57:35.936120: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcufft.so.10'; dlerror: libcufft.so.10: cannot open shared object file: No such file or directory
2023-03-25 12:57:35.991712: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusparse.so.11'; dlerror: libcusparse.so.11: cannot open shared object file: No such file or directory
2023-03-25 12:57:35.991776: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory
2023-03-25 12:57:35.991802: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1934] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU.
Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2023-03-25 12:57:35.992460: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING:tensorflow:From /data/user/liangbw/anaconda3/envs/basenji/lib/python3.8/site-packages/tensorflow/python/autograph/pyct/static_analysis/liveness.py:83: Analyzer.lamba_check (from tensorflow.python.autograph.pyct.static_analysis.liveness) is deprecated and will be removed after 2023-09-23.
Instructions for updating:
Lambda fuctions will be no more assumed to be used in the statement where they are used, or at least in the same block. https://github.com/tensorflow/tensorflow/issues/56089
Model: "model_1"


[Keras model summary omitted here; it is identical to the model summary in the original post above (Total params: 751,506 / Trainable params: 746,002 / Non-trainable params: 5,504).]


None
model_strides [2048]
target_lengths [99681]
target_crops [-49585]
<basenji.seqnn.SeqNN object at 0x7f4c3e99bd90>
Epoch 1/10000
Traceback (most recent call last):
  File "basenji_train.py", line 183, in <module>
    main()
  File "basenji_train.py", line 172, in main
    seqnn_trainer.fit_keras(seqnn_model)
  File "/data/user/liangbw/code_L/basenji_proj/basenji/trainer.py", line 139, in fit_keras
    seqnn_model.model.fit(
  File "/data/user/liangbw/anaconda3/envs/basenji/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/data/user/liangbw/anaconda3/envs/basenji/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 52, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:

Input to reshape is a tensor with 498405 values, but the requested shape has 199362
     [[{{node Reshape}}]]
     [[IteratorGetNext]] [Op:__inference_train_function_22364]
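For what it's worth, the two shape counts in this error both factor cleanly by the model's output length of 99,681 bins: 498,405 = 5 x 99,681 and 199,362 = 2 x 99,681, which would be consistent with the TFRecord targets carrying five tracks per bin while the params file builds a two-target head. A minimal sketch of that arithmetic follows; reading the factors as "number of target tracks" is an assumption.

# Sketch of the arithmetic behind the reshape error above; interpreting
# the factors as "number of target tracks" is an assumption.
values_in_record = 498405   # values found in the TFRecord target tensor
values_requested = 199362   # values the model-side reshape asked for
output_bins = 99681         # target_lengths reported by the model summary

print("tracks stored in data:", values_in_record // output_bins)   # -> 5
print("tracks model expects :", values_requested // output_bins)   # -> 2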

davek44 commented 1 year ago

I can't reproduce the error. Can you make sure you've pulled the latest code from master and cleared out all of the data so that you're starting from scratch?
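One way to be sure the training run is actually importing the freshly pulled checkout rather than an older install is sketched below; the git call assumes basenji was installed in editable mode ("pip install -e .") from the cloned repo.

# Confirm which basenji checkout the training script is actually importing.
# If the package lives in site-packages instead of the cloned repo,
# the rev-parse call below will fail and print the git error instead.
import os
import subprocess

import basenji

package_dir = os.path.dirname(basenji.__file__)
repo_dir = os.path.dirname(package_dir)
print("importing basenji from:", package_dir)

result = subprocess.run(
    ["git", "-C", repo_dir, "rev-parse", "HEAD"],
    capture_output=True, text=True,
)
print("checkout commit:", result.stdout.strip() or result.stderr.strip())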

lbw124765283 commented 1 year ago

Dear Prof. David Kelley, I am sorry, but when I try it again with the latest code from GitHub, I still get this error. I'm unfamiliar with TensorFlow, but it feels like a network input dimension error, and I wonder whether the data being used has been updated, etc. Thanks!

Yours Wen. L.


davek44 commented 1 year ago

Hmm, that's puzzling. What version of TensorFlow are you using?
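A quick way to capture the relevant versions for the report is shown below; the keras attribute lookup is guarded because it is not exposed in every TensorFlow release.

# Print the versions in play for this environment.
import sys
import tensorflow as tf

print("python    :", sys.version.split()[0])
print("tensorflow:", tf.__version__)
print("keras     :", getattr(tf.keras, "__version__", "n/a"))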