kpe / bert-for-tf2

A Keras TensorFlow 2.0 implementation of BERT, ALBERT and adapter-BERT.
https://github.com/kpe/bert-for-tf2
MIT License
803 stars 193 forks

Activation after bert-layer differs #92

Open Alok-Ranjan23 opened 2 years ago

Alok-Ranjan23 commented 2 years ago

I am using the TF-2.5-vanilla Conda environment, with TensorFlow 2.5 installed via `pip install tensorflow==2.5.0`, and I installed your BERT with `pip install bert-for-tf2`. I wrote the following test code to check the activation after the BERT layer. Please check this code snippet:

```python
import bert
import numpy as np
import tensorflow as tf

# wget --quiet https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-12_H-768_A-12.zip and unzip
model_dir = "./uncased_L-12_H-768_A-12"

bert_params = bert.params_from_pretrained_ckpt(model_dir)
l_bert = bert.BertModelLayer.from_params(bert_params, name="bert")

# Pass 1
np.random.seed(171)
input_ids = tf.Variable(np.random.randint(low=0, high=30522, size=(32, 384)))
np.save('input.npy', input_ids)

output = l_bert(input_ids)
print(type(output))
print()
print(output.shape)
np.save('file1.npy', output)
```

I ran the snippet above twice in the same TF-2.5-vanilla Conda environment. The first run generates the input with seed 171 and stores it in input.npy; the second run generates the input with the same seed (171) and stores it in input2.npy. Comparing these arrays shows input.npy == input2.npy.

The first run stores the output of l_bert on input_ids in file1.npy; the second run, using the same input_ids, stores the output in file2.npy. Comparing these arrays shows file1.npy != file2.npy.
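The comparison step itself isn't shown in the snippet; a minimal sketch of how the two saved arrays can be checked (the seeded-input equality is reproducible with NumPy alone, using the same seed and shapes as above):

```python
import numpy as np

# Re-derive the inputs exactly as the snippet does: seeding with 171 must
# produce bit-identical integer arrays on both passes.
np.random.seed(171)
a = np.random.randint(low=0, high=30522, size=(32, 384))
np.random.seed(171)
b = np.random.randint(low=0, high=30522, size=(32, 384))
inputs_match = np.array_equal(a, b)
print(inputs_match)  # identical inputs, as reported above

# For the float32 model outputs, it helps to compare with a tolerance as
# well as exactly, to see whether the gap is rounding noise or a real
# difference. (file1.npy / file2.npy are the arrays saved by the two runs.)
# out1, out2 = np.load('file1.npy'), np.load('file2.npy')
# print(np.array_equal(out1, out2), np.abs(out1 - out2).max())
```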

Why does l_bert produce different outputs for the same input_ids?
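Not a confirmed answer from this thread, but one common cause worth ruling out: Keras layers that contain dropout are stochastic unless called in inference mode. Whether that is what happens inside BertModelLayer here is an assumption; the pattern can be seen with a plain `tf.keras.layers.Dropout`, and since l_bert is a Keras layer it should accept the standard `training` argument, so `l_bert(input_ids, training=False)` is one way to test it:

```python
import numpy as np
import tensorflow as tf

# Illustration with a plain Keras Dropout layer (not bert-for-tf2 itself):
# in training mode each call draws a fresh random mask, so repeated calls
# on the same input differ; with training=False the layer is the identity.
drop = tf.keras.layers.Dropout(rate=0.5)
x = tf.ones((32, 8))

t1 = drop(x, training=True).numpy()
t2 = drop(x, training=True).numpy()
i1 = drop(x, training=False).numpy()
i2 = drop(x, training=False).numpy()

print(np.array_equal(t1, t2))  # training mode: almost surely False
print(np.array_equal(i1, i2))  # inference mode: True
```

If `l_bert(input_ids, training=False)` makes file1.npy and file2.npy match, the nondeterminism is just dropout being active by default during the calls.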