macanv / BERT-BiLSTM-CRF-NER

TensorFlow solution for the NER task using a BiLSTM-CRF model with Google BERT fine-tuning, plus private server services
https://github.com/macanv/BERT-BiLSMT-CRF-NER

Why does training OOM no matter what batch_size I set? #343

Open ChChwang opened 4 years ago

ChChwang commented 4 years ago

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[12928,768] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info
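For context, the tensor named in the traceback is not itself huge. A rough back-of-the-envelope (assuming float32, and assuming 768 is BERT-base's hidden size and the leading 12928 is batch_size * max_seq_length after the token dimension is flattened; the thread does not show the run's flags) suggests why shrinking the batch or sequence length still matters:

```python
# The log reports OOM while allocating a float32 tensor of shape [12928, 768].
# 768 is BERT-base's hidden size; 12928 is presumably batch_size * max_seq_length
# (an assumption -- the flags for this run are not shown in the thread).
rows, cols = 12928, 768
bytes_per_float32 = 4

mem_mb = rows * cols * bytes_per_float32 / 2**20
print(f"single activation tensor: {mem_mb:.1f} MB")  # ~37.9 MB

# One such activation is small. OOM happens because many activations of
# roughly this size are kept alive per transformer layer for backprop, so
# peak memory scales roughly linearly with batch_size * max_seq_length;
# shrinking either one lowers the peak.
```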

xdbc commented 4 years ago

How can this be solved? Why does it just shut down like that?

ChChwang commented 4 years ago

> How can this be solved? Why does it just shut down like that?

Changing batch_size in the .py file has no effect; pass -batch_size 16 on the command line when running bert-base-ner-train and it works.
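A sketch of the command line this fix describes. Only -batch_size 16 comes from this thread; the other flag names follow the repository README's usage example, the placeholder paths in braces are hypothetical, and names may differ across versions, so confirm with the tool's own help output:

```shell
# Hedged sketch: pass batch_size on the command line instead of editing the .py file.
# Replace the {placeholder} paths with your own; flag names other than -batch_size
# are taken from the repo README and should be double-checked for your version.
bert-base-ner-train \
    -data_dir {your_ner_dataset_dir} \
    -output_dir {training_output_dir} \
    -init_checkpoint {google_bert_model_dir}/bert_model.ckpt \
    -bert_config_file {google_bert_model_dir}/bert_config.json \
    -vocab_file {google_bert_model_dir}/vocab.txt \
    -batch_size 16
```

Command-line arguments typically override the defaults baked into the argparse setup, which would explain why editing the .py file alone appears to have no effect.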

wqx9826 commented 3 years ago

> How can this be solved? Why does it just shut down like that?
>
> Changing batch_size in the .py file has no effect; pass -batch_size 16 on the command line when running bert-base-ner-train and it works.

How do I change it inside bert-base-ner-train, though? I am using a conda virtual environment with TF 1.12. I followed the author's steps exactly, but the command errors out and bert-base-ner-train simply will not run.
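A few generic environment checks that may help pin down why the console script will not start inside a conda env (these are standard diagnostics, not steps from this repo; the env name is a placeholder, and the package name bert-base matches the repo's install instructions):

```shell
# Verify the console script resolves inside the activated env, not elsewhere.
conda activate my_env            # replace my_env with your env's name
which bert-base-ner-train        # should point into the env's bin/ directory

# Confirm the installed package and TensorFlow version (the repo targets TF 1.12).
pip show bert-base
python -c "import tensorflow as tf; print(tf.__version__)"

# Print the actual flag list instead of guessing parameter names.
bert-base-ner-train -h
```

If `which` finds nothing, the package was likely installed into a different environment than the one that is active.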