Closed: Sero8139 closed this issue 5 years ago
Hi, no mistake, just ugly behavior, see: https://github.com/MichalBusta/DeepTextSpotter/issues/67#issuecomment-430993331
@MichalBusta, thanks for your reply. I have seen #67 and tried changing the log level, but I still have a problem.
Here is some more information.
I train the model with the following train.py command:
python train.py -data_dir / -train_list data/train.txt -valid_list data/test.txt -batch_size 1
My list file looks like this (one image path per line):
/opt/DeepTextSpotter/data/test/_i_0_j_0_00401.jpg
/opt/DeepTextSpotter/data/test/_i_0_j_0_00403.jpg
/opt/DeepTextSpotter/data/test/_i_0_j_0_00407.jpg
/opt/DeepTextSpotter/data/test/_i_0_j_0_00409.jpg
/opt/DeepTextSpotter/data/test/_i_0_j_0_00411.jpg
/opt/DeepTextSpotter/data/test/_i_0_j_0_00413.jpg
My annotation file follows the format `cls_id, cx, cy, width, height, angle, transcription`, for example:
-1 0.17445838815789472 0.08311944901315789 0.0685370767088 0.0485656452315 0.0379568530092 30A
I have seen #53 and used that method to convert my data.
However, I always encounter: libgomp: Out of memory allocating 328111969088 bytes
I observe that it always happens when the recognition phase starts to run.
I have been struggling with this problem for a long time.
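Before debugging further, it may help to sanity-check each annotation line. The sketch below is a hypothetical parser (the names `parse_annotation` and the field order are assumptions taken from the example line above, not DeepTextSpotter's actual loader); out-of-range box values or a malformed line can be one cause of absurd allocation sizes downstream.

```python
# Hypothetical sanity check for one annotation line, assuming the format
# shown above: cls_id cx cy width height angle transcription
def parse_annotation(line):
    parts = line.split()
    cls_id = int(float(parts[0]))  # cast defensively: some files store "-1.0"
    cx, cy, w, h, angle = (float(v) for v in parts[1:6])
    text = parts[6] if len(parts) > 6 else ""
    # normalized coordinates outside [0, 1] or non-positive sizes
    # often indicate a bad conversion step
    ok = 0.0 <= cx <= 1.0 and 0.0 <= cy <= 1.0 and w > 0 and h > 0
    return cls_id, (cx, cy, w, h, angle), text, ok

line = "-1 0.17445838815789472 0.08311944901315789 0.0685370767088 0.0485656452315 0.0379568530092 30A"
print(parse_annotation(line))
```

Running this over the whole annotation file and printing any line where `ok` is False would quickly show whether the conversion from #53 produced valid geometry.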
Hi, @MichalBusta
I tried to train the model with my own data, and from the log output I found that gt_labels has a float type.
It always occurs at the first test sample.
Could you give me some tips? I don't know what mistake I am making.
thx