ongcj closed this issue 5 years ago
@ongcj this warning comes from the CTC loss. You can print the loss exception inside the recognition loss. I'll fix these exceptions in a few days; please be patient.
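For context, "printing the loss exception" could look like the following minimal sketch. The function name `safe_recognition_loss` and its arguments are assumptions for illustration, not the repository's actual code; the idea is just to catch and log the CTC error instead of crashing training.

```python
# Hypothetical sketch: wrap the recognition (CTC) loss call so that
# exceptions such as "target_lengths must be of size batch_size" are
# printed and skipped rather than aborting the training loop.
def safe_recognition_loss(ctc_loss, log_probs, targets,
                          input_lengths, target_lengths):
    try:
        return ctc_loss(log_probs, targets, input_lengths, target_lengths)
    except RuntimeError as e:
        # e.g. "target_lengths must be of size batch_size"
        print("recognition loss exception:", e)
        return None  # caller can skip this batch's recognition loss
```

Returning `None` lets the caller fall back to the detection loss alone for that batch, which matches the observation that the problem fades as the recognition branch converges.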
@ongcj After thinking it over, I don't think there is any need to worry about this exception. As the recognition model converges, the problem disappears.
I think I need to upload the pretrained ICDAR2015 model, so you can fine-tune from it.
Alright, thanks. It would be great if you could upload the pretrained ICDAR2015 model. Thanks.
@ongcj ah~
Hi @novioleo,
I'm currently still training the model with the ICDAR2015 dataset and I'm at epoch 400. I tried running eval.py and the results are very poor: TP: 858, FP: 1020, FN: 1373, precision: 0.456869, recall: 0.384581.
One example is shown below.
May I know if the poor result is due to an error in my config, or just that the number of epochs is still too small? My config is below.
{
    "name": "FOTS",
    "cuda": true,
    "gpus": [0],
    "finetune": "",
    "need_grad_backbone": true,
    "data_loader": {
        "dataset": "icdar2015",
        "data_dir": "./data/OCR/ICDAR2015",
        "batch_size": 4,
        "shuffle": true,
        "workers": 0
    },
    "validation": {
        "validation_split": 0.15,
        "shuffle": true
    },
    "lr_scheduler_type": "ExponentialLR",
    "lr_scheduler_freq": 50,
    "lr_scheduler": {
        "gamma": 0.94
    },
    "optimizer_type": "Adam",
    "optimizer": {
        "lr": 0.0001,
        "weight_decay": 1e-5
    },
    "loss": "FOTSLoss",
    "metrics": ["fots_metric"],
    "trainer": {
        "epochs": 1000,
        "save_dir": "./data/OCR/model/ICDAR2015",
        "save_freq": 1,
        "verbosity": 2,
        "monitor": "loss",
        "monitor_mode": "min"
    },
    "arch": "FOTSModel",
    "model": {
        "mode": "united",
        "scale": 512,
        "crnn": {
            "img_h": 16,
            "hidden": 1024
        },
        "keys": "custom_1"
    }
}
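As a side note, since the config is plain JSON, a quick stdlib sanity check can rule out syntax errors before training (abridged here to the fields discussed above; the full file parses the same way):

```python
import json

# Abridged copy of the config fields under discussion; json.loads will
# raise ValueError on any syntax error, so a clean parse rules that out.
config_text = '''
{
    "data_loader": {"batch_size": 4, "workers": 0},
    "optimizer": {"lr": 0.0001, "weight_decay": 1e-5},
    "trainer": {"epochs": 1000}
}
'''
config = json.loads(config_text)
print(config["data_loader"]["batch_size"])
```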
@ongcj There is no error in your config, but I haven't adapted the code to ICDAR2015... so you can't reach the results reported in the paper.
Can I close this issue now? @ongcj
Hi @novioleo, during every validation while training, it keeps saying "target_lengths must be of size batch_size". I am trying to train with a batch size of 4 on the ICDAR2015 dataset. Hope you can help me with this issue. Thanks.
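For anyone hitting the same message: CTC loss requires exactly one target length per sample in the batch, so the error fires whenever the list of transcript lengths and the batch of images get out of sync (for example, if samples with empty transcripts are dropped from the targets but not from the batch). A minimal sketch of the invariant, with a hypothetical `check_target_lengths` helper that is not part of the repository:

```python
# Hypothetical illustration of the invariant behind the error message:
# the CTC loss needs one target length for every sample in the batch.
def check_target_lengths(batch_size, target_lengths):
    if len(target_lengths) != batch_size:
        raise RuntimeError("target_lengths must be of size batch_size")
    return True

# Matching sizes pass; a dropped transcript makes the sizes diverge.
check_target_lengths(4, [5, 3, 7, 2])
```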