Boundary character value = 0.4012882678889833 | Threshold character value = 0.43128826788898333 | Threshold character upper value = 0.6012882678889833
Boundary affinity value = 0.45783336177161427 | Threshold affinity value = 0.4878333617716143 | Threshold affinity upper value = 0.6578333617716143
Scale character value = 1.3397710965632044 | Scale affinity value = 1.365808455018482
Training Dataset = ICDAR2013_ICDAR2017 | Testing Dataset = ICDAR2013
Number of parameters in the model: 20770466
Generating for iteration: 0
0%| | 0/8 [00:00<?, ?it/s]THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=383 error=11 : invalid argument
F-score: 0.7510562308524278: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:45<00:00, 4.64s/it]
Testing for iteration: 0
F-score: 0.678219963897929| Cumulative F-score: 0.7265306122448979: 100%|████████████████████████████████████████████████████████████████████████████| 8/8 [01:19<00:00, 8.78s/it]
Test Results for iteration: 0 | F-score: 0.7265306122448979 | Precision: 0.7216216216216216 | Recall: 0.7315068493150685
Fine-tuning for iteration: 0
Learning Rate Changed to 5e-05
Loading Synthetic dataset
Loaded DEBUG
0%| | 0/12500 [00:00<?, ?it/s]Traceback (most recent call last):
File "main.py", line 201, in <module>
main()
File "/root/CRAFT-Remade/env/lib/python3.6/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/root/CRAFT-Remade/env/lib/python3.6/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/root/CRAFT-Remade/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/root/CRAFT-Remade/env/lib/python3.6/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/root/CRAFT-Remade/env/lib/python3.6/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "main.py", line 101, in weak_supervision
model, optimizer, loss, accuracy = train(model, optimizer, iteration)
File "/root/CRAFT-Remade/train_weak_supervision/trainer.py", line 133, in train
output = model(image)
File "/root/CRAFT-Remade/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/root/CRAFT-Remade/env/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/root/CRAFT-Remade/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/root/CRAFT-Remade/src/craft_model.py", line 55, in forward
sources = self.basenet(x)
File "/root/CRAFT-Remade/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/root/CRAFT-Remade/src/vgg16bn.py", line 66, in forward
h = self.slice1(x)
File "/root/CRAFT-Remade/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/root/CRAFT-Remade/env/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/root/CRAFT-Remade/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/root/CRAFT-Remade/env/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 338, in forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 1.12 GiB (GPU 0; 14.73 GiB total capacity; 13.26 GiB already allocated; 675.88 MiB free; 20.96 MiB cached)
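For reference, a back-of-envelope estimate from the parameter count logged above (our assumptions, not from the repo: fp32 tensors and an Adam-style optimizer) suggests the parameters themselves account for only a few hundred MiB, so activations must be what fills the 14.73 GiB:

```python
# Back-of-envelope training-memory estimate from the parameter count in the log.
# Assumptions (ours, not from the repo): fp32 tensors and an Adam-style optimizer.
params = 20_770_466                 # "Number of parameters in the model" above
bytes_per_float = 4                 # fp32
weights = params * bytes_per_float
grads = params * bytes_per_float    # one gradient per parameter
optim_state = 2 * params * bytes_per_float  # Adam keeps two moments per parameter
total_mib = (weights + grads + optim_state) / 2**20
print(f"model-side memory ~ {total_mib:.0f} MiB")
```

If that is roughly right, the allocation failure is dominated by activations, so lowering input resolution or batch size should matter far more than shrinking the model.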
Hello,
following the README.md, we are hitting the CUDA out-of-memory failure above while running
this step on a Tesla T4 with 16 GB of VRAM.
Could you please tell us how much memory is required, and whether it is possible to lower the usage enough to still run on a 16 GB Tesla?
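In case it is useful, one thing we imagine could lower the per-step footprint is gradient accumulation: splitting each batch into smaller micro-batches and accumulating gradients before the optimizer step. A minimal sketch of the pattern (the toy model, `accum_steps`, and SGD here are hypothetical placeholders, not CRAFT-Remade code):

```python
import torch
import torch.nn as nn

# Toy stand-in for the real model (hypothetical; only to show the pattern).
model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=5e-5)
accum_steps = 4     # gradients from 4 micro-batches ~ one batch of 4x the size
micro_batch = 2

opt.zero_grad()
for _ in range(accum_steps):
    x = torch.randn(micro_batch, 10)
    y = torch.randint(0, 2, (micro_batch,))
    loss = nn.functional.cross_entropy(model(x), y) / accum_steps
    loss.backward()  # accumulates into .grad; only one micro-batch of activations is alive
opt.step()
```

Peak activation memory then scales with the micro-batch, not the effective batch, at the cost of more forward/backward passes per update. Would that be a viable workaround here, or is the resolution of the input crops the limiting factor?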
Thanks a lot!