Closed: huzecong closed 3 years ago
Merging #320 into master will increase coverage by 0.14%. The diff coverage is 87.64%.
@@ Coverage Diff @@
## master #320 +/- ##
==========================================
+ Coverage 79.93% 80.08% +0.14%
==========================================
Files 133 134 +1
Lines 11143 11195 +52
==========================================
+ Hits 8907 8965 +58
+ Misses 2236 2230 -6
Impacted Files | Coverage Δ | |
---|---|---|
texar/torch/run/executor.py | 79.38% <82.75%> (+0.29%) | :arrow_up: |
texar/torch/run/metric/base_metric.py | 88.50% <83.33%> (+11.12%) | :arrow_up: |
texar/torch/run/metric/__init__.py | 100.00% <100.00%> (ø) | |
texar/torch/run/metric/io.py | 100.00% <100.00%> (ø) | |
Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Last update a2f1186...6cd0a31.
File handling

Now in `Executor`, files are only opened in `train` and `test`, and are closed afterwards. Originally, files were opened at construction time and closed only when the program exited, via an `atexit` hook. The hook kept the `Executor` object alive and caused a memory leak.

Here we keep that change but handle the logic more carefully, covering a few corner cases. We also open files in `write_log`, which allows writing logs after construction and before training; the original implementation silently did nothing in that case, which was confusing for users.

Metrics not evaluated

When `Executor` uses a `SimpleMetric` that is not logged, the metric is simply never evaluated. This is problematic for metrics with side effects, e.g. `FileWriterMetric` used in the BERT examples. We now explicitly evaluate all metrics at the end of each epoch, and at the end of validation and testing.
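The file-handling change above can be sketched as follows. This is a minimal illustration, not the actual texar-torch `Executor`; the names `log_destination`, `_open_files`, and `_close_files` are hypothetical. The point is that the file is opened lazily (in `train` or on the first `write_log`) and closed when training finishes, instead of being opened at construction and closed by an `atexit` hook that pins the object in memory.

```python
class Executor:
    """Sketch of lazy log-file handling (hypothetical, simplified)."""

    def __init__(self, log_destination):
        # Only remember the destination; do NOT open it here.
        # Opening at construction and registering an atexit close hook
        # would keep `self` alive until interpreter exit.
        self.log_destination = log_destination
        self._log_file = None

    def _open_files(self):
        if self._log_file is None:
            self._log_file = open(self.log_destination, "a")

    def _close_files(self):
        if self._log_file is not None:
            self._log_file.close()
            self._log_file = None

    def write_log(self, message):
        # Open on demand, so logging also works after construction
        # and before train() is ever called.
        self._open_files()
        self._log_file.write(message + "\n")
        self._log_file.flush()

    def train(self):
        self._open_files()
        try:
            self.write_log("training...")  # stands in for the training loop
        finally:
            # Close when the entry point finishes, not at program exit.
            self._close_files()
```

Because `write_log` opens the file itself, a log written before `train` is no longer dropped, and after `train` returns the file handle is released.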
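The metric fix can be illustrated with the sketch below. The `Metric`, `FileWriterMetric`, and `end_of_epoch` names here are hypothetical stand-ins, not the actual texar-torch classes; the sketch only shows the behavioral change: every metric's value is computed at epoch end, so side-effectful metrics run even when they do not appear in the logged output.

```python
class Metric:
    """Hypothetical stand-in for a simple accumulating metric."""

    def __init__(self, name):
        self.name = name
        self._buffer = []

    def add(self, predictions):
        self._buffer.extend(predictions)

    def value(self):
        # Subclasses compute the metric here; computing it may have
        # side effects (e.g. writing predictions to a file).
        raise NotImplementedError


class FileWriterMetric(Metric):
    """Side-effectful metric: computing its value 'writes' predictions."""

    def __init__(self, name):
        super().__init__(name)
        self.written = []  # stands in for writing to an output file

    def value(self):
        self.written = list(self._buffer)
        return len(self.written)


def end_of_epoch(metrics, logged_names):
    # Old behavior: only metrics referenced in the log format were
    # evaluated, so an unlogged FileWriterMetric never wrote anything.
    # New behavior: evaluate *every* metric so side effects always run,
    # then report only the logged ones.
    values = {m.name: m.value() for m in metrics}
    return {name: values[name] for name in logged_names}
```

With this change, a `FileWriterMetric` that is never logged still produces its output file at the end of each epoch, validation, and test run.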