Closed by OTDE 4 years ago
Merging #23 into master will decrease coverage by 1.96%. The diff coverage is 96.68%.
```diff
@@            Coverage Diff             @@
##           master      #23      +/-  ##
=========================================
- Coverage   98.96%      97%    -1.97%
=========================================
  Files           3        3
  Lines         194      200       +6
=========================================
+ Hits          192      194       +2
- Misses          2        6       +4
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/EDF.jl | 100% <100%> (ø) | :arrow_up: |
| src/read.jl | 95.53% <94.59%> (-2.76%) | :arrow_down: |
| src/write.jl | 98.59% <98.52%> (-1.41%) | :arrow_down: |
Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update e3a341b...7770b60.
OOF. Remembering how much the me of two months ago needed to reread YASGuide. These'll be up in a sec.
Do you know whether this has an impact on performance? E.g. `@benchmark EDF.read("file.edf")`
I don't, unfortunately. I could set something up in the next tagged release so that we have a way to check this, if that works!
Should be super quick and simple to do. Check out master and run:

```julia
using BenchmarkTools
using EDF
@benchmark EDF.read("some_edf_file.edf")
```

Then check out this branch, run the same thing, and report both results. You could benchmark against one or more of the EDF files checked in here as test cases.
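For a less eyeball-driven comparison, BenchmarkTools can persist a baseline run and classify the change itself. A minimal sketch of that workflow, using `sum(rand(10^4))` as a stand-in workload so the example is self-contained (on the real PR you'd benchmark `EDF.read` on both branches):

```julia
using BenchmarkTools

# On master: capture a baseline and persist it across the branch checkout.
baseline = @benchmark sum(rand(10^4))
BenchmarkTools.save("baseline.json", baseline)

# On the PR branch (possibly in a fresh session): reload and re-measure.
baseline = BenchmarkTools.load("baseline.json")[1]
candidate = @benchmark sum(rand(10^4))

# judge() compares the two estimates and classifies the change as
# :regression, :improvement, or :invariant relative to a noise tolerance.
verdict = judge(median(candidate), median(baseline))
println(verdict)
```

This avoids manually eyeballing the two `Trial` printouts, though the numbers below tell the story well enough on their own.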
Old:

```julia
julia> @benchmark EDF.read("test/data/test.edf")
BenchmarkTools.Trial:
  memory estimate:  3.33 MiB
  allocs estimate:  8071
  --------------
  minimum time:     937.574 μs (0.00% GC)
  median time:      962.879 μs (0.00% GC)
  mean time:        1.068 ms (7.88% GC)
  maximum time:     5.179 ms (81.16% GC)
  --------------
  samples:          4677
  evals/sample:     1
```
New:

```julia
julia> @benchmark EDF.read("test/data/test.edf")
BenchmarkTools.Trial:
  memory estimate:  3.60 MiB
  allocs estimate:  15404
  --------------
  minimum time:     1.347 ms (0.00% GC)
  median time:      1.413 ms (0.00% GC)
  mean time:        1.618 ms (9.79% GC)
  maximum time:     5.273 ms (65.65% GC)
  --------------
  samples:          3087
  evals/sample:     1
```
Looks like performance is being affected: the median time goes from ~963 μs to ~1.41 ms (roughly 47% slower), and allocations nearly double (8071 → 15404).
After latest test:

```julia
julia> @benchmark EDF.read("test/data/test.edf")
BenchmarkTools.Trial:
  memory estimate:  3.33 MiB
  allocs estimate:  6819
  --------------
  minimum time:     884.232 μs (0.00% GC)
  median time:      913.414 μs (0.00% GC)
  mean time:        1.014 ms (8.20% GC)
  maximum time:     2.654 ms (54.53% GC)
  --------------
  samples:          4924
  evals/sample:     1
```
Okay, this ended up being a bigger PR than I expected -- would love to hear thoughts on any suggested changes or things I'd need to walk back here.
Extreme makeover: EDF Edition

- EDF.jl
- `Pair{RecordAnnotation, TimestampAnnotation}`, to better reflect that the first annotation in each data record is both required and structured differently
- EDF.write
- `Header` renamed to `FileHeader`, as multiple header records exist in an EDF file
- I/O
- Tests
There's a lot here, so I decided this would probably be a good stopping point before adding anything else (and, to be honest, I probably should have requested input a bit sooner). I'm hoping this PR will make transitioning to an Mmap-based solution much easier, which in turn could make future EDF endeavors a bit friendlier on memory.
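For context on why Mmap would help with memory: memory-mapping exposes on-disk sample data as an ordinary array whose pages are faulted in lazily, instead of copying everything into RAM up front. A tiny illustrative sketch (not EDF.jl's implementation; the temp file just stands in for an EDF record's raw `Int16` samples):

```julia
using Mmap

path, io = mktemp()                      # throwaway file standing in for an EDF record
write(io, collect(Int16(1):Int16(1000))) # raw little-endian Int16 samples
close(io)

open(path, "r") do f
    # A lazily-paged Vector{Int16} view over the file's bytes; no upfront copy.
    samples = Mmap.mmap(f, Vector{Int16}, 1000)
    println(sum(samples))                # pages are faulted in only as they're read
end
```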
Things that still need to be added, should I get the go-ahead: