Closed · kahaaga closed this 2 years ago
Merging #141 (1d9f52c) into main (5667d64) will decrease coverage by 0.20%. The diff coverage is 71.42%.
```diff
@@            Coverage Diff             @@
##             main     #141      +/-   ##
==========================================
- Coverage   80.38%   80.17%   -0.21%
==========================================
  Files          35       35
  Lines         785      792       +7
==========================================
+ Hits          631      635       +4
- Misses        154      157       +3
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/encoding/outcomes.jl | 0.00% <0.00%> (ø) | |
| src/encoding/utils.jl | 100.00% <ø> (ø) | |
| src/probabilities_estimators/count_occurences.jl | 44.44% <0.00%> (-5.56%) | :arrow_down: |
| ...ties_estimators/histograms/visitation_frequency.jl | 26.66% <0.00%> (-13.34%) | :arrow_down: |
| ...imators/permutation_ordinal/spatial_permutation.jl | 91.30% <ø> (ø) | |
| ...abilities_estimators/timescales/wavelet_overlap.jl | 90.00% <ø> (ø) | |
| src/probabilities.jl | 78.26% <50.00%> (ø) | |
| ...babilities_estimators/timescales/power_spectrum.jl | 75.00% <50.00%> (ø) | |
| ...tors/permutation_ordinal/SymbolicAmplitudeAware.jl | 94.73% <66.66%> (ø) | |
| ...permutation_ordinal/SymbolicWeightedPermutation.jl | 94.44% <66.66%> (ø) | |
| ... and 12 more | | |
Yes for `outcomes` instead of `events`.
For `Discretization` I would lean towards no. I prefer to use `Symbolization` instead, or perhaps, even better, `Encoding`? The problem I have with discretization is that it leads to the expectation that the dimensionality of the data remains the same. However, this is not at all the case with e.g. the ordinal patterns. Furthermore, I guess the `BinEncoders` would become subtypes of this as well?
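For concreteness, a minimal hand-rolled sketch of that dimensionality point (illustrative code only, not the package's implementation):

```julia
# An ordinal pattern maps each length-m window to a single integer in
# 1:factorial(m), so the dimensionality of the input does not survive encoding.
function ordinal_symbol(w::AbstractVector)
    p = sortperm(w)   # permutation that sorts the window: its ordinal pattern
    m = length(p)
    s = 0             # Lehmer-code rank of the permutation
    for i in 1:m-1
        s += count(p[j] < p[i] for j in i+1:m) * factorial(m - i)
    end
    return s + 1      # an integer in 1:factorial(m)
end

x = rand(100)         # scalar time series
m = 3                 # each window is 3-dimensional...
symbols = [ordinal_symbol(@view x[i:i+m-1]) for i in 1:length(x)-m+1]
# ...but `symbols` is a flat vector of integers in 1:6.
```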
Yeah, after seeing the changes, `Discretization` is certainly the one thing that doesn't sit well with me. Further problems with it: what if my input data are always discrete, like integers, or words? Then surely it would be confusing to "discretize" them more. I'm happy with either `Symbolization` or `Encoding`.
I think I'll go for `Encoding` then. It's shorter.
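For reference, a minimal sketch of the resulting hierarchy; the concrete type names are taken from this thread and may not match what was eventually merged:

```julia
# Sketch of the naming settled on here, not the final API.
abstract type Encoding end

# Hypothetical interface function for this sketch: every concrete encoding
# maps raw data to a discrete alphabet of symbols.
function encode end

struct OrdinalPattern <: Encoding end          # state vectors -> permutation symbols
struct GaussianSymbolization <: Encoding end   # values -> symbols via a Gaussian CDF

# The point raised above: bin-based encoders would slot into the same tree.
# struct BinEncoder <: Encoding end
```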
@Datseris Do you have any idea why the documentation build fails? It seems to be related to ChaosTools, but I haven't been able to find out exactly what causes the incompatibility. Could it have something to do with the version lock in DynamicalSystems?
We have to remove DynamicalSystems usage from the docs due to the version lock. If we want to use `trajectory`, we can use `DynamicalSystemsBase`. If we want something from ChaosTools, then use that. I'm still in the process of thinking about how it will all work in the unified docs, but for now each package will have its own docs. The downside of this is that you cannot use `DynamicalSystems` due to circular dependencies when major version increments occur.
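A hedged sketch of what a docs example then looks like, assuming the v2-era API and defining the system inline so that only `DynamicalSystemsBase` is needed (the Hénon map here is just a stand-in):

```julia
using DynamicalSystemsBase, StaticArrays

# Define the system inline instead of pulling a ready-made one from the
# umbrella package; this keeps the docs off the DynamicalSystems version lock.
henon_rule(x, p, n) = SVector(1.0 - p[1]*x[1]^2 + x[2], p[2]*x[1])
ds = DiscreteDynamicalSystem(henon_rule, [0.0, 0.0], [1.4, 0.3])
X = trajectory(ds, 10_000)   # `trajectory` comes from DynamicalSystemsBase
```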
fantastic
A suggestion for addressing #140. Also makes the documentation regarding `probabilities_and_outcomes` more consistent (see the sketch after the summary). In summary:

- `Discretization` as a supertype for `OrdinalPattern` and `GaussianSymbolization`, because that's what these are doing: they take some data and discretize it in some manner.
- `ProbabilitiesEstimator`.
- `ProbabilitiesEstimator`.
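For reference, a hedged sketch of the call being made consistent; `CountOccurrences` matches the coverage report above, but the exact argument order of `probabilities_and_outcomes` at this point in the package's history is an assumption:

```julia
using Entropies

x = rand(1:4, 10_000)      # discrete input, as in the discussion above
est = CountOccurrences()
probs, outcomes = probabilities_and_outcomes(x, est)
# probs[i] is the relative frequency with which outcomes[i] occurs in x.
```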