Open kellertuer opened 1 year ago
I already see one small disadvantage of this, namely that all functions on all manifolds still have to be somehow intersected with what is actually implemented, similar to the features map I used in my last approach.
I will think about that and will maybe first implement a nice “tell me what this manifold has implemented” function in another PR (I started that on an old PR already).
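As a rough illustration of what such a “what does this manifold implement” query could look like: the sketch below relies on Julia's method reflection via `hasmethod`. The helper name `implemented_functions` and the candidate list are illustrative only (not from the PR), and generic fallbacks defined on `AbstractManifold` in ManifoldsBase.jl would make a plain `hasmethod` check imprecise in practice, which is part of why this needs its own design.

```julia
using Manifolds

# Hypothetical helper: list which of a set of candidate functions have a
# method whose first argument matches the manifold's type.
# Caveat: generic fallbacks defined for AbstractManifold also match here,
# so a real implementation would need to be more careful.
function implemented_functions(M; candidates=(exp, log, distance, inner))
    return [f for f in candidates if hasmethod(f, Tuple{typeof(M),Vararg{Any}})]
end

implemented_functions(Sphere(2))
```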
Attention: Patch coverage is 0% with 136 lines in your changes missing coverage. Please review.

Project coverage is 98.35%. Comparing base (32744d6) to head (6a155f1).

:exclamation: Current head 6a155f1 differs from pull request most recent head 45e0051. Consider uploading reports for the commit 45e0051 to get more accurate results.
Files | Patch % | Lines |
---|---|---|
src/utils/features.jl | 0.00% | 136 Missing :warning: |
:umbrella: View full report in Codecov by Sentry.
I'm somewhat afraid that this PR may again aim to achieve too much to be finished in a finite amount of time :wink: .
This PR just aims to be a proof of concept, keeping the existing tests as legacy.
I hope the steps above are of finite length, and the new test suite should hopefully just cover the exponential map on, say, the sphere.
I'm not sure if this is what you want to address here, but would you agree that running the whole test suite takes a lot of time? (For what it's worth, and this may not be a fair comparison because of differences between Python/Julia, I just checked that, for the numpy project (a massive project with 500k+ lines of code, a blend of python and C), the full test suite (20k tests) runs in five minutes on a laptop.)
Yes, the tests take much more time than we would like. I'm not sure if you included compilation time for numpy as well? Most of the time spent in our test suite is actually compilation.
I actually very rarely run the full test suite of Manifolds.jl locally. I usually just run a single test file and rely on CI to catch other issues. We could surely just run fewer tests but figuring out which ones are safe to remove isn't easy. Sadly, this PR is unlikely to address this issue and I wouldn't be surprised if it just made the performance worse.
> I'm not sure if this is what you want to address here, but would you agree that running the whole test suite takes a lot of time?
Yes, the comparison is not that fair, since NumPy also has slightly more manpower than just 3 scientists spending their evenings on a project. Yes, we overtest a bit (some lines are covered more than once, but with different types), but we do reach 99.5% code coverage. I have ideas to rework that, but that would be quite some work; I started on a PR but did not yet have time to get it properly going (and after that start I expect reworking most tests to be about a year of work; feel free to join in then).
edit: NumPy seems to have a code coverage of about 80% (cf. https://app.codecov.io/gh/numpy/numpy), which is reasonable for such a large project, but we are then slightly beyond that.
This is not stalled, I am just looking for more motivation to continue. Sure, having better organised / modular / faster tests is great, it's just that with our code base there are a lot of tests to rework ;)
I just continued on this but wanted to get a bit of an overview (again) first. With Aqua I noticed we have quite a bit of (our own) type piracy from ManifoldsBase, that is, constructors and functions we could easily move over there to resolve that.
If you @mateuszbaran have a bit of time, maybe you could check them out :)
I've taken a look at ManifoldsBase.jl type piracy and it is a tough task. There are a few "most likely move to ManifoldsBase.jl" cases, but the majority are either related to statistics or StaticArrays (maybe they should be in package extensions of ManifoldsBase.jl) or require some nontrivial considerations.
Ah, then I did not look carefully enough and mainly saw the power and metric manifold cases, which should be moved.
Yes, the others could maybe be moved to a package extension of ManifoldsBase then :)
I narrowed down the list of types / functions to check for piracy; we have one unbound parameter in the Tucker manifold that I currently am not able to resolve. The last error, on compat bounds, is more about checking versions of extra packages. So we are nearly running Aqua correctly already.
The unbound parameter can be fixed; here is how the method signature should look:

```julia
function TuckerPoint(
    core::AbstractArray{T,D},
    factors::Vararg{<:AbstractMatrix{T},D},
) where {T,D}
```
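For reference, a minimal way to re-check this is Aqua.jl's dedicated unbound-arguments test (a sketch; assuming a recent Aqua version with this API), which should pass once the signature above is in place:

```julia
using Aqua
using Manifolds

# Reports method signatures with type parameters that cannot be deduced
# from the positional argument types (like the original TuckerPoint method).
Aqua.test_unbound_args(Manifolds)
```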
By the way, maybe we should add Aqua tests in a separate PR to not make this PR too large?
We could keep them here; I wanted to get a bit of an overview, and for that this was a very good thing to do.
We will still run the old tests after this PR anyway; I was hoping to finish the concept and maybe one manifold in this PR, with at least a few functions (which I would then already remove from legacy).
OK, let's see how it works out.
Besides the error in the ambiguities check that I completely do not understand (`Name Manifolds.GroupManifold is resolved to a different object.`), Aqua should be fine, the Readme is updated, and I again understand my plan and structure here. So I would say this was time well spent.
I also updated the readme badges.
What did you do to get that `GroupManifold` error?

Just `include("test/aqua.jl")` to run the current Aqua.jl tests.
For all but the ambiguity errors I narrowed down all functions / types to check (e.g. for piracy), but for ambiguities I currently cannot continue narrowing down the functions we have to check, since I get that error instead of a list of ambiguities, and I have no clue where it comes from (I do not exclude that type currently) nor what it even means.
That is because you can't put aliases on the exclude list. I guess Aqua.jl could be a bit clearer about it.
I've made an issue about it: https://github.com/JuliaTesting/Aqua.jl/issues/290.
Nice, I would not have seen that one.
Here is a next idea towards #144 and #321. The idea is to have 3 (4?) setup axes:

- the manifolds to test
- the functions to test
- the element type (eltype) to use
- (maybe) the array type to use

**Model**

Let's start with the first two.
By default these are assumed to be all, unless the `--manifold` shell argument specifies otherwise (analogously `--function`); similarly for the corresponding `ENV` variables. You can also exclude manifolds/functions (`--exclude-manifold`), which happens after the filter above, so in principle one could list some manifolds above and exclude one of them again (though that does not make so much sense); usually this is meant to exclude from the all default.

Further, the element type (by default `Float64`, but one should be able to provide a list) should be able to be specified. Here I am not yet 100% sure about the design, since `Float64` should also mean that complex manifolds use the corresponding complex type. Similarly, the vector/array type (which could also be a list of types) would override the eltype or (more complicated) combine each of its types with each eltype (for simplicity I would prefer the first).
For now the new test suite includes running the old tests as well; I would like to add a parameter to disable that, `--no-legacy-tests` or such. That way, by default, new and old tests are run and we keep coverage. For checking coverage of the new tests, one can (for a while) disable the legacy tests on a PR.

**Feedback wanted**
Is this a good idea?
A `--group` keyword (or ENV variable) might be nice to define some meta groups of manifolds to be set as all in the above default sense.

**In this PR**
I changed `runtests.jl` to be a script, executable from the shell and accepting `ARGS` to set it up. If run from the shell, it still does the same test environment setup as before. When run via `] test Manifolds` this setup is not done, but by setting the `ENV` variables accordingly (`manifolds=`, `functions=`) one can achieve the same thing.
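The filtering described above could be sketched roughly as follows. All names here (`parse_axis`, the exact `--manifold` / `manifolds=` spellings) are assumptions taken from this discussion, not the actual `runtests.jl` code:

```julia
# Sketch: resolve one setup axis (e.g. the manifolds to test) from shell
# ARGS, falling back to an ENV variable, and finally to "all".
function parse_axis(args::Vector{String}, flag::String, env_key::String)
    values = String[]
    for a in args
        if startswith(a, "--$(flag)=")
            push!(values, String(split(a, "="; limit=2)[2]))
        end
    end
    if isempty(values) && haskey(ENV, env_key)
        values = String.(split(ENV[env_key], ","))
    end
    return isempty(values) ? [:all] : values
end

# Exclusions are applied after the include filter, as described above:
included = parse_axis(ARGS, "manifold", "manifolds")
excluded = parse_axis(ARGS, "exclude-manifold", "exclude_manifolds")
selected = setdiff(included, excluded)
```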