ysig / GraKeL

A scikit-learn compatible library for graph kernels
https://ysig.github.io/GraKeL/

Update CI to allow newer Python versions #83

Closed eddiebergman closed 1 year ago

eddiebergman commented 1 year ago

(WIP) An example of the output for the GitHub workflows is here:

TODOS

Changes


Brief note:

The point above was a bit unnecessary, but modern Python packaging tools such as poetry and flit use this to quickly generate and validate dependency graphs and find valid ones, using only this metadata from PyPI. This wasn't possible with setup.py specifying requirements, as it's a Python file and highly non-uniform to parse, while requirements.txt does not handle test/dev dependencies. Specifying them in setup.cfg would work, but it provides little benefit.

In general, the Python ecosystem is heading towards static build/meta/tooling information in pyproject.toml, with Python even including a TOML parser (tomllib) in its standard library soon. It's probably the safest long-term bet, at the cost of backwards compatibility.
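As a hedged illustration of what that static metadata looks like, here is a minimal pyproject.toml sketch (the dependency names and sections are illustrative, not GraKeL's actual configuration):

```toml
# Hypothetical sketch of static metadata in pyproject.toml (PEP 621);
# names and pins are illustrative, not GraKeL's actual configuration.
[build-system]
requires = ["setuptools", "wheel", "cython", "numpy"]
build-backend = "setuptools.build_meta"

[project]
name = "grakel"
requires-python = ">=3.7"
dependencies = ["numpy", "scipy", "scikit-learn"]

[project.optional-dependencies]
dev = ["pytest"]
```

Tools like poetry and flit can resolve dependency graphs from exactly this kind of declared metadata without executing any code.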


ysig commented 1 year ago

Hi @eddiebergman that's great!

Let me address the above points (Todos -> T, Changes -> C):

T1,2: nosetests seems to be the one that breaks because of callable, see: https://github.com/eddiebergman/GraKeL/actions/runs/3041633008/jobs/4898960711

I have no objection to migrating to pytest.

T4: Can we replace nosetests with pytest, or is it a very big deal? (Again, nosetests was the one used by sklearn at the time.)

C2: that's totally fine. As the library is stable for now, we would like to stay up to date with Python's life-cycle.

C3: That's great!

ysig commented 1 year ago

tldr; I think nosetests is the main bottleneck right now. Let's replace it with whatever you think is best (pytest?): I have no objection to migrating to the proposed "T1" testing pipeline.

ysig commented 1 year ago

@eddiebergman All tests can be found here: https://github.com/ysig/GraKeL/tree/master/grakel/tests

eddiebergman commented 1 year ago

@ysig yup, I found them, don't worry :) I ran pytest and nosetests side by side and got the same tests collected, and all passed. However, pytest additionally reported 909 warnings, mostly deprecation warnings from numpy. I can quickly fix those if you like?

ysig commented 1 year ago

@eddiebergman

yup, I found them, don't worry :) I ran pytest and nosetests side by side and got the same tests collected, and all passed. However, pytest additionally reported 909 warnings, mostly deprecation warnings from numpy. I can quickly fix those if you like?

Yes that would be great!

ysig commented 1 year ago

So these are what I see as the two main errors for now: 1) https://github.com/ysig/GraKeL/blob/8500dbf6812116459df308396716b7bcae13ef9c/grakel/tools.py#L175 - collections.Hashable should probably be replaced by something that is valid for this version of Python.

2) Windows errors seem to be related to the cvxopt version - not sure how to approach this for now. (Why is the output truncated, by the way?)
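Regarding point 1, collections.Hashable was removed in Python 3.10; a minimal sketch of the forward-compatible replacement (the helper name here is hypothetical, not GraKeL's actual code):

```python
# collections.abc.Hashable replaces collections.Hashable (removed in Python 3.10)
from collections.abc import Hashable


def is_hashable(obj):
    # Hypothetical helper illustrating the kind of check done in grakel/tools.py
    return isinstance(obj, Hashable)
```

Importing from collections.abc works on all Python 3 versions, so this is a safe drop-in change.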

eddiebergman commented 1 year ago

Hi @ysig,

Thanks for going in and fixing them :) I'm a bit busy today to do much here, but I will fix the wheel-builder tests tomorrow or Friday!

Best, Eddie

ysig commented 1 year ago

Hi @eddiebergman,

all are passing right now except 2 Windows processes:

Also, for codecov, reply to my mail so we can configure it. And can you tell me how I can set up Actions to run on my account instead of yours?

For the wheels I see some bugs related to openblas or import k_ij_triangular, which is a sign of the Cython extension not being installed correctly - not sure why. It is manually set up using:

python setup.py build_ext
python setup.py install

Best, y

eddiebergman commented 1 year ago

Just an update, I reran all the tests: https://github.com/eddiebergman/GraKeL/actions/runs/3051878668

EDIT: Realised the following isn't your code, as it's inside cvxopt instead.

I imagine this could be due to two cases: 1) a flaky test, where either the code is not fully correct, the test is wrong, or there is numerical instability; 2) Windows 3.9/3.10 ships with a different numerical backend.

I reran the tests; if the same 2 fail, then it's very likely case 2, otherwise it is case 1.

If it's case 1), then either the test or the code has to change; if it's case 2), we need to further debug this in an environment where we can't reproduce it ourselves. I think in that case we need to raise the original exception rather than throwing your own.

The default way to do this with Python 3+ is exception chaining

try:
    f(x, dy, s)
except ArithmeticError as e:
    raise ValueError("Rank(A) < p or Rank([G; A]) < n") from e

However, for backwards compatibility (if you care about newer versions still being backwards compatible), I suggest something like:

import traceback

try:
    f(x, dy, s)
except ArithmeticError:
    tb = traceback.format_exc()  # Get str version of current traceback
    raise ValueError("\n" + tb + "\nRank(A) < p or Rank([G; A]) < n")

Best, Eddie

eddiebergman commented 1 year ago

I reran the tests and this time it failed for all Windows versions, 3.7 to 3.10. My guess is a slight numerical instability or an API change in some library on Windows, which is causing the ArithmeticError that is then swallowed. I will change it to the second proposed error message to gain some more info.

Failing tests in all Windows versions seem to have to do with lovasz, whatever that may be.

FAILED grakel/tests/test_Kernel.py::test_lovasz_theta - AssertionError: Value...
FAILED grakel/tests/test_Kernel.py::test_lovasz_theta_pd - AssertionError: Va...
FAILED grakel/tests/test_common.py::test_lovasz_theta - ValueError: Rank(A) <...
FAILED grakel/tests/test_kernels.py::test_lovasz_theta - ValueError: Rank(A) ..
eddiebergman commented 1 year ago

Some more digging and essentially the error is coming from cvxopt as you said, specifically this line.

Something is going wrong with the sdp solver and its inputs.

Some googling of the errors makes me think there is either an issue in the test case or in the implementation itself that was ignored previously but fixed in a more recent version of cvxopt. Looking down the chain of all the test failures, it seems to come down to some call to what cvxopt calls a kktsolver, which eventually ends up calling LAPACK functions. Looking at the docs, it gives examples of all the reasons an ArithmeticError might show up (mostly singular matrices, matrices not of full rank, or matrices that are not positive semi-definite). I imagine these are already transformed matrices and out of your control.

Assuming cvxopt is correct in its code, I would imagine the sdp solver in def _calculate_lovasz_embeddings_ is somehow given a problem it can't solve, or one that is mis-specified.

Seeing as these are problems randomly generated by your generate_dataset in the test functions, I could see this issue occurring. As for why it happens sporadically and only on Windows, my guess is that test ordering is not entirely deterministic, and somehow on Windows this test is more sensitive to the randomized input.

I'll do some brief testing to confirm this.

eddiebergman commented 1 year ago

Going to run 100 different seeds on Linux and see if I can reproduce the error; if not, I will submit the same to GitHub Actions and see if we get stochastic failures across seeds or stochastic errors across GitHub workers. If it's stochastic across seeds, then somehow there is numerical instability with some Windows library. If it's stochastic with respect to GitHub Actions workers, then I imagine there is some variance in the packages they download, which might give a further hint.


# Trying 100 different seeds
@pytest.mark.parametrize("r", list(range(100)))
def test_lovasz_theta(r):
    """Random input test for the Lovasz-theta distance kernel."""
    train, test = generate_dataset(n_graphs=50,
                                   r_vertices=(5, 10),
                                   r_connectivity=(0.4, 0.8),
                                   r_weight_edges=(1, 1),
                                   n_graphs_test=20,
                                   random_state=r,
                                   features=None)

    # verbose and normalize are module-level settings in grakel's test suite
    lt_kernel = LovaszTheta(verbose=verbose, normalize=normalize)

    try:
        lt_kernel.fit_transform(train)
        lt_kernel.transform(test)
    except Exception as exception:
        assert False, exception

Update: I also realized that the LovaszTheta class takes a random_state, but none is passed during its construction in the test - only in the dataset construction. This is another source of test stochasticity.

eddiebergman commented 1 year ago

Seems 100 seeds on Linux still passed everything. From this I'm concluding that some numerical instability on Windows is the prime cause.

Everything stems from _calculate_lovasz_embeddings_ and fails at this line on Windows. I'm going to essentially just print what goes in and see if we can create a minimal working example that, at the very least, can be posted to cvxopt if we cannot reproduce it.

# Assumed context: matrix/spmatrix/sdp come from cvxopt, and min_weight
# is a module-level constant in grakel.
import numpy as np
from cvxopt.base import matrix, spmatrix
from cvxopt.solvers import sdp


def _calculate_lovasz_embeddings_(A):
    """Calculate the lovasz embeddings for the given graph.

    Parameters
    ----------
    A : np.array, ndim=2
        The adjacency matrix.

    Returns
    -------
    lovasz_theta : float
        Returns the lovasz theta number.
    optimization_solution : np.array
        Returns the slack variable solution
        of the primal optimization program.
    """
    if A.shape[0] == 1:
        return 1.0
    else:
        tf_adjm = (np.abs(A) <= min_weight)
        np.fill_diagonal(tf_adjm, False)
        i_list, j_list = np.nonzero(np.triu(tf_adjm.astype(int), k=1))

        nv = A.shape[0]
        ne = len(i_list)

        x_list = list()
        e_list = list()
        for (e, (i, j)) in enumerate(zip(i_list, j_list)):
            e_list.append(int(e)), x_list.append(int(i*nv+j))
            if tf_adjm[i, j]:
                e_list.append(int(e)), x_list.append(int(i*nv+j))

        # Add on the last row, diagonal elements
        e_list = e_list + (nv*[ne])
        x_list = x_list + [int(i*nv + i) for i in range(nv)]

        # Initialise g sparse (to value -1), based on two lists that
        # define the indices and one tuple that defines the shape
        g_sparse = spmatrix(-1, x_list, e_list, (nv*nv, ne+1))

        # Initialise optimization parameters
        h = matrix(-1.0, (nv, nv))
        c = matrix([0.0]*ne + [1.0])

        # DEBUGGING > tmp.txt
        # =====================
        print(f"{nv}, {ne}, {e_list}, {x_list}")
        # =====================

        # Solve the convex optimization problem
        sol = sdp(c, Gs=[g_sparse], hs=[h])

        return np.array(sol['ss'][0]), sol['x'][ne]

Test file:

# test debug
import ...

@pytest.mark.parametrize("nv, ne, e_list, x_list", [...])
def test_debug(nv, ne, e_list, x_list):

    # Taken from the above function
    g_sparse = spmatrix(-1, x_list, e_list, (nv*nv, ne+1))
    h = matrix(-1.0, (nv, nv))
    c = matrix([0.0]*ne + [1.0])

    # Should raise here on windows
    sol = sdp(c, Gs=[g_sparse], hs=[h])
    assert sol is not None
eddiebergman commented 1 year ago

Hurray, success! The minimal example works, and essentially there are failing cases for the sdp solver occurring only on Windows. https://github.com/eddiebergman/GraKeL/actions/runs/3183531279/jobs/5190864022

eddiebergman commented 1 year ago

More debugging: cvxopt on Windows, Linux, and Mac has the same versions of everything installed, except Windows seems to install colorama-0.4.5 as an extra dependency. My guess is this has nothing to do with the issue.

eddiebergman commented 1 year ago

I can get it down to one single example that fails on Windows but succeeds on Mac/Linux. Unfortunately, I don't really know how to proceed from here due to my lack of knowledge of what is being represented and whether this should be solvable. With your go-ahead, I would raise an issue on the cvxopt GitHub.

# Passed on linux, mac - Fails on Windows (cvxopt 1.3.0)
import pytest
from cvxopt.base import matrix, spmatrix
from cvxopt.solvers import sdp

@pytest.mark.parametrize(
    "nv, ne, e_list, x_list",
    [
        (9,
         15,
         [ 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13, 14, 14, 15, 15, 15, 15, 15, 15, 15, 15, 15],
         [ 2, 2, 3, 3, 6, 6, 7, 7, 12, 12, 14, 14, 15, 15, 21, 21, 22, 22, 23, 23, 25, 25, 41, 41, 44, 44, 53, 53, 71, 71, 0, 10, 20, 30, 40, 50, 60, 70, 80],
        )
    ],
)
def test_windows_sdp(nv, ne, e_list, x_list) -> None:
    # initialise g sparse (to values -1, based on two list that
    # define index and one that defines shape
    print(nv, ne, e_list, x_list)
    g_sparse = spmatrix(-1, x_list, e_list, (nv * nv, ne + 1))

    # Initialise optimization parameters
    h = matrix(-1.0, (nv, nv))
    c = matrix([0.0] * ne + [1.0])

    # Solve the convex optimization problem
    # Should raise here on windows
    sol = sdp(c, Gs=[g_sparse], hs=[h])
    assert sol is not None
eddiebergman commented 1 year ago

Okay, at this point I think it's a cvxopt/LAPACK issue, as essentially the failures are stochastic: they seem to fail randomly on Windows across different versions, sometimes succeeding for no reason I can discern. The GitHub runners that run the setup and tests seem to be the same, so I'm not entirely sure why this is happening at all.

Test 1 (Success on 3.7, 3.8)

Test 2 (Failed all)

Assuming the generated sdp problem given is correct, my current thoughts on solutions:

My preference is definitely for the last one, given my reasons for updating GraKeL, but I understand this is not ideal. I think all solutions are doable in parallel, i.e.:

if windows:
    solver = partial(sdp, something_windows_specific=...)
    # OR
    raise RuntimeError("LovaszTheta not available for Windows, see: <issue link>")
else:
    solver = sdp

sol = solver(c, Gs=[g_sparse], hs=[h])

I will raise an issue with cvxopt anyway with all the details, and tag you in there as well in the meantime.

Best, Eddie

ysig commented 1 year ago

Hi @eddiebergman

Sorry for jumping in a bit late. That's amazing work and thank you so much for pinning down this problem.

Lovasz has always been a pain due to numerical instability on Windows.

To give you some further context, essentially what happens is the following:

So I suggest the following:

eddiebergman commented 1 year ago

Hi @ysig,

So I think it's finally done: tests, wheels

Tests

I skip the lovasz tests on Windows specifically, and I also inserted some code to raise a warning when using it on Windows.
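A minimal sketch of what such a platform guard might look like (the function name and message are hypothetical, not GraKeL's actual code):

```python
import sys
import warnings


def warn_if_windows():
    # Hypothetical guard: LovaszTheta is known to fail sporadically on Windows
    # due to numerical instability in the cvxopt sdp solver.
    if sys.platform == "win32":
        warnings.warn(
            "LovaszTheta may fail sporadically on Windows due to numerical "
            "instability in the cvxopt sdp solver.",
            RuntimeWarning,
        )
```

The corresponding test skip can be done with pytest's built-in marker, e.g. @pytest.mark.skipif(sys.platform == "win32", reason="...").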

The only step left for you to do would be to sign up on codecov.io, which provides free test-coverage reporting for open-source code. You would just have to enable it for this repo and that should be it (you do not need a token, as it's a public repo).

To run tests locally, now that pytest is used, I've added it as a "dev" dependency, and the tests can be invoked by simply running pytest anywhere in the repository. I defer you to the pytest documentation for any further customization, if needed.

Wheels

Everything seems to be building fine except for some slight exceptions, which are documented in the workflow. I would recommend you have a quick look through the workflow to see if things make sense to you there or if anything is not clear.

When the workflow is done and everything passes, all the compiled wheels will be available here by clicking on the Artifact. This is a zipped-up download of all the wheels. You can test them manually on your own machine first for peace of mind.

I would first merge everything into the main repository and then let them run and download from this repo rather than my fork, but the content should be identical.

Workflows in General

Since you may not be familiar with GitHub Actions workflows: they are just .yaml files located in your repo under .github/workflows. They describe automated CI procedures for GitHub to run and under what circumstances. I've enabled some sane options for test.yaml and release.yaml, basically so that wheels are only generated on a push to the master branch or to an X.X.X version branch. Feel free to change this to whatever makes sense to you. Tests are triggered on any push to master or any push to a PR targeting master. Both of these can also be manually triggered using the Actions tab on GitHub.
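As a hedged illustration (the branch patterns are assumptions, not necessarily what test.yaml actually contains), the trigger section of such a workflow might look like:

```yaml
# Hypothetical trigger configuration; branch patterns are illustrative.
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
  workflow_dispatch:  # allows manual runs from the Actions tab
```

Everything else in the workflow file (jobs, build matrix, steps) runs only when one of these events fires.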

Future Releases

When Python 3.11/3.12 come out and are mainstream, and all the dependencies have been updated (i.e. scikit-learn, scipy, etc.), then you can quite easily add a wheel for it by appending to this line. Likewise for testing here. I'm also happy to provide support for any of this in the future :)

Let me know if there's any more explanation you'd like or worries you may have!

Best, Eddie

ysig commented 1 year ago

Hi @eddiebergman thank you so much!

Two final things I would like you to add before I merge this PR is:

  1. Previously we had this script that would upload the built wheels and sdist to PyPI: https://github.com/ysig/GraKeL/blob/0.1a8/ci_scripts/travis/success.sh - can you add this to the wheel builders? I can for sure add the PyPI token.

  2. It would be great to have a commit keyword to skip uploading.

    • Equivalently, we could have an action to push wheels that can only be triggered by the click of a contributor to the library - me in this case - but I'm not sure whether this is possible.

Let me know if something doesn't make sense.

eddiebergman commented 1 year ago

Hi @ysig,

I added something from PyPA that does it now; I went with the easiest manual solution. Here's the action code for reference.

On the Actions tab, you will see the possibility to manually run the release.yaml workflow on any branch you have. There will be a checkbox (lower right of the image) which will deploy to PyPI if everything runs correctly. The one downside is that this means you will likely run the workflow twice: once on a push to master automatically, and once when you manually select to release.

[Screenshot: the manual release.yaml run dialog, with the PyPI deploy checkbox in the lower right]
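A hedged sketch of how such a manual deploy checkbox can be declared via a workflow_dispatch input (the input name is an assumption, not necessarily what release.yaml uses):

```yaml
# Hypothetical workflow_dispatch input backing the checkbox described above.
on:
  workflow_dispatch:
    inputs:
      deploy:
        description: "Upload the built wheels to PyPI"
        type: boolean
        default: false
```

A later step can then gate the upload on `if: inputs.deploy`, so ordinary manual runs still build and test without releasing.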

As you mentioned before, you will have to add a secret to your GitHub account, which the documentation for the action specifies and you can find here.

If you're not happy with this solution and want it more automated, there are some other possibilities; for example, you can trigger only on a push to <branch> with a <tag> starting with a particular <substr>. I would advise against trying to automate it, since the library is not under active development: I would guess that figuring out the right automation trigger is no more convenient than just triggering it manually from GitHub.
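For reference, a tag-based trigger of that kind might look like this (the "v*" pattern is illustrative, not an existing convention in this repo):

```yaml
# Hypothetical tag-based release trigger; the "v*" pattern is an assumption.
on:
  push:
    tags:
      - "v*"  # e.g. pushing a tag like v0.1.9 would start the release workflow
```

This trades the manual checkbox for a tagging convention, which only pays off if releases happen often enough to remember the convention.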

eddiebergman commented 1 year ago

Small update:

You can see in this workflow in the last step "Upload Release" that it seems to be working as intended. It fails for me as I have no access to PyPI/GraKeL and I also have no access token set. If you look at the log for it, you'll see the wheels and dist that it checks and tries to upload.

I think it's fully mergeable then, other than that I will resolve the install.sh conflict.

Solved, and I also removed the AppVeyor and Travis shields from the README.