This PR lays the groundwork for installing neural-lam as a package, thereby making it possible to run it from anywhere once the package has been installed. This means that it would be possible (in theory) to train neural-lam on a `.npy`-file based dataset (once https://github.com/mllam/neural-lam/pull/31 has been merged, so that the training configuration is moved from `neural_lam/constants.py` to a yaml config file) with the neural-lam package installed into a user's `site-packages` (i.e. in their virtualenv).
I appreciate that currently most of us will be checking out the codebase to make modifications and then training a model with neural-lam, so these changes might seem superfluous. But making them later will be a lot harder than doing them now.
The primary changes are:
- move all `*.py` files that are currently outside of `neural_lam/` into that folder, keeping the files otherwise unchanged
- change all examples in the README of running the neural-lam "scripts", e.g. replace `python create_mesh.py` with `python -m neural_lam.create_mesh`
- change all absolute imports to package-relative imports, i.e. `from . import utils` rather than `from neural_lam import utils`
- add tests that all the CLI entrypoints to neural_lam can be imported, and add a CI/CD action to run these tests
I still need to resolve the issue around depending on CPU or GPU versions of PyTorch. I know @sadamov found a workaround where the CPU or GPU package was automatically picked based on whether CUDA is detected. I haven't had time to look into this yet, but once I have done that I will mark this PR as complete from my side and ask for your thoughts on it :)
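For reference, one possible approach to this (sketched here as an assumption, not necessarily the workaround @sadamov found) is to detect an NVIDIA driver and choose the pip wheel index accordingly; PyTorch publishes CPU-only wheels on a separate index:

```python
import shutil

# Assumed approach: detect an NVIDIA driver via `nvidia-smi` and pick
# the PyTorch wheel index accordingly. The CPU-only index URL below is
# the one PyTorch publishes its CPU wheels on.
CPU_INDEX_URL = "https://download.pytorch.org/whl/cpu"


def torch_install_args() -> list:
    """Return pip arguments for installing torch: the default
    (CUDA-enabled) wheels when an NVIDIA driver is visible, otherwise
    the CPU-only wheel index."""
    if shutil.which("nvidia-smi") is not None:
        return ["torch"]
    return ["torch", "--index-url", CPU_INDEX_URL]
```

The returned arguments would be passed to `python -m pip install ...`; a proper solution would more likely live in the packaging configuration than in a runtime check like this.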
This PR is definitely a work-in-progress and is meant to serve as a discussion point.
(This PR also includes the changes that PR #29 adds, so it definitely shouldn't be merged, or probably even reviewed, before #29 is in.)