FLAMEGPU / FLAMEGPU2

FLAME GPU 2 is a GPU-accelerated agent-based modelling framework for CUDA C++ and Python
https://flamegpu.com
MIT License

Release Binary distribution #514

Closed: ptheywood closed this issue 3 years ago

ptheywood commented 3 years ago

Need to figure out how to do binary distributions.

C++ should be fine, similar to FLAME GPU 1, but automated via an action (on pushes to tags which match a pattern?)

Python will be more difficult. PyQuest is potentially a very useful reference.

Related:

ptheywood commented 3 years ago

Python/PyPI references:

In general, uploading to PyPI seems relatively simple: via twine we can upload source and binary builds. We should use the TestPyPI instance to see how viable this actually is.

It should be possible to automate this as part of an action (this will require the use of GitHub org secrets plus the associated changes to the actions to accomplish it).
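As a rough sketch of what the upload step could look like (the secret name here is invented; twine's --repository-url flag and TWINE_* environment variables are standard):

```python
import glob
import os
import subprocess

# Sketch only: upload built distributions to the TestPyPI instance.
# In an action, the token would come from a GitHub org secret
# (the TEST_PYPI_API_TOKEN name is hypothetical).
env = dict(
    os.environ,
    TWINE_USERNAME="__token__",
    TWINE_PASSWORD=os.environ["TEST_PYPI_API_TOKEN"],
)
subprocess.run(
    ["twine", "upload", "--repository-url", "https://test.pypi.org/legacy/",
     *glob.glob("dist/*")],
    check=True,
    env=env,
)
```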

We still need to consider what will be uploaded.

The majority of Python users probably want release builds, potentially with and without SEATBELTS. Possibly as separate PyPI packages?

Visualisation support will be additional faff too.

ptheywood commented 3 years ago

Another thing to consider:

Create multiple Python packages as a way of providing different configurations of FLAME GPU via PyPI.

https://packaging.python.org/guides/packaging-namespace-packages/

Users could then do import flamegpu for whatever, or import flamegpu-release-seatbelts-on as flamegpu, or something to that effect, potentially?
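(Hyphens aren't valid in Python module names, so a variant distribution would be imported with underscores. A hypothetical sketch of the user-facing pattern, with invented package names:)

```python
# Hypothetical: use a specific pre-built configuration if installed,
# otherwise fall back to the default package.
try:
    import flamegpu_seatbelts_off as flamegpu  # invented variant package
except ImportError:
    import flamegpu  # default build (e.g. SEATBELTS=ON)
```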

ptheywood commented 3 years ago

Also worth considering alternatives to PyPI due to issues with CUDA. The RAPIDS team have opted for conda or Docker as the preferred methods, or building from source.

https://rapids.ai/start.html#get-rapids

https://medium.com/rapids-ai/rapids-0-7-release-drops-pip-packages-47fc966e9472

Conda's recent licence change is worth considering, although in practice it's probably not an issue:

> We clarified our definition of commercial usage in our Terms of Service in an update on Sept. 30, 2020. The new language states that use by individual hobbyists, students, universities, non-profit organizations, or businesses with less than 200 employees is allowed, and all other usage is considered commercial and thus requires a business relationship with Anaconda. (source)

ptheywood commented 3 years ago

As a quick test: building the pyflamegpu wheel on one machine (Ubuntu 18.04, Python 3.8, CUDA 11.2, SM 61 and 70) and copying the .whl onto another machine (Ubuntu 20.04, Python 3.8, CUDA 11.2, Pascal GPU) works (via pip install <filename>.whl into a venv).

Changing the version of CUDA on my PATH to 11.4 still works, with the jitify cache file name still referencing 11.2.

After uninstalling NVRTC 11.2 it still runs (including after purging the cache), implying the cache identifier is based on the version used to build the library, not the current NVRTC version (which is probably fair enough, if a little incorrect).

If I adjust my PATH to contain CUDA 10.0, it also still works...
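One way to surface this would be to compare the NVRTC found at runtime against the build-time CUDA version. A rough sketch using the real nvrtcVersion() entry point (the build-time constant and the warning behaviour are assumptions, not anything pyflamegpu currently does):

```python
import ctypes

BUILD_CUDA = (11, 2)  # assumption: recorded when the wheel was built

# nvrtcVersion(int*, int*) is part of the NVRTC API. Loading fails with
# OSError if no NVRTC is present; a version suffix (e.g. libnvrtc.so.11.2)
# may be needed on systems without the unversioned symlink.
nvrtc = ctypes.CDLL("libnvrtc.so")
major, minor = ctypes.c_int(), ctypes.c_int()
nvrtc.nvrtcVersion(ctypes.byref(major), ctypes.byref(minor))
if (major.value, minor.value) != BUILD_CUDA:
    print(f"Warning: runtime NVRTC {major.value}.{minor.value} differs "
          f"from build-time CUDA {BUILD_CUDA[0]}.{BUILD_CUDA[1]}")
```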

Robadob commented 3 years ago

> If I adjust my PATH to contain CUDA 10.0, it also still works…

This seems like one of those things where there are likely to be hidden bugs due to ABI changes, so it's probably not to be recommended. We don't want people reporting bugs that we can't reproduce.

I’m interested to try this with Windows on a clean machine that has never had Visual Studio installed.


ptheywood commented 3 years ago

For now, we will manually attach .whl files to the GitHub release for the initial Python binary release, before potentially using PyPI (file size limits, plus the wheels not strictly conforming to the manylinux images) or conda (licencing issues, fewer users than PyPI). Docker will be an alternative.

ptheywood commented 3 years ago

Wheel filenames follow a convention: {distribution}-{version}(-{build tag})?-{python tag}-{abi tag}-{platform tag}.whl (source).

This doesn't make it clear whether there is room to encode the CUDA version in the wheel name. TensorFlow doesn't, and as it only supports a single CUDA version per wheel that isn't an issue (so we can probably do the same).

CuPy includes the CUDA version in the package name, i.e. cupy_cuda101-... in the wheel filename, while the main cupy package is a source-only distribution (i.e. no compiled objects, so no CUDA version dependency?).
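For illustration, the two approaches would produce names along these lines (all values are invented, not the project's actual tags):

```python
# Composing a wheel filename per the convention above.
dist, version = "pyflamegpu", "0.1.3b0"
python_tag, abi_tag, platform_tag = "cp38", "cp38", "linux_x86_64"

# TF-style: one supported CUDA version, nothing encoded in the name.
tf_style = f"{dist}-{version}-{python_tag}-{abi_tag}-{platform_tag}.whl"

# CuPy-style: the CUDA version is baked into the distribution name.
cupy_style = f"{dist}_cuda112-{version}-{python_tag}-{abi_tag}-{platform_tag}.whl"
```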

ptheywood commented 3 years ago

Building for all non-Tegra CUDA compute capabilities, the bin directory is 3.3GB in size for a release-mode build under Linux (2GB when compressed): the static library is 109MB, each executable is > 200MB, and the tests binary alone reaches 800MiB. This is with CUDA 11.2.

If we provided 2 SEATBELTS configurations × 2 platforms × 1 CUDA version, that's 10GB+ per release.
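Back-of-envelope, using the uncompressed figure above and assuming the Windows binaries are a comparable size:

```python
bin_dir_gb = 3.3        # uncompressed Linux release bin/ (from above)
seatbelts_configs = 2   # SEATBELTS on / off
platforms = 2           # Linux + Windows (assumed similar size)
cuda_versions = 1

total_gb = bin_dir_gb * seatbelts_configs * platforms * cuda_versions
print(f"~{total_gb:.1f} GB per release")  # ~13.2 GB uncompressed
```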

We could reduce the architectures we build for, potentially to one per major compute capability (52, 60, 70, 80), which would provide decent performance on all supported devices, but would not include optimisations in some cases (i.e. consumer cards).

For now, I'm going to hold off on binary C++ releases due to this. Longer term it may be worth providing a reduced set of examples in the pre-built binaries (and in the core repo), i.e. we don't need to be shipping 5 variants of Boids.

The full-fat wheel build is ~200MB, compared to ~40MB for a single architecture.

ptheywood commented 3 years ago

Enabling the vis causes Windows CI to fail due to the warnings-as-errors settings in the main repo's CI, which the vis repo does not use. These warnings are in third-party code, so this might be a little bit fun. See https://github.com/FLAMEGPU/FLAMEGPU2-visualiser/issues/71.

ptheywood commented 3 years ago

The initial binary distribution will just be pyflamegpu, for a single CUDA version, with a single build configuration of Release, SEATBELTS=ON, vis=ON (subject to vis builds not causing issues for headless nodes), and for major CUDA architectures only.

Subsequent issues have been created which may expand on this in the future (for post-alpha releases?): #603, #604 & #605.

ptheywood commented 3 years ago

Visualisation-enabled wheels are now building and being attached to draft releases! See https://github.com/ptheywood/FLAMEGPU2/releases/tag/v0.1.3-beta for an example.

Remaining steps are:

ptheywood commented 3 years ago

Running vis-enabled Python wheels on Linux boxes which do not have the vis shared libraries available (libGLEW.so, for instance) results in an error.

```
>>> import pyflamegpu
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ptheywood/Downloads/f2whl/0.1.3-beta/venv38/lib/python3.8/site-packages/pyflamegpu/__init__.py", line 9, in <module>
    from .pyflamegpu import *
  File "/home/ptheywood/Downloads/f2whl/0.1.3-beta/venv38/lib/python3.8/site-packages/pyflamegpu/pyflamegpu.py", line 13, in <module>
    from . import _pyflamegpu
ImportError: libGLEW.so.2.1: cannot open shared object file: No such file or directory
>>>
```

This will be an issue on HPC systems.
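A possible guard (not what the wheels currently do) would be to probe for the GL dependency before importing the compiled module, so headless nodes get a clearer message than a raw ImportError:

```python
import ctypes.util

# Hypothetical pre-import check: find_library() returns None when the
# shared library cannot be located on the system.
if ctypes.util.find_library("GLEW") is None:
    raise ImportError(
        "This pyflamegpu build requires libGLEW (visualisation enabled); "
        "install it, or use a console/headless wheel."
    )
import pyflamegpu
```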

Robadob commented 3 years ago

Yeah, this is to be expected (although I thought we might have been able to get away with trying to run a sim without vis). Not sure how we could resolve it without doing something grim like runtime loading.


ptheywood commented 3 years ago

After discussing with @mondus, the plan is to just break wheel naming conventions for the alpha releases: provide pyflamegpu-<stuff>.whl with visualisation enabled, and pyflamegpu-<console/headless>-<stuff>.whl without vis, for use on Linux HPC / remote boxes.
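For illustration, the two artefacts for a given release might be named along these lines (versions and tags invented; note that hyphens in a distribution name normalise to underscores in wheel filenames):

```python
vis_wheel = "pyflamegpu-0.1.3b0-cp38-cp38-linux_x86_64.whl"
console_wheel = "pyflamegpu_console-0.1.3b0-cp38-cp38-linux_x86_64.whl"
```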

This will need to be explained in the release notes.