iree-org / iree

A retargetable MLIR-based machine learning compiler and runtime toolkit.
http://iree.dev/
Apache License 2.0

Python 3.12 Wheels #15856

Open theoparis opened 7 months ago

theoparis commented 7 months ago

Request description

Add pre-built python 3.12 wheels

What component(s) does this issue relate to?

Python

Additional context

I couldn't find an existing issue that mentions Python 3.12, which was released in October.

```shell
$ python3.12 -m pip install \
    --find-links https://iree.dev/pip-release-links.html \
    --upgrade iree-compiler
Looking in links: https://iree.dev/pip-release-links.html
ERROR: Could not find a version that satisfies the requirement iree-compiler (from versions: none)
ERROR: No matching distribution found for iree-compiler
```
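For context on the error above: pip only installs wheels whose tags match the running interpreter, so if no cp312-tagged wheel has been published, CPython 3.12 finds nothing. A minimal stdlib sketch of the tag involved (the example filename is illustrative, not a real release artifact):

```python
# Minimal sketch: pip matches wheels by interpreter tag. With no cp312
# wheel published, a CPython 3.12 install fails with
# "No matching distribution found".
import sys

major, minor = sys.version_info[:2]
tag = f"cp{major}{minor}"  # e.g. "cp312" on Python 3.12
# The filename below is illustrative only.
print(f"{tag}: would match e.g. "
      f"iree_compiler-0.0.0-{tag}-{tag}-manylinux2014_x86_64.whl")
```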
stellaraccident commented 4 months ago

Looks like our minimal deps are available on pypi (still missing some optional, non-nightly testing deps), so I'm trying to enable the 3.12 wheels.

stellaraccident commented 4 months ago

We're going to need a better Python version testing story soon. Currently we test the oldest version in regular CI and leave later versions to release. This has been practical for much of the 3.x series, but we're entering a period of rapid feature changes with 3.12/3.13. I expect we are going to need to test newer versions more aggressively as part of the regular build before too much longer.

ScottTodd commented 4 months ago

> We're going to need a better Python version testing story soon. Currently we test the oldest version in regular CI and leave later versions to release. This has been practical for much of the 3.x series, but we're entering a period of rapid feature changes with 3.12/3.13. I expect we are going to need to test newer versions more aggressively as part of the regular build before too much longer.

Are the bindings/python tests (compiler and runtime) representative enough for that, or would integration / full-model tests be needed too? BTW, I had https://github.com/openxla/iree/pull/15878 open to run some of the Python tests in more jobs.

stellaraccident commented 4 months ago

> > We're going to need a better Python version testing story soon. Currently we test the oldest version in regular CI and leave later versions to release. This has been practical for much of the 3.x series, but we're entering a period of rapid feature changes with 3.12/3.13. I expect we are going to need to test newer versions more aggressively as part of the regular build before too much longer.
>
> Are the bindings/python tests (compiler and runtime) representative enough for that, or would integration / full-model tests be needed too? BTW, I had #15878 open to run some of the Python tests in more jobs.

They are representative enough to push wheels for newer versions, but we should be running more framework-level tests on the more recent versions. There's a chicken-and-egg problem, though: for example, PyTorch for 3.12 is only in nightly right now, and onnx won't publish wheels until its next release.

Mainly just jotting down thoughts.

I also want to better modularize the Python build. There is no reason we need to build the full C++ codebase to build Python wheels. Both the compiler and runtime wheels should be able to be built against our dev packages, since they only depend on public C headers (although the runtime is small enough that we get value from a more exotic setup: we build different variants and can LTO).

Would make the testing story easier...

stellaraccident commented 4 months ago

Leaving this open until we verify they built.

stellaraccident commented 4 months ago

If we better modularized the python build, we could re-arrange the PkgCI into a step to build the native dev packages sans Python on big CPU runners. Then we could have N={python versions} to build and run unit tests for each Python version. We could have a flag in the matrix that would control which ones actually got published as artifacts (i.e. maybe just publish one version for presubmit) for downstream jobs to consume. These python build/testers could run on standard GH action runners. Downstream integration tests could depend on the built wheels.
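The fan-out described above could be sketched as a GitHub Actions matrix. This is a hypothetical illustration, not an existing IREE workflow; the job names, runner labels, script paths, and artifact names are all placeholders:

```yaml
# Hypothetical sketch of the proposed PkgCI fan-out. All names here
# are illustrative assumptions, not actual IREE CI configuration.
jobs:
  build_native_dev_packages:
    runs-on: [self-hosted, big-cpu]   # placeholder runner label
    steps:
      - uses: actions/checkout@v4
      - run: ./build_native_dev_packages.sh   # hypothetical script
      - uses: actions/upload-artifact@v4
        with: {name: native-dev-packages, path: dist/}

  python_build_and_test:
    needs: build_native_dev_packages
    runs-on: ubuntu-latest            # standard GH Actions runner
    strategy:
      matrix:
        include:
          - python: "3.9"
            publish: true             # e.g. publish only one version on presubmit
          - python: "3.12"
            publish: false
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: {python-version: "${{ matrix.python }}"}
      - uses: actions/download-artifact@v4
        with: {name: native-dev-packages, path: dist/}
      # Build wheels against the prebuilt dev packages, then run unit tests.
      - run: python -m pip wheel . && python -m pytest bindings/python
```

Downstream integration-test jobs would then depend only on the published wheel artifacts, per the comment above.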

ScottTodd commented 4 months ago

> If we better modularized the python build, we could re-arrange the PkgCI into a step to build the native dev packages sans Python on big CPU runners.

Not sure if this is quite what you had in mind, but I had started forking build_linux_packages.sh into build_linux_dist.sh on https://github.com/openxla/iree/compare/main...ScottTodd:iree:infra-linux-dist. I was hoping that reusing the same toolchain files (e.g. https://github.com/openxla/iree/blob/main/build_tools/pkgci/linux_toolchain_release.cmake) would help with cache hits and compatibility between the 'native' build and the 'python package' builds.

stellaraccident commented 4 months ago

I think we can actually completely forgo most of the gunk in build_linux_packages.sh (all of the docker stuff). For other projects, I'm either using GHA action runners that are already the right OS or telling GHA to run in the context of the manylinux docker image. Makes it really, really simple.
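The "run in the context of the manylinux docker image" option mentioned above can be expressed directly in a workflow; a minimal sketch, assuming the standard PyPA manylinux image layout (the image tag and project layout are placeholders to adapt):

```yaml
# Sketch: run the wheel build inside the manylinux container itself,
# avoiding hand-rolled docker invocation scripts. Image tag is one of
# the PyPA-published images; choose the glibc policy you target.
jobs:
  build_wheels:
    runs-on: ubuntu-latest
    container: quay.io/pypa/manylinux_2_28_x86_64
    steps:
      - uses: actions/checkout@v4
      # manylinux images ship interpreters under /opt/python/<tag>-<tag>/
      - run: /opt/python/cp312-cp312/bin/python -m pip wheel .
```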

But +1 on re-using the toolchain and its hard-fought-for flags.

Anyway, might give it some more thought.