Open PokhodenkoSA opened 3 years ago
Thanks for making this issue! Let's confer with @conda-forge/core
b) We doubt any of the CI systems that are available to feedstocks will have Gen9 GPUs. We can of course build the packages without needing a GPU, but we will not be able to test them on a GPU. An option would be to test them on a CPU SYCL device (using OpenCL or Level Zero CPU drivers) like we do in our GitHub Actions CI.
Are you aware of any public CI service that supports integrated GPUs?
No, we do not have access to such a system. We have been working through various means to enable this but we'll need outside sponsorship/support to make it happen.
c) We do not support MacOS. Will that be a problem?
Not a problem at all.
We want to start pushing out our packages dpctl, numba-dppy, dpnp to conda-forge to make them widely available, and want some tips.
Does numba-dppy mean that there will be two versions of numba on conda-forge? We should discuss this to ensure existing environments still work and that things are appropriately marked as conflicting to the solver, etc.
a) All of our packages will need the DPC++ compiler. I think we can handle it in our recipes by installing DPC++ (here is an example of what we currently do for dpctl on Github CI: https://github.com/IntelPython/dpctl/blob/48794f78206389f157ea2e86dbabb46fadbab6e8/.github/workflows/generate-coverage.yaml#L22) or explore using a prebuilt Docker image that has oneAPI preinstalled in our recipe.
We have been coordinating with another group at intel on the compilers. I sent this issue to them but cannot seem to find their github handles right now. We should get their opinions here too.
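As a point of reference, installing DPC++ in a GitHub Actions job can be sketched roughly as follows. The apt repository URL and package name follow Intel's public oneAPI apt instructions and are assumptions here, not a copy of the linked workflow:

```yaml
# Hedged sketch: the repo URL and package name follow Intel's public oneAPI
# apt instructions and may change; this is not copied from the linked workflow.
- name: Install DPC++
  run: |
    wget -qO - https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB \
      | sudo apt-key add -
    echo "deb https://apt.repos.intel.com/oneapi all main" \
      | sudo tee /etc/apt/sources.list.d/oneAPI.list
    sudo apt-get update -q
    sudo apt-get install -y intel-oneapi-compiler-dpcpp-cpp
```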
No, we do not have access to such a system. We have been working through various means to enable this but we'll need outside sponsorship/support to make it happen.
One option is to use Intel DevCloud. Intel DevCloud has Intel GPU nodes and is free to use via SSH. We could potentially (at least for GitHub CI) connect to DevCloud and use a GPU node there. I have yet to try it out, so I am not sure about latency and node availability. I will try it and post an update.
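If DevCloud pans out, the CI side might be as simple as an SSH step. A rough sketch, where the devcloud SSH alias and run_tests.sh job script are hypothetical, and the PBS-style qsub submission is an assumption about how DevCloud schedules jobs:

```yaml
# Hypothetical step: "devcloud" is an SSH alias configured earlier in the job;
# run_tests.sh is a placeholder job script; the qsub/PBS usage is an assumption.
- name: Run tests on a DevCloud GPU node
  run: ssh devcloud 'qsub -l nodes=1:gpu:ppn=2 -d . run_tests.sh'
```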
Does numba-dppy mean that there will be two versions of numba on conda-forge? We should discuss this to ensure existing environments still work and that things are appropriately marked as conflicting to the solver, etc.
No, numba-dppy is a standalone extension/plug-in to Numba. Numba-dppy is only needed to get access to the code-generator for SYCL devices.
@PokhodenkoSA We will be publishing the DPC++ compiler conda packages for 2021.3 release in a few weeks. There are 2021.2 versions that are in a test channel, but in any case it should work just fine by adding the following to the meta.yaml:
```yaml
requirements:
  build:
```
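For concreteness, a hedged sketch of what that requirements block might contain, assuming the dpcpp_linux-64 package name mentioned later in this thread (the exact names, selectors, and version pins are assumptions):

```yaml
# Sketch only: the dpcpp_linux-64 name is taken from the discussion below;
# the platform selector and any version pin are assumptions.
requirements:
  build:
    - dpcpp_linux-64  # [linux64]
```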
Start working on recipe for dpctl: https://github.com/conda-forge/staged-recipes/pull/16391
But it seems that dpcpp_linux-64 and other DPC++ compiler packages are not available.
@beckermr Is it possible to use dpcpp_linux-64 from defaults channel somehow?
We recently removed defaults from our default channels list for builds. It should be possible to add it back, but I am not sure how to do that on staged-recipes.
Thank you for your feedback.
@beckermr - any ideas on where to look for enabling this for staged-recipes? If this were added in the PR, I assume the base staging repo would be polluted.
A crude way here would be to get an empty initial package deployed and then work on the feedstock, but I don't think that is a great idea.
I'm not sure. Probably easiest to merge a dummy build and then do the work in the feedstock. Feel free to bump core for this if you want to go ahead that way.
We should first package the compilers and runtimes in conda-forge before creating downstream packages.
@beckermr, can you clarify how to add the defaults channel for a feedstock?
It is in the docs iirc. You add a specific line to the conda_build_config.yaml. I forget the syntax offhand.
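If memory serves, the documented mechanism is a channel_sources key in the recipe's conda_build_config.yaml. Worth double-checking against the conda-forge docs, but it should look roughly like:

```yaml
# Appends defaults after conda-forge for this feedstock's builds
channel_sources:
  - conda-forge,defaults
```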
Issue:
We want to start pushing out our packages dpctl, numba-dppy, dpnp to conda-forge to make them widely available, and want some tips.
a) All of our packages will need the DPC++ compiler. I think we can handle it in our recipes by installing DPC++ (here is an example of what we currently do for dpctl on Github CI: https://github.com/IntelPython/dpctl/blob/48794f78206389f157ea2e86dbabb46fadbab6e8/.github/workflows/generate-coverage.yaml#L22) or explore using a prebuilt Docker image that has oneAPI preinstalled in our recipe.
Do you anticipate any problems with the usage of DPC++ and oneAPI in general in our recipes?
b) We doubt any of the CI systems that are available to feedstocks will have Gen9 GPUs. We can of course build the packages without needing a GPU, but we will not be able to test them on a GPU. An option would be to test them on a CPU SYCL device (using OpenCL or Level Zero CPU drivers) like we do in our GitHub Actions CI.
Are you aware of any public CI service that supports integrated GPUs?
c) We do not support MacOS. Will that be a problem?
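The CPU-device testing mentioned in b) could be sketched as a CI step along these lines. SYCL_DEVICE_FILTER was the DPC++ runtime's device-selection variable around this time (newer runtimes use ONEAPI_DEVICE_SELECTOR instead), and the pytest invocation is a placeholder:

```yaml
# Sketch: forces the SYCL runtime onto the OpenCL CPU device for the test run.
# SYCL_DEVICE_FILTER is a DPC++ runtime variable; the test command is a placeholder.
- name: Test on a CPU SYCL device
  env:
    SYCL_DEVICE_FILTER: opencl:cpu
  run: pytest -q
```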
@diptorupd @oleksandr-pavlyk @reazulhoque