alpaka-group / alpaka

Abstraction Library for Parallel Kernel Acceleration :llama:
https://alpaka.readthedocs.io
Mozilla Public License 2.0

CMake Modernization #919

Open j-stephan opened 4 years ago

j-stephan commented 4 years ago

I'd like to improve the current CMake infrastructure and adjust it to modern CMake idioms (as seen here and here).

Goals to achieve:

My current idea is to make everything (Alpaka headers, test cases, examples, third-party dependencies) a target, add some high-level targets (like a common target for all unit tests, another for all integration tests, and so on) and let CMake handle the build and usage requirements. Example: CUDA would be a third-party target. The user would link his project to alpaka::cuda which would then pull in a CUDA requirement. If he prefers OpenMP4, he'd link to alpaka::omp4 without the need for CUDA.
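
A minimal sketch of what this could look like from a user's perspective (the alpaka::cuda / alpaka::omp4 target names are the proposed, not yet existing, interface):

cmake_minimum_required(VERSION 3.15)
project(myProject CXX)

find_package(alpaka REQUIRED)

add_executable(myKernelApp main.cpp)
# Pull in the CUDA back-end and all of its build/usage requirements ...
target_link_libraries(myKernelApp PRIVATE alpaka::cuda)
# ... or, alternatively, the OpenMP 4 back-end without any CUDA dependency:
# target_link_libraries(myKernelApp PRIVATE alpaka::omp4)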

There are some questions remaining that I'd like to discuss with the Alpaka developers:

j-stephan commented 4 years ago

@ax3l Are you still working on #478?

BenjaminW3 commented 4 years ago

Which Alpaka headers are we considering public (should be used by the user) and which private (for internal use only)?

nearly everything is public

Should we still use the deprecated script or instead ditch clang-cuda for the time being?

I really like clang-cuda and would not want to drop support for it.

Which CMake version should we aim for?

I have no problem requiring CMake 3.15. I am not sure whether we should already require 3.16, but if it provides enough value, requiring even 3.16 should be possible.

BenjaminW3 commented 4 years ago

My current idea is to make everything (Alpaka headers, test cases, examples, third-party dependencies) a target

This should already be the case.

add some high-level targets (like a common target for all unit tests, another for all integration tests, and so on)

CMake automatically generates a RUN_TESTS target. It does not differentiate between unit and integration tests, so I am not opposed to adding those meta targets.
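
One possible way to get such meta targets is to drive ctest by label (sketch only; the label names are made up and would have to be attached to the tests via set_tests_properties):

add_custom_target(run_unit_tests
    COMMAND ${CMAKE_CTEST_COMMAND} -L unit)
add_custom_target(run_integration_tests
    COMMAND ${CMAKE_CTEST_COMMAND} -L integration)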

and let CMake handle the build and usage requirements. Example: CUDA would be a third-party target. The user would link his project to alpaka::cuda which would then pull in a CUDA requirement. If he prefers OpenMP4, he'd link to alpaka::omp4 without the need for CUDA.

I am not sure how this will work. In some cases we have interdependencies between the enabled backends. If we enable the CUDA backend together with the OpenMP backends we have to do some special magic with the compiler flags.
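
For reference, the kind of flag juggling meant here looks roughly like this in the old FindCUDA world (a simplified sketch, not the actual alpaka code):

# nvcc does not understand host-compiler flags such as -fopenmp directly,
# so they have to be routed through -Xcompiler when the CUDA back-end is on.
find_package(OpenMP REQUIRED)
if(ALPAKA_ACC_GPU_CUDA_ENABLE)
    list(APPEND CUDA_NVCC_FLAGS "-Xcompiler" "${OpenMP_CXX_FLAGS}")
else()
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
endif()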

The old FindCUDA script forces us to use old-school CMake in a lot of places, but the same is true for HIP, so we would not get rid of it even if we removed support for clang-cuda.

I would like to see incremental improvements rather than a big-bang change. Some work on the individual topics, e.g. "Precompiled headers", "install", "test meta targets", etc., is much appreciated.

ax3l commented 4 years ago

Excellent proposal to modernize this!

@ax3l Are you still working on #478?

No, please feel free to roll your own installers while you are at it :) Scavenge (or scrap) what you need.

Just for reference: the slowly progressing upstream CMake support for CUDA-clang. This issue also contains a lot of info on what needs to be done, so it can also be contributed to CMake upstream :)

CMake version

Since it's easy to install and does not cause much pain in build systems (it's only a tool, not a linked lib), feel free to use even very recent CMake versions.

j-stephan commented 4 years ago

This should already be the case.

Not exactly. CUDA and HIP are not included as targets but use their outdated CMake modules instead. We are not calling target_link_libraries(SomeTarget cuda::cuda) or something similar but are instead forced to use CUDA_ADD_EXECUTABLE. OpenMP isn't used as a target either, as far as I can tell.
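
For comparison, the pattern we are forced into today versus the target-based pattern this issue aims for (a rough sketch; cuda::cuda is a hypothetical wrapper target, not something FindCUDA provides):

# today, via the deprecated FindCUDA module:
find_package(CUDA REQUIRED)
cuda_add_executable(someTarget main.cpp)

# desired, target-based style:
add_executable(someTarget main.cpp)
target_link_libraries(someTarget PRIVATE cuda::cuda)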

I am not sure how this will work. In some cases we have interdependencies between the enabled backends. If we enable the CUDA backend together with the OpenMP backends we have to do some special magic with the compiler flags.

I will look into this.

I would like to see incremental improvements rather than a big-bang change. Some work on the individual topics, e.g. "Precompiled headers", "install", "test meta targets", etc., is much appreciated.

Alright. The first topic will be "install" then.

ax3l commented 4 years ago

Not exactly. CUDA and HIP are not included as targets but use their outdated CMake modules instead. We are not calling target_link_libraries(SomeTarget cuda::cuda) ...

Didn't know that works, since modern CMake supports CUDA (and soon HIP) as a C/C++ language feature of targets as well. This is also the place where one would need to extend clang compiler support. Cool.

Alright. The first topic will be "install" then.

Great idea! :clap:

BenjaminW3 commented 4 years ago

It looks like Ubuntu 20.04 will get CMake 3.15 by default, so we may want to support this version for some time as this will be the upcoming LTS release.

j-stephan commented 4 years ago

Didn't know that works

Well, I don't know if it will work. My goal is to abstract Find{CUDA,HIP}.cmake away and hide their functionality behind a cuda::cuda target.
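
A possible shape for such a wrapper, assuming FindCUDA stays underneath for now (sketch only; the cuda::cuda target does not exist yet):

find_package(CUDA REQUIRED)
if(NOT TARGET cuda::cuda)
    # IMPORTED targets may carry a "::" namespace, so the old FindCUDA result
    # variables can be hidden behind a modern-looking interface target.
    add_library(cuda::cuda INTERFACE IMPORTED)
    set_target_properties(cuda::cuda PROPERTIES
        INTERFACE_INCLUDE_DIRECTORIES "${CUDA_INCLUDE_DIRS}"
        INTERFACE_LINK_LIBRARIES "${CUDA_LIBRARIES}")
endif()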

and soon HIP

Nice, I missed that. Do you happen to have a link for further reference?

It looks like Ubuntu 20.04 will get CMake 3.15 by default, so we may want to support this version for some time as this will be the upcoming LTS release.

Understandable. We would have to postpone PCH for a while, though. Chances are quite good that Kitware will supply an apt repository for more recent CMake versions (as they did for 16.04 and 18.04).

Speaking of platforms we support: While looking through the existing CMake scripts I noticed a workaround for 32-bit Windows. Is Alpaka actually still in productive use on this platform?

SimeonEhrig commented 4 years ago

Understandable. We would have to postpone PCH for a while, though. Chances are quite good that Kitware will supply an apt repository for more recent CMake versions (as they did for 16.04 and 18.04).

PCH should be only loosely coupled to this issue. First, you should modernize the CMake code without new features and take a look at what is needed for a later PCH integration. This way you can use 3.15 without problems, which makes more sense if that is what ships with Ubuntu 20.04. Later, when PCH is implemented and useful, we can bump to 3.16.
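
For reference, the 3.16 feature in question is target_precompile_headers; a minimal sketch of how it could be used for an alpaka test or example target (target and header names chosen for illustration):

# Requires CMake >= 3.16.
add_executable(someTest someTest.cpp)
target_link_libraries(someTest PRIVATE alpaka)
# Precompile the (expensive) alpaka umbrella header once for this target.
target_precompile_headers(someTest PRIVATE <alpaka/alpaka.hpp>)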

BenjaminW3 commented 4 years ago

I do not think that there is productive use on Windows x86. Our CI also only builds and executes Windows x64. However, some local development might happen with 32-bit builds.

tdd11235813 commented 4 years ago

I am not sure how this will work. In some cases we have interdependencies between the enabled backends. If we enable the CUDA backend together with the OpenMP backends we have to do some special magic with the compiler flags.

One could separate discovery and target configuration.

# ... discover compiler setup first ...
include(cmake/CudaTargets.cmake)
if(TARGET cuda::cuda AND ALPAKA_ACC_GPU_CUDA_ENABLE)
    # regular targets may not contain "::", so define a plain target plus an alias
    add_library(alpaka_cuda INTERFACE)
    add_library(alpaka::cuda ALIAS alpaka_cuda)
endif()

# ... after all targets have been defined ...
if(TARGET alpaka::cuda AND NOT TARGET alpaka::openmp)
    # ... configure flags ...
endif()

# configure common things (could depend on targets too) (just an example)
target_link_libraries(alpaka_common INTERFACE
    alpaka_version
    Boost::system
    Boost::program_options
    ${CMAKE_DL_LIBS}
    # if the CUDA target exists, add the OpenMP flags
    $<$<TARGET_EXISTS:alpaka::cuda>:${OpenMP_CXX_FLAGS}>
    )

# combine
add_executable(${TARGET} ${SRC})
target_link_libraries(${TARGET} PRIVATE alpaka::common alpaka::cuda)
ax3l commented 4 years ago

Good news everyone, CMake 3.17+ will honour the SYSTEM property for include dependencies when compiling with NVCC: https://gitlab.kitware.com/cmake/cmake/merge_requests/4317

Significantly fewer (Boost) warnings when compiling apps for CUDA!

j-stephan commented 4 years ago

It looks like Ubuntu 20.04 will get CMake 3.15 by default, so we may want to support this version for some time as this will be the upcoming LTS release.

Ubuntu 20.04 switched to CMake 3.16:

https://packages.ubuntu.com/focal/cmake

This means we could aim for PCH next.

j-stephan commented 4 years ago

By the way, do we still require the librt dependency? clock_gettime was merged into glibc in 2012.

Do we have users on ancient Red Hat systems? RHEL 7 and 8 wouldn't be affected by this, only RHEL 5 (retired, critical security updates will be stopped after 2020) and 6. RHEL 6 will be retired in November 2020 but will receive critical security updates until 2024 (if paid for). This means that variants like CentOS and Scientific Linux are rapidly approaching their EOL date.

BenjaminW3 commented 4 years ago

CMake 3.16 and PCH sounds reasonable.

I remember someone having some issues without librt some time ago, but I am not sure. We should look into the history of the CMake changes.

j-stephan commented 4 years ago

Okay, librt was first introduced with commit 8d00bdb and activated by default with commit e8df877. Apparently this was originally a fix for clang 3.7 (which we no longer support) and was later ported back to gcc, too. I assume the latter happened to support the RHEL derivatives.

I vote for removing librt in a separate PR and checking if anything breaks in the CI.

ax3l commented 4 years ago

Sounds ok. Labs that I am aware of using RHEL 6 will retire it by summer this year, yet manylinux2010 is derived from CentOS 6: https://www.python.org/dev/peps/pep-0571/ Anyway, manylinux2014 is around the corner: https://www.python.org/dev/peps/pep-0599/

j-stephan commented 4 years ago

Full support for clang-cuda has just been merged into CMake:

https://gitlab.kitware.com/cmake/cmake/-/merge_requests/4442

Finally!

BenjaminW3 commented 4 years ago

Now let's hope for a release very soon!

BenjaminW3 commented 4 years ago

As soon as I can get a nightly build and some spare time I will try this out.

I do not see a way forward supporting the old FindCUDA together with the new CMake CUDA language support so the change will probably completely remove the FindCUDA way. This will result in losing support for the older CMake releases. Therefore, we should do an alpaka release before this change is applied.

SimeonEhrig commented 4 years ago

Full support for clang-cuda has just been merged into CMake:

https://gitlab.kitware.com/cmake/cmake/-/merge_requests/4442

Finally!

Just as a side note: near the end of the discussion there is also a question about HIP support, and the answer is that it should be really similar to the Clang+CUDA support.

psychocoderHPC commented 4 years ago

I already used plain clang for HIP as a prototype, with hard-coded paths to the libs: https://github.com/psychocoderHPC/picongpu/commit/be71be2a8e2233dc97abe98358ecbf0a21b97220 This branch was for CRAY, to test PIConGPU without hipcc.

Note: FindHIP is currently required to use HIP-nvcc. A side effect of changing to the native compiler is that we would either lose HIP-nvcc or have to keep maintaining the FindHIP part in CMake to retain HIP-nvcc support.

j-stephan commented 4 years ago

Note: FindHIP is currently required to use HIP-nvcc. A side effect of changing to the native compiler is that we would either lose HIP-nvcc or have to keep maintaining the FindHIP part in CMake to retain HIP-nvcc support.

For NVIDIA GPUs: Is there any benefit to using alpaka + HIP over alpaka + CUDA?

j-stephan commented 4 years ago

Since CMake 3.18 is now available: Are we okay with transitioning to CMake 3.18 (more or less immediately)?

BenjaminW3 commented 4 years ago

I have no problem with increasing the minimum supported version to 3.18 if it delivers us valuable new features. Before we do this, we should see if the native CUDA support is already good enough for us.

psychocoderHPC commented 4 years ago

Note: FindHIP is currently required to use HIP-nvcc. A side effect of changing to the native compiler is that we would either lose HIP-nvcc or have to keep maintaining the FindHIP part in CMake to retain HIP-nvcc support.

For NVIDIA GPUs: Is there any benefit to using alpaka + HIP over alpaka + CUDA?

CUDA support via HIP is nice as a showcase that HIP+CUDA does not provide 100% of the native CUDA performance. The support should not be removed because it is required for our CAAR project.

j-stephan commented 4 years ago

Before we do this, we should see if the native CUDA support is already good enough for us.

I'll open a draft PR so we can look at the resulting CMake code.

The support should not be removed because it is required for our CAAR project.

Alright. Hopefully someone will implement native HIP support for CMake soon.

SimeonEhrig commented 3 years ago

GitLab CI introduces macOS runners: https://about.gitlab.com/blog/2021/08/23/build-cloud-for-macos-beta/ Sorry, wrong issue :sweat_smile:

j-stephan commented 2 years ago

Thanks for the necromancy, @SimeonEhrig!

I just looked at the original comment again.

  1. We are still missing support for precompiled headers. I assume we still want this.
  2. I originally had the idea to introduce back-end-specific targets. I believe we might be able to tackle this soon because alpaka_add_{executable,library} are much less complex than they once were. The only thing still done by these functions is to set the source file properties for a given target. We should investigate whether this can be solved by setting the LINKER_LANGUAGE property on the alpaka target.

SimeonEhrig commented 2 years ago

2. I originally had the idea to introduce back-end-specific targets. I believe we might be able to tackle this soon because `alpaka_add_{executable,library}` are much less complex than they once were. The only thing still done by these functions is to set the source file properties for a given target. We should investigate whether this can be solved by setting the `LINKER_LANGUAGE` property on the alpaka target.

For the user interface, do you mean something like:

add_executable(testExe main.cpp)
target_link_libraries(testExe PUBLIC alpaka::CudaACC)
j-stephan commented 2 years ago

Yes. So a user employing both the CUDA and the OpenMP back-ends will do something similar to the following:

add_executable(testExe main.cpp)
target_link_libraries(testExe PRIVATE alpaka::cuda alpaka::omp2)

If the LINKER_LANGUAGE property works as I expect, these targets will look like this:

set_target_properties(cuda PROPERTIES LINKER_LANGUAGE CUDA)
set_target_properties(omp2 PROPERTIES LINKER_LANGUAGE CXX)

This should hopefully be enough to tell CMake that we want to compile the user's source files in CUDA mode.

SimeonEhrig commented 2 years ago

Okay, but I'm not sure whether it will work if one target is linked against two different back-ends. But as we already discussed offline, I think the behavior will be similar to the current CMake: it depends on the compiler whether e.g. OpenMP and CUDA work together. But we also gain a new ability:

add_executable(testExe main.cpp)

add_library(cudaKernels cudaKernels.cpp)
target_link_libraries(cudaKernels PRIVATE alpaka::cuda)

add_library(omp2Kernels omp2Kernels.cpp)
target_link_libraries(omp2Kernels PRIVATE alpaka::omp2)

target_link_libraries(testExe PRIVATE cudaKernels omp2Kernels)

This allows us, for example, to compile the code with the Clang compiler for both host and device. At the moment this does not work, because clang++ -fopenmp -xcuda does not work.