NVIDIA / cutlass

CUDA Templates for Linear Algebra Subroutines

[FEA] CUTLASS should ensure all its symbols are hidden from shared object libraries #1027

Open jrhemstad opened 1 year ago

jrhemstad commented 1 year ago

Is your feature request related to a problem? Please describe.

As a user of CUTLASS, I would like to build a shared object library, libA.so, that internally uses CUTLASS function templates, including __global__ function templates.

Today, CUTLASS does nothing to control the visibility of its __global__ function templates or any of its other function templates; by default, their instantiations are weak symbols with default (public) visibility. In short, this means that if I link two dynamic libraries, A.so and B.so, into my application and both contain identical instantiations of a CUTLASS template, the dynamic linker will bind every reference to just one of the two instantiations and effectively discard the other. This can lead to disastrous and insidious problems like spurious silent failures.

This issue exists for any header-only C++ template library, but it is particularly bad for CUDA C++ libraries that ship __global__ function templates. Consider the following trivial example of one of the many ways things can go wrong.

The following code has two TUs:

Each TU has a single function (volta() or pascal(), respectively), and this function queries and prints the ptxVersion of kernel<void> using cudaFuncGetAttributes.

These TUs are linked into a program that determines the compute capability of device 0 and invokes volta() or pascal() accordingly.

One would expect that invoking volta() would always print 70 and invoking pascal() would always print 60.

However, this is not the case. As described above, the kernel template has weak linkage, so when the volta.o and pascal.o TUs are linked together, the linker selects one of the two instantiations of kernel<void> and discards the other.

The end result is that the program will print 60 or 70 depending on which instantiation the linker happened to pick.

// kernel.cuh
template <typename T>
__global__ void kernel(){}

// volta.cu
#include <cstdio>
#include "kernel.cuh"
void volta(){
   cudaFuncAttributes attrs;
   cudaFuncGetAttributes(&attrs, (void *)(kernel<void>));
   printf("%d\n", attrs.ptxVersion);
}

// pascal.cu
#include <cstdio>
#include "kernel.cuh"
void pascal(){
   cudaFuncAttributes attrs;
   cudaFuncGetAttributes(&attrs, (void *)(kernel<void>));
   printf("%d\n", attrs.ptxVersion);
}

// main.cpp
#include <cuda_runtime.h>
void volta();
void pascal();
int main(){
   int compute_capability;
   cudaDeviceGetAttribute(&compute_capability, cudaDevAttrComputeCapabilityMajor, 0);
   if(compute_capability >= 7)
      volta();
   else
      pascal();
}

nvcc -c -arch=sm_70 volta.cu
nvcc -c -arch=sm_60 pascal.cu
nvcc -c main.cpp
nvcc -o test volta.o pascal.o main.o
./test
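As a sanity check on the mechanism (and separate from the fix proposed below), the collision disappears in this toy example if kernel<void> is given internal linkage in each translation unit, because the two object files then carry distinct, non-mergeable copies of the kernel. A sketch, assuming whole-program compilation as in the commands above:

// kernel.cuh, modified for illustration only: the anonymous namespace gives each
// TU its own kernel<void> with internal linkage, so the linker no longer collapses
// the two definitions and volta()/pascal() each query the kernel that was actually
// compiled for their architecture.
namespace {
template <typename T>
__global__ void kernel(){}
}  // namespace

This is not a reasonable remedy for a library (it duplicates every kernel in every TU that includes the header), but it illustrates that the shared weak symbol is what produces the wrong answer.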

TL;DR: template instantiations are weak symbols with default visibility, so identical instantiations coming from different TUs or different shared libraries are collapsed into a single copy, and which copy survives is an arbitrary linker decision.

Describe the solution you'd like

Luckily the solution is quite simple. Every function template in CUTLASS (including __global__ function templates) should be annotated with __attribute__((visibility("hidden"))).

This keeps the symbol from being exported by any resulting dynamic library, so each library uses its own copy of the instantiation instead of sharing one across library boundaries.
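For illustration, a minimal sketch of what such an annotation could look like when wrapped in a macro; the name CUTLASS_VISIBILITY_HIDDEN is hypothetical and not an existing CUTLASS identifier, and non-GNU toolchains would need their own spelling:

// Hypothetical macro name, for illustration only.
#if defined(__GNUC__) || defined(__clang__)
#  define CUTLASS_VISIBILITY_HIDDEN __attribute__((visibility("hidden")))
#else
#  define CUTLASS_VISIBILITY_HIDDEN
#endif

// Applied to a __global__ function template: the instantiation is still a weak
// symbol inside its own library, but it is no longer exported from the shared
// object, so an identical instantiation in another .so cannot be bound to it.
template <typename T>
__global__ CUTLASS_VISIBILITY_HIDDEN void kernel() {}

Wrapping the attribute in a macro keeps the change mechanical to apply across the codebase and easy to disable on toolchains that do not support it.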

Additional Context

We've been bitten by this in Thrust/CUB several times over the years.

Like CUTLASS, Thrust/CUB allow users to customize the enclosing namespace in order to differentiate the symbols and avoid this problem. However, this solution is not robust. First, it requires every user to remember to customize the namespace. Second, users can customize the namespace correctly and still run into the same class of issues.
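For reference, the namespace-customization approach looks roughly like the following. The macro shown is modeled on Thrust/CUB's wrapped-namespace feature and is used purely for illustration; CUTLASS's own mechanism is not shown here, and the namespace name is made up:

// Library A defines its wrapper namespace before including the headers.
#define THRUST_CUB_WRAPPED_NAMESPACE libA_detail
#include <thrust/device_vector.h>
// Thrust's symbols now live under ::libA_detail::thrust, so an identical
// instantiation in a second shared library that chose a different wrapper
// namespace no longer shares a symbol with this one. The protection is lost
// if either library forgets the macro or if both happen to pick the same name.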

See:

github-actions[bot] commented 1 year ago

This issue has been labeled inactive-30d due to no recent activity in the past 30 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed. This issue will be labeled inactive-90d if there is no activity in the next 60 days.

github-actions[bot] commented 6 months ago

This issue has been labeled inactive-90d due to no recent activity in the past 90 days. Please close this issue if no further response or action is needed. Otherwise, please respond with a comment indicating any updates or changes to the original issue and/or confirm this issue still needs to be addressed.