rapidsai / raft

RAFT contains fundamental, widely-used algorithms and primitives for machine learning and information retrieval. The algorithms are CUDA-accelerated and form building blocks for writing high-performance applications more easily.
https://docs.rapids.ai/api/raft/stable/
Apache License 2.0

[FEA] Investigating build times and number of specializations #1201

Open cjnolet opened 1 year ago

cjnolet commented 1 year ago

While investigating some causes of our slow build times, @ahendriksen has provided some statistics (and scripts) to compute:

1. a ranking of which device functions in a CUDA-compiled binary have the most specializations
2. the total sizes of the functions from 1, to help determine the total impact.

Allard provided a script to extract the information from a binary using cuobjdump:


```bash
# Get binary sizes per kernel
# See: <https://nvidia.slack.com/archives/C02DP33ERFS/p1661468240691609>

mkdir tmp_objdump
cp libraft_distance.so tmp_objdump/
cd tmp_objdump/
cuobjdump --extract-elf all libraft_distance.so

rm -f kernel_sizes.txt

# Write all kernel sizes to a file
for f in libraft_distance.*.sm_70.cubin ; do
    echo ${f};
    nm --print-size --size-sort --radix=d ${f} | sed 's/$_/_/' | sed 's/$.*//' | cu++filt >> kernel_sizes.txt ;
done

# Remove template arguments and remove kernel addresses. Output looks like:
# 0000000000000064 W void raft::matrix::detail::select::warpsort::block_kernel
cat kernel_sizes.txt | sed 's/^[0-9]*//' | sed 's/<.*//' | sort -h > kernel_sizes_filtered.txt
```

and a script to interpret the results:

```python
import numpy as np
import pandas as pd
from pathlib import Path

txt = Path('kernel_sizes_filtered.txt').read_text()

# Split each line into (size, symbol type, demangled kernel name)
split_lines = [l.strip().replace(' void', '').split(' ') for l in txt.splitlines()]

# Sanity check: report any line that did not split into exactly three fields
for l in split_lines:
    if len(l) != 3:
        print(l)

sizes_str, indicators, names = zip(*split_lines)

sizes = np.array(list(map(int, sizes_str)))
sizes_MB = sizes / 1e6

df = pd.DataFrame(dict(size_MB=sizes_MB, name=names, indicator=indicators))

# Top 20 kernels by total size across all their specializations
top20 = (df[df.indicator == 'T']
         .groupby('name')['size_MB']
         .agg(['count', 'mean', 'sum'])
         .sort_values(by='sum', ascending=False)[:20])

print(top20)
```

[screenshot: table of the top-20 kernels by specialization count, mean size, and total size in MB]

It's clear that ivf_pq::ivfpq_compute_similarity_kernel has the most specializations and is also the largest contributor to binary size. It's also likely that this kernel is contributing significantly to the compile times of libraft-distance.

cc @tfeher @achirkin

The second largest kernel looks like pairwiseDistanceMatKernel. We should consider addressing some of the items at the top of this list to improve compile times. Now that CI is compiling with CUDA 11.8, it compiles for 6 different architectures (sm_60, sm_70, sm_75, sm_80, sm_86, sm_90).

achirkin commented 1 year ago

Definitely, this is a well-known problem with ivfpq_compute_similarity_kernel! At some point during integration, I moved PqBits down to a runtime parameter, but that made the score-computation part of the kernel much heavier on the ALU and resulted in a 10-20% drop in search QPS. Hence we decided to bring it back into the template parameters. It can take values from 4 to 8, resulting in 5x more template instantiations.
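To illustrate why this multiplies the instance count, here is a minimal sketch (the function names are hypothetical, not RAFT's actual code) of dispatching a runtime `pq_bits` value onto a compile-time `PqBits` parameter: every combination of the kernel's remaining template parameters gets compiled once per possible `PqBits` value.

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical stand-in for the templated similarity kernel launcher.
template <uint32_t PqBits>
void launch_similarity_kernel(/* arguments omitted */)
{
  // ... instantiate and launch the kernel specialized for PqBits ...
}

// Map the runtime pq_bits value onto the compile-time template parameter.
// Because pq_bits may be 4..8, five instantiations are generated for every
// combination of the remaining template parameters.
inline void dispatch_pq_bits(uint32_t pq_bits)
{
  switch (pq_bits) {
    case 4: launch_similarity_kernel<4>(); break;
    case 5: launch_similarity_kernel<5>(); break;
    case 6: launch_similarity_kernel<6>(); break;
    case 7: launch_similarity_kernel<7>(); break;
    case 8: launch_similarity_kernel<8>(); break;
    default: throw std::invalid_argument("pq_bits must be in [4, 8]");
  }
}
```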

What we can potentially improve is the specializations of this kernel (which are done via the ivfpq_compute_similarity helper struct). On the one hand, they speed up compile times by allowing different instances to be compiled in parallel. On the other hand, the current implementation produces some instances that do not make sense. For example, one could argue that it makes little sense for LutT (float8/float16/float32 possible) to be larger than OutT (float16/float32 possible), so we can avoid compiling the (float32, float16) combination. The EnableSMemLut boolean in the current selection logic implies PrecompBaseDiff, so we can drop one of the four combinations of the two booleans. With a bit of code tweaking, we can also fix IdxT = uint32_t when the fused sort is disabled (Capacity == 0).
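A minimal sketch of what such pruning could look like, assuming a single launcher template that guards the combinations ruled out above with `if constexpr` (the names and exact constraints are illustrative, not the actual ivfpq_compute_similarity helper):

```cpp
#include <cstdint>
#include <type_traits>

template <typename OutT, typename LutT, bool EnableSMemLut, bool PrecompBaseDiff,
          typename IdxT, int Capacity>
void launch_compute_similarity(/* arguments omitted */)
{
  // LutT wider than OutT is never selected at runtime.
  constexpr bool lut_fits = sizeof(LutT) <= sizeof(OutT);
  // EnableSMemLut implies PrecompBaseDiff, so skip the inconsistent combination.
  constexpr bool bools_consistent = !EnableSMemLut || PrecompBaseDiff;
  // When the fused sort is disabled (Capacity == 0), only IdxT = uint32_t is used.
  constexpr bool idx_ok = (Capacity != 0) || std::is_same_v<IdxT, uint32_t>;

  if constexpr (lut_fits && bools_consistent && idx_ok) {
    // ... instantiate and launch the real kernel here ...
  } else {
    // Unreachable by the runtime selection logic: no kernel body is
    // instantiated for this combination, so it adds almost nothing to
    // compile time or binary size.
  }
}
```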

ahendriksen commented 1 year ago

> For example, one could argue that it makes little sense for LutT (float8/float16/float32 possible) to be larger than OutT (float16/float32 possible), so we can avoid compiling the (float32, float16) combination.

This sounds like a good idea. Perhaps we can make it a run-time error? Something like:

```cpp
if constexpr (sizeof(OutT) < sizeof(LutT)) {
  RAFT_EXPECTS(false, "Size of LutT may not be larger than size of OutT");
} else {
  // launch kernel
}
```

achirkin commented 1 year ago

Sure, that's not a problem: we can restrict the instantiation in the kernel selection logic (OutT is the ScoreT there) and throw the error right there, or along with the other runtime checks. This will only reduce the number of instances by 1/6, though.

ahendriksen commented 1 year ago

For the pairwise-distance kernels, I think the following would help:

  1. Consider removing some specializations for data alignments that confer little performance benefit (e.g. VecLen == 2 for floats).
  2. Standardize on a single index type. We now have specializations for both int and uint32_t. That seems excessive.

For measurements, I would propose we also save the ninja.log file as a downloadable build artifact (I hope this is possible with GitHub Actions). This way, we can run ninjatracing easily to identify the bottlenecks.

As a general coding practice, maybe the following could help:

  1. Shrink the "surface area" of the template specializations. Often we have multiple template arguments, each of which is only used in a specific portion of the code. We often use the following pattern:

     ```cpp
     template <typename T1, typename T2, typename T3>
     struct foo {
       T2 do1(T1 x) { /* .. */ }
       T3 do2(T2 y) { /* .. */ }
       T3 call(T1 x)
       {
         auto y = do1(x);
         auto z = do2(y);
         return z;
       }
     };
     ```

     We could use this pattern instead:

     ```cpp
     template <typename T1, typename T2>
     T2 do1(T1 x) { /* .. */ }

     template <typename T2, typename T3>
     T3 do2(T2 y) { /* .. */ }

     template <typename T1, typename T2, typename T3>
     T3 call(T1 x)
     {
       auto y = do1<T1, T2>(x);
       auto z = do2<T2, T3>(y);
       return z;
     }
     ```


This will reduce the number of instantiations. For instance, when the types `Ti` can each be `float` or `double`, we will have 8 instantiations of `foo::do1` (one for each combination of T1, T2, T3), but only 4 instantiations of the free `do1` (one for each combination of T1, T2). I hope that this can reduce build times, but I have not measured it.
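To make that count concrete, here is a small self-contained sketch (types chosen purely for illustration) of the explicit instantiations each variant ends up generating:

```cpp
// Struct version: do1 is a member, so every used combination of (T1, T2, T3)
// stamps out its own copy, even though do1 never touches T3.
template <typename T1, typename T2, typename T3>
struct foo {
  T2 do1(T1 x) { return static_cast<T2>(x); }
  T3 do2(T2 y) { return static_cast<T3>(y); }
};

// Free-function version: do1 depends only on the types it actually uses.
template <typename T1, typename T2>
T2 do1(T1 x) { return static_cast<T2>(x); }

// With T1, T2, T3 each ranging over {float, double}:
template struct foo<float, float, float>;   // 1 of 2*2*2 = 8 combinations,
template struct foo<float, float, double>;  // each carrying identical do1 code
// ... 6 more combinations ...

template float  do1<float, float>(float);   // only 2*2 = 4 combinations needed
template double do1<float, double>(float);
template float  do1<double, float>(double);
template double do1<double, double>(double);
```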

I am guessing that @Nyrio has some input as well!

ahendriksen commented 1 year ago

Hi, I am sharing my progress in this PR: https://github.com/rapidsai/raft/pull/1228

I have so far reduced compile times by 22% and the number of kernels by more than 50% with two 2-line changes in the pairwise distance code.

ahendriksen commented 1 year ago

Another measurement technique we can use is to add the `--time nvcc_compile.log` flag to the nvcc command line. This will write the time taken by each intermediate compilation stage of nvcc to the file in CSV format. We can make this file a downloadable artifact for further analysis, or create a graph/plot/table directly in CI.

```
--time <file name>                              (-time)
        Generate a comma separated value table with the time taken by each compilation
        phase, and append it at the end of the file given as the option argument.
        If the file is empty, the column headings are generated in the first row
        of the table. If the file name is '-', the timing data is generated in stdout.
```

tfeher commented 1 year ago

For the IVF-PQ compute similarity kernels, it does not add much value to have both int64 and uint64 specializations. A quick fix would be to just remove the int64 version from all instantiations (and tests and benchmarks): https://github.com/rapidsai/raft/blob/aa14ed49818417fb6569ac1e31f3c8861831d137/cpp/src/distance/neighbors/specializations/detail/ivfpq_compute_similarity_float_fast.cu#L23-L25

[update] Unfortunately, we have a public interface for both the uint64 and int64 specializations.

@achirkin mentioned that it might be possible to keep only uint32 in this internal kernel (because we are working with smaller chunks of the dataset). He is investigating this option.

cjnolet commented 1 year ago

Posting a ninjatracing log here to provide a breakdown of the compile times on my workstation. Just to focus on the things that other projects depend directly upon, this trace only includes source files compiled into the shared libs and not tests or benchmarks.

Screenshot from 2023-02-02 20-18-11

One good thing to note is that most of the source files which are bottlenecks are in a specializations directory. Any source files compiling code for raft::runtime should be directly calling a specialization. neighbors/ivfpq_search.cu currently violates this, so we should look into that for sure (as outlined in https://github.com/rapidsai/raft/issues/1194), because it 1) needs to be split into separate compilation units and 2) needs to use consistent template types so it can use the existing specializations.

Other offenders (not necessarily in order) in addition to ivfpq_search specializations:

  1. ball_cover.cu
  2. refine_*.cu
  3. knn.cu (hoping this might be largely fixed once @benfred's new brute-force changes are merged)
  4. lpunexpanded*.cu
  5. cosine.cu
  6. jensen_shannon.cu
cjnolet commented 1 year ago

Here's the timeline for the end-to-end build w/o #1232:

Screenshot from 2023-02-03 07-09-18

And with #1232:

Screenshot from 2023-02-03 07-09-06

Unfortunately, I need to revert a couple of changes for the release because the build time in CI has caused some problems for users. For 23.04, we should focus on getting those changes back while also introducing more changes like #1230, #1228, and #1220 to lower the build time further. It would be great if we could get the end-to-end build time under 20 minutes again on a single architecture. Now that we're building for 6 architectures with CUDA 11.8, we should strive to keep the initial cpp-build times in CI to just over an hour if possible.

ahendriksen commented 1 year ago

This is some additional analysis of the ninja log traces that @cjnolet shared. It shows the compilation units with compile times of more than one minute and compares them before and after the revert in #1232. The primary impact of the revert seems to be on lp_unexpanded_float_float_float_uint32.cu and canberra.cu.

[chart: per-file compile times, before vs. after the revert in #1232]

cjnolet commented 1 year ago

@ahendriksen that analysis is great! Strangely, I wasn't seeing that issue at all when compiling exclusively for sm_70. It appears to be something that starts with sm_80 and above? But out of the 6 architectures that compile in CI, I guess at least 4 of them are sm_80 and above, so that might explain why that file alone is taking 4+ hours.

ahendriksen commented 1 year ago

I did see a 42-minute compile time on lp_unexpanded with sm_70 (pre-revert). I would be surprised if it were architecture-specific.

cjnolet commented 1 year ago

I opened https://github.com/rapidsai/raft/pull/1235 to remove all the uint32_t specializations.

achirkin commented 1 year ago

Another idea for IVF-PQ. At the moment, we have lut_dtype and internal_distance_type as orthogonal parameters, which gives 8 different variants (lut_dtype = {fp8_signed, fp8_unsigned, fp16, fp32}, internal_distance_type = {fp16, fp32}). I suggested above removing the variants where sizeof(lut_dtype) > sizeof(internal_distance_type), which is only one combination out of eight (fp32, fp16). What if, instead, we introduced just a few configurations that make the most sense in terms of the performance/recall trade-off? I.e., remove these two search parameters in favor of something like:

```cpp
enum class compute_optimization {
  /** Represent the internal scores and the lookup table as float32. */
  LEVEL_0,
  /** Represent the internal scores as float32 and the lookup table as float16. */
  LEVEL_1,
  /** Represent the internal scores as float16 and the lookup table as float8. */
  LEVEL_2
};
```

This should give us another 60% reduction in the number of instances.
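A rough sketch of how the dispatch could look with such a parameter (the fp8 type and function names below are placeholders, not RAFT's actual types): only the three (score, LUT) pairs named by the enum are ever instantiated, instead of the full 2 × 4 matrix.

```cpp
#include <cstdint>
#include <cuda_fp16.h>

// Placeholder for an 8-bit float LUT type; the real code would use RAFT's fp8 representation.
struct fp8_placeholder { uint8_t bits; };

enum class compute_optimization { LEVEL_0, LEVEL_1, LEVEL_2 };

// Hypothetical launcher templated on the internal score type and the LUT type.
template <typename ScoreT, typename LutT>
void launch_compute_similarity(/* arguments omitted */) { /* kernel launch */ }

inline void dispatch(compute_optimization level)
{
  switch (level) {
    case compute_optimization::LEVEL_0:
      launch_compute_similarity<float, float>();             // fp32 scores, fp32 LUT
      break;
    case compute_optimization::LEVEL_1:
      launch_compute_similarity<float, __half>();            // fp32 scores, fp16 LUT
      break;
    case compute_optimization::LEVEL_2:
      launch_compute_similarity<__half, fp8_placeholder>();  // fp16 scores, fp8 LUT
      break;
  }
}
```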

ahendriksen commented 1 year ago

On this branch, I have documented how to reduce the compile times of a specific translation unit (l1_float_float_float_int.cu) from 20s to 4.8s and potentially to 2.8s.

I think some of these techniques can be applied to RAFT.

ahendriksen commented 1 year ago

As a result of the investigation, I have opened an issue to consider removing dependency on spdlog: https://github.com/rapidsai/raft/issues/1300

ahendriksen commented 1 year ago

Following up on the investigation I linked above, I have opened PR #1307, which reduces the compilation times of the pairwise distance instantiations (specializations) by up to 5x.

It requires quite a bit of rearchitecting of the code, and I am not sure if that is acceptable, but it shows that big wins are possible.

@teju85 , @tfeher for visibility.

cjnolet commented 1 year ago

@ahendriksen I've looked through your PR with the refactors to the pairwise distance impl details, and I think it looks great so far from my side. I've held off on submitting formal feedback because I noticed a couple of things still marked TODO in the code. Let me know if it's ready and I can review it more closely.

ahendriksen commented 1 year ago

The PR is ready for review.