rapidsai / kvikio

KvikIO - High Performance File IO
https://docs.rapids.ai/api/kvikio/stable/
Apache License 2.0
154 stars 57 forks

kvikio still segfaults on program termination #497

Open EricKern opened 1 week ago

EricKern commented 1 week ago

Hi everyone,

I'm getting a segfault when my Python script terminates. This only happens when KvikIO is used.

Reproducer

mamba env create -f img2tensor_kvikio.yaml && mamba clean -afy

// img2tensor_kvikio.yaml
name: img2tensor
channels:
  - pytorch
  - nvidia
  - rapidsai
  - conda-forge
dependencies:
  - notebook
  - tifffile
  - python=3.11
  - pytorch
  - pytorch-cuda=12.4
  - kvikio

bug.py

import kvikio

file_name = 'file0.txt'

fd = kvikio.CuFile(file_name, "w")
fd.close()

I'm running in a Kubernetes environment. We use the NVIDIA open kernel driver 535.183.01.

I assumed #462 had fixed the issue, but it seems there is more to it.

You can find the concretized environment here: exported_img2tensor_kvikio.txt

It uses kvikio 24.10, which should include the previously mentioned PR.

jakirkham commented 1 week ago

Could you please slim the environment further like so and retry?

# filename: kvikio2410_cuda122.yaml
name: kvikio2410_cuda122
channels:
  - rapidsai
  - conda-forge
dependencies:
  - cuda-version=12.2
  - python=3.11
  - kvikio=24.10

Asking because there are mismatched CUDA versions in the reproducing environment, plus some extra bits that appear unused in the example. So we would like to simplify further to rule out other potential issues.

EricKern commented 1 week ago

Unfortunately it still segfaults. I again attached the concretized dependency list: kvikio2410_cuda122.txt.

The CUDA version mismatch seems resolved, and the cufile.log looks fine to me. I'm using a MIG slice of an A100, and writing to a Weka filesystem works fine. It only segfaults on program termination.

wence- commented 1 week ago

Can you show a backtrace from the segfault, e.g. with gdb?

gdb --args python bug.py
(gdb) run
(gdb) backtrace full

EricKern commented 1 week ago

(gdb) run
Starting program: /opt/conda/envs/kvikio2410_cuda122/bin/python bug.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff47eb700 (LWP 2675)]
[New Thread 0x7ffff3fea700 (LWP 2676)]
[New Thread 0x7fffeb7e9700 (LWP 2677)]
[New Thread 0x7fffdaae0700 (LWP 2678)]
[New Thread 0x7fffcdfff700 (LWP 2679)]
[New Thread 0x7fffcd21d700 (LWP 2691)]
[New Thread 0x7fffcca1c700 (LWP 2692)]
[New Thread 0x7fffc7fff700 (LWP 2693)]
[New Thread 0x7fffc77fe700 (LWP 2694)]
[New Thread 0x7fffc6ffd700 (LWP 2695)]
[New Thread 0x7fffc67fc700 (LWP 2696)]
[New Thread 0x7fffc5ffb700 (LWP 2697)]
[New Thread 0x7fffc57fa700 (LWP 2698)]
[Thread 0x7fffdaae0700 (LWP 2678) exited]
[Thread 0x7fffcd21d700 (LWP 2691) exited]
[Thread 0x7fffc57fa700 (LWP 2698) exited]
[Thread 0x7fffc5ffb700 (LWP 2697) exited]
[Thread 0x7fffc6ffd700 (LWP 2695) exited]
[Thread 0x7fffc77fe700 (LWP 2694) exited]
[Thread 0x7fffc7fff700 (LWP 2693) exited]
[Thread 0x7fffcca1c700 (LWP 2692) exited]
[Thread 0x7fffeb7e9700 (LWP 2677) exited]
[Thread 0x7ffff3fea700 (LWP 2676) exited]
[Thread 0x7ffff47eb700 (LWP 2675) exited]

Thread 1 "python" received signal SIGSEGV, Segmentation fault.
std::basic_streambuf<char, std::char_traits<char> >::xsputn (this=0x7fffffffd7a8, __s=0x5555563aa252 "", __n=93824998875808)
    at /home/conda/feedstock_root/build_artifacts/gcc_compilers_1724798733686/work/build/x86_64-conda-linux-gnu/libstdc++-v3/include/bits/streambuf.tcc:90
90      /home/conda/feedstock_root/build_artifacts/gcc_compilers_1724798733686/work/build/x86_64-conda-linux-gnu/libstdc++-v3/include/bits/streambuf.tcc: No such file or directory.
(gdb) backtrace full
#0  std::basic_streambuf<char, std::char_traits<char> >::xsputn (this=0x7fffffffd7a8, __s=0x5555563aa252 "", __n=93824998875808)
    at /home/conda/feedstock_root/build_artifacts/gcc_compilers_1724798733686/work/build/x86_64-conda-linux-gnu/libstdc++-v3/include/bits/streambuf.tcc:90
        __remaining = <optimized out>
        __len = <optimized out>
        __buf_len = 8388607
        __ret = <optimized out>
#1  0x00007ffff78c169d in std::__ostream_write<char, std::char_traits<char> > (__out=..., __s=<optimized out>, __n=93824998875808)
    at /home/conda/feedstock_root/build_artifacts/gcc_compilers_1724798733686/work/build/x86_64-conda-linux-gnu/libstdc++-v3/include/bits/basic_ios.h:325
        __put = <optimized out>
#2  0x00007ffff78c1774 in std::__ostream_insert<char, std::char_traits<char> > (__out=..., __s=0x555555baa298 "Read", __n=93824998875808)
    at /home/conda/feedstock_root/build_artifacts/gcc_compilers_1724798733686/work/build/x86_64-conda-linux-gnu/libstdc++-v3/include/bits/basic_ios.h:184
        __w = <error reading variable __w (dwarf2_find_location_expression: Corrupted DWARF expression.)>
        __cerb = {_M_ok = true, _M_os = @0x7fffffffd7a0}
#3  0x00007fffda13044f in ?? () from /opt/conda/envs/kvikio2410_cuda122/lib/python3.11/site-packages/kvikio/_lib/../../../../libcufile.so.0
No symbol table info available.
#4  0x00007fffda13206b in ?? () from /opt/conda/envs/kvikio2410_cuda122/lib/python3.11/site-packages/kvikio/_lib/../../../../libcufile.so.0
No symbol table info available.
#5  0x00007fffda080c82 in ?? () from /opt/conda/envs/kvikio2410_cuda122/lib/python3.11/site-packages/kvikio/_lib/../../../../libcufile.so.0
No symbol table info available.
#6  0x00007ffff7fe0f6b in _dl_fini () at dl-fini.c:138
        array = 0x7fffda2bc1d0
        i = <optimized out>
        l = 0x555555efa720
        maps = 0x7fffffffdb80
        i = <optimized out>
        l = <optimized out>
        nmaps = <optimized out>
        nloaded = <optimized out>
        ns = 0
        do_audit = <optimized out>
        __PRETTY_FUNCTION__ = "_dl_fini"
#7  0x00007ffff7c9a8a7 in __run_exit_handlers (status=0, listp=0x7ffff7e40718 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true, run_dtors=run_dtors@entry=true) at exit.c:108
        atfct = <optimized out>
        onfct = <optimized out>
        cxafct = <optimized out>
        f = <optimized out>
        new_exitfn_called = 262
        cur = 0x7ffff7e41ca0 <initial>
#8  0x00007ffff7c9aa60 in __GI_exit (status=<optimized out>) at exit.c:139
No locals.
#9  0x00007ffff7c7808a in __libc_start_main (main=0x5555557dea20 <main>, argc=2, argv=0x7fffffffdec8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffdeb8) at ../csu/libc-start.c:342
        result = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {93824995523264, -3934155394888934001, 93824994896209, 140737488346816, 0, 0, 3934155393885101455, 3934172503229554063}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x2, 
              0x7fffffffdec8}, data = {prev = 0x0, cleanup = 0x0, canceltype = 2}}}
        not_first_call = <optimized out>
#10 0x00005555557de97a in _start () at /usr/local/src/conda/python-3.11.10/Parser/parser.c:33931
No symbol table info available.
wence- commented 1 week ago

OK, thanks. Something in cuFile is running below main. We'll try to reproduce locally, and perhaps make a debug build so we can get a bit more information.

EricKern commented 1 week ago

Thanks a lot for looking into this. If there is anything I can do to help you reproduce the error, please let me know.

madsbk commented 1 week ago

@EricKern, what if you run with KVIKIO_COMPAT_MODE=ON ?
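
For reference, a minimal sketch of setting this from Python, under the assumption that the variable is read when KvikIO initializes and therefore must be set before the import:

```python
import os

# Force KvikIO's compatibility (POSIX) mode. Assumption: the variable is
# read at library initialization, so it must be set before importing kvikio.
os.environ["KVIKIO_COMPAT_MODE"] = "ON"

# import kvikio  # in the reproducer, import only *after* setting the variable

print(os.environ["KVIKIO_COMPAT_MODE"])  # → ON
```

Equivalently, from the shell: `KVIKIO_COMPAT_MODE=ON python bug.py`.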

jakirkham commented 1 week ago

JFYI, to get a debug build of Python, add the following channel above conda-forge in the channels list: conda-forge/label/python_debug

EricKern commented 1 week ago

@EricKern, what if you run with KVIKIO_COMPAT_MODE=ON ?

With compat mode on there is no segmentation fault. If I set it to "off", it appears again.

JFYI, to get a debug build of python add the following to channels above conda-forge: conda-forge/label/python_debug

Do you think this might produce a better backtrace from the crash, or is there anything else I could do with a debug build of Python?

jakirkham commented 1 week ago

Lawrence mentioned doing a debug build, so I wanted to share that resource.

If the segfault happens somewhere in KvikIO, it may help. If it happens in cuFile, we likely won't learn much.

wence- commented 1 week ago

If Mads can't repro next week, I guess I'll try and figure out how to set up cufile/gds on my workstation and do some spelunking

madsbk commented 4 days ago

If Mads can't repro next week, I guess I'll try and figure out how to set up cufile/gds on my workstation and do some spelunking

I will take a look tomorrow

madsbk commented 3 days ago

I am not able to reproduce it; the conda environment works fine for me :/ I have asked the cuFile team for input.

kingcrimsontianyu commented 3 days ago

cuDF is seeing the same issue (https://github.com/rapidsai/cudf/issues/17121) arising from cuFile (there the cuFile API is accessed directly from within cuDF, not through KvikIO).

Btw, when cuDF did use KvikIO to perform GDS I/O, we observed that the segfault manifested when KVIKIO_NTHREADS was set to 8, not the default 1. But I think this is a red herring. At the time of the crash, the backtrace points to CUDA calls made by cuFile after main returns. This should be cuFile doing its implicit driver close.

Also, adding cuFileDriverClose() before main returns seems to prevent the segfault in cuDF's benchmark.

EricKern commented 1 day ago

@madsbk May I ask whether you used a MIG slice or a full GPU in your tests? I'm currently not able to use a full A100, but as soon as it's available again I want to try to reproduce the segfault on a full A100. Before using KvikIO I successfully used the cuFile C++ API without a problem, even with a MIG slice.

madsbk commented 1 day ago

I am running on a full GPU.

https://github.com/rapidsai/kvikio/pull/514 implements Python bindings for cufileDriverOpen() and cufileDriverClose(). The hope is that we can prevent this issue in Python by calling cufileDriverClose() at module exit.