FauxPilot - an open-source alternative to GitHub Copilot server

[FT][ERROR] CUDA runtime error: invalid device function #30

Closed leemgs closed 2 years ago

leemgs commented 2 years ago

I installed and ran FauxPilot on Ubuntu 18.04/Nvidia RTX 2080 (192.168.0.201) and Ubuntu 18.04/Nvidia Titan Xp (192.168.0.179). Then, from the Ubuntu environment on my laptop, I called the OpenAI API with the curl commands shown below. Unfortunately, the curl request sent to Ubuntu 18.04/Nvidia Titan Xp (192.168.0.179) throws an error. In summary, FauxPilot on Ubuntu 18.04/Nvidia Titan Xp produces the "CUDA runtime error: invalid device function" error message. Is the Nvidia Titan Xp perhaps not supported by FauxPilot?

The configuration file is as follows.

cat ./config.env
MODEL=codegen-2B-multi
NUM_GPUS=1
MODEL_DIR=/work/fauxpilot/models

Case 1: When I send an OpenAI API request to Ubuntu 18.04/Nvidia RTX 2080 (192.168.0.201), it works.

fauxpilot$ curl -s -H "Accept: application/json" -H "Content-type: application/json" -X POST -d '{"prompt":"def hello","max_tokens":16,"temperature":0.1,"stop":["\n\n"]}' http://192.168.0.201:5000/v1/engines/codegen/completions

{"id": "cmpl-eww3WHuWSjUMdfLb5tBfxVxRoJUIs", "model": "codegen", "object": "text_completion", "created": 1660749662, "choices": [{"text": "(self):\n        return \"Hello World!\"", "index": 0, "finish_reason": "stop", "logprobs": null}], "usage": 

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.103.01   Driver Version: 470.103.01   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
| 30%   29C    P8     3W / 225W |   6035MiB /  7982MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1198      G   /usr/lib/xorg/Xorg                  9MiB |
|    0   N/A  N/A      1403      G   /usr/bin/gnome-shell                3MiB |
|    0   N/A  N/A    768980      C   ...onserver/bin/tritonserver     6017MiB |
+-----------------------------------------------------------------------------+

Case 2: When I send an OpenAI API request to Ubuntu 18.04/Nvidia Titan Xp (192.168.0.179), it fails.

fauxpilot$ curl -s -H "Accept: application/json" -H "Content-type: application/json" -X POST -d '{"prompt":"def hello","max_tokens":16,"temperature":0.1,"stop":["\n\n"]}' http://192.168.0.179:5000/v1/engines/codegen/completions

<!doctype html>
<html lang=en>
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.19.01    Driver Version: 465.19.01    CUDA Version: 11.3     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA TITAN Xp     On   | 00000000:01:00.0 Off |                  N/A |
| 23%   39C    P2    61W / 250W |   5919MiB / 12194MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1592      G   /usr/lib/xorg/Xorg                 41MiB |
|    0   N/A  N/A     24177      C   ...onserver/bin/tritonserver     5873MiB |
+-----------------------------------------------------------------------------+

Issue report:

Below is the error log output from running ./launch.sh on Ubuntu 18.04/Nvidia Titan Xp (192.168.0.179).

$ ./launch.sh
...... Omission ......
triton_1         |
triton_1         | I0817 15:19:21.682222 96 grpc_server.cc:4587] Started GRPCInferenceService at 0.0.0.0:8001
triton_1         | I0817 15:19:21.682527 96 http_server.cc:3303] Started HTTPService at 0.0.0.0:8000
triton_1         | I0817 15:19:21.724786 96 http_server.cc:178] Started Metrics Service at 0.0.0.0:8002

triton_1         | W0817 15:21:08.354892 96 libfastertransformer.cc:1397] model fastertransformer, instance fastertransformer_0, executing 1 requests
triton_1         | W0817 15:21:08.354910 96 libfastertransformer.cc:638] TRITONBACKEND_ModelExecute: Running fastertransformer_0 with 1 requests
triton_1         | W0817 15:21:08.354916 96 libfastertransformer.cc:693] get total batch_size = 1
triton_1         | W0817 15:21:08.354922 96 libfastertransformer.cc:1051] get input count = 16
triton_1         | W0817 15:21:08.354930 96 libfastertransformer.cc:1117] collect name: start_id size: 4 bytes
triton_1         | W0817 15:21:08.354935 96 libfastertransformer.cc:1117] collect name: input_ids size: 8 bytes
triton_1         | W0817 15:21:08.354939 96 libfastertransformer.cc:1117] collect name: bad_words_list size: 8 bytes
triton_1         | W0817 15:21:08.354944 96 libfastertransformer.cc:1117] collect name: random_seed size: 4 bytes
triton_1         | W0817 15:21:08.354948 96 libfastertransformer.cc:1117] collect name: end_id size: 4 bytes
triton_1         | W0817 15:21:08.354952 96 libfastertransformer.cc:1117] collect name: input_lengths size: 4 bytes
triton_1         | W0817 15:21:08.354956 96 libfastertransformer.cc:1117] collect name: request_output_len size: 4 bytes
triton_1         | W0817 15:21:08.354960 96 libfastertransformer.cc:1117] collect name: runtime_top_k size: 4 bytes
triton_1         | W0817 15:21:08.354964 96 libfastertransformer.cc:1117] collect name: runtime_top_p size: 4 bytes
triton_1         | W0817 15:21:08.354968 96 libfastertransformer.cc:1117] collect name: is_return_log_probs size: 1 bytes
triton_1         | W0817 15:21:08.354972 96 libfastertransformer.cc:1117] collect name: stop_words_list size: 24 bytes
triton_1         | W0817 15:21:08.354976 96 libfastertransformer.cc:1117] collect name: temperature size: 4 bytes
triton_1         | W0817 15:21:08.354979 96 libfastertransformer.cc:1117] collect name: len_penalty size: 4 bytes
triton_1         | W0817 15:21:08.354988 96 libfastertransformer.cc:1117] collect name: beam_width size: 4 bytes
triton_1         | W0817 15:21:08.354998 96 libfastertransformer.cc:1117] collect name: beam_search_diversity_rate size: 4 bytes
triton_1         | W0817 15:21:08.355005 96 libfastertransformer.cc:1117] collect name: repetition_penalty size: 4 bytes
triton_1         | W0817 15:21:08.355010 96 libfastertransformer.cc:1130] the data is in CPU
triton_1         | W0817 15:21:08.355015 96 libfastertransformer.cc:1137] the data is in CPU
triton_1         | W0817 15:21:08.355025 96 libfastertransformer.cc:999] before ThreadForward 0
triton_1         | W0817 15:21:08.355069 96 libfastertransformer.cc:1006] after ThreadForward 0
triton_1         | I0817 15:21:08.355097 96 libfastertransformer.cc:834] Start to forward
triton_1         | terminate called after throwing an instance of 'std::runtime_error'
triton_1         |   what():  [FT][ERROR] CUDA runtime error: invalid device function /workspace/build/fastertransformer_backend/build/_deps/repo-ft-src/src/fastertransformer/kernels/sampling_topp_kernels.cu:1057
triton_1         |
triton_1         | Signal (6) received.
triton_1         |  0# 0x000055ACE072C699 in /opt/tritonserver/bin/tritonserver
triton_1         |  1# 0x00007F0F78E2D090 in /usr/lib/x86_64-linux-gnu/libc.so.6
triton_1         |  2# gsignal in /usr/lib/x86_64-linux-gnu/libc.so.6
triton_1         |  3# abort in /usr/lib/x86_64-linux-gnu/libc.so.6
triton_1         |  4# 0x00007F0F791E6911 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6
triton_1         |  5# 0x00007F0F791F238C in /usr/lib/x86_64-linux-gnu/libstdc++.so.6
triton_1         |  6# 0x00007F0F791F23F7 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6
triton_1         |  7# 0x00007F0F791F26A9 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6
triton_1         |  8# void fastertransformer::check<cudaError>(cudaError, char const*, char const*, int) in /opt/tritonserver/backends/fastertransformer/libtransformer-shared.so
triton_1         |  9# void fastertransformer::invokeTopPSampling<float>(void*, unsigned long&, unsigned long&, int*, int*, bool*, float*, float*, float const*, int const*, int*, int*, curandStateXORWOW*, int, unsigned long, int const*, float, CUstream_st*, cudaDeviceProp*) in /opt/tritonserver/backends/fastertransformer/libtransformer-shared.so
triton_1         | 10# fastertransformer::TopPSamplingLayer<float>::allocateBuffer(unsigned long, unsigned long, float) in /opt/tritonserver/backends/fastertransformer/libtransformer-shared.so
triton_1         | 11# fastertransformer::TopPSamplingLayer<float>::runSampling(std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, fastertransformer::Tensor, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, fastertransformer::Tensor> > >*, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, fastertransformer::Tensor, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, fastertransformer::Tensor> > > const*) in /opt/tritonserver/backends/fastertransformer/libtransformer-shared.so
triton_1         | 12# fastertransformer::BaseSamplingLayer<float>::forward(std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, fastertransformer::Tensor, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, fastertransformer::Tensor> > >*, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, fastertransformer::Tensor, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, fastertransformer::Tensor> > > const*) in /opt/tritonserver/backends/fastertransformer/libtransformer-shared.so
triton_1         | 13# fastertransformer::DynamicDecodeLayer<float>::forward(std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, fastertransformer::Tensor, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, fastertransformer::Tensor> > >*, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, fastertransformer::Tensor, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, fastertransformer::Tensor> > > const*) in /opt/tritonserver/backends/fastertransformer/libtransformer-shared.so
triton_1         | 14# fastertransformer::GptJ<__half>::forward(std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, fastertransformer::Tensor, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, fastertransformer::Tensor> > >*, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, fastertransformer::Tensor, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, fastertransformer::Tensor> > > const*, fastertransformer::GptJWeight<__half> const*) in /opt/tritonserver/backends/fastertransformer/libtransformer-shared.so
triton_1         | 15# GptJTritonModelInstance<__half>::forward(std::shared_ptr<std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, triton::Tensor, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, triton::Tensor> > > >) in /opt/tritonserver/backends/fastertransformer/libtransformer-shared.so
triton_1         | 16# 0x00007F0F700ED40A in /opt/tritonserver/backends/fastertransformer/libtriton_fastertransformer.so
triton_1         | 17# 0x00007F0F7921EDE4 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6
triton_1         | 18# 0x00007F0F7A42D609 in /usr/lib/x86_64-linux-gnu/libpthread.so.0
triton_1         | 19# clone in /usr/lib/x86_64-linux-gnu/libc.so.6
triton_1         |
copilot_proxy_1  | [2022-08-17 15:21:08,929] ERROR in app: Exception on /v1/engines/codegen/completions [POST]
copilot_proxy_1  | Traceback (most recent call last):
copilot_proxy_1  |   File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2463, in wsgi_app
copilot_proxy_1  |     response = self.full_dispatch_request()
copilot_proxy_1  |   File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1760, in full_dispatch_request
copilot_proxy_1  |     rv = self.handle_user_exception(e)
copilot_proxy_1  |   File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1758, in full_dispatch_request
copilot_proxy_1  |     rv = self.dispatch_request()
copilot_proxy_1  |   File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1734, in dispatch_request
copilot_proxy_1  |     return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
copilot_proxy_1  |   File "/python-docker/app.py", line 258, in completions
copilot_proxy_1  |     response=codegen(data),
copilot_proxy_1  |   File "/python-docker/app.py", line 234, in __call__
copilot_proxy_1  |     completion, choices = self.generate(data)
copilot_proxy_1  |   File "/python-docker/app.py", line 146, in generate
copilot_proxy_1  |     result = self.client.infer(model_name, inputs)
copilot_proxy_1  |   File "/usr/local/lib/python3.8/site-packages/tritonclient/grpc/__init__.py", line 1322, in infer
copilot_proxy_1  |     raise_error_grpc(rpc_error)
copilot_proxy_1  |   File "/usr/local/lib/python3.8/site-packages/tritonclient/grpc/__init__.py", line 62, in raise_error_grpc
copilot_proxy_1  |     raise get_error_grpc(rpc_error) from None
copilot_proxy_1  | tritonclient.utils.InferenceServerException: [StatusCode.UNAVAILABLE] Socket closed
copilot_proxy_1  | 192.168.0.179 - - [17/Aug/2022 15:21:08] "POST /v1/engines/codegen/completions HTTP/1.1" 500 -
triton_1         | --------------------------------------------------------------------------
triton_1         | Primary job  terminated normally, but 1 process returned
triton_1         | a non-zero exit code. Per user-direction, the job has been aborted.
triton_1         | --------------------------------------------------------------------------
triton_1         | --------------------------------------------------------------------------
triton_1         | mpirun noticed that process rank 0 with PID 0 on node 1f7b69d48c22 exited on signal 6 (Aborted).
triton_1         | --------------------------------------------------------------------------
fauxpilot_triton_1 exited with code 134

What could be causing this issue? Any hints or clues are welcome. Thank you.

leemgs commented 2 years ago

I have figured out the cause of this issue.

Error message:

triton_1         | terminate called after throwing an instance of 'std::runtime_error'
triton_1         |   what():  [FT][ERROR] CUDA runtime error: invalid device function /workspace/build/fastertransformer_backend/build/_deps/repo-ft-src/src/fastertransformer/kernels/sampling_topp_kernels.cu:1057

Reason:

Compute Capability 7.0 or higher is required to run FauxPilot, but the Nvidia Titan Xp only supports Compute Capability 6.1.
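
For reference, one quick way to confirm a card's compute capability is with PyTorch (a minimal sketch, assuming a CUDA-enabled PyTorch build is installed; deviceQuery from the CUDA samples reports the same value):

# check_compute_capability.py - print the compute capability of every visible GPU
# (sketch; assumes a CUDA-enabled PyTorch build is available)
import torch

for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    print(f"GPU {i}: {torch.cuda.get_device_name(i)} -> compute capability {major}.{minor}")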

Discussion:

I am wondering if there is a way to run FauxPilot on an Nvidia GPU with Compute Capability 6.x. Any comments are welcome. :)

moyix commented 2 years ago

This is probably an issue with the FasterTransformer library. One thing you may want to try is building FasterTransformer on your host machine and checking whether you get the same errors when following the GPT-J example from the documentation:

https://github.com/NVIDIA/FasterTransformer/blob/main/docs/gptj_guide.md

Note that if you don't want to download and convert GPT-J just to test this, you can also point FasterTransformer at one of the CodeGen models you've downloaded with FauxPilot by using the configuration file here:

https://github.com/moyix/FasterTransformer/blob/main/examples/cpp/gptj/gptj_config.ini

That will help narrow it down to a bug in FasterTransformer or a problem with the NVIDIA Docker container environment.

leemgs commented 2 years ago

Thank you very much. This is the information I really need. :)

leemgs commented 2 years ago

I tried to build FasterTransformer.git in order to get /opt/tritonserver/bin/tritonserver and /opt/tritonserver/lib/libtritonserver.so with the -DSM=61 option to support the Nvidia Titan Xp.

  1. git clone https://github.com/NVIDIA/FasterTransformer.git
  2. cd FasterTransformer && mkdir build && cd build
  3. time cmake -DSM=61 -DCMAKE_BUILD_TYPE=Release .. && make -j12
  4. ./bin/gptj_example
            .......... Omission .................
    After loading model : free:  0.24 GB, total: 11.91 GB, used: 11.67 GB
    After forward       : free:  0.09 GB, total: 11.91 GB, used: 11.82 GB
    Writing 320 elements
    818   262   938  3155   286  1528    11   257     0 39254
    zeroCount = 8
    [INFO] request_batch_size 8 beam_width 1 head_num 16 size_per_head 256 total_output_len 40 decoder_layers 28 vocab_size 50400 FT-CPP-decoding-beamsearch-time 2294.78 ms

The contents below show the ELF binaries and shared libraries generated by the cmake/make commands. However, I could not get the libtritonserver.so file. How can I get a libtritonserver.so that supports the Nvidia Titan Xp (-DSM=61)? Can ./lib/libGptJTritonBackend.so perhaps replace the libtritonserver.so file? Any comments are welcome. :)

(base) invain@mymate:/work/qtlab/FasterTransformer/build$ ls -alh ./bin/gptj*
-rwxr-xr-x 1 invain invain  37M Aug 28 19:21 ./bin/gptj_example
-rwxr-xr-x 1 invain invain 235K Aug 28 19:21 ./bin/gptj_triton_example

(base) invain@mymate:/work/qtlab/FasterTransformer/build$ ls -alh ./lib/*.so
-rwxr-xr-x 1 invain invain 15M Aug 28 19:21 ./lib/libBertTritonBackend.so
-rwxr-xr-x 1 invain invain 37M Aug 28 19:21 ./lib/libGptJTritonBackend.so
-rwxr-xr-x 1 invain invain 37M Aug 28 19:21 ./lib/libGptNeoXTritonBackend.so
-rwxr-xr-x 1 invain invain 39M Aug 28 19:21 ./lib/libParallelGptTritonBackend.so
-rwxr-xr-x 1 invain invain 38M Aug 28 19:21 ./lib/libT5TritonBackend.so
-rwxr-xr-x 1 invain invain 35K Aug 28 19:20 ./lib/libTransformerTritonBackend.so
-rwxr-xr-x 1 invain invain 52M Aug 28 19:21 ./lib/libtransformer-shared.so

lucataco commented 2 years ago

I am also interested in running fauxpilot for Compute Capability 6.1/DSM=61 (for a 1080Ti). Haven't tried this yet, but I thought it might be useful to someone else: https://github.com/triton-inference-server/fastertransformer_backend/blob/dev/t5_gptj_blog/notebooks/GPT-J_and_T5_inference.ipynb

moyix commented 2 years ago

(Quoting the build attempt above:) I tried to build FasterTransformer.git with the -DSM=61 option to support the Nvidia Titan Xp [...] However, I could not get the libtritonserver.so file. How can I get a libtritonserver.so that supports the Nvidia Titan Xp (-DSM=61)? Can ./lib/libGptJTritonBackend.so perhaps replace it?

I believe you should be trying to build this repo, which automatically downloads and builds FasterTransformer along with the Triton backend:

https://github.com/triton-inference-server/fastertransformer_backend/

I have my own fork of it here, which I used to add a couple of bugfixes and patches that hadn't yet made it into the main branch of the official repository:

https://github.com/moyix/fastertransformer_backend

lucataco commented 2 years ago

Oh cool, it works! What I did:

  1. Changed this line in fastertransformer_backend to: cmake -DSM=61 \
  2. Built the image with: docker build -t lucataco/triton_with_ft:22.06 -f docker/Dockerfile .
  3. To try it out, change this line in fauxpilot's docker-compose to image: lucataco/triton_with_ft:22.06
  4. Run the usual ./setup.sh and ./launch.sh

moyix commented 2 years ago

Very nice! I think it should also be possible to do builds with all architectures enabled via -DSM=60,61,70,75,80,86. Will try to get a new image pushed up for that soon :)

moyix commented 2 years ago

I pushed up moyix/triton_with_ft:22.09 to Docker Hub! Could someone give it a try by changing moyix/triton_with_ft:22.06 to moyix/triton_with_ft:22.09 in docker-compose.yaml?
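
Once the containers are up, a quick way to smoke-test the new image is to hit the completions endpoint; below is a minimal Python equivalent of the curl calls above (a sketch, assuming the proxy is listening on localhost:5000 as in the default setup):

# smoke_test.py - send one completion request to a local FauxPilot instance (sketch)
import json
import urllib.request

payload = {"prompt": "def hello", "max_tokens": 16, "temperature": 0.1, "stop": ["\n\n"]}
req = urllib.request.Request(
    "http://localhost:5000/v1/engines/codegen/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Accept": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read().decode("utf-8")))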

lucataco commented 2 years ago

Nice, I can confirm that moyix/triton_with_ft:22.09 works for both my 1080 Ti (SM=61) and 3080 Ti (SM=86) graphics cards. I tested both the codegen-350M-multi and codegen-2B-multi models. (Any hints on how to fit codegen-6B-multi or larger into 12GB of VRAM? bitsandbytes? Gradient accumulation?)

moyix commented 2 years ago

6B-multi would work in 12GB of VRAM with bitsandbytes I believe, yes (with bitsandbytes it takes about 1 byte per parameter so 6B = 6GB). However, I think right now bitsandbytes is only available for Huggingface Transformers, so we'd need to use Triton's Python backend. This seems doable but I'm not sure when I'll have time to try to implement it (PRs are of course welcome :)).
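
For anyone who wants to try that route outside Triton first, here is a rough sketch of 8-bit loading with Hugging Face Transformers (assumes the transformers, accelerate, and bitsandbytes packages are installed; the model ID Salesforce/codegen-6B-multi and the generation settings are just illustrative):

# int8_codegen.py - load codegen-6B-multi with int8 weights via bitsandbytes (sketch)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Salesforce/codegen-6B-multi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# load_in_8bit quantizes weights to roughly 1 byte per parameter, so the 6B model
# should need about 6-7 GB of VRAM instead of ~12 GB in fp16
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)

inputs = tokenizer("def hello", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=16, do_sample=True, temperature=0.1)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))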

There's some discussion of using HF models with Triton here, for reference:

https://github.com/triton-inference-server/server/issues/2747
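
And for the Triton side, a very rough skeleton of what a Python-backend model.py wrapping a Transformers model could look like (sketch only; the tensor names "prompt" and "completion" are hypothetical and would have to match the model's config.pbtxt, and real code would also plumb through max_tokens, temperature, and the other sampling parameters):

# model.py - skeletal Triton Python-backend wrapper around a Hugging Face model (sketch)
import numpy as np
import triton_python_backend_utils as pb_utils
from transformers import AutoModelForCausalLM, AutoTokenizer

class TritonPythonModel:
    def initialize(self, args):
        model_id = "Salesforce/codegen-6B-multi"
        self.tokenizer = AutoTokenizer.from_pretrained(model_id)
        self.model = AutoModelForCausalLM.from_pretrained(
            model_id, device_map="auto", load_in_8bit=True  # 8-bit load as sketched above
        )

    def execute(self, requests):
        responses = []
        for request in requests:
            # "prompt" / "completion" are placeholder tensor names for this sketch
            prompt = pb_utils.get_input_tensor_by_name(request, "prompt")
            text = prompt.as_numpy()[0].decode("utf-8")
            inputs = self.tokenizer(text, return_tensors="pt").to(self.model.device)
            output_ids = self.model.generate(**inputs, max_new_tokens=16)
            completion = self.tokenizer.decode(output_ids[0], skip_special_tokens=True)
            out = pb_utils.Tensor("completion", np.array([completion.encode("utf-8")], dtype=object))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses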

leemgs commented 2 years ago

FYI,

Thank you very much, @lucataco and @moyix. With moyix/triton_with_ft:22.09 incorporated into the mainline, I confirmed that FauxPilot now runs on older Nvidia GPUs (e.g., the Titan Xp).

On my Ubuntu 18.04 + Nvidia Titan Xp system, I used the moyix/triton_with_ft:22.09 Docker image that was merged into the mainline.

I tested the 2B model (codegen-2B-multi) and it works without issue. The test results are as follows:

invain@mymate:/work/leemgs/toyroom$ nvidia-smi
Fri Sep  9 11:59:10 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA TITAN Xp     On   | 00000000:01:00.0 Off |                  N/A |
| 28%   41C    P8    11W / 250W |   5933MiB / 12288MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2561      G   /usr/lib/xorg/Xorg                 41MiB |
|    0   N/A  N/A     30883      C   ...onserver/bin/tritonserver     5887MiB |
+-----------------------------------------------------------------------------+
invain@mymate:/work/leemgs/toyroom$
invain@mymate:/work/leemgs/toyroom$ curl -s -H "Accept: application/json" -H "Content-type: application/json" -X POST -d '{"prompt":"def hello","max_tokens":100,"temperature":0.1,"stop":["\n\n"]}' http://localhost:5000/v1/engines/codegen/completions

{"id": "cmpl-7TbowW6B96Itl1UodVvOGk47ROg6a", "model": "codegen", "object": "text_completion", "created": 1662692426, "choices": [{"text": "() {\n        System.out.println(\"Hello World!\");\n    }\n}\n", "index": 0, "finish_reason": "stop", "logprobs": null}], "usage": {"completion_tokens": 21, "prompt_tokens": 2, "total_tokens": 23}}invain@mymate:/work/leemgs/toyroom$