leemgs closed this issue 2 years ago.
I have figured out the cause of this issue.
triton_1 | terminate called after throwing an instance of 'std::runtime_error'
triton_1 |   what(): [FT][ERROR] CUDA runtime error: invalid device function /workspace/build/fastertransformer_backend/build/_deps/repo-ft-src/src/fastertransformer/kernels/sampling_topp_kernels.cu:1057
triton_1 |
Compute Capability 7.0 or higher is required to run FauxPilot, but the Nvidia Titan Xp only supports Compute Capability 6.1.
I am wondering if there is a way to run FauxPilot on an Nvidia GPU with Compute Capability 6.x. Any comments are welcome. :)
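For reference, a card's compute capability can be checked directly with nvidia-smi on reasonably recent drivers (the compute_cap query field does not exist in older nvidia-smi versions, so treat this as a quick sanity check only):

nvidia-smi --query-gpu=name,compute_cap --format=csv
# expected output for this card would look like:
#   name, compute_cap
#   NVIDIA TITAN Xp, 6.1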
It seems this is probably an issue with the FasterTransformer library. One thing you may want to try is building FasterTransformer on your host machine and testing whether you get the same errors when following the GPT-J example from the documentation:
https://github.com/NVIDIA/FasterTransformer/blob/main/docs/gptj_guide.md
Note that if you don't want to download and convert GPT-J just to test this, you can also point FasterTransformer at one of the CodeGen models you've downloaded with FauxPilot by using the configuration file here:
https://github.com/moyix/FasterTransformer/blob/main/examples/cpp/gptj/gptj_config.ini
That will help narrow it down to a bug in FasterTransformer or a problem with the NVIDIA Docker container environment.
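Concretely, once FasterTransformer is built on the host, that test might look roughly like the sketch below. The model path is only a placeholder (point model_dir at whatever directory FauxPilot's setup.sh left the converted CodeGen weights in), and it assumes gptj_example picks up examples/cpp/gptj/gptj_config.ini from the source tree the way the other C++ examples do:

cd FasterTransformer/build
# use the CodeGen-flavoured config from the fork above instead of the stock GPT-J one
wget https://raw.githubusercontent.com/moyix/FasterTransformer/main/examples/cpp/gptj/gptj_config.ini -O ../examples/cpp/gptj/gptj_config.ini
# edit model_dir in that file to the converted weights, e.g. (placeholder path):
#   model_dir=/path/to/fauxpilot/models/codegen-350M-multi-1gpu/fastertransformer/1/1-gpu
./bin/gptj_example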
Thank you very much. This is the information I really need. :)
I tried to build FasterTransformer.git in order to get /opt/tritonserver/bin/tritonserver and /opt/tritonserver/lib/libtritonserver.so with the -DSM=61 option to support the Nvidia Titan Xp.
.......... Omission .................
After loading model : free: 0.24 GB, total: 11.91 GB, used: 11.67 GB
After forward : free: 0.09 GB, total: 11.91 GB, used: 11.82 GB
Writing 320 elements
818 262 938 3155 286 1528 11 257 0 39254
zeroCount = 8
[INFO] request_batch_size 8 beam_width 1 head_num 16 size_per_head 256 total_output_len 40 decoder_layers 28 vocab_size 50400 FT-CPP-decoding-beamsearch-time 2294.78 ms
The contents below show the ELF binaries and shared libraries generated by the cmake/make commands. However, I could not get a libtritonserver.so file. How can I build libtritonserver.so with Nvidia Titan Xp support (-DSM=61)? Or can ./lib/libGptJTritonBackend.so perhaps be used in place of libtritonserver.so? Any comments are welcome. :)
(base) invain@mymate:/work/qtlab/FasterTransformer/build$ ls -alh ./bin/gptj*
-rwxr-xr-x 1 invain invain 37M Aug 28 19:21 ./bin/gptj_example
-rwxr-xr-x 1 invain invain 235K Aug 28 19:21 ./bin/gptj_triton_example
(base) invain@mymate:/work/qtlab/FasterTransformer/build$ ls -alh ./lib/*.so
-rwxr-xr-x 1 invain invain 15M Aug 28 19:21 ./lib/libBertTritonBackend.so
-rwxr-xr-x 1 invain invain 37M Aug 28 19:21 ./lib/libGptJTritonBackend.so
-rwxr-xr-x 1 invain invain 37M Aug 28 19:21 ./lib/libGptNeoXTritonBackend.so
-rwxr-xr-x 1 invain invain 39M Aug 28 19:21 ./lib/libParallelGptTritonBackend.so
-rwxr-xr-x 1 invain invain 38M Aug 28 19:21 ./lib/libT5TritonBackend.so
-rwxr-xr-x 1 invain invain 35K Aug 28 19:20 ./lib/libTransformerTritonBackend.so
-rwxr-xr-x 1 invain invain 52M Aug 28 19:21 ./lib/libtransformer-shared.so
I am also interested in running fauxpilot for Compute Capability 6.1/DSM=61 (for a 1080Ti). Haven't tried this yet, but I thought it might be useful to someone else: https://github.com/triton-inference-server/fastertransformer_backend/blob/dev/t5_gptj_blog/notebooks/GPT-J_and_T5_inference.ipynb
I tried to build FasterTransformer.git in order to get /opt/tritonserver/bin/tritonserver and /opt/tritonserver/lib/libtritonserver.so with the -DSM=61 option to support the Nvidia Titan Xp.
- git clone https://github.com/NVIDIA/FasterTransformer.git
- cd FasterTransformer && mkdir build && cd build
- time cmake -DSM=61 -DCMAKE_BUILD_TYPE=Release .. && make -j12
- ./bin/gptj_example
.......... Omission .................
After loading model : free: 0.24 GB, total: 11.91 GB, used: 11.67 GB
After forward : free: 0.09 GB, total: 11.91 GB, used: 11.82 GB
Writing 320 elements
818 262 938 3155 286 1528 11 257 0 39254
zeroCount = 8
[INFO] request_batch_size 8 beam_width 1 head_num 16 size_per_head 256 total_output_len 40 decoder_layers 28 vocab_size 50400 FT-CPP-decoding-beamsearch-time 2294.78 ms
The contents below show the ELF binaries and shared libraries generated by the cmake/make commands. However, I could not get a libtritonserver.so file. How can I build libtritonserver.so with Nvidia Titan Xp support (-DSM=61)? Or can ./lib/libGptJTritonBackend.so perhaps be used in place of libtritonserver.so? Any comments are welcome. :)
I believe you should be trying to build this repo, which automatically downloads and builds FasterTransformer along with the Triton backend:
https://github.com/triton-inference-server/fastertransformer_backend/
I have my own fork of it here, which I used to add a couple of bugfixes and patches that hadn't yet made it into the main branch of the official repository:
Oh cool it works!
I changed this line in fastertransformer_backend to be: cmake -DSM=61 \
and built the image with:
docker build -t lucataco/triton_with_ft:22.06 -f docker/Dockerfile .
If you want to try it out, just change this line in fauxpilot's docker-compose to image: lucataco/triton_with_ft:22.06, then run the usual ./setup.sh and ./launch.sh.
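To put those steps in one place, the whole flow is roughly the sketch below (the cmake line in docker/Dockerfile has to be found and edited by hand since its exact contents depend on the revision, and the image tag and fauxpilot path are just placeholders):

git clone https://github.com/triton-inference-server/fastertransformer_backend.git
cd fastertransformer_backend
# edit docker/Dockerfile so the cmake invocation includes -DSM=61
docker build -t lucataco/triton_with_ft:22.06 -f docker/Dockerfile .
cd /path/to/fauxpilot                      # placeholder: your fauxpilot checkout
# set image: lucataco/triton_with_ft:22.06 in docker-compose.yaml, then:
./setup.sh
./launch.sh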
Very nice! I think it should also be possible to do builds with all architectures enabled via -DSM=60,61,70,75,80,86. Will try to get a new image pushed up for that soon :)
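As a sketch, that would just mean the cmake line in docker/Dockerfile carries the full architecture list (only the -DSM value matters here; the -DCMAKE_BUILD_TYPE flag mirrors the build command quoted earlier in this thread, and the Dockerfile may pass additional flags of its own):

cmake -DSM=60,61,70,75,80,86 -DCMAKE_BUILD_TYPE=Release ..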
I pushed up moyix/triton_with_ft:22.09 to Docker Hub! Could someone give it a try by changing moyix/triton_with_ft:22.06 to moyix/triton_with_ft:22.09 in docker-compose.yaml?
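For anyone trying it, something like this from the fauxpilot checkout should be enough, assuming the old tag appears verbatim in docker-compose.yaml:

sed -i 's|moyix/triton_with_ft:22.06|moyix/triton_with_ft:22.09|' docker-compose.yaml
./launch.sh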
Nice, I can confirm that moyix/triton_with_ft:22.09 works for both my 1080Ti (DSM=61) and 3080Ti (DSM=86) graphics cards. Tested both the codegen-350M-multi and codegen-2B-multi models. (Any hints on how to fit codegen-6B-multi or higher onto 12 GB of VRAM? bitsandbytes? Gradient accumulation?)
Yes, I believe 6B-multi would work in 12 GB of VRAM with bitsandbytes (it takes about 1 byte per parameter, so 6B ≈ 6 GB). However, right now bitsandbytes is only available for Huggingface Transformers, so we'd need to use Triton's Python backend. This seems doable, but I'm not sure when I'll have time to implement it (PRs are of course welcome :)).
There's some discussion of using HF models with Triton here, for reference:
https://github.com/triton-inference-server/server/issues/2747
FYI, thank you very much, @lucataco and @moyix. Now that moyix/triton_with_ft:22.09 has been merged into the mainline, I can confirm that FauxPilot also runs on older Nvidia GPUs (e.g., the Titan Xp).
On my Ubuntu 18.04 + Nvidia Titan Xp system, I used the moyix/triton_with_ft:22.09 docker image that was merged into the mainline.
I have tested the 2B model (codegen-2B-multi) and it works without issue. The experimental results are as follows:
invain@mymate:/work/leemgs/toyroom$ nvidia-smi
Fri Sep 9 11:59:10 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01 Driver Version: 515.65.01 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA TITAN Xp On | 00000000:01:00.0 Off | N/A |
| 28% 41C P8 11W / 250W | 5933MiB / 12288MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 2561 G /usr/lib/xorg/Xorg 41MiB |
| 0 N/A N/A 30883 C ...onserver/bin/tritonserver 5887MiB |
+-----------------------------------------------------------------------------+
invain@mymate:/work/leemgs/toyroom$
invain@mymate:/work/leemgs/toyroom$ curl -s -H "Accept: application/json" -H "Content-type: application/json" -X POST -d '{"prompt":"def hello","max_tokens":100,"temperature":0.1,"stop":["\n\n"]}' http://localhost:5000/v1/engines/codegen/completions
{"id": "cmpl-7TbowW6B96Itl1UodVvOGk47ROg6a", "model": "codegen", "object": "text_completion", "created": 1662692426, "choices": [{"text": "() {\n System.out.println(\"Hello World!\");\n }\n}\n", "index": 0, "finish_reason": "stop", "logprobs": null}], "usage": {"completion_tokens": 21, "prompt_tokens": 2, "total_tokens": 23}}invain@mymate:/work/leemgs/toyroom$
I installed and ran FauxPilot on Ubuntu 18.04 / Nvidia RTX 2080 (192.168.0.201) and on Ubuntu 18.04 / Nvidia Titan Xp (192.168.0.179). Then, from the Ubuntu environment on my laptop, I called the OpenAI API with the curl command as shown below. Unfortunately, sending the curl command to the Titan Xp machine (192.168.0.179) throws an error. In summary, FauxPilot on Ubuntu 18.04 / Nvidia Titan Xp produces a "CUDA runtime error: invalid device function" error message. Maybe the Nvidia Titan Xp is not supported for running FauxPilot?
The configuration file is as follows.
Case 1: When I send an OpenAI API request to Ubuntu 18.04 / Nvidia RTX 2080 (192.168.0.201), it works.
Case 2: When I send an OpenAI API request to Ubuntu 18.04 / Nvidia Titan Xp (192.168.0.179), it fails.
Issue report:
Below is the error log output when running ./launch.sh on Ubuntu 18.04 / Nvidia Titan Xp (192.168.0.179).
What could be causing this issue? Any hints or clues are welcome. Thank you.