nnstreamer / nntrainer

NNTrainer is a software framework for training neural network models on devices.
Apache License 2.0

Issues and Questions about Execution of LLaMA using NNTrainer #2561

Open Deeksha-20-99 opened 2 months ago

Deeksha-20-99 commented 2 months ago
  1. File changes made before running the LLaMA model (see the weight-conversion sketch after this list):
     - nntrainer/Applications/LLaMA/PyTorch: run llama_weights_converter.py to generate the "./llama_fp16.bin" file from the Hugging Face LLaMA model, and save the file in the nntrainer/jni directory.
     - nntrainer/Applications/LLaMA/jni/main.cpp: add #define ENABLE_ENCODER2 at the beginning of the file.
     - nntrainer/meson.build: add "message ('platform: @0@'.format(get_option('platform')))" at line 28 and "message ('enable-fp16: @0@'.format(get_option('enable-fp16')))" at line 68.
     - nntrainer/meson_options.txt: enable the fp16 option at line 39: "option('enable-fp16', type: 'boolean', value: true)".
  2. Run the "meson build" and "ninja -C build" commands in the NNTrainer directory.
  3. Enter the jni directory inside NNTrainer and run "../build/Applications/LLaMA/jni/nntrainer_llama".
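
For reference, below is a minimal, illustrative sketch of what the weight-conversion step does, assuming the transformers, torch, and numpy packages are available. The exact tensor ordering and layout that NNTrainer's loader expects is defined by Applications/LLaMA/PyTorch/llama_weights_converter.py in the repository, so that script should be used in practice; this sketch only conveys the general idea.

```python
# Illustrative only: the authoritative llama_fp16.bin is produced by
# Applications/LLaMA/PyTorch/llama_weights_converter.py in the nntrainer repo.
import numpy as np
import torch
from transformers import AutoModelForCausalLM

# Example checkpoint from this thread; gated models need Hugging Face authentication.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16
)

with open("./llama_fp16.bin", "wb") as f:
    # Dump every parameter as raw fp16 bytes. The real converter fixes the exact
    # tensor order and any transposes/reshapes that NNTrainer's loader expects.
    for name, param in model.state_dict().items():
        param.to(torch.float16).cpu().numpy().tofile(f)
```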
(Screenshots attached: output with the Korean locale, taken 2024-04-29 at 4:26 PM and 4:30 PM.)

Progress update by Professor Hokeun Kim (https://github.com/hokeun) and his student Deeksha Prahlad (https://github.com/Deeksha-20-99).

taos-ci commented 2 months ago

:octocat: cibot: Thank you for posting issue #2561. The person in charge will reply soon.

myungjoo commented 2 months ago

We also wanted to ask if we could run NNTrainer on a commercial off-the-shelf GPU. We currently have the NVIDIA A6000.

GPU support of NNTrainer is WIP. I expect to see LLMs running on GPU around May~June (e.g., https://github.com/nnstreamer/nntrainer/pull/2535 / https://github.com/nnstreamer/nntrainer/pulls?q=is%3Apr+author%3As-debadri). @s-debadri has been actively contributing GPU-related code.

This is based on OpenCL because we target GPUs of embedded devices (mobile, TV, home appliances, ...), not servers with powerful A100/H100/B100.

As long as they support OpenCL, they should work, though not as efficiently as CUDA on NVIDIA GPUs.
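
For a quick check that a workstation GPU such as the A6000 actually exposes an OpenCL device, something like the following works (assuming the optional pyopencl package is installed; the clinfo command-line tool gives the same information):

```python
# List the OpenCL platforms and devices visible on this machine (requires pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(f"{platform.name}: {device.name}")
```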

myungjoo commented 2 months ago

Do you have any recommendations for benchmarks to run to test the results of LLaMA execution using NNTrainer?

Must-have metrics: peak memory consumption, first-token latency, and per-token latency after the first token is emitted (or "throughput").

Good-to-have metrics: energy consumption (J) per given number of input tokens, throughput under given power (W) and thermal budgets, compute resource (CPU, GPU) utilization statistics, and average and peak memory (DRAM) traffic.

These additional metrics give an idea of how it would behave on actual user devices: battery consumption, throttled performance due to temperature, performance when other apps are running, and so on.
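
As a rough illustration of how the must-have metrics could be collected, here is a framework-agnostic sketch. generate_tokens is a hypothetical stand-in for whatever emits output tokens (for example, a wrapper around nntrainer_llama), not an NNTrainer API:

```python
import resource  # Unix-only; used here to read peak RSS
import time
from typing import Iterator

def generate_tokens(prompt: str) -> Iterator[str]:
    """Hypothetical placeholder: yield output tokens one at a time."""
    for tok in ["Hello", ",", " world", "!"]:
        time.sleep(0.01)  # stands in for real per-token compute
        yield tok

def benchmark(prompt: str) -> None:
    start = time.perf_counter()
    token_times = []
    for _ in generate_tokens(prompt):
        token_times.append(time.perf_counter() - start)

    first_token_latency = token_times[0]
    per_token_latency = (token_times[-1] - token_times[0]) / max(len(token_times) - 1, 1)
    peak_rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  # KB on Linux

    print(f"first-token latency: {first_token_latency:.3f} s")
    print(f"per-token latency after the first token: {per_token_latency:.4f} s")
    print(f"peak memory (RSS): {peak_rss_kb} KB")

benchmark("Summarize: NNTrainer runs on devices.")
```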

myungjoo commented 2 months ago

Here we are not able to find the correlation between the input and output sequences, so we wanted to check how we should interpret the results. When setting the locale we encounter a segmentation fault and would like to know what could be done to resolve this.

@lhs8928 @baek2sm ?

Deeksha-20-99 commented 2 months ago

We would like to thank the team for fixing the issue through the commit. We were able to overcome the segmentation fault and run the LLaMA model. We got the output shown in the images below, but we are still not able to understand the output that is printed.

(Screenshots attached: model output, taken 2024-04-30 at 5:36 PM and 5:39 PM.)

jijoongmoon commented 2 months ago

I wonder whether you changed the configuration for the 7B model in Hugging Face. The current implementation is for the 1B model. Do you want to use Applications/LLaMA as a kind of chatbot? Then I think it needs some fixes as well. As you can see in the code, it just takes the prefill context and generates the output once. For a chatbot-style task, we need some iteration (it is not difficult, though) to keep the KV cache alive; a rough sketch follows.
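
A minimal sketch of that iteration, using the Hugging Face transformers API rather than NNTrainer (the model name is only an example); the point is that past_key_values is carried across turns instead of re-running the prefill every time:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # example model, not a recommendation
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

past = None  # KV cache kept alive across chat turns

def chat_turn(user_text: str, max_new_tokens: int = 64) -> str:
    """Feed only the new user text; earlier turns live on in the KV cache."""
    global past
    cur = tok(user_text, return_tensors="pt").input_ids
    reply = []
    with torch.no_grad():
        for _ in range(max_new_tokens):
            out = model(input_ids=cur, past_key_values=past, use_cache=True)
            past = out.past_key_values  # cache grows with every step and every turn
            nxt = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
            if nxt.item() == tok.eos_token_id:
                break
            reply.append(nxt.item())
            cur = nxt
    return tok.decode(reply)

print(chat_turn("Hello, who are you?"))
print(chat_turn("What devices can you run on?"))  # second turn reuses the cache
```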

Here we are not able to find the correlation between the input and output sequences, so we wanted to check how we should interpret the results. When setting the locale we encounter a segmentation fault and would like to know what could be done to resolve this.

We will check and let you know.

Deeksha-20-99 commented 2 months ago

Thank you for the clarification. We have been using "meta-llama/Llama-2-7b-chat-hf", which is 7B. We planned to change the model to "TinyLlama/TinyLlama-1.1B-Chat-v1.0"; is this the recommended one? If not, is there any recommended model to be used for the LLaMA application?

jijoongmoon commented 2 months ago

We will check the models, including TinyLlama. The current implementation is for tasks like summarization, tone conversion, etc., but TinyLlama does not seem to have tokenizer compatibility with our implementation. Let us check, and we will let you know.
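
One way to look into the tokenizer-compatibility question is to compare the tokenizer class and model dimensions of the two checkpoints mentioned in this thread. A sketch assuming the transformers package is available (the Llama-2 checkpoint is gated and needs Hugging Face access):

```python
from transformers import AutoConfig, AutoTokenizer

for name in ("meta-llama/Llama-2-7b-chat-hf", "TinyLlama/TinyLlama-1.1B-Chat-v1.0"):
    cfg = AutoConfig.from_pretrained(name)
    tok = AutoTokenizer.from_pretrained(name)
    print(name)
    print("  tokenizer class :", type(tok).__name__)
    print("  vocab size      :", cfg.vocab_size)
    print("  hidden size     :", cfg.hidden_size)
    print("  layers          :", cfg.num_hidden_layers)
```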