intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc
Apache License 2.0

IndexError: list index out of range when ipex_fp16_gpu test_api is used in all-in-one #10914

Open Kpeacef opened 6 months ago

Kpeacef commented 6 months ago

I experienced an issue when the ipex_fp16_gpu test_api is used in the all-in-one benchmark:

```
ipex-llm/python/llm/dev/benchmark/all-in-one/run.py", line 126, in run_model
    result[in_out_pair][-1][6] if any(keyword in test_api for keyword in ['int4_gpu', 'int4_fp16_gpu_win', 'int4_loadlowbit_gpu', 'fp16_gpu', 'deepspeed_optimize_model_gpu']) else 'N/A',
    ~~~~~~~^^^
IndexError: list index out of range
```
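For context, a minimal sketch of why this traceback appears (my assumption from the error: with an unsupported test_api, the benchmark loop never appends a result row, so the per-(in,out) list stays empty and indexing it with `[-1]` fails):

```python
# Sketch only: 'result' and 'in_out_pair' stand in for the structures
# in run.py; the dict/key names here are illustrative assumptions.
result = {}
in_out_pair = "32-32"
result.setdefault(in_out_pair, [])  # no benchmark rows were ever appended

try:
    last_row = result[in_out_pair][-1]  # [-1] on an empty list
except IndexError as e:
    print(f"IndexError: {e}")
```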

qiuxin2012 commented 6 months ago

ipex_fp_gpu is not a valid test_api; see https://github.com/intel-analytics/ipex-llm/blob/2c64754eb0b5375ab635ad8b6edad98e8e330275/python/llm/dev/benchmark/all-in-one/config.yaml#L15C1-L35C61 for the supported values. The error message is not friendly; I will improve it later.
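One way the error message could be made friendlier is to validate the requested test_api against an explicit allow-list before the benchmark runs, so a typo fails fast with a clear message instead of an IndexError later. The sketch below is hypothetical (the helper and the set of names shown are assumptions, not the actual run.py code; the full list lives in config.yaml):

```python
# Hypothetical validation helper; the set below shows only the names
# quoted in the traceback, not the full list from config.yaml.
SUPPORTED_TEST_APIS = {
    'int4_gpu', 'int4_fp16_gpu_win', 'int4_loadlowbit_gpu',
    'fp16_gpu', 'deepspeed_optimize_model_gpu',
}

def check_test_api(test_api: str) -> None:
    """Fail fast with a readable error for an unknown test_api."""
    if test_api not in SUPPORTED_TEST_APIS:
        raise ValueError(
            f"Unknown test_api {test_api!r}; see the supported values "
            "in all-in-one/config.yaml"
        )

check_test_api('fp16_gpu')       # valid name: passes silently
# check_test_api('ipex_fp16_gpu')  # would raise ValueError with a clear message
```

Note that an exact-membership check like this also avoids the substring pitfall in the current `any(keyword in test_api ...)` test, which accepts 'ipex_fp16_gpu' because it contains 'fp16_gpu'.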

Kpeacef commented 6 months ago

Sorry, that was a typo; I meant ipex_fp16_gpu. Thank you for pointing it out.