openvinotoolkit / model_server

A scalable inference server for models optimized with OpenVINO™
https://docs.openvino.ai/2024/ovms_what_is_openvino_model_server.html
Apache License 2.0

optimum-intel CLI-converted OpenVINO IR models do not load on the model server #2316

Closed 0x33taji closed 6 months ago

0x33taji commented 7 months ago

Describe the bug: optimum-intel CLI-converted OpenVINO IR models do not load on the model server.

To Reproduce

$ uname -r
5.15.0-92-generic

$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

$ cd; mkdir -p modeladm; cd modeladm
$ python3 -m venv venv
$ source venv/bin/activate

$ pip install --upgrade pip
$ pip install --upgrade \
  "optimum-intel[ipex,neural-compressor,openvino,nncf]"@git+https://github.com/huggingface/optimum-intel.git

$ mkdir -p repo/phi-2-fp16/1
$ optimum-cli export openvino --model microsoft/phi-2 --weight-format fp16 repo/phi-2-fp16/1/

Framework not specified. Using pt to export to ONNX.
Loading checkpoint shards: 100% 2/2 [00:19<00:00,  9.73s/it]
Automatic task detection to text-generation-with-past (possible synonyms are: causal-lm-with-past).
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Using the export variant default. Available variants are:
    - default: The default ONNX variant.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Using framework PyTorch: 2.2.0+cu121
WARNING:root:Cannot apply model.to_bettertransformer because of the exception:
The model type phi is not yet supported to be used with BetterTransformer. Feel free to open an issue at https://github.com/huggingface/optimum/issues if you would like this model type to be supported. Currently supported models are: dict_keys(['albert', 'bark', 'bart', 'bert', 'bert-generation', 'blenderbot', 'bloom', 'camembert', 'blip-2', 'clip', 'codegen', 'data2vec-text', 'deit', 'distilbert', 'electra', 'ernie', 'fsmt', 'gpt2', 'gptj', 'gpt_neo', 'gpt_neox', 'hubert', 'layoutlm', 'm2m_100', 'marian', 'markuplm', 'mbart', 'opt', 'pegasus', 'rembert', 'prophetnet', 'roberta', 'roc_bert', 'roformer', 'splinter', 'tapas', 't5', 'vilt', 'vit', 'vit_mae', 'vit_msn', 'wav2vec2', 'xlm-roberta', 'yolos']).. Usage model with stateful=True may be non-effective if model does not contain torch.functional.scaled_dot_product_attention
Overriding 1 configuration item(s)
        - use_cache -> True
/home/jj/modeladm/venv/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py:114: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if (input_shape[-1] > 1 or self.sliding_window is not None) and self.is_causal:
/home/jj/modeladm/venv/lib/python3.10/site-packages/optimum/exporters/onnx/model_patcher.py:299: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if past_key_values_length > 0:
/home/jj/modeladm/venv/lib/python3.10/site-packages/transformers/models/phi/modeling_phi.py:109: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if seq_len > self.max_seq_len_cached:
/home/jj/modeladm/venv/lib/python3.10/site-packages/transformers/models/phi/modeling_phi.py:367: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
/home/jj/modeladm/venv/lib/python3.10/site-packages/transformers/models/phi/modeling_phi.py:374: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
/home/jj/modeladm/venv/lib/python3.10/site-packages/transformers/models/phi/modeling_phi.py:386: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
WARNING:root:Failed to send event with the following error: <urlopen error [Errno -5] No address associated with hostname>
WARNING:root:Failed to send event with the following error: <urlopen error [Errno -5] No address associated with hostname>

$ tree repo
repo
    phi-2-fp16
        1
            added_tokens.json
            config.json
            generation_config.json
            merges.txt
            openvino_model.bin
            openvino_model.xml
            special_tokens_map.json
            tokenizer_config.json
            tokenizer.json
            vocab.json

2 directories, 10 files
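
The layout above matches the OVMS versioned model repository convention (a model directory containing numeric version directories). Before pointing the server at it, compiling the exported IR directly with the OpenVINO runtime is a quick way to separate GPU plugin or driver failures from model-server configuration issues. A minimal sketch, assuming the openvino Python package (pulled in by optimum-intel[openvino]) is available in the same venv:

$ python3 - <<'PYEOF'
# Sketch: compile the exported IR directly; a failure here points at the
# OpenVINO GPU plugin or driver rather than at the model server.
from openvino.runtime import Core

core = Core()
print("Available devices:", core.available_devices)
model = core.read_model("repo/phi-2-fp16/1/openvino_model.xml")
compiled = core.compile_model(model, "GPU")  # same device path the server uses
print("Compiled OK on GPU")
PYEOF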

$ cat << EOF > repo/model_config_list.json
{
   "model_config_list":[
      {
         "config":{
            "name":"phi-2-fp16",
            "base_path":"/opt/model/phi-2-fp16",
            "target_device": "HETERO:GPU,CPU"
         }
      }
   ]
}
EOF
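
To isolate the GPU path, the same config with target_device set to CPU is a useful sanity check; if the model loads on CPU but not on HETERO:GPU,CPU, the problem is in the GPU plugin or driver. A sketch of a CPU-only variant:

$ cat << EOF > repo/model_config_list_cpu.json
{
   "model_config_list":[
      {
         "config":{
            "name":"phi-2-fp16",
            "base_path":"/opt/model/phi-2-fp16",
            "target_device": "CPU"
         }
      }
   ]
}
EOF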

$ cat docker-compose.yaml
version: "3.7"

services:
  model-server:
    image: openvino/model_server:latest-gpu
    devices:
      - /dev/dri
    group_add:
      - ${GROUP_ID}
    user: "${USER_ID}:${GROUP_ID}"
    volumes:
      - ${PWD}/repo/:/opt/model:ro
    ports:
      - "9001:9001"
    command:
      - "--log_level"
      - "DEBUG"
      - "--config_path"
      - "/opt/model/model_config_list.json"
      - "--port"
      - "9001"

$ export GROUP_ID=$(stat -c "%g" /dev/dri/render* | head -n 1)
$ export USER_ID=$(id -u)
$ docker compose up
[+] Running 1/1
Container modeladm-model-server-1  Recreated  11.1s
Attaching to model-server-1
model-server-1  | [2024-01-31 14:57:55.720][1][serving][info][server.cpp:82] OpenVINO Model Server 2023.3.4e91aac76
model-server-1  | [2024-01-31 14:57:55.721][1][serving][info][server.cpp:83] OpenVINO backend 2023.3.0.13775.ceeafaf64f3
model-server-1  | [2024-01-31 14:57:55.721][1][serving][debug][server.cpp:84] CLI parameters passed to ovms server
model-server-1  | [2024-01-31 14:57:55.721][1][serving][debug][server.cpp:101] config_path: /opt/model/model_config_list.json
model-server-1  | [2024-01-31 14:57:55.721][1][serving][debug][server.cpp:103] gRPC port: 9001
model-server-1  | [2024-01-31 14:57:55.721][1][serving][debug][server.cpp:104] REST port: 0
model-server-1  | [2024-01-31 14:57:55.721][1][serving][debug][server.cpp:105] gRPC bind address: 0.0.0.0
model-server-1  | [2024-01-31 14:57:55.721][1][serving][debug][server.cpp:106] REST bind address: 0.0.0.0
model-server-1  | [2024-01-31 14:57:55.721][1][serving][debug][server.cpp:107] REST workers: 32
model-server-1  | [2024-01-31 14:57:55.721][1][serving][debug][server.cpp:108] gRPC workers: 1
model-server-1  | [2024-01-31 14:57:55.721][1][serving][debug][server.cpp:109] gRPC channel arguments:
model-server-1  | [2024-01-31 14:57:55.721][1][serving][debug][server.cpp:110] log level: DEBUG
model-server-1  | [2024-01-31 14:57:55.721][1][serving][debug][server.cpp:111] log path:
model-server-1  | [2024-01-31 14:57:55.721][1][serving][debug][server.cpp:112] file system poll wait seconds: 1
model-server-1  | [2024-01-31 14:57:55.721][1][serving][debug][server.cpp:113] sequence cleaner poll wait minutes: 5
model-server-1  | [2024-01-31 14:57:55.721][1][serving][info][pythoninterpretermodule.cpp:34] PythonInterpreterModule starting
model-server-1  | [2024-01-31 14:57:55.846][1][serving][debug][python_backend.cpp:43] Creating python backend
model-server-1  | [2024-01-31 14:57:55.848][1][serving][info][pythoninterpretermodule.cpp:45] PythonInterpreterModule started
model-server-1  | [2024-01-31 14:57:55.849][1][modelmanager][debug][mediapipefactory.cpp:47] Registered Calculators: AddHeaderCalculator, AlignmentPointsRectsCalculator, AnnotationOverlayCalculator, AssociationNormRectCalculator, BeginLoopDetectionCalculator, BeginLoopFloatCalculator, BeginLoopGpuBufferCalculator, BeginLoopImageCalculator, BeginLoopImageFrameCalculator, BeginLoopIntCalculator, BeginLoopMatrixCalculator, BeginLoopMatrixVectorCalculator, BeginLoopNormalizedLandmarkListVectorCalculator, BeginLoopNormalizedRectCalculator, BeginLoopTensorCalculator, BeginLoopUint64tCalculator, BoxDetectorCalculator, BoxTrackerCalculator, CallbackCalculator, CallbackPacketCalculator, CallbackWithHeaderCalculator, ClassificationListVectorHasMinSizeCalculator, ClassificationListVectorSizeCalculator, ClipDetectionVectorSizeCalculator, ClipNormalizedRectVectorSizeCalculator, ColorConvertCalculator, ConcatenateBoolVectorCalculator, ConcatenateClassificationListCalculator, ConcatenateClassificationListVectorCalculator, ConcatenateDetectionVectorCalculator, ConcatenateFloatVectorCalculator, ConcatenateImageVectorCalculator, ConcatenateInt32VectorCalculator, ConcatenateLandmarListVectorCalculator, ConcatenateLandmarkListCalculator, ConcatenateLandmarkListVectorCalculator, ConcatenateLandmarkVectorCalculator, ConcatenateNormalizedLandmarkListCalculator, ConcatenateNormalizedLandmarkListVectorCalculator, ConcatenateRenderDataVectorCalculator, ConcatenateStringVectorCalculator, ConcatenateTensorVectorCalculator, ConcatenateTfLiteTensorVectorCalculator, ConcatenateUInt64VectorCalculator, ConstantSidePacketCalculator, CountingSourceCalculator, DefaultSidePacketCalculator, DequantizeByteArrayCalculator, DetectionLabelIdToTextCalculator, DetectionLetterboxRemovalCalculator, DetectionProjectionCalculator, DetectionsToRectsCalculator, DetectionsToRenderDataCalculator, EndLoopAffineMatrixCalculator, EndLoopBooleanCalculator, EndLoopClassificationListCalculator, EndLoopDetectionCalculator, EndLoopFloatCalculator, EndLoopGpuBufferCalculator, EndLoopImageCalculator, EndLoopImageFrameCalculator, EndLoopLandmarkListVectorCalculator, EndLoopMatrixCalculator, EndLoopNormalizedLandmarkListVectorCalculator, EndLoopNormalizedRectCalculator, EndLoopRenderDataCalculator, EndLoopTensorCalculator, EndLoopTfLiteTensorCalculator, FaceLandmarksToRenderDataCalculator, FeatureDetectorCalculator, FlowLimiterCalculator, FlowPackagerCalculator, FlowToImageCalculator, FromImageCalculator, GateCalculator, GetClassificationListVectorItemCalculator, GetDetectionVectorItemCalculator, GetLandmarkListVectorItemCalculator, GetNormalizedLandmarkListVectorItemCalculator, GetNormalizedRectVectorItemCalculator, GetRectVectorItemCalculator, GraphProfileCalculator, HandDetectionsFromPoseToRectsCalculator, HandLandmarksToRectCalculator, ImageCloneCalculator, ImageCroppingCalculator, ImagePropertiesCalculator, ImageToTensorCalculator, ImageTransformationCalculator, ImmediateMuxCalculator, InferenceCalculatorCpu, InverseMatrixCalculator, IrisToRenderDataCalculator, LandmarkLetterboxRemovalCalculator, LandmarkListVectorSizeCalculator, LandmarkProjectionCalculator, LandmarkVisibilityCalculator, LandmarksRefinementCalculator, LandmarksSmoothingCalculator, LandmarksToDetectionCalculator, LandmarksToRenderDataCalculator, LocalFileContentsCalculator, MakePairCalculator, MatrixMultiplyCalculator, MatrixSubtractCalculator, MatrixToVectorCalculator, MediaPipeInternalSidePacketToPacketStreamCalculator, MergeCalculator, MergeDetectionsToVectorCalculator, 
MergeGpuBuffersToVectorCalculator, MergeImagesToVectorCalculator, MotionAnalysisCalculator, MuxCalculator, NonMaxSuppressionCalculator, NonZeroCalculator, NormalizedLandmarkListVectorHasMinSizeCalculator, NormalizedRectVectorHasMinSizeCalculator, OpenCvEncodedImageToImageFrameCalculator, OpenCvImageEncoderCalculator, OpenCvPutTextCalculator, OpenCvVideoDecoderCalculator, OpenCvVideoEncoderCalculator, OpenVINOConverterCalculator, OpenVINOInferenceCalculator, OpenVINOModelServerSessionCalculator, OpenVINOTensorsToClassificationCalculator, OpenVINOTensorsToDetectionsCalculator, PacketClonerCalculator, PacketGeneratorWrapperCalculator, PacketInnerJoinCalculator, PacketPresenceCalculator, PacketResamplerCalculator, PacketSequencerCalculator, PacketThinnerCalculator, PassThroughCalculator, PreviousLoopbackCalculator, PyTensorOvTensorConverterCalculator, PythonExecutorCalculator, QuantizeFloatVectorCalculator, RectToRenderDataCalculator, RectToRenderScaleCalculator, RectTransformationCalculator, RefineLandmarksFromHeatmapCalculator, RoiTrackingCalculator, RoundRobinDemuxCalculator, SegmentationSmoothingCalculator, SequenceShiftCalculator, SetLandmarkVisibilityCalculator, SidePacketToStreamCalculator, SplitAffineMatrixVectorCalculator, SplitClassificationListVectorCalculator, SplitDetectionVectorCalculator, SplitFloatVectorCalculator, SplitImageVectorCalculator, SplitLandmarkListCalculator, SplitLandmarkVectorCalculator, SplitMatrixVectorCalculator, SplitNormalizedLandmarkListCalculator, SplitNormalizedLandmarkListVectorCalculator, SplitNormalizedRectVectorCalculator, SplitTensorVectorCalculator, SplitTfLiteTensorVectorCalculator, SplitUint64tVectorCalculator, SsdAnchorsCalculator, StreamToSidePacketCalculator, StringToInt32Calculator, StringToInt64Calculator, StringToIntCalculator, StringToUint32Calculator, StringToUint64Calculator, StringToUintCalculator, SwitchDemuxCalculator, SwitchMuxCalculator, TensorsToClassificationCalculator, TensorsToDetectionsCalculator, TensorsToFloatsCalculator, TensorsToLandmarksCalculator, TensorsToSegmentationCalculator, TfLiteConverterCalculator, TfLiteCustomOpResolverCalculator, TfLiteInferenceCalculator, TfLiteModelCalculator, TfLiteTensorsToDetectionsCalculator, TfLiteTensorsToFloatsCalculator, TfLiteTensorsToLandmarksCalculator, ThresholdingCalculator, ToImageCalculator, TrackedDetectionManagerCalculator, Tvl1OpticalFlowCalculator, UpdateFaceLandmarksCalculator, VideoPreStreamCalculator, VisibilityCopyCalculator, VisibilitySmoothingCalculator, WarpAffineCalculator, WarpAffineCalculatorCpu, WorldLandmarkProjectionCalculator
model-server-1  |
model-server-1  | [2024-01-31 14:57:55.849][1][modelmanager][debug][mediapipefactory.cpp:47] Registered Subgraphs: FaceDetection, FaceDetectionFrontDetectionToRoi, FaceDetectionFrontDetectionsToRoi, FaceDetectionShortRange, FaceDetectionShortRangeByRoiCpu, FaceDetectionShortRangeCpu, FaceLandmarkCpu, FaceLandmarkFrontCpu, FaceLandmarkLandmarksToRoi, FaceLandmarksFromPoseCpu, FaceLandmarksFromPoseToRecropRoi, FaceLandmarksModelLoader, FaceLandmarksToRoi, FaceTracking, HandLandmarkCpu, HandLandmarkModelLoader, HandLandmarksFromPoseCpu, HandLandmarksFromPoseToRecropRoi, HandLandmarksLeftAndRightCpu, HandLandmarksToRoi, HandRecropByRoiCpu, HandTracking, HandVisibilityFromHandLandmarksFromPose, HandWristForPose, HolisticLandmarkCpu, HolisticTrackingToRenderData, InferenceCalculator, IrisLandmarkCpu, IrisLandmarkLandmarksToRoi, IrisLandmarkLeftAndRightCpu, IrisRendererCpu, PoseDetectionCpu, PoseDetectionToRoi, PoseLandmarkByRoiCpu, PoseLandmarkCpu, PoseLandmarkFiltering, PoseLandmarkModelLoader, PoseLandmarksAndSegmentationInverseProjection, PoseLandmarksToRoi, PoseSegmentationFiltering, SwitchContainer, TensorsToFaceLandmarks, TensorsToFaceLandmarksWithAttention, TensorsToPoseLandmarksAndSegmentation
model-server-1  |
model-server-1  | [2024-01-31 14:57:55.849][1][modelmanager][debug][mediapipefactory.cpp:47] Registered InputStreamHandlers: BarrierInputStreamHandler, DefaultInputStreamHandler, EarlyCloseInputStreamHandler, FixedSizeInputStreamHandler, ImmediateInputStreamHandler, MuxInputStreamHandler, SyncSetInputStreamHandler, TimestampAlignInputStreamHandler
model-server-1  |
model-server-1  | [2024-01-31 14:57:55.849][1][modelmanager][debug][mediapipefactory.cpp:47] Registered OutputStreamHandlers: InOrderOutputStreamHandler
model-server-1  |
model-server-1  | [2024-01-31 14:57:59.135][1][modelmanager][info][modelmanager.cpp:128] Available devices for Open VINO: CPU, GNA, GPU
model-server-1  | [2024-01-31 14:57:59.135][1][modelmanager][debug][ov_utils.hpp:54] Logging OpenVINO Core plugin: CPU; plugin configuration
model-server-1  | [2024-01-31 14:57:59.136][1][modelmanager][debug][ov_utils.hpp:89] OpenVINO Core plugin: CPU; plugin configuration: { AFFINITY: CORE, AVAILABLE_DEVICES: , CPU_DENORMALS_OPTIMIZATION: NO, CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1, DEVICE_ID: , ENABLE_CPU_PINNING: YES, ENABLE_HYPER_THREADING: YES, EXECUTION_MODE_HINT: PERFORMANCE, FULL_DEVICE_NAME: Intel(R) Xeon(R) CPU E3-1270 v3 @ 3.50GHz, INFERENCE_NUM_THREADS: 0, INFERENCE_PRECISION_HINT: f32, NUM_STREAMS: 1, OPTIMIZATION_CAPABILITIES: FP32 FP16 INT8 BIN EXPORT_IMPORT, PERFORMANCE_HINT: LATENCY, PERFORMANCE_HINT_NUM_REQUESTS: 0, PERF_COUNT: NO, RANGE_FOR_ASYNC_INFER_REQUESTS: 1 1 1, RANGE_FOR_STREAMS: 1 8, SCHEDULING_CORE_TYPE: ANY_CORE }
model-server-1  | [2024-01-31 14:57:59.136][1][modelmanager][debug][ov_utils.hpp:54] Logging OpenVINO Core plugin: GNA; plugin configuration
model-server-1  | [2024-01-31 14:57:59.138][1][modelmanager][debug][ov_utils.hpp:89] OpenVINO Core plugin: GNA; plugin configuration: { AVAILABLE_DEVICES: GNA_SW, EXECUTION_DEVICES: GNA, EXECUTION_MODE_HINT: ACCURACY, FULL_DEVICE_NAME: GNA_SW, GNA_DEVICE_MODE: GNA_SW_EXACT, GNA_FIRMWARE_MODEL_IMAGE: , GNA_HW_COMPILE_TARGET: UNDEFINED, GNA_HW_EXECUTION_TARGET: UNDEFINED, GNA_LIBRARY_FULL_VERSION: 3.5.0.2116, GNA_PWL_DESIGN_ALGORITHM: UNDEFINED, GNA_PWL_MAX_ERROR_PERCENT: 1.000000, GNA_SCALE_FACTOR_PER_INPUT: , INFERENCE_PRECISION_HINT: undefined, LOG_LEVEL: LOG_NONE, OPTIMAL_NUMBER_OF_INFER_REQUESTS: 1, OPTIMIZATION_CAPABILITIES: INT16 INT8 EXPORT_IMPORT, PERFORMANCE_HINT: LATENCY, PERFORMANCE_HINT_NUM_REQUESTS: 1, RANGE_FOR_ASYNC_INFER_REQUESTS: 1 1 1 }
model-server-1  | [2024-01-31 14:57:59.138][1][modelmanager][debug][ov_utils.hpp:54] Logging OpenVINO Core plugin: GPU; plugin configuration
model-server-1  | [2024-01-31 14:57:59.141][1][modelmanager][debug][ov_utils.hpp:89] OpenVINO Core plugin: GPU; plugin configuration: { AVAILABLE_DEVICES: 0, CACHE_DIR: , CACHE_MODE: optimize_speed, COMPILATION_NUM_THREADS: 8, DEVICE_ARCHITECTURE: GPU: vendor=0x8086 arch=v12.7.1, DEVICE_GOPS: {f16:157286,f32:19660.8,i8:314573,u8:314573}, DEVICE_ID: 0, DEVICE_LUID: 907e9d7ffc7f0000, DEVICE_TYPE: discrete, DEVICE_UUID: 86800000a05600000000000000000000, ENABLE_CPU_PINNING: NO, EXECUTION_MODE_HINT: PERFORMANCE, FULL_DEVICE_NAME: Intel(R) Graphics [0x56a0] (dGPU), GPU_DEVICE_TOTAL_MEM_SIZE: 255012864, GPU_DISABLE_WINOGRAD_CONVOLUTION: NO, GPU_ENABLE_LOOP_UNROLLING: YES, GPU_EXECUTION_UNITS_COUNT: 512, GPU_HOST_TASK_PRIORITY: MEDIUM, GPU_MEMORY_STATISTICS: , GPU_QUEUE_PRIORITY: MEDIUM, GPU_QUEUE_THROTTLE: MEDIUM, GPU_UARCH_VERSION: 12.7.1, INFERENCE_PRECISION_HINT: f16, MAX_BATCH_SIZE: 1, MODEL_PRIORITY: MEDIUM, NUM_STREAMS: 1, OPTIMAL_BATCH_SIZE: 1, OPTIMIZATION_CAPABILITIES: FP32 BIN FP16 INT8 GPU_HW_MATMUL EXPORT_IMPORT, PERFORMANCE_HINT: LATENCY, PERFORMANCE_HINT_NUM_REQUESTS: 0, PERF_COUNT: NO, RANGE_FOR_ASYNC_INFER_REQUESTS: 1 2 1, RANGE_FOR_STREAMS: 1 4 }
model-server-1  | [2024-01-31 14:57:59.142][1][serving][info][grpcservermodule.cpp:122] GRPCServerModule starting
model-server-1  | [2024-01-31 14:57:59.142][1][serving][debug][grpcservermodule.cpp:146] setting grpc channel argument grpc.max_concurrent_streams: 8
model-server-1  | [2024-01-31 14:57:59.146][1][serving][debug][grpcservermodule.cpp:159] setting grpc MaxThreads ResourceQuota 64
model-server-1  | [2024-01-31 14:57:59.146][1][serving][debug][grpcservermodule.cpp:163] setting grpc Memory ResourceQuota 2147483648
model-server-1  | [2024-01-31 14:57:59.146][1][serving][debug][grpcservermodule.cpp:170] Starting gRPC servers: 1
model-server-1  | [2024-01-31 14:57:59.147][1][serving][info][grpcservermodule.cpp:191] GRPCServerModule started
model-server-1  | [2024-01-31 14:57:59.147][1][serving][info][grpcservermodule.cpp:192] Started gRPC server on port 9001
model-server-1  | [2024-01-31 14:57:59.147][1][serving][info][servablemanagermodule.cpp:51] ServableManagerModule starting
model-server-1  | [2024-01-31 14:57:59.147][1][modelmanager][debug][modelmanager.cpp:877] Loading configuration from /opt/model/model_config_list.json for: 1 time
model-server-1  | [2024-01-31 14:57:59.149][1][modelmanager][debug][modelmanager.cpp:681] Configuration file doesn't have monitoring property.
model-server-1  | [2024-01-31 14:57:59.149][1][modelmanager][debug][modelmanager.cpp:929] Reading metric config only once per server start.
model-server-1  | [2024-01-31 14:57:59.149][1][serving][debug][modelconfig.cpp:675] Specified model parameters:
model-server-1  | [2024-01-31 14:57:59.149][1][serving][debug][modelconfig.cpp:676] model_basepath: /opt/model/phi-2-fp16
model-server-1  | [2024-01-31 14:57:59.149][1][serving][debug][modelconfig.cpp:677] model_name: phi-2-fp16
model-server-1  | [2024-01-31 14:57:59.149][1][serving][debug][modelconfig.cpp:678] batch_size: not configured
model-server-1  | [2024-01-31 14:57:59.149][1][serving][debug][modelconfig.cpp:682] shape:
model-server-1  | [2024-01-31 14:57:59.149][1][serving][debug][modelconfig.cpp:688] model_version_policy: latest: 1
model-server-1  | [2024-01-31 14:57:59.149][1][serving][debug][modelconfig.cpp:690] nireq: 0
model-server-1  | [2024-01-31 14:57:59.149][1][serving][debug][modelconfig.cpp:691] target_device: HETERO:GPU,CPU
model-server-1  | [2024-01-31 14:57:59.149][1][serving][debug][modelconfig.cpp:692] plugin_config:
model-server-1  | [2024-01-31 14:57:59.149][1][serving][debug][modelconfig.cpp:700] Batch size set: false, shape set: false
model-server-1  | [2024-01-31 14:57:59.149][1][serving][debug][modelconfig.cpp:707] stateful: false
model-server-1  | [2024-01-31 14:57:59.149][1][modelmanager][debug][ov_utils.cpp:94] Validating plugin: HETERO; configuration
model-server-1  | [2024-01-31 14:57:59.149][1][modelmanager][debug][ov_utils.cpp:94] Validating plugin: GPU; configuration
model-server-1  | [2024-01-31 14:57:59.149][1][modelmanager][debug][ov_utils.cpp:94] Validating plugin: CPU; configuration
model-server-1  | [2024-01-31 14:57:59.149][1][serving][info][model.cpp:41] Getting model from /opt/model/phi-2-fp16
model-server-1  | [2024-01-31 14:57:59.149][1][serving][info][model.cpp:48] Model downloaded to /opt/model/phi-2-fp16
model-server-1  | [2024-01-31 14:57:59.149][1][serving][info][model.cpp:148] Will add model: phi-2-fp16; version: 1 ...
model-server-1  | [2024-01-31 14:57:59.149][1][modelmanager][debug][modelconfig.cpp:467] Parsing model: phi-2-fp16 mapping from path: /opt/model/phi-2-fp16/1
model-server-1  | [2024-01-31 14:57:59.149][1][serving][debug][model.cpp:122] Creating new model instance - model name: phi-2-fp16; model version: 1;
model-server-1  | [2024-01-31 14:57:59.150][1][serving][info][modelversionstatus.cpp:109] STATUS CHANGE: Version 1 of model phi-2-fp16 status change. New status: ( "state": "START", "error_code": "OK" )
model-server-1  | [2024-01-31 14:57:59.150][1][serving][info][modelinstance.cpp:933] Loading model: phi-2-fp16, version: 1, from path: /opt/model/phi-2-fp16/1, with target device: HETERO:GPU,CPU ...
model-server-1  | [2024-01-31 14:57:59.150][1][serving][info][modelversionstatus.cpp:109] STATUS CHANGE: Version 1 of model phi-2-fp16 status change. New status: ( "state": "START", "error_code": "OK" )
model-server-1  | [2024-01-31 14:57:59.150][1][serving][debug][modelversionstatus.cpp:81] setLoading: phi-2-fp16 - 1 (previous state: START) -> error: OK
model-server-1  | [2024-01-31 14:57:59.150][1][serving][info][modelversionstatus.cpp:109] STATUS CHANGE: Version 1 of model phi-2-fp16 status change. New status: ( "state": "LOADING", "error_code": "OK" )
model-server-1  | [2024-01-31 14:57:59.150][1][serving][debug][modelinstance.cpp:795] Getting model files from path: /opt/model/phi-2-fp16/1
model-server-1  | [2024-01-31 14:57:59.150][1][serving][debug][modelinstance.cpp:653] Try reading model file: /opt/model/phi-2-fp16/1/openvino_model.xml
model-server-1  | [2024-01-31 14:57:59.337][1][modelmanager][debug][modelinstance.cpp:217] Applying layout configuration:
model-server-1  | [2024-01-31 14:57:59.337][1][modelmanager][debug][modelinstance.cpp:259] model: phi-2-fp16, version: 1; Configuring layout: Tensor Layout:; Network Layout:[N,...] (default); input name: input_ids
model-server-1  | [2024-01-31 14:57:59.337][1][modelmanager][debug][modelinstance.cpp:259] model: phi-2-fp16, version: 1; Configuring layout: Tensor Layout:; Network Layout:[N,...] (default); input name: attention_mask
model-server-1  | [2024-01-31 14:57:59.337][1][modelmanager][debug][modelinstance.cpp:259] model: phi-2-fp16, version: 1; Configuring layout: Tensor Layout:; Network Layout:[N,...] (default); input name: position_ids
model-server-1  | [2024-01-31 14:57:59.337][1][modelmanager][debug][modelinstance.cpp:259] model: phi-2-fp16, version: 1; Configuring layout: Tensor Layout:; Network Layout:[N,...] (default); input name: beam_idx
model-server-1  | [2024-01-31 14:57:59.337][1][modelmanager][debug][modelinstance.cpp:312] model: phi-2-fp16, version: 1; Configuring layout: Tensor Layout:; Network Layout:[N,...] (default); output name: logits
model-server-1  | [2024-01-31 14:57:59.337][1][serving][debug][modelinstance.cpp:461] model: phi-2-fp16, version: 1; reshaping inputs is not required
model-server-1  | [2024-01-31 14:57:59.337][1][modelmanager][debug][modelinstance.cpp:181] Reporting input layout from RTMap: [N,...]; for tensor name: input_ids
model-server-1  | [2024-01-31 14:57:59.338][1][modelmanager][info][modelinstance.cpp:490] Input name: input_ids; mapping_name: input_ids; shape: (-1,-1); precision: I64; layout: N...
model-server-1  | [2024-01-31 14:57:59.338][1][modelmanager][debug][modelinstance.cpp:181] Reporting input layout from RTMap: [N,...]; for tensor name: attention_mask
model-server-1  | [2024-01-31 14:57:59.338][1][modelmanager][info][modelinstance.cpp:490] Input name: attention_mask; mapping_name: attention_mask; shape: (-1,-1); precision: I64; layout: N...
model-server-1  | [2024-01-31 14:57:59.338][1][modelmanager][debug][modelinstance.cpp:181] Reporting input layout from RTMap: [N,...]; for tensor name: position_ids
model-server-1  | [2024-01-31 14:57:59.338][1][modelmanager][info][modelinstance.cpp:490] Input name: position_ids; mapping_name: position_ids; shape: (-1,-1); precision: I64; layout: N...
model-server-1  | [2024-01-31 14:57:59.338][1][modelmanager][debug][modelinstance.cpp:181] Reporting input layout from RTMap: [N,...]; for tensor name: beam_idx
model-server-1  | [2024-01-31 14:57:59.338][1][modelmanager][info][modelinstance.cpp:490] Input name: beam_idx; mapping_name: beam_idx; shape: (-1); precision: I32; layout: N...
model-server-1  | [2024-01-31 14:57:59.338][1][modelmanager][debug][modelinstance.cpp:192] Reporting output layout from RTMap: [N,...]; for tensor name: logits
model-server-1  | [2024-01-31 14:57:59.338][1][modelmanager][info][modelinstance.cpp:542] Output name: logits; mapping_name: logits; shape: (-1,-1,51200); precision: FP32; layout: N...
model-server-1  | [2024-01-31 14:58:27.798][1][modelmanager][error][modelinstance.cpp:736] Cannot compile model into target device; error: Exception from src/inference/src/core.cpp:99:
model-server-1  | [ GENERAL_ERROR ] Exception from src/plugins/hetero/src/compiled_model.cpp:34:
model-server-1  | Standard exception from compilation library: [ GENERAL_ERROR ] Check 'allocatable' failed at src/plugins/intel_gpu/src/runtime/ocl/ocl_engine.cpp:149:
model-server-1  | [GPU] Exceeded max size of memory allocation, check debug message for size info
model-server-1  |
model-server-1  |
model-server-1  | ; model: phi-2-fp16; version: 1; device: HETERO:GPU,CPU
model-server-1  | [2024-01-31 14:58:27.798][1][serving][debug][modelversionstatus.cpp:81] setLoading: phi-2-fp16 - 1 (previous state: LOADING) -> error: UNKNOWN
model-server-1  | [2024-01-31 14:58:27.798][1][serving][info][modelversionstatus.cpp:109] STATUS CHANGE: Version 1 of model phi-2-fp16 status change. New status: ( "state": "LOADING", "error_code": "UNKNOWN" )
model-server-1  | [2024-01-31 14:58:27.799][1][serving][error][model.cpp:153] Error occurred while loading model: phi-2-fp16; version: 1; error: Cannot compile model into target device
model-server-1  | [2024-01-31 14:58:27.799][1][modelmanager][error][modelmanager.cpp:1366] Error occurred while loading model: phi-2-fp16 versions; error: Cannot compile model into target device
model-server-1  | [2024-01-31 14:58:27.799][1][modelmanager][debug][modelmanager.cpp:1468] Removing available version 1 due to load failure;
model-server-1  | [2024-01-31 14:58:27.799][1][serving][info][model.cpp:190] Will clean up model: phi-2-fp16; version: 1 ...
model-server-1  | [2024-01-31 14:58:27.799][1][serving][info][model.cpp:88] Updating default version for model: phi-2-fp16, from: 0
model-server-1  | [2024-01-31 14:58:27.799][1][serving][info][model.cpp:100] Model: phi-2-fp16 will not have default version since no version is available.
model-server-1  | [2024-01-31 14:58:27.799][1][serving][debug][modelversionstatus.cpp:81] setLoading: phi-2-fp16 - 1 (previous state: LOADING) -> error: UNKNOWN
model-server-1  | [2024-01-31 14:58:27.799][1][serving][info][modelversionstatus.cpp:109] STATUS CHANGE: Version 1 of model phi-2-fp16 status change. New status: ( "state": "LOADING", "error_code": "UNKNOWN" )
model-server-1  | [2024-01-31 14:58:27.855][1][modelmanager][debug][modelmanager.cpp:741] Cannot reload model: phi-2-fp16 with versions due to error: Cannot compile model into target device
model-server-1  | [2024-01-31 14:58:27.856][1][modelmanager][error][modelmanager.cpp:776] Loading main OVMS config models failed.
model-server-1  | [2024-01-31 14:58:27.856][1][modelmanager][info][modelmanager.cpp:539] Configuration file doesn't have custom node libraries property.
model-server-1  | [2024-01-31 14:58:27.856][1][modelmanager][info][modelmanager.cpp:582] Configuration file doesn't have pipelines property.
model-server-1  | [2024-01-31 14:58:27.856][1][modelmanager][info][modelmanager.cpp:556] Configuration file doesn't have mediapipe property.
model-server-1  | [2024-01-31 14:58:27.856][85][modelmanager][info][modelmanager.cpp:1071] Started model manager thread
model-server-1  | [2024-01-31 14:58:27.856][1][serving][info][servablemanagermodule.cpp:55] ServableManagerModule started
model-server-1  | [2024-01-31 14:58:27.856][86][modelmanager][info][modelmanager.cpp:1090] Started cleaner thread
model-server-1  | [2024-01-31 14:58:28.859][85][modelmanager][debug][modelmanager.cpp:1379] Reloading model versions
model-server-1  | [2024-01-31 14:58:28.859][85][serving][info][model.cpp:252] Will reload model: phi-2-fp16; version: 1 ...
model-server-1  | [2024-01-31 14:58:28.859][85][serving][info][model.cpp:41] Getting model from /opt/model/phi-2-fp16
model-server-1  | [2024-01-31 14:58:28.859][85][serving][info][model.cpp:48] Model downloaded to /opt/model/phi-2-fp16
model-server-1  | [2024-01-31 14:58:28.859][85][modelmanager][debug][modelconfig.cpp:467] Parsing model: phi-2-fp16 mapping from path: /opt/model/phi-2-fp16/1
model-server-1  | [2024-01-31 14:58:28.859][85][serving][debug][modelversionstatus.cpp:81] setLoading: phi-2-fp16 - 1 (previous state: LOADING) -> error: OK
model-server-1  | [2024-01-31 14:58:28.859][85][serving][info][modelversionstatus.cpp:109] STATUS CHANGE: Version 1 of model phi-2-fp16 status change. New status: ( "state": "LOADING", "error_code": "OK" )
model-server-1  | [2024-01-31 14:58:28.859][85][serving][debug][modelinstance.cpp:795] Getting model files from path: /opt/model/phi-2-fp16/1
model-server-1  | [2024-01-31 14:58:28.860][85][serving][debug][modelinstance.cpp:653] Try reading model file: /opt/model/phi-2-fp16/1/openvino_model.xml
model-server-1  | [2024-01-31 14:58:29.117][85][modelmanager][debug][modelinstance.cpp:217] Applying layout configuration:
model-server-1  | [2024-01-31 14:58:29.117][85][modelmanager][debug][modelinstance.cpp:259] model: phi-2-fp16, version: 1; Configuring layout: Tensor Layout:; Network Layout:[N,...] (default); input name: input_ids
model-server-1  | [2024-01-31 14:58:29.117][85][modelmanager][debug][modelinstance.cpp:259] model: phi-2-fp16, version: 1; Configuring layout: Tensor Layout:; Network Layout:[N,...] (default); input name: attention_mask
model-server-1  | [2024-01-31 14:58:29.117][85][modelmanager][debug][modelinstance.cpp:259] model: phi-2-fp16, version: 1; Configuring layout: Tensor Layout:; Network Layout:[N,...] (default); input name: position_ids
model-server-1  | [2024-01-31 14:58:29.117][85][modelmanager][debug][modelinstance.cpp:259] model: phi-2-fp16, version: 1; Configuring layout: Tensor Layout:; Network Layout:[N,...] (default); input name: beam_idx
model-server-1  | [2024-01-31 14:58:29.117][85][modelmanager][debug][modelinstance.cpp:312] model: phi-2-fp16, version: 1; Configuring layout: Tensor Layout:; Network Layout:[N,...] (default); output name: logits
model-server-1  | [2024-01-31 14:58:29.117][85][serving][debug][modelinstance.cpp:461] model: phi-2-fp16, version: 1; reshaping inputs is not required
model-server-1  | [2024-01-31 14:58:29.117][85][modelmanager][debug][modelinstance.cpp:181] Reporting input layout from RTMap: [N,...]; for tensor name: input_ids
model-server-1  | [2024-01-31 14:58:29.120][85][modelmanager][info][modelinstance.cpp:490] Input name: input_ids; mapping_name: input_ids; shape: (-1,-1); precision: I64; layout: N...
model-server-1  | [2024-01-31 14:58:29.120][85][modelmanager][debug][modelinstance.cpp:181] Reporting input layout from RTMap: [N,...]; for tensor name: attention_mask
model-server-1  | [2024-01-31 14:58:29.120][85][modelmanager][info][modelinstance.cpp:490] Input name: attention_mask; mapping_name: attention_mask; shape: (-1,-1); precision: I64; layout: N...
model-server-1  | [2024-01-31 14:58:29.120][85][modelmanager][debug][modelinstance.cpp:181] Reporting input layout from RTMap: [N,...]; for tensor name: position_ids
model-server-1  | [2024-01-31 14:58:29.120][85][modelmanager][info][modelinstance.cpp:490] Input name: position_ids; mapping_name: position_ids; shape: (-1,-1); precision: I64; layout: N...
model-server-1  | [2024-01-31 14:58:29.120][85][modelmanager][debug][modelinstance.cpp:181] Reporting input layout from RTMap: [N,...]; for tensor name: beam_idx
model-server-1  | [2024-01-31 14:58:29.120][85][modelmanager][info][modelinstance.cpp:490] Input name: beam_idx; mapping_name: beam_idx; shape: (-1); precision: I32; layout: N...
model-server-1  | [2024-01-31 14:58:29.120][85][modelmanager][debug][modelinstance.cpp:192] Reporting output layout from RTMap: [N,...]; for tensor name: logits
model-server-1  | [2024-01-31 14:58:29.120][85][modelmanager][info][modelinstance.cpp:542] Output name: logits; mapping_name: logits; shape: (-1,-1,51200); precision: FP32; layout: N...
model-server-1  | [2024-01-31 14:59:03.696][85][modelmanager][error][modelinstance.cpp:736] Cannot compile model into target device; error: Exception from src/inference/src/core.cpp:99:
model-server-1  | [ GENERAL_ERROR ] Exception from src/plugins/hetero/src/compiled_model.cpp:34:
model-server-1  | Standard exception from compilation library: [ GENERAL_ERROR ] Check 'allocatable' failed at src/plugins/intel_gpu/src/runtime/ocl/ocl_engine.cpp:149:
model-server-1  | [GPU] Exceeded max size of memory allocation, check debug message for size info
^CGracefully stopping... (press Ctrl+C again to force)
[+] Stopping 1/1
Container modeladm-model-server-1  Stopped  18.4s
canceled
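
For reference, once a model version loads successfully, its status can be verified over gRPC (the REST port is 0 in this setup, so only gRPC is available). A hedged sketch using the ovmsclient package:

$ pip install ovmsclient
$ python3 - <<'PYEOF'
# Sketch: query model status over the gRPC port exposed above.
from ovmsclient import make_grpc_client

client = make_grpc_client("localhost:9001")
print(client.get_model_status(model_name="phi-2-fp16"))
PYEOF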

Expected behavior: My GPU is an Arc A770 with 16 GB of VRAM, so microsoft/phi-2 with weights compressed to fp16 should fit comfortably; GPU memory should not be exceeded and the model should load and run.

Maybe I am doing something wrong.
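
One detail worth noting: the server log above reports GPU_DEVICE_TOTAL_MEM_SIZE: 255012864 (about 243 MB), far below the A770's 16 GB, so the "Exceeded max size of memory allocation" check is failing against a driver-reported limit rather than against real VRAM. The driver-side numbers can be cross-checked with clinfo (a hedged suggestion; clinfo ships in Ubuntu's clinfo package):

$ sudo apt install clinfo
$ clinfo | grep -i -e "global memory size" -e "max memory allocation"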

0x33taji commented 7 months ago

The issue is with the Intel DKMS driver for Arc GPUs, not with the model server.

intel-i915-dkms on the 5.15 kernel, after the latest update to 5.15.0-92-generic, was reporting a maximum addressable memory of 248 MB instead of the usual 4 GB.

In other words, intel-i915-dkms is broken for Arc GPUs as of this writing.

Steps to fix the issue (concrete commands are sketched below):

1. apt purge the DKMS driver.
2. apt install the Ubuntu HWE kernel. (The dGPU docs state to use the HWE kernel, which is currently 6+, on which the DKMS modules do not compile; the documentation needs to be fixed.)
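
Illustrative commands (linux-generic-hwe-22.04 is assumed here as the HWE meta-package name for Ubuntu 22.04):

$ sudo apt purge intel-i915-dkms
$ sudo apt install --install-recommends linux-generic-hwe-22.04
$ sudo reboot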

The in-tree kernel driver correctly reports the maximum addressable memory as 4 GB again. However, the in-tree driver broke xpu-smi (almost every stat in xpu-smi stats -d 0 says N/A).

So the issue is resolved from the model-server end.

Please forward the issue to the driver team, if possible.