SeldonIO / MLServer

An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more
https://mlserver.readthedocs.io/en/latest/
Apache License 2.0

Proto descriptor definition collision with KServe #1034

Open jinserk opened 1 year ago

jinserk commented 1 year ago

Hi,

I'm using MLServer with KServe, and I found that their gRPC proto descriptors collide:

File ~/.cache/pypoetry/virtualenvs/example-mlflow-lZ2hGP5g-py3.10/lib/python3.10/site-packages/mlserver/__init__.py:2
      1 from .version import __version__
----> 2 from .server import MLServer
      3 from .model import MLModel
      4 from .settings import Settings, ModelSettings

File ~/.cache/pypoetry/virtualenvs/example-mlflow-lZ2hGP5g-py3.10/lib/python3.10/site-packages/mlserver/server.py:16
     14 from .batching import load_batching
     15 from .rest import RESTServer
---> 16 from .grpc import GRPCServer
     17 from .metrics import MetricsServer
     18 from .kafka import KafkaServer

File ~/.cache/pypoetry/virtualenvs/example-mlflow-lZ2hGP5g-py3.10/lib/python3.10/site-packages/mlserver/grpc/__init__.py:1
----> 1 from .server import GRPCServer
      3 __all__ = ["GRPCServer"]

File ~/.cache/pypoetry/virtualenvs/example-mlflow-lZ2hGP5g-py3.10/lib/python3.10/site-packages/mlserver/grpc/server.py:8
      5 from ..handlers import DataPlane, ModelRepositoryHandlers
      6 from ..settings import Settings
----> 8 from .servicers import InferenceServicer
      9 from .model_repository import ModelRepositoryServicer
     10 from .dataplane_pb2_grpc import add_GRPCInferenceServiceServicer_to_server

File ~/.cache/pypoetry/virtualenvs/example-mlflow-lZ2hGP5g-py3.10/lib/python3.10/site-packages/mlserver/grpc/servicers.py:3
      1 import grpc
----> 3 from . import dataplane_pb2 as pb
      4 from .dataplane_pb2_grpc import GRPCInferenceServiceServicer
      5 from .converters import (
      6     ModelInferRequestConverter,
      7     ModelInferResponseConverter,
   (...)
     11     RepositoryIndexResponseConverter,
     12 )

File ~/.cache/pypoetry/virtualenvs/example-mlflow-lZ2hGP5g-py3.10/lib/python3.10/site-packages/mlserver/grpc/dataplane_pb2.py:16
     11 # @@protoc_insertion_point(imports)
     13 _sym_db = _symbol_database.Default()
---> 16 DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(
     17     b'\n\x0f\x64\x61taplane.proto\x12\tinference"\x13\n\x11ServerLiveRequest""\n\x12ServerLiveResponse\x12\x0c\n\x04live\x18\x01 \x01(\x08"\x14\n\x12ServerReadyRequest"$\n\x13ServerReadyResponse\x12\r\n\x05ready\x18\x01 \x01(\x08"2\n\x11ModelReadyRequest\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t"#\n\x12ModelReadyResponse\x12\r\n\x05ready\x18\x01 \x01(\x08"\x17\n\x15ServerMetadataRequest"K\n\x16ServerMetadataResponse\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t\x12\x12\n\nextensions\x18\x03 \x03(\t"5\n\x14ModelMetadataRequest\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t"\xc5\x04\n\x15ModelMetadataResponse\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08versions\x18\x02 \x03(\t\x12\x10\n\x08platform\x18\x03 \x01(\t\x12?\n\x06inputs\x18\x04 \x03(\x0b\x32/.inference.ModelMetadataResponse.TensorMetadata\x12@\n\x07outputs\x18\x05 \x03(\x0b\x32/.inference.ModelMetadataResponse.TensorMetadata\x12\x44\n\nparameters\x18\x06 \x03(\x0b\x32\x30.inference.ModelMetadataResponse.ParametersEntry\x1a\xe2\x01\n\x0eTensorMetadata\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08\x64\x61tatype\x18\x02 \x01(\t\x12\r\n\x05shape\x18\x03 \x03(\x03\x12S\n\nparameters\x18\x04 \x03(\x0b\x32?.inference.ModelMetadataResponse.TensorMetadata.ParametersEntry\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01"\xd2\x06\n\x11ModelInferRequest\x12\x12\n\nmodel_name\x18\x01 \x01(\t\x12\x15\n\rmodel_version\x18\x02 \x01(\t\x12\n\n\x02id\x18\x03 \x01(\t\x12@\n\nparameters\x18\x04 \x03(\x0b\x32,.inference.ModelInferRequest.ParametersEntry\x12=\n\x06inputs\x18\x05 \x03(\x0b\x32-.inference.ModelInferRequest.InferInputTensor\x12H\n\x07outputs\x18\x06 
\x03(\x0b\x32\x37.inference.ModelInferRequest.InferRequestedOutputTensor\x1a\x94\x02\n\x10InferInputTensor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08\x64\x61tatype\x18\x02 \x01(\t\x12\r\n\x05shape\x18\x03 \x03(\x03\x12Q\n\nparameters\x18\x04 \x03(\x0b\x32=.inference.ModelInferRequest.InferInputTensor.ParametersEntry\x12\x30\n\x08\x63ontents\x18\x05 \x01(\x0b\x32\x1e.inference.InferTensorContents\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\x1a\xd5\x01\n\x1aInferRequestedOutputTensor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12[\n\nparameters\x18\x02 \x03(\x0b\x32G.inference.ModelInferRequest.InferRequestedOutputTensor.ParametersEntry\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01"\xb8\x04\n\x12ModelInferResponse\x12\x12\n\nmodel_name\x18\x01 \x01(\t\x12\x15\n\rmodel_version\x18\x02 \x01(\t\x12\n\n\x02id\x18\x03 \x01(\t\x12\x41\n\nparameters\x18\x04 \x03(\x0b\x32-.inference.ModelInferResponse.ParametersEntry\x12@\n\x07outputs\x18\x05 \x03(\x0b\x32/.inference.ModelInferResponse.InferOutputTensor\x1a\x97\x02\n\x11InferOutputTensor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08\x64\x61tatype\x18\x02 \x01(\t\x12\r\n\x05shape\x18\x03 \x03(\x03\x12S\n\nparameters\x18\x04 \x03(\x0b\x32?.inference.ModelInferResponse.InferOutputTensor.ParametersEntry\x12\x30\n\x08\x63ontents\x18\x05 \x01(\x0b\x32\x1e.inference.InferTensorContents\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01"i\n\x0eInferParameter\x12\x14\n\nbool_param\x18\x01 
\x01(\x08H\x00\x12\x15\n\x0bint64_param\x18\x02 \x01(\x03H\x00\x12\x16\n\x0cstring_param\x18\x03 \x01(\tH\x00\x42\x12\n\x10parameter_choice"\xd0\x01\n\x13InferTensorContents\x12\x15\n\rbool_contents\x18\x01 \x03(\x08\x12\x14\n\x0cint_contents\x18\x02 \x03(\x05\x12\x16\n\x0eint64_contents\x18\x03 \x03(\x03\x12\x15\n\ruint_contents\x18\x04 \x03(\r\x12\x17\n\x0fuint64_contents\x18\x05 \x03(\x04\x12\x15\n\rfp32_contents\x18\x06 \x03(\x02\x12\x15\n\rfp64_contents\x18\x07 \x03(\x01\x12\x16\n\x0e\x62ytes_contents\x18\x08 \x03(\x0c"\x8a\x01\n\x18ModelRepositoryParameter\x12\x14\n\nbool_param\x18\x01 \x01(\x08H\x00\x12\x15\n\x0bint64_param\x18\x02 \x01(\x03H\x00\x12\x16\n\x0cstring_param\x18\x03 \x01(\tH\x00\x12\x15\n\x0b\x62ytes_param\x18\x04 \x01(\x0cH\x00\x42\x12\n\x10parameter_choice"@\n\x16RepositoryIndexRequest\x12\x17\n\x0frepository_name\x18\x01 \x01(\t\x12\r\n\x05ready\x18\x02 \x01(\x08"\xa4\x01\n\x17RepositoryIndexResponse\x12=\n\x06models\x18\x01 \x03(\x0b\x32-.inference.RepositoryIndexResponse.ModelIndex\x1aJ\n\nModelIndex\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t\x12\r\n\x05state\x18\x03 \x01(\t\x12\x0e\n\x06reason\x18\x04 \x01(\t"\xec\x01\n\x1aRepositoryModelLoadRequest\x12\x17\n\x0frepository_name\x18\x01 \x01(\t\x12\x12\n\nmodel_name\x18\x02 \x01(\t\x12I\n\nparameters\x18\x03 \x03(\x0b\x32\x35.inference.RepositoryModelLoadRequest.ParametersEntry\x1aV\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\x32\n\x05value\x18\x02 \x01(\x0b\x32#.inference.ModelRepositoryParameter:\x02\x38\x01"\x1d\n\x1bRepositoryModelLoadResponse"\xf0\x01\n\x1cRepositoryModelUnloadRequest\x12\x17\n\x0frepository_name\x18\x01 \x01(\t\x12\x12\n\nmodel_name\x18\x02 \x01(\t\x12K\n\nparameters\x18\x03 \x03(\x0b\x32\x37.inference.RepositoryModelUnloadRequest.ParametersEntry\x1aV\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\x32\n\x05value\x18\x02 
\x01(\x0b\x32#.inference.ModelRepositoryParameter:\x02\x38\x01"\x1f\n\x1dRepositoryModelUnloadResponse2\xae\x06\n\x14GRPCInferenceService\x12K\n\nServerLive\x12\x1c.inference.ServerLiveRequest\x1a\x1d.inference.ServerLiveResponse"\x00\x12N\n\x0bServerReady\x12\x1d.inference.ServerReadyRequest\x1a\x1e.inference.ServerReadyResponse"\x00\x12K\n\nModelReady\x12\x1c.inference.ModelReadyRequest\x1a\x1d.inference.ModelReadyResponse"\x00\x12W\n\x0eServerMetadata\x12 .inference.ServerMetadataRequest\x1a!.inference.ServerMetadataResponse"\x00\x12T\n\rModelMetadata\x12\x1f.inference.ModelMetadataRequest\x1a .inference.ModelMetadataResponse"\x00\x12K\n\nModelInfer\x12\x1c.inference.ModelInferRequest\x1a\x1d.inference.ModelInferResponse"\x00\x12Z\n\x0fRepositoryIndex\x12!.inference.RepositoryIndexRequest\x1a".inference.RepositoryIndexResponse"\x00\x12\x66\n\x13RepositoryModelLoad\x12%.inference.RepositoryModelLoadRequest\x1a&.inference.RepositoryModelLoadResponse"\x00\x12l\n\x15RepositoryModelUnload\x12\'.inference.RepositoryModelUnloadRequest\x1a(.inference.RepositoryModelUnloadResponse"\x00\x62\x06proto3'
     18 )
     21 _SERVERLIVEREQUEST = DESCRIPTOR.message_types_by_name["ServerLiveRequest"]
     22 _SERVERLIVERESPONSE = DESCRIPTOR.message_types_by_name["ServerLiveResponse"]

TypeError: Couldn't build proto file into descriptor pool!
Invalid proto descriptor for file "dataplane.proto":
  inference.ServerLiveRequest: "inference.ServerLiveRequest" is already defined in file "grpc_predict_v2.proto".
  inference.ServerLiveResponse.live: "inference.ServerLiveResponse.live" is already defined in file "grpc_predict_v2.proto".
  inference.ServerLiveResponse: "inference.ServerLiveResponse" is already defined in file "grpc_predict_v2.proto".
  inference.ServerReadyRequest: "inference.ServerReadyRequest" is already defined in file "grpc_predict_v2.proto".
  inference.ServerReadyResponse.ready: "inference.ServerReadyResponse.ready" is already defined in file "grpc_predict_v2.proto".
  inference.ServerReadyResponse: "inference.ServerReadyResponse" is already defined in file "grpc_predict_v2.proto".
  inference.ModelReadyRequest.name: "inference.ModelReadyRequest.name" is already defined in file "grpc_predict_v2.proto".
  inference.ModelReadyRequest.version: "inference.ModelReadyRequest.version" is already defined in file "grpc_predict_v2.proto".
  inference.ModelReadyRequest: "inference.ModelReadyRequest" is already defined in file "grpc_predict_v2.proto".
  inference.ModelReadyResponse.ready: "inference.ModelReadyResponse.ready" is already defined in file "grpc_predict_v2.proto".
  inference.ModelReadyResponse: "inference.ModelReadyResponse" is already defined in file "grpc_predict_v2.proto".
  inference.ServerMetadataRequest: "inference.ServerMetadataRequest" is already defined in file "grpc_predict_v2.proto".
  inference.ServerMetadataResponse.name: "inference.ServerMetadataResponse.name" is already defined in file "grpc_predict_v2.proto".
  inference.ServerMetadataResponse.version: "inference.ServerMetadataResponse.version" is already defined in file "grpc_predict_v2.proto".
  inference.ServerMetadataResponse.extensions: "inference.ServerMetadataResponse.extensions" is already defined in file "grpc_predict_v2.proto".
  inference.ServerMetadataResponse: "inference.ServerMetadataResponse" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataRequest.name: "inference.ModelMetadataRequest.name" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataRequest.version: "inference.ModelMetadataRequest.version" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataRequest: "inference.ModelMetadataRequest" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataResponse.name: "inference.ModelMetadataResponse.name" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataResponse.versions: "inference.ModelMetadataResponse.versions" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataResponse.platform: "inference.ModelMetadataResponse.platform" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataResponse.inputs: "inference.ModelMetadataResponse.inputs" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataResponse.outputs: "inference.ModelMetadataResponse.outputs" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataResponse.TensorMetadata.name: "inference.ModelMetadataResponse.TensorMetadata.name" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataResponse.TensorMetadata.datatype: "inference.ModelMetadataResponse.TensorMetadata.datatype" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataResponse.TensorMetadata.shape: "inference.ModelMetadataResponse.TensorMetadata.shape" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataResponse.TensorMetadata: "inference.ModelMetadataResponse.TensorMetadata" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataResponse: "inference.ModelMetadataResponse" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.model_name: "inference.ModelInferRequest.model_name" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.model_version: "inference.ModelInferRequest.model_version" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.id: "inference.ModelInferRequest.id" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.parameters: "inference.ModelInferRequest.parameters" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.inputs: "inference.ModelInferRequest.inputs" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.outputs: "inference.ModelInferRequest.outputs" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferInputTensor.name: "inference.ModelInferRequest.InferInputTensor.name" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferInputTensor.datatype: "inference.ModelInferRequest.InferInputTensor.datatype" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferInputTensor.shape: "inference.ModelInferRequest.InferInputTensor.shape" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferInputTensor.parameters: "inference.ModelInferRequest.InferInputTensor.parameters" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferInputTensor.contents: "inference.ModelInferRequest.InferInputTensor.contents" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferInputTensor.ParametersEntry.key: "inference.ModelInferRequest.InferInputTensor.ParametersEntry.key" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferInputTensor.ParametersEntry.value: "inference.ModelInferRequest.InferInputTensor.ParametersEntry.value" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferInputTensor.ParametersEntry: "inference.ModelInferRequest.InferInputTensor.ParametersEntry" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferInputTensor: "inference.ModelInferRequest.InferInputTensor" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferRequestedOutputTensor.name: "inference.ModelInferRequest.InferRequestedOutputTensor.name" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferRequestedOutputTensor.parameters: "inference.ModelInferRequest.InferRequestedOutputTensor.parameters" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferRequestedOutputTensor.ParametersEntry.key: "inference.ModelInferRequest.InferRequestedOutputTensor.ParametersEntry.key" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferRequestedOutputTensor.ParametersEntry.value: "inference.ModelInferRequest.InferRequestedOutputTensor.ParametersEntry.value" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferRequestedOutputTensor.ParametersEntry: "inference.ModelInferRequest.InferRequestedOutputTensor.ParametersEntry" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.InferRequestedOutputTensor: "inference.ModelInferRequest.InferRequestedOutputTensor" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.ParametersEntry.key: "inference.ModelInferRequest.ParametersEntry.key" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.ParametersEntry.value: "inference.ModelInferRequest.ParametersEntry.value" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest.ParametersEntry: "inference.ModelInferRequest.ParametersEntry" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferRequest: "inference.ModelInferRequest" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.model_name: "inference.ModelInferResponse.model_name" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.model_version: "inference.ModelInferResponse.model_version" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.id: "inference.ModelInferResponse.id" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.parameters: "inference.ModelInferResponse.parameters" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.outputs: "inference.ModelInferResponse.outputs" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.InferOutputTensor.name: "inference.ModelInferResponse.InferOutputTensor.name" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.InferOutputTensor.datatype: "inference.ModelInferResponse.InferOutputTensor.datatype" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.InferOutputTensor.shape: "inference.ModelInferResponse.InferOutputTensor.shape" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.InferOutputTensor.parameters: "inference.ModelInferResponse.InferOutputTensor.parameters" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.InferOutputTensor.contents: "inference.ModelInferResponse.InferOutputTensor.contents" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.InferOutputTensor.ParametersEntry.key: "inference.ModelInferResponse.InferOutputTensor.ParametersEntry.key" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.InferOutputTensor.ParametersEntry.value: "inference.ModelInferResponse.InferOutputTensor.ParametersEntry.value" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.InferOutputTensor.ParametersEntry: "inference.ModelInferResponse.InferOutputTensor.ParametersEntry" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.InferOutputTensor: "inference.ModelInferResponse.InferOutputTensor" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.ParametersEntry.key: "inference.ModelInferResponse.ParametersEntry.key" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.ParametersEntry.value: "inference.ModelInferResponse.ParametersEntry.value" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse.ParametersEntry: "inference.ModelInferResponse.ParametersEntry" is already defined in file "grpc_predict_v2.proto".
  inference.ModelInferResponse: "inference.ModelInferResponse" is already defined in file "grpc_predict_v2.proto".
  inference.InferParameter.parameter_choice: "inference.InferParameter.parameter_choice" is already defined in file "grpc_predict_v2.proto".
  inference.InferParameter.bool_param: "inference.InferParameter.bool_param" is already defined in file "grpc_predict_v2.proto".
  inference.InferParameter.int64_param: "inference.InferParameter.int64_param" is already defined in file "grpc_predict_v2.proto".
  inference.InferParameter.string_param: "inference.InferParameter.string_param" is already defined in file "grpc_predict_v2.proto".
  inference.InferParameter: "inference.InferParameter" is already defined in file "grpc_predict_v2.proto".
  inference.InferTensorContents.bool_contents: "inference.InferTensorContents.bool_contents" is already defined in file "grpc_predict_v2.proto".
  inference.InferTensorContents.int_contents: "inference.InferTensorContents.int_contents" is already defined in file "grpc_predict_v2.proto".
  inference.InferTensorContents.int64_contents: "inference.InferTensorContents.int64_contents" is already defined in file "grpc_predict_v2.proto".
  inference.InferTensorContents.uint_contents: "inference.InferTensorContents.uint_contents" is already defined in file "grpc_predict_v2.proto".
  inference.InferTensorContents.uint64_contents: "inference.InferTensorContents.uint64_contents" is already defined in file "grpc_predict_v2.proto".
  inference.InferTensorContents.fp32_contents: "inference.InferTensorContents.fp32_contents" is already defined in file "grpc_predict_v2.proto".
  inference.InferTensorContents.fp64_contents: "inference.InferTensorContents.fp64_contents" is already defined in file "grpc_predict_v2.proto".
  inference.InferTensorContents.bytes_contents: "inference.InferTensorContents.bytes_contents" is already defined in file "grpc_predict_v2.proto".
  inference.InferTensorContents: "inference.InferTensorContents" is already defined in file "grpc_predict_v2.proto".
  inference.RepositoryModelLoadRequest.model_name: "inference.RepositoryModelLoadRequest.model_name" is already defined in file "grpc_predict_v2.proto".
  inference.RepositoryModelLoadRequest: "inference.RepositoryModelLoadRequest" is already defined in file "grpc_predict_v2.proto".
  inference.RepositoryModelLoadResponse: "inference.RepositoryModelLoadResponse" is already defined in file "grpc_predict_v2.proto".
  inference.RepositoryModelUnloadRequest.model_name: "inference.RepositoryModelUnloadRequest.model_name" is already defined in file "grpc_predict_v2.proto".
  inference.RepositoryModelUnloadRequest: "inference.RepositoryModelUnloadRequest" is already defined in file "grpc_predict_v2.proto".
  inference.RepositoryModelUnloadResponse: "inference.RepositoryModelUnloadResponse" is already defined in file "grpc_predict_v2.proto".
  inference.GRPCInferenceService.ServerLive: "inference.GRPCInferenceService.ServerLive" is already defined in file "grpc_predict_v2.proto".
  inference.GRPCInferenceService.ServerReady: "inference.GRPCInferenceService.ServerReady" is already defined in file "grpc_predict_v2.proto".
  inference.GRPCInferenceService.ModelReady: "inference.GRPCInferenceService.ModelReady" is already defined in file "grpc_predict_v2.proto".
  inference.GRPCInferenceService.ServerMetadata: "inference.GRPCInferenceService.ServerMetadata" is already defined in file "grpc_predict_v2.proto".
  inference.GRPCInferenceService.ModelMetadata: "inference.GRPCInferenceService.ModelMetadata" is already defined in file "grpc_predict_v2.proto".
  inference.GRPCInferenceService.ModelInfer: "inference.GRPCInferenceService.ModelInfer" is already defined in file "grpc_predict_v2.proto".
  inference.GRPCInferenceService.RepositoryModelLoad: "inference.GRPCInferenceService.RepositoryModelLoad" is already defined in file "grpc_predict_v2.proto".
  inference.GRPCInferenceService.RepositoryModelUnload: "inference.GRPCInferenceService.RepositoryModelUnload" is already defined in file "grpc_predict_v2.proto".
  inference.GRPCInferenceService: "inference.GRPCInferenceService" is already defined in file "grpc_predict_v2.proto".
  inference.ModelMetadataResponse.TensorMetadata.ParametersEntry.value: "inference.InferParameter" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelMetadataResponse.ParametersEntry.value: "inference.InferParameter" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelMetadataResponse.inputs: "inference.ModelMetadataResponse.TensorMetadata" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelMetadataResponse.outputs: "inference.ModelMetadataResponse.TensorMetadata" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferRequest.InferInputTensor.ParametersEntry.value: "inference.InferParameter" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferRequest.InferInputTensor.parameters: "inference.ModelInferRequest.InferInputTensor.ParametersEntry" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferRequest.InferInputTensor.contents: "inference.InferTensorContents" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferRequest.InferRequestedOutputTensor.ParametersEntry.value: "inference.InferParameter" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferRequest.InferRequestedOutputTensor.parameters: "inference.ModelInferRequest.InferRequestedOutputTensor.ParametersEntry" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferRequest.ParametersEntry.value: "inference.InferParameter" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferRequest.parameters: "inference.ModelInferRequest.ParametersEntry" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferRequest.inputs: "inference.ModelInferRequest.InferInputTensor" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferRequest.outputs: "inference.ModelInferRequest.InferRequestedOutputTensor" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferResponse.InferOutputTensor.ParametersEntry.value: "inference.InferParameter" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferResponse.InferOutputTensor.parameters: "inference.ModelInferResponse.InferOutputTensor.ParametersEntry" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferResponse.InferOutputTensor.contents: "inference.InferTensorContents" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferResponse.ParametersEntry.value: "inference.InferParameter" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferResponse.parameters: "inference.ModelInferResponse.ParametersEntry" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.ModelInferResponse.outputs: "inference.ModelInferResponse.InferOutputTensor" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.ServerLive: "inference.ServerLiveRequest" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.ServerLive: "inference.ServerLiveResponse" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.ServerReady: "inference.ServerReadyRequest" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.ServerReady: "inference.ServerReadyResponse" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.ModelReady: "inference.ModelReadyRequest" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.ModelReady: "inference.ModelReadyResponse" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.ServerMetadata: "inference.ServerMetadataRequest" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.ServerMetadata: "inference.ServerMetadataResponse" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.ModelMetadata: "inference.ModelMetadataRequest" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.ModelMetadata: "inference.ModelMetadataResponse" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.ModelInfer: "inference.ModelInferRequest" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.ModelInfer: "inference.ModelInferResponse" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.RepositoryModelLoad: "inference.RepositoryModelLoadRequest" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.RepositoryModelLoad: "inference.RepositoryModelLoadResponse" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.RepositoryModelUnload: "inference.RepositoryModelUnloadRequest" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.
  inference.GRPCInferenceService.RepositoryModelUnload: "inference.RepositoryModelUnloadResponse" seems to be defined in "grpc_predict_v2.proto", which is not imported by "dataplane.proto".  To use it here, please add the necessary import.

Any ideas?

adriangonz commented 1 year ago

Hey @jinserk ,

gRPC is known to detect clashes like this when protobuf modules generated from the same *.proto definition are registered more than once.
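The clash can be reproduced with nothing but `google.protobuf` itself. Below is a minimal sketch (the `make_file` helper is hypothetical; the two file names and the `inference.ServerLiveRequest` message mirror the real generated modules). Note the exact exception type and message are version-dependent, so the sketch catches broadly:

```python
from google.protobuf import descriptor_pb2, descriptor_pool

def make_file(filename: str) -> bytes:
    """Build a serialized FileDescriptorProto defining inference.ServerLiveRequest."""
    fdp = descriptor_pb2.FileDescriptorProto()
    fdp.name = filename
    fdp.package = "inference"
    fdp.syntax = "proto3"
    fdp.message_type.add(name="ServerLiveRequest")
    return fdp.SerializeToString()

# Register the same fully-qualified symbol from two differently named files,
# just like mlserver's dataplane.proto and kserve's grpc_predict_v2.proto do.
pool = descriptor_pool.DescriptorPool()  # private pool; Default() behaves the same
pool.AddSerializedFile(make_file("dataplane.proto"))
try:
    pool.AddSerializedFile(make_file("grpc_predict_v2.proto"))
    raised = False
except Exception as exc:  # surfaced as TypeError on the protobuf versions I checked
    raised = True
    print(type(exc).__name__, exc)

print("conflict detected:", raised)
```
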

Could you share more info on your environment and intended use case?

jinserk commented 1 year ago

@adriangonz

I'm using MLServer to serve MLflow models on KServe. The versions are:

mlserver-mlflow==1.2.4 ; python_version >= "3.10" and python_version < "4.0" 
mlserver-sklearn==1.2.4 ; python_version >= "3.10" and python_version < "4.0" 
mlserver==1.2.4 ; python_version >= "3.10" and python_version < "4.0" 
mlflow==2.2.1 ; python_version >= "3.10" and python_version < "4.0" 
kserve==0.10.1 ; python_version >= "3.10" and python_version < "4.0"

One caveat: kserve currently pins tritonclient==2.18.0, which doesn't satisfy mlserver>1.2.0's requirement of tritonclient[http]>=2.24, so I force-installed the latest mlserver, ignoring pip's warning.

The reproduction can be even simpler:

[ins] In [2]: import mlserver
         ...: import kserve
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[2], line 2
      1 import mlserver
----> 2 import kserve

File ~/.pyenv/versions/3.10.10/envs/kai/lib/python3.10/site-packages/kserve/__init__.py:17
      1 # Copyright 2021 The KServe Authors.
      2 #
      3 # Licensed under the Apache License, Version 2.0 (the "License");
   (...)
     12 # See the License for the specific language governing permissions and
     13 # limitations under the License.
     15 from __future__ import absolute_import
---> 17 from .model import Model
     18 from .model_server import ModelServer
     19 from .inference_client import InferenceServerClient

File ~/.pyenv/versions/3.10.10/envs/kai/lib/python3.10/site-packages/kserve/model.py:28                                                                                                                                                                                        
     25 import orjson                                                                                                                                                                                                                                                          
     26 from cloudevents.http import CloudEvent                                                                                                                                                                                                                                
---> 28 from .protocol.infer_type import InferRequest, InferResponse                                                                                                                                                                                                           
     29 from .metrics import PRE_HIST_TIME, POST_HIST_TIME, PREDICT_HIST_TIME, EXPLAIN_HIST_TIME, get_labels                                                                                                                                                                   
     30 from .protocol.grpc import grpc_predict_v2_pb2_grpc                                                                                                                                                                                                                    

File ~/.pyenv/versions/3.10.10/envs/kai/lib/python3.10/site-packages/kserve/protocol/infer_type.py:23                                                                                                                                                                          
     21 from ..constants.constants import GRPC_CONTENT_DATATYPE_MAPPINGS                                                                                                                                                                                                       
     22 from ..errors import InvalidInput                                                                                                                                                                                                                                      
---> 23 from ..protocol.grpc.grpc_predict_v2_pb2 import ModelInferRequest, InferTensorContents, ModelInferResponse                                                                                                                                                             
     24 from ..utils.numpy_codec import to_np_dtype, from_np_dtype                                                                                                                                                                                                             
     27 class InferInput:                                                                                                                                                                                                                                                      

File ~/.pyenv/versions/3.10.10/envs/kai/lib/python3.10/site-packages/kserve/protocol/grpc/grpc_predict_v2_pb2.py:31                                                                                                                                                            
     24 # @@protoc_insertion_point(imports)                                                                                                                                                                                                                                    
     26 _sym_db = _symbol_database.Default()
---> 31 DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x15grpc_predict_v2.proto\x12\tinference\"\x13\n\x11ServerLiveRequest\"\"\n\x12ServerLiveResponse\x12\x0c\n\x04live\x18\x01 \x01(\x08\"\x14\n\x12ServerReadyRequest\"$\n\x13ServerReadyResponse\x12\r\n\x
05ready\x18\x01 \x01(\x08\"2\n\x11ModelReadyRequest\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t\"#\n\x12ModelReadyResponse\x12\r\n\x05ready\x18\x01 \x01(\x08\"\x17\n\x15ServerMetadataRequest\"K\n\x16ServerMetadataResponse\x12\x0c\n\x04name\x18\
x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t\x12\x12\n\nextensions\x18\x03 \x03(\t\"5\n\x14ModelMetadataRequest\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t\"\x8d\x02\n\x15ModelMetadataResponse\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08
versions\x18\x02 \x03(\t\x12\x10\n\x08platform\x18\x03 \x01(\t\x12?\n\x06inputs\x18\x04 \x03(\x0b\x32/.inference.ModelMetadataResponse.TensorMetadata\x12@\n\x07outputs\x18\x05 \x03(\x0b\x32/.inference.ModelMetadataResponse.TensorMetadata\x1a?\n\x0eTensorMetadata\x12\x0c\
n\x04name\x18\x01 \x01(\t\x12\x10\n\x08\x64\x61tatype\x18\x02 \x01(\t\x12\r\n\x05shape\x18\x03 \x03(\x03\"\xee\x06\n\x11ModelInferRequest\x12\x12\n\nmodel_name\x18\x01 \x01(\t\x12\x15\n\rmodel_version\x18\x02 \x01(\t\x12\n\n\x02id\x18\x03 \x01(\t\x12@\n\nparameters\x18\x
04 \x03(\x0b\x32,.inference.ModelInferRequest.ParametersEntry\x12=\n\x06inputs\x18\x05 \x03(\x0b\x32-.inference.ModelInferRequest.InferInputTensor\x12H\n\x07outputs\x18\x06 \x03(\x0b\x32\x37.inference.ModelInferRequest.InferRequestedOutputTensor\x12\x1a\n\x12raw_input_co
ntents\x18\x07 \x03(\x0c\x1a\x94\x02\n\x10InferInputTensor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08\x64\x61tatype\x18\x02 \x01(\t\x12\r\n\x05shape\x18\x03 \x03(\x03\x12Q\n\nparameters\x18\x04 \x03(\x0b\x32=.inference.ModelInferRequest.InferInputTensor.ParametersEn
try\x12\x30\n\x08\x63ontents\x18\x05 \x01(\x0b\x32\x1e.inference.InferTensorContents\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\x1a\xd5\x01\n\x1aInferRequestedOutputTensor\x12
\x0c\n\x04name\x18\x01 \x01(\t\x12[\n\nparameters\x18\x02 \x03(\x0b\x32G.inference.ModelInferRequest.InferRequestedOutputTensor.ParametersEntry\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\
x02\x38\x01\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\"\xd5\x04\n\x12ModelInferResponse\x12\x12\n\nmodel_name\x18\x01 \x01(\t\x12\x15\n\rmodel_version\x18\x02 \x01(\t\x12\n\n
\x02id\x18\x03 \x01(\t\x12\x41\n\nparameters\x18\x04 \x03(\x0b\x32-.inference.ModelInferResponse.ParametersEntry\x12@\n\x07outputs\x18\x05 \x03(\x0b\x32/.inference.ModelInferResponse.InferOutputTensor\x12\x1b\n\x13raw_output_contents\x18\x06 \x03(\x0c\x1a\x97\x02\n\x11In
ferOutputTensor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08\x64\x61tatype\x18\x02 \x01(\t\x12\r\n\x05shape\x18\x03 \x03(\x03\x12S\n\nparameters\x18\x04 \x03(\x0b\x32?.inference.ModelInferResponse.InferOutputTensor.ParametersEntry\x12\x30\n\x08\x63ontents\x18\x05 \x01
(\x0b\x32\x1e.inference.InferTensorContents\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(
\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\"i\n\x0eInferParameter\x12\x14\n\nbool_param\x18\x01 \x01(\x08H\x00\x12\x15\n\x0bint64_param\x18\x02 \x01(\x03H\x00\x12\x16\n\x0cstring_param\x18\x03 \x01(\tH\x00\x42\x12\n\x10parameter_choice\"\xd0\x01\n\x13InferTensorC
ontents\x12\x15\n\rbool_contents\x18\x01 \x03(\x08\x12\x14\n\x0cint_contents\x18\x02 \x03(\x05\x12\x16\n\x0eint64_contents\x18\x03 \x03(\x03\x12\x15\n\ruint_contents\x18\x04 \x03(\r\x12\x17\n\x0fuint64_contents\x18\x05 \x03(\x04\x12\x15\n\rfp32_contents\x18\x06 \x03(\x02
\x12\x15\n\rfp64_contents\x18\x07 \x03(\x01\x12\x16\n\x0e\x62ytes_contents\x18\x08 \x03(\x0c\"0\n\x1aRepositoryModelLoadRequest\x12\x12\n\nmodel_name\x18\x01 \x01(\t\"C\n\x1bRepositoryModelLoadResponse\x12\x12\n\nmodel_name\x18\x01 \x01(\t\x12\x10\n\x08isLoaded\x18\x02 \
x01(\x08\"2\n\x1cRepositoryModelUnloadRequest\x12\x12\n\nmodel_name\x18\x01 \x01(\t\"G\n\x1dRepositoryModelUnloadResponse\x12\x12\n\nmodel_name\x18\x01 \x01(\t\x12\x12\n\nisUnloaded\x18\x02 \x01(\x08\x32\xd2\x05\n\x14GRPCInferenceService\x12K\n\nServerLive\x12\x1c.infere
nce.ServerLiveRequest\x1a\x1d.inference.ServerLiveResponse\"\x00\x12N\n\x0bServerReady\x12\x1d.inference.ServerReadyRequest\x1a\x1e.inference.ServerReadyResponse\"\x00\x12K\n\nModelReady\x12\x1c.inference.ModelReadyRequest\x1a\x1d.inference.ModelReadyResponse\"\x00\x12W\
n\x0eServerMetadata\x12 .inference.ServerMetadataRequest\x1a!.inference.ServerMetadataResponse\"\x00\x12T\n\rModelMetadata\x12\x1f.inference.ModelMetadataRequest\x1a .inference.ModelMetadataResponse\"\x00\x12K\n\nModelInfer\x12\x1c.inference.ModelInferRequest\x1a\x1d.inf
erence.ModelInferResponse\"\x00\x12\x66\n\x13RepositoryModelLoad\x12%.inference.RepositoryModelLoadRequest\x1a&.inference.RepositoryModelLoadResponse\"\x00\x12l\n\x15RepositoryModelUnload\x12\'.inference.RepositoryModelUnloadRequest\x1a(.inference.RepositoryModelUnloadRe
sponse\"\x00\x62\x06proto3')
     35 _SERVERLIVEREQUEST = DESCRIPTOR.message_types_by_name['ServerLiveRequest']
     36 _SERVERLIVERESPONSE = DESCRIPTOR.message_types_by_name['ServerLiveResponse']

File ~/.pyenv/versions/3.10.10/envs/kai/lib/python3.10/site-packages/google/protobuf/descriptor_pool.py:219, in DescriptorPool.AddSerializedFile(self, serialized_file_desc_proto)
    216 from google.protobuf import descriptor_pb2
    217 file_desc_proto = descriptor_pb2.FileDescriptorProto.FromString(
    218     serialized_file_desc_proto)
--> 219 file_desc = self._ConvertFileProtoToFileDescriptor(file_desc_proto)
    220 file_desc.serialized_pb = serialized_file_desc_proto
    221 return file_desc

File ~/.pyenv/versions/3.10.10/envs/kai/lib/python3.10/site-packages/google/protobuf/descriptor_pool.py:774, in DescriptorPool._ConvertFileProtoToFileDescriptor(self, file_proto)
    770   scope.update((_PrefixWithDot(enum.full_name), enum)
    771                for enum in dependency.enum_types_by_name.values())
    773 for message_type in file_proto.message_type:
--> 774   message_desc = self._ConvertMessageDescriptor(
    775       message_type, file_proto.package, file_descriptor, scope,
    776       file_proto.syntax)
    777   file_descriptor.message_types_by_name[message_desc.name] = (
    778       message_desc)
    780 for enum_type in file_proto.enum_type:

File ~/.pyenv/versions/3.10.10/envs/kai/lib/python3.10/site-packages/google/protobuf/descriptor_pool.py:918, in DescriptorPool._ConvertMessageDescriptor(self, desc_proto, package, file_desc, scope, syntax)
    915     fields[field_index].containing_oneof = oneofs[oneof_index]
    917 scope[_PrefixWithDot(desc_name)] = desc
--> 918 self._CheckConflictRegister(desc, desc.full_name, desc.file.name)
    919 self._descriptors[desc_name] = desc
    920 return desc

File ~/.pyenv/versions/3.10.10/envs/kai/lib/python3.10/site-packages/google/protobuf/descriptor_pool.py:191, in DescriptorPool._CheckConflictRegister(self, desc, desc_name, file_name)
    186   if isinstance(desc, descriptor.EnumValueDescriptor):
    187     error_msg += ('\nNote: enum values appear as '
    188                   'siblings of the enum type instead of '
    189                   'children of it.')
--> 191   raise TypeError(error_msg)
    193 return

TypeError: Conflict register for file "grpc_predict_v2.proto": inference.ServerLiveRequest is already defined in file "dataplane.proto". Please fix the conflict by adding package name on the proto file, or use different name for the duplication.

Just importing both mlserver and kserve triggers this collision. I guess one or more descriptor names happen to be the same across these packages; maybe one of them should be renamed to avoid the conflict.
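As a sketch of what a rename could look like: registering the same message name under two *different* proto packages does not clash, because the fully-qualified names differ. The `kserve.inference` package name below is hypothetical (neither project actually uses it), and the `make_file` helper just fabricates serialized descriptors for illustration:

```python
from google.protobuf import descriptor_pb2, descriptor_pool

def make_file(filename: str, package: str) -> bytes:
    """Build a serialized FileDescriptorProto with a ServerLiveRequest message."""
    fdp = descriptor_pb2.FileDescriptorProto()
    fdp.name = filename
    fdp.package = package
    fdp.syntax = "proto3"
    fdp.message_type.add(name="ServerLiveRequest")
    return fdp.SerializeToString()

pool = descriptor_pool.DescriptorPool()  # private pool, not the process-wide default
pool.AddSerializedFile(make_file("dataplane.proto", "inference"))
# Hypothetical renamed package -- no clash, since the fully-qualified names differ.
pool.AddSerializedFile(make_file("grpc_predict_v2.proto", "kserve.inference"))

print(pool.FindMessageTypeByName("inference.ServerLiveRequest").file.name)
print(pool.FindMessageTypeByName("kserve.inference.ServerLiveRequest").file.name)
```

The trade-off is that renaming changes the wire-visible fully-qualified type names, so it would break clients that expect the standard `inference.*` names from the V2 protocol.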

jinserk commented 1 year ago

I found the descriptor named "ServerLiveRequest" in mlserver:

(kyu) jinserk@pluto ~/kyu/MLServer/mlserver (master) $ grep -Irn ServerLiveRequest 
./grpc/dataplane_pb2_grpc.py:22:            request_serializer=dataplane__pb2.ServerLiveRequest.SerializeToString,
./grpc/dataplane_pb2_grpc.py:132:            request_deserializer=dataplane__pb2.ServerLiveRequest.FromString,
./grpc/dataplane_pb2_grpc.py:206:            dataplane__pb2.ServerLiveRequest.SerializeToString,
./grpc/servicers.py:28:        self, request: pb.ServerLiveRequest, context
./grpc/dataplane_pb2.py:17:    b'\n\x0f\x64\x61taplane.proto\x12\tinference"\x13\n\x11ServerLiveRequest""\n\x12ServerLiveResponse\x12\x0c\n\x04live\x18\x01 \x01(\x08"\x14\n\x12ServerReadyRequest"$\n\x13ServerReadyResponse\x12\r\n\x05ready\x18\x01 \x01(\x08"2\n\x11ModelReadyRequest\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t"#\n\x12ModelReadyResponse\x12\r\n\x05ready\x18\x01 \x01(\x08"\x17\n\x15ServerMetadataRequest"K\n\x16ServerMetadataResponse\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t\x12\x12\n\nextensions\x18\x03 \x03(\t"5\n\x14ModelMetadataRequest\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t"\xc5\x04\n\x15ModelMetadataResponse\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08versions\x18\x02 \x03(\t\x12\x10\n\x08platform\x18\x03 \x01(\t\x12?\n\x06inputs\x18\x04 \x03(\x0b\x32/.inference.ModelMetadataResponse.TensorMetadata\x12@\n\x07outputs\x18\x05 \x03(\x0b\x32/.inference.ModelMetadataResponse.TensorMetadata\x12\x44\n\nparameters\x18\x06 \x03(\x0b\x32\x30.inference.ModelMetadataResponse.ParametersEntry\x1a\xe2\x01\n\x0eTensorMetadata\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08\x64\x61tatype\x18\x02 \x01(\t\x12\r\n\x05shape\x18\x03 \x03(\x03\x12S\n\nparameters\x18\x04 \x03(\x0b\x32?.inference.ModelMetadataResponse.TensorMetadata.ParametersEntry\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01"\xee\x06\n\x11ModelInferRequest\x12\x12\n\nmodel_name\x18\x01 \x01(\t\x12\x15\n\rmodel_version\x18\x02 \x01(\t\x12\n\n\x02id\x18\x03 \x01(\t\x12@\n\nparameters\x18\x04 \x03(\x0b\x32,.inference.ModelInferRequest.ParametersEntry\x12=\n\x06inputs\x18\x05 \x03(\x0b\x32-.inference.ModelInferRequest.InferInputTensor\x12H\n\x07outputs\x18\x06 
\x03(\x0b\x32\x37.inference.ModelInferRequest.InferRequestedOutputTensor\x12\x1a\n\x12raw_input_contents\x18\x07 \x03(\x0c\x1a\x94\x02\n\x10InferInputTensor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08\x64\x61tatype\x18\x02 \x01(\t\x12\r\n\x05shape\x18\x03 \x03(\x03\x12Q\n\nparameters\x18\x04 \x03(\x0b\x32=.inference.ModelInferRequest.InferInputTensor.ParametersEntry\x12\x30\n\x08\x63ontents\x18\x05 \x01(\x0b\x32\x1e.inference.InferTensorContents\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\x1a\xd5\x01\n\x1aInferRequestedOutputTensor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12[\n\nparameters\x18\x02 \x03(\x0b\x32G.inference.ModelInferRequest.InferRequestedOutputTensor.ParametersEntry\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01"\xd5\x04\n\x12ModelInferResponse\x12\x12\n\nmodel_name\x18\x01 \x01(\t\x12\x15\n\rmodel_version\x18\x02 \x01(\t\x12\n\n\x02id\x18\x03 \x01(\t\x12\x41\n\nparameters\x18\x04 \x03(\x0b\x32-.inference.ModelInferResponse.ParametersEntry\x12@\n\x07outputs\x18\x05 \x03(\x0b\x32/.inference.ModelInferResponse.InferOutputTensor\x12\x1b\n\x13raw_output_contents\x18\x06 \x03(\x0c\x1a\x97\x02\n\x11InferOutputTensor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08\x64\x61tatype\x18\x02 \x01(\t\x12\r\n\x05shape\x18\x03 \x03(\x03\x12S\n\nparameters\x18\x04 \x03(\x0b\x32?.inference.ModelInferResponse.InferOutputTensor.ParametersEntry\x12\x30\n\x08\x63ontents\x18\x05 \x01(\x0b\x32\x1e.inference.InferTensorContents\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 
\x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01"i\n\x0eInferParameter\x12\x14\n\nbool_param\x18\x01 \x01(\x08H\x00\x12\x15\n\x0bint64_param\x18\x02 \x01(\x03H\x00\x12\x16\n\x0cstring_param\x18\x03 \x01(\tH\x00\x42\x12\n\x10parameter_choice"\xd0\x01\n\x13InferTensorContents\x12\x15\n\rbool_contents\x18\x01 \x03(\x08\x12\x14\n\x0cint_contents\x18\x02 \x03(\x05\x12\x16\n\x0eint64_contents\x18\x03 \x03(\x03\x12\x15\n\ruint_contents\x18\x04 \x03(\r\x12\x17\n\x0fuint64_contents\x18\x05 \x03(\x04\x12\x15\n\rfp32_contents\x18\x06 \x03(\x02\x12\x15\n\rfp64_contents\x18\x07 \x03(\x01\x12\x16\n\x0e\x62ytes_contents\x18\x08 \x03(\x0c"\x8a\x01\n\x18ModelRepositoryParameter\x12\x14\n\nbool_param\x18\x01 \x01(\x08H\x00\x12\x15\n\x0bint64_param\x18\x02 \x01(\x03H\x00\x12\x16\n\x0cstring_param\x18\x03 \x01(\tH\x00\x12\x15\n\x0b\x62ytes_param\x18\x04 \x01(\x0cH\x00\x42\x12\n\x10parameter_choice"@\n\x16RepositoryIndexRequest\x12\x17\n\x0frepository_name\x18\x01 \x01(\t\x12\r\n\x05ready\x18\x02 \x01(\x08"\xa4\x01\n\x17RepositoryIndexResponse\x12=\n\x06models\x18\x01 \x03(\x0b\x32-.inference.RepositoryIndexResponse.ModelIndex\x1aJ\n\nModelIndex\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t\x12\r\n\x05state\x18\x03 \x01(\t\x12\x0e\n\x06reason\x18\x04 \x01(\t"\xec\x01\n\x1aRepositoryModelLoadRequest\x12\x17\n\x0frepository_name\x18\x01 \x01(\t\x12\x12\n\nmodel_name\x18\x02 \x01(\t\x12I\n\nparameters\x18\x03 \x03(\x0b\x32\x35.inference.RepositoryModelLoadRequest.ParametersEntry\x1aV\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\x32\n\x05value\x18\x02 \x01(\x0b\x32#.inference.ModelRepositoryParameter:\x02\x38\x01"\x1d\n\x1bRepositoryModelLoadResponse"\xf0\x01\n\x1cRepositoryModelUnloadRequest\x12\x17\n\x0frepository_name\x18\x01 \x01(\t\x12\x12\n\nmodel_name\x18\x02 \x01(\t\x12K\n\nparameters\x18\x03 \x03(\x0b\x32\x37.inference.RepositoryModelUnloadRequest.ParametersEntry\x1aV\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 
\x01(\t\x12\x32\n\x05value\x18\x02 \x01(\x0b\x32#.inference.ModelRepositoryParameter:\x02\x38\x01"\x1f\n\x1dRepositoryModelUnloadResponse2\xae\x06\n\x14GRPCInferenceService\x12K\n\nServerLive\x12\x1c.inference.ServerLiveRequest\x1a\x1d.inference.ServerLiveResponse"\x00\x12N\n\x0bServerReady\x12\x1d.inference.ServerReadyRequest\x1a\x1e.inference.ServerReadyResponse"\x00\x12K\n\nModelReady\x12\x1c.inference.ModelReadyRequest\x1a\x1d.inference.ModelReadyResponse"\x00\x12W\n\x0eServerMetadata\x12 .inference.ServerMetadataRequest\x1a!.inference.ServerMetadataResponse"\x00\x12T\n\rModelMetadata\x12\x1f.inference.ModelMetadataRequest\x1a .inference.ModelMetadataResponse"\x00\x12K\n\nModelInfer\x12\x1c.inference.ModelInferRequest\x1a\x1d.inference.ModelInferResponse"\x00\x12Z\n\x0fRepositoryIndex\x12!.inference.RepositoryIndexRequest\x1a".inference.RepositoryIndexResponse"\x00\x12\x66\n\x13RepositoryModelLoad\x12%.inference.RepositoryModelLoadRequest\x1a&.inference.RepositoryModelLoadResponse"\x00\x12l\n\x15RepositoryModelUnload\x12\'.inference.RepositoryModelUnloadRequest\x1a(.inference.RepositoryModelUnloadResponse"\x00\x62\x06proto3'
./grpc/dataplane_pb2.py:21:_SERVERLIVEREQUEST = DESCRIPTOR.message_types_by_name["ServerLiveRequest"]
./grpc/dataplane_pb2.py:94:ServerLiveRequest = _reflection.GeneratedProtocolMessageType(
./grpc/dataplane_pb2.py:95:    "ServerLiveRequest",
./grpc/dataplane_pb2.py:100:        # @@protoc_insertion_point(class_scope:inference.ServerLiveRequest)
./grpc/dataplane_pb2.py:103:_sym_db.RegisterMessage(ServerLiveRequest)
./grpc/dataplane_pb2.pyi:14:class ServerLiveRequest(google.protobuf.message.Message):
./grpc/dataplane_pb2.pyi:24:global___ServerLiveRequest = ServerLiveRequest

and in kserve:

(kyu) jinserk@pluto ~/kyu/kserve/python/kserve (master) $ grep -Irn ServerLiveRequest                                                                                                                                                                                          
./kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py:34:                request_serializer=grpc__predict__v2__pb2.ServerLiveRequest.SerializeToString,                                                                                                                           
./kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py:146:                    request_deserializer=grpc__predict__v2__pb2.ServerLiveRequest.FromString,                                                                                                                           
./kserve/protocol/grpc/grpc_predict_v2_pb2_grpc.py:207:            grpc__predict__v2__pb2.ServerLiveRequest.SerializeToString,                                                                                                                                                 
./kserve/protocol/grpc/servicer.py:48:        self, request: pb.ServerLiveRequest, context                                                                                                                                                                                     
./kserve/protocol/grpc/grpc_predict_v2_pb2.pyi:213:class ServerLiveRequest(_message.Message):                                                                                                                                                                                  
./kserve/protocol/grpc/grpc_predict_v2.proto:23:  rpc ServerLive(ServerLiveRequest) returns (ServerLiveResponse) {}                                                                                                                                                            
./kserve/protocol/grpc/grpc_predict_v2.proto:53:message ServerLiveRequest {}                                                                                                                                                                                                   
./kserve/protocol/grpc/grpc_predict_v2_pb2.py:31:DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x15grpc_predict_v2.proto\x12\tinference\"\x13\n\x11ServerLiveRequest\"\"\n\x12ServerLiveResponse\x12\x0c\n\x04live\x18\x01 \x01(\x08\"\x14\n\x12ServerReadyRequ
est\"$\n\x13ServerReadyResponse\x12\r\n\x05ready\x18\x01 \x01(\x08\"2\n\x11ModelReadyRequest\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t\"#\n\x12ModelReadyResponse\x12\r\n\x05ready\x18\x01 \x01(\x08\"\x17\n\x15ServerMetadataRequest\"K\n\x16Serv
erMetadataResponse\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t\x12\x12\n\nextensions\x18\x03 \x03(\t\"5\n\x14ModelMetadataRequest\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t\"\x8d\x02\n\x15ModelMetadataResponse\x12\x0
c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08versions\x18\x02 \x03(\t\x12\x10\n\x08platform\x18\x03 \x01(\t\x12?\n\x06inputs\x18\x04 \x03(\x0b\x32/.inference.ModelMetadataResponse.TensorMetadata\x12@\n\x07outputs\x18\x05 \x03(\x0b\x32/.inference.ModelMetadataResponse.TensorM
etadata\x1a?\n\x0eTensorMetadata\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08\x64\x61tatype\x18\x02 \x01(\t\x12\r\n\x05shape\x18\x03 \x03(\x03\"\xee\x06\n\x11ModelInferRequest\x12\x12\n\nmodel_name\x18\x01 \x01(\t\x12\x15\n\rmodel_version\x18\x02 \x01(\t\x12\n\n\x02id
\x18\x03 \x01(\t\x12@\n\nparameters\x18\x04 \x03(\x0b\x32,.inference.ModelInferRequest.ParametersEntry\x12=\n\x06inputs\x18\x05 \x03(\x0b\x32-.inference.ModelInferRequest.InferInputTensor\x12H\n\x07outputs\x18\x06 \x03(\x0b\x32\x37.inference.ModelInferRequest.InferReques
tedOutputTensor\x12\x1a\n\x12raw_input_contents\x18\x07 \x03(\x0c\x1a\x94\x02\n\x10InferInputTensor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08\x64\x61tatype\x18\x02 \x01(\t\x12\r\n\x05shape\x18\x03 \x03(\x03\x12Q\n\nparameters\x18\x04 \x03(\x0b\x32=.inference.ModelI
nferRequest.InferInputTensor.ParametersEntry\x12\x30\n\x08\x63ontents\x18\x05 \x01(\x0b\x32\x1e.inference.InferTensorContents\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\x1a\xd
5\x01\n\x1aInferRequestedOutputTensor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12[\n\nparameters\x18\x02 \x03(\x0b\x32G.inference.ModelInferRequest.InferRequestedOutputTensor.ParametersEntry\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x0
1(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\"\xd5\x04\n\x12ModelInferResponse\x12\x12\n\nmodel_name\x18\x01 \x01(\t\x12\x15
\n\rmodel_version\x18\x02 \x01(\t\x12\n\n\x02id\x18\x03 \x01(\t\x12\x41\n\nparameters\x18\x04 \x03(\x0b\x32-.inference.ModelInferResponse.ParametersEntry\x12@\n\x07outputs\x18\x05 \x03(\x0b\x32/.inference.ModelInferResponse.InferOutputTensor\x12\x1b\n\x13raw_output_conte
nts\x18\x06 \x03(\x0c\x1a\x97\x02\n\x11InferOutputTensor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08\x64\x61tatype\x18\x02 \x01(\t\x12\r\n\x05shape\x18\x03 \x03(\x03\x12S\n\nparameters\x18\x04 \x03(\x0b\x32?.inference.ModelInferResponse.InferOutputTensor.ParametersEn
try\x12\x30\n\x08\x63ontents\x18\x05 \x01(\x0b\x32\x1e.inference.InferTensorContents\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\x1aL\n\x0fParametersEntry\x12\x0b\n\x03key\x18\
x01 \x01(\t\x12(\n\x05value\x18\x02 \x01(\x0b\x32\x19.inference.InferParameter:\x02\x38\x01\"i\n\x0eInferParameter\x12\x14\n\nbool_param\x18\x01 \x01(\x08H\x00\x12\x15\n\x0bint64_param\x18\x02 \x01(\x03H\x00\x12\x16\n\x0cstring_param\x18\x03 \x01(\tH\x00\x42\x12\n\x10par
ameter_choice\"\xd0\x01\n\x13InferTensorContents\x12\x15\n\rbool_contents\x18\x01 \x03(\x08\x12\x14\n\x0cint_contents\x18\x02 \x03(\x05\x12\x16\n\x0eint64_contents\x18\x03 \x03(\x03\x12\x15\n\ruint_contents\x18\x04 \x03(\r\x12\x17\n\x0fuint64_contents\x18\x05 \x03(\x04\x
12\x15\n\rfp32_contents\x18\x06 \x03(\x02\x12\x15\n\rfp64_contents\x18\x07 \x03(\x01\x12\x16\n\x0e\x62ytes_contents\x18\x08 \x03(\x0c\"0\n\x1aRepositoryModelLoadRequest\x12\x12\n\nmodel_name\x18\x01 \x01(\t\"C\n\x1bRepositoryModelLoadResponse\x12\x12\n\nmodel_name\x18\x0
1 \x01(\t\x12\x10\n\x08isLoaded\x18\x02 \x01(\x08\"2\n\x1cRepositoryModelUnloadRequest\x12\x12\n\nmodel_name\x18\x01 \x01(\t\"G\n\x1dRepositoryModelUnloadResponse\x12\x12\n\nmodel_name\x18\x01 \x01(\t\x12\x12\n\nisUnloaded\x18\x02 \x01(\x08\x32\xd2\x05\n\x14GRPCInference
Service\x12K\n\nServerLive\x12\x1c.inference.ServerLiveRequest\x1a\x1d.inference.ServerLiveResponse\"\x00\x12N\n\x0bServerReady\x12\x1d.inference.ServerReadyRequest\x1a\x1e.inference.ServerReadyResponse\"\x00\x12K\n\nModelReady\x12\x1c.inference.ModelReadyRequest\x1a\x1d
.inference.ModelReadyResponse\"\x00\x12W\n\x0eServerMetadata\x12 .inference.ServerMetadataRequest\x1a!.inference.ServerMetadataResponse\"\x00\x12T\n\rModelMetadata\x12\x1f.inference.ModelMetadataRequest\x1a .inference.ModelMetadataResponse\"\x00\x12K\n\nModelInfer\x12\x1
c.inference.ModelInferRequest\x1a\x1d.inference.ModelInferResponse\"\x00\x12\x66\n\x13RepositoryModelLoad\x12%.inference.RepositoryModelLoadRequest\x1a&.inference.RepositoryModelLoadResponse\"\x00\x12l\n\x15RepositoryModelUnload\x12\'.inference.RepositoryModelUnloadReque
st\x1a(.inference.RepositoryModelUnloadResponse\"\x00\x62\x06proto3')                                                                                                                                                                                                          
./kserve/protocol/grpc/grpc_predict_v2_pb2.py:35:_SERVERLIVEREQUEST = DESCRIPTOR.message_types_by_name['ServerLiveRequest']                                                                                                                                                    
./kserve/protocol/grpc/grpc_predict_v2_pb2.py:62:ServerLiveRequest = _reflection.GeneratedProtocolMessageType('ServerLiveRequest', (_message.Message,), {                                                                                                                      
./kserve/protocol/grpc/grpc_predict_v2_pb2.py:65:  # @@protoc_insertion_point(class_scope:inference.ServerLiveRequest)                                                                                                                                                         
./kserve/protocol/grpc/grpc_predict_v2_pb2.py:67:_sym_db.RegisterMessage(ServerLiveRequest)
adriangonz commented 1 year ago

Hey @jinserk ,

gRPC is quite strict on names. In this case, both KServe and MLServer implement the same dataplane, which means neither of them can rename it.

Ideally, you shouldn't require both packages within the same environment. Would this be an option for your use case?

jinserk commented 1 year ago

@adriangonz Hmm. If I have to, I'd need to split my application into two separate packages, and at the moment I can't divide them. You mean the names are fixed by the gRPC spec and can't be renamed, correct? Is there any way to disable gRPC in MLServer? Honestly, I only use REST.

adriangonz commented 1 year ago

Yeah, both KServe and MLServer implement the Open Inference Protocol spec, so names will be the same. And both seem to import their generated gRPC stubs as a default import.

Is this a case of a KServe model using MLServer, or the other way around? Depending on what parts you need from each, a potential workaround is to use deep imports that skip the gRPC imports.
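One caveat worth checking before relying on deep imports: in Python, importing `pkg.sub` still executes `pkg/__init__.py` first, so a deep import only skips the generated gRPC stubs if the package's `__init__` doesn't pull them in itself. A small stdlib demo of that semantics (the package and module names are made up for illustration):

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Build a throwaway package: __init__.py records that it ran,
# and util.py is the module we will deep-import.
root = Path(tempfile.mkdtemp())
pkg = root / "demo_pkg"
pkg.mkdir()
(pkg / "__init__.py").write_text("LOADED = ['init']\n")
(pkg / "util.py").write_text("VALUE = 42\n")

sys.path.insert(0, str(root))
mod = importlib.import_module("demo_pkg.util")  # the "deep import"
print(mod.VALUE)                                # 42
print(sys.modules["demo_pkg"].LOADED)           # ['init'] — __init__ ran too
```

So whichever side you deep-import from, it's worth confirming that the modules you touch don't transitively import the `*_pb2` stubs via their package `__init__`.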