Open seanstory opened 6 months ago
Pinging @elastic/platform-deployment-management (Team:Deployment Management)
This wasn't really a bug with Ingest Pipelines, as we treat the model ID
simply as a string and don't do any validation on it beyond that. The issue only occurs when creating the pipeline from the Search Pipelines -> Add ML Inference Pipeline
UI.
I've created a new issue for adding support for the input_output
option in ingest pipelines.
Kibana version: 8.14.0-SNAPSHOT
Elasticsearch version: 8.14.0-SNAPSHOT
Server OS version: OSX 14.3
Original install method (e.g. download page, yum, from source, etc.): source
Describe the bug: When using the UI to create an ingest pipeline for inference, you can only select hosted models (such as those added by eland); you cannot use the Inference Endpoints created by the Inference API.
You can create such a pipeline directly with:
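(The original request snippet is not preserved here; the following is a minimal sketch of such a request, assuming a hypothetical inference endpoint named `my-inference-endpoint` and hypothetical field names — adjust to your own endpoint and mappings.)

```console
PUT _ingest/pipeline/my-inference-pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "my-inference-endpoint",
        "input_output": [
          {
            "input_field": "body_content",
            "output_field": "ml.inference.body_content_expanded"
          }
        ]
      }
    }
  ]
}
```

Because the `model_id` field accepts an inference endpoint ID as well as a hosted model ID, this works when issued directly against Elasticsearch, even though the Kibana UI does not offer the endpoint as a choice.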
Steps to reproduce:
Expected behavior: The UI should let you create either type of inference processor