Add pipelines that can load a model and convert it to the OpenVINO format if needed.
```python
from optimum.intel.pipelines import pipeline

# Load an OpenVINO model
ov_pipe = pipeline("text-generation", "helenai/gpt2-ov", accelerator="openvino")

# Load a PyTorch model and convert it to OpenVINO before inference
pipe = pipeline("text-generation", "gpt2", accelerator="openvino")
```
@helena-intel @eaidova