IBM / text-generation-inference

IBM development fork of https://github.com/huggingface/text-generation-inference
Apache License 2.0

deps: bump optimum to 1.16.1 #21

Closed · dtrifiro closed this 10 months ago

dtrifiro commented 10 months ago

The Docker build is currently failing because the optimum = "^1.14.1" constraint causes a dependency conflict when installing the onnx-gpu extra.

In particular, this is the failing line:

RUN cd server && make gen-server && pip install ".[accelerate, onnx-gpu, quantize]" --no-cache-dir

With the following error:

The conflict is caused by:
    text-generation-server[accelerate,onnx-gpu,quantize] 0.1.0 depends on optimum<2.0.0 and >=1.14.1; extra == "onnx" or extra == "onnx-gpu"
    optimum[onnxruntime-gpu] 1.16.1 depends on optimum 1.16.1 (from https://files.pythonhosted.org/packages/64/03/df00e4553653ae038e8869e1bd6999398112be41f50b19acf999d7c706c0/optimum-1.16.1-py3-none-any.whl (from https://pypi.org/simple/optimum/) (requires-python:>=3.7.0))
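
For context, the change in this PR is to bump the optimum constraint to 1.16.1 so the extras resolve cleanly. A minimal sketch of what that could look like in server/pyproject.toml, assuming a Poetry-style dependency table (the exact section layout, the optional/extras flags, and whether the pin is exact or a caret range are assumptions, not the repository's verbatim contents):

    [tool.poetry.dependencies]
    # Previously constrained as optimum = "^1.14.1" (see the resolver error above);
    # bumped to 1.16.1 per this PR. Exact vs. caret pin is an assumption.
    optimum = { version = "^1.16.1", extras = ["onnxruntime-gpu"], optional = true }

    [tool.poetry.extras]
    # The onnx-gpu extra pulls in optimum[onnxruntime-gpu], per the resolver output above.
    # Any other packages belonging to this extra are omitted here.
    onnx-gpu = ["optimum"]

With optimum aligned to 1.16.1, the pip install ".[accelerate, onnx-gpu, quantize]" step from the Dockerfile line above should resolve without the conflict shown in the error.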
njhill commented 10 months ago

Thanks @dtrifiro! This should be addressed now in the latest sync from our internal changes.