Open chymian opened 10 months ago
I got it running on bare metal in a conda environment. I love that piece of work, kudos!!
Anyway, it would be nice if the Docker build worked too.
Thank you for your feedback @chymian. PyTorch is not necessary for all scenarios; you can drop it. I have PyTorch installed mainly for RAG scenarios.
I'll check the Dockerfile later. Thanks.
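If torch really is only needed for RAG, one way the Dockerfile's install step might be trimmed is sketched below. This is an assumption-laden sketch: the package name `pyautogen`, the version pins, and the idea that pinning `typing_extensions` above 4.4.0 avoids the reported conflict are guesses, not confirmed against this repo's actual Dockerfile.

```shell
# Sketch (assumptions): install autogen without torch/local-LLM extras,
# and pin typing_extensions past the 4.4.0 release that broke the build.
pip install --no-cache-dir "pyautogen"
pip install --no-cache-dir "typing_extensions>=4.5"
```

Since this is a dependency/config fragment, the exact extras and pins would need to be checked against the project's requirements before adopting it.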
I just found out about your interesting projects here and tried to run the Panel in Docker.
The build throws the following error:
Peeking around, I see that autogen gets installed with local-LLM support as well as pytorch-cpu. The latter pulls in
typing_extensions 4.4.0
which causes this error. Since in my use case I run a litellm.ai API proxy in front of my local LLMs on another machine, this is unnecessary. Can I just drop torch and the local-LLM extras and bump autogen to the latest version? Will that work?

OT: what is this ernie-bot about?