h2oai / h2ogpt

Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/
http://h2o.ai
Apache License 2.0

[HELM] Refactor Chart #1872

Open lakinduakash opened 1 month ago

lakinduakash commented 1 month ago

Reference : https://github.com/h2oai/h2ogpt/issues/1871

EshamAaqib commented 1 month ago

@achraf-mer Just wondering if we could remove Stack mode — is it in use? Ideally on K8s, vLLM should run separately rather than in the same pod as h2oGPT, I think. WDYT?

achraf-mer commented 1 month ago

> @achraf-mer Just wondering if we could remove Stack mode — is it in use? Ideally on K8s, vLLM should run separately rather than in the same pod as h2oGPT, I think. WDYT?

Yes, we can separate them and keep the Helm chart straightforward — let's do it. I think we used the same pod for latency considerations, but since vLLM can be resource-intensive, it is best IMO to run it in a separate pod (more isolation, and we can scale each independently).
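A minimal sketch of what the split might look like in the chart's values. All key names here are illustrative assumptions, not the actual h2oGPT chart schema — the point is only that vLLM becomes its own Deployment/Service that h2oGPT reaches over the network:

```yaml
# Hypothetical values.yaml fragment: vLLM runs as a separate deployment
# so it can be scheduled on GPU nodes and scaled independently of h2oGPT.
h2ogpt:
  enabled: true
  # h2oGPT talks to the vLLM Service over the network instead of a
  # co-located container in the same pod (service name is an assumption).
  inferenceServer: "http://vllm-inference:5000/v1"

vllm:
  enabled: true
  replicaCount: 1
  resources:
    limits:
      # GPU request isolates vLLM's heavy resource footprint from h2oGPT
      nvidia.com/gpu: 1
```

With this layout, the two workloads get independent replica counts, node selectors, and rollout cycles, which is the isolation and separate scaling mentioned above.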

EshamAaqib commented 1 month ago

@lakinduakash Let's remove Stack mode from h2oGPT, and the checks as well, similar to what was done with Agents.

lakinduakash commented 1 month ago

> @lakinduakash Let's remove Stack mode from h2oGPT, and the checks as well, similar to what was done with Agents.

Stack mode has been removed.