pors / langchain-chat-websockets

LangChain LLM chat with streaming response over websockets
Apache License 2.0

Async generation not implemented for this LLM. #2

Open · akashAD98 opened this issue 1 year ago

akashAD98 commented 1 year ago

I tried Mistral and Llama 2 7B through CTransformers and I'm getting this error. Is there any way to add support for this? How can we implement it with a websocket?

    from langchain.llms import CTransformers

    # Mistral 7B (GGUF)
    streaming_llm = CTransformers(model="TheBloke/Mistral-7B-v0.1-GGUF",
                                  model_file="mistral-7b-v0.1.Q4_K_M.gguf", model_type="mistral")
    # Llama 2 7B (GGML)
    streaming_llm = CTransformers(model="llama-2-7b.ggmlv3.q5_0.bin", model_type="llama",
                                  config={"max_new_tokens": 128, "temperature": 0.01})
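
Note: the error comes from LangChain's base LLM class, whose async path raises "Async generation not implemented for this LLM" for wrappers like `CTransformers` that only implement synchronous generation. One possible workaround (a minimal sketch, not this repo's code; the handler class and endpoint names are illustrative) is to push tokens from a streaming callback into an `asyncio.Queue` and run the blocking call in a thread executor, so the FastAPI WebSocket handler can forward tokens as they arrive:

    # Sketch: stream a sync-only LLM (e.g. CTransformers) over a FastAPI WebSocket.
    # Tokens are pushed into an asyncio.Queue from a callback handler while the
    # blocking generate call runs in a worker thread.
    import asyncio

    from fastapi import FastAPI, WebSocket
    from langchain.callbacks.base import BaseCallbackHandler
    from langchain.llms import CTransformers

    app = FastAPI()

    class QueueCallbackHandler(BaseCallbackHandler):
        """Forwards each generated token to an asyncio.Queue."""

        def __init__(self, queue: asyncio.Queue, loop: asyncio.AbstractEventLoop):
            self.queue = queue
            self.loop = loop

        def on_llm_new_token(self, token: str, **kwargs) -> None:
            # Called from the worker thread, so hand off to the event loop.
            asyncio.run_coroutine_threadsafe(self.queue.put(token), self.loop)

        def on_llm_end(self, response, **kwargs) -> None:
            asyncio.run_coroutine_threadsafe(self.queue.put(None), self.loop)

    @app.websocket("/chat")
    async def chat(websocket: WebSocket):
        await websocket.accept()
        prompt = await websocket.receive_text()

        queue: asyncio.Queue = asyncio.Queue()
        loop = asyncio.get_running_loop()

        llm = CTransformers(
            model="TheBloke/Mistral-7B-v0.1-GGUF",
            model_file="mistral-7b-v0.1.Q4_K_M.gguf",
            model_type="mistral",
            callbacks=[QueueCallbackHandler(queue, loop)],
        )

        # Run the blocking call in a thread so the event loop stays free to send tokens.
        task = loop.run_in_executor(None, llm, prompt)

        while True:
            token = await queue.get()
            if token is None:
                break
            await websocket.send_text(token)

        await task
        await websocket.close()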

Can we use TGI (Text Generation Inference) here instead? If we pass the server URL, will it be supported?
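
On the TGI question: LangChain also ships a `HuggingFaceTextGenInference` wrapper that talks to a running text-generation-inference server by URL and can stream tokens. A rough sketch (the server URL below is a placeholder):

    from langchain.llms import HuggingFaceTextGenInference
    from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

    # Placeholder URL; point this at an actual text-generation-inference server.
    streaming_llm = HuggingFaceTextGenInference(
        inference_server_url="http://localhost:8080/",
        max_new_tokens=128,
        temperature=0.01,
        streaming=True,
        callbacks=[StreamingStdOutCallbackHandler()],
    )

    print(streaming_llm("What is Mistral 7B?"))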

janfilips commented 1 year ago

Mistral 7B rulez

IMHO it's the outdated LangChain version pinned in this project that's causing the issue.

Did you find a workaround or a different solution? I also need to proxy SSE over to a WebSocket in my app.
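
If the goal is just to relay an upstream SSE stream to a WebSocket client, a minimal sketch (the upstream URL and request payload shape are assumptions) using httpx and FastAPI:

    import httpx
    from fastapi import FastAPI, WebSocket

    app = FastAPI()

    UPSTREAM_SSE_URL = "http://localhost:8000/stream"  # placeholder upstream endpoint

    @app.websocket("/ws")
    async def sse_to_ws(websocket: WebSocket):
        await websocket.accept()
        prompt = await websocket.receive_text()

        async with httpx.AsyncClient(timeout=None) as client:
            async with client.stream("POST", UPSTREAM_SSE_URL, json={"prompt": prompt}) as resp:
                async for line in resp.aiter_lines():
                    # SSE data lines look like "data: <payload>"; forward the payload.
                    if line.startswith("data:"):
                        await websocket.send_text(line[len("data:"):].strip())

        await websocket.close()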