-
Hi, thanks for the great work!
I'm curious why you chose to set the CPU requirement to 0.0 in a number of Ray tasks/actors during dataset caching, for example:
https://github.com/stanford-crfm/levante…
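For context, `num_cpus=0` tells Ray's scheduler that a task or actor reserves no CPU slot in its resource accounting, a common pattern for I/O-bound work such as waiting on cache writes. A minimal sketch of that pattern (the function and shard names are illustrative, not taken from the linked code):

```python
def cache_shard(shard_id: int) -> str:
    # Placeholder for I/O-bound caching work (e.g. writing one
    # dataset shard to disk); the real task mostly waits on I/O.
    return f"shard-{shard_id}.cache"

def cache_all_shards(num_shards: int):
    # Requires a running Ray runtime; shown here as a sketch only.
    import ray

    # num_cpus=0: each task reserves no CPU slot, so many such
    # I/O-bound tasks can be scheduled concurrently without being
    # throttled by the node's CPU count.
    remote_cache = ray.remote(num_cpus=0)(cache_shard)
    return ray.get([remote_cache.remote(i) for i in range(num_shards)])
```

The trade-off is that Ray will happily over-subscribe such tasks, which is fine for I/O-bound work but not for CPU-heavy work.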
-
## Question
- If I run vLLM offline, can I set the batch size? I want to test its e2e latency for different batch sizes.
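In vLLM's offline mode, `LLM.generate()` batches prompts internally; one way to test latency per batch size is to cap concurrent sequences with `max_num_seqs` and feed prompts in fixed-size chunks, timing each call. A sketch under those assumptions (the model name is illustrative):

```python
import time

def chunks(prompts, batch_size):
    # Split prompts into fixed-size batches so each generate() call
    # processes exactly batch_size sequences (except possibly the last).
    for i in range(0, len(prompts), batch_size):
        yield prompts[i:i + batch_size]

def time_batches(prompts, batch_size):
    # Requires vllm and a GPU; sketch only.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m",  # illustrative small model
              max_num_seqs=batch_size)    # cap concurrent sequences
    params = SamplingParams(max_tokens=64)
    for batch in chunks(prompts, batch_size):
        start = time.perf_counter()
        llm.generate(batch, params)
        print(f"batch of {len(batch)}: {time.perf_counter() - start:.3f}s")
```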
-
Hello, because of the company's network policy, the service can only be deployed on an offline server. Is it possible for lida to call a locally deployed open-source LLM, such as chatglm2, which pro…
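A common workaround on offline servers is to serve the local model (e.g. chatglm2) behind an OpenAI-compatible HTTP endpoint and point the client at it via a custom base URL. Whether lida's own provider config exposes such an option is not confirmed here, so the endpoint helper below is an assumption about a typical local deployment:

```python
def local_endpoint(host: str = "127.0.0.1", port: int = 8000) -> str:
    # Hypothetical address of a locally served OpenAI-compatible API
    # (e.g. a server wrapping chatglm2); adjust to your deployment.
    return f"http://{host}:{port}/v1"

def make_client(base_url: str):
    # Sketch: the openai client accepts a custom base_url, which is
    # how OpenAI-compatible local servers are usually reached.
    from openai import OpenAI
    return OpenAI(base_url=base_url, api_key="not-needed-locally")
```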
-
### Describe the bug
I am trying to push a model to my model repo and it is giving me an error saying I don't have write access. I have created a write token and I am passing that into Hugging Face …
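When debugging this, it helps to confirm which account the token actually resolves to before pushing; `HfApi.whoami()` reports that. A sketch (the repo id and folder are placeholders):

```python
def looks_like_user_token(token: str) -> bool:
    # Hugging Face user access tokens start with "hf_"; this checks
    # the format only, not whether the token has the write role.
    return token.startswith("hf_")

def check_and_push(token: str, repo_id: str, folder: str):
    # Requires huggingface_hub and network access; sketch only.
    from huggingface_hub import HfApi

    api = HfApi(token=token)
    print(api.whoami())  # shows the account the token resolves to
    api.upload_folder(folder_path=folder, repo_id=repo_id)
```

If `whoami()` shows the expected account but the push still fails, the token's role (read vs. write) is the usual suspect.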
-
> To improve communication efficiency, we have set up an official QQ group and QQ channel. If you run into any problems while using or deploying the project, please join the group or channel first and ask there. Unless it is a stably reproducible bug or a fairly creative feature suggestion, please do not post low-quality, meaningless threads in the Issues section.
> [Click to join the official group chat](https://github.com/Yidadaa/ChatGPT-Next-Web/discussions/1724…
-
(chat007) C:\AI\chat007\Langchain-Chatchat>python startup.py -a
==============================Langchain-Chatchat Configuration==============================
Operating system: Windows-10-10.0.22631-SP0.
pyth…
-
### Timeline
- [Maxwell's demon: Does life violate the 2nd law of thermodynamics? | Neil Gershenfeld and Lex Fridman](https://www.youtube.com/embed/eS0JXViv0cU?start=81&end=281&version=3), start=81&…
-
I have multiple files in userData and specify a particular document to use as context before running a query; however, the model still returns results from other documents that were not explicitly sele…
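A typical cause is that retrieval runs over every indexed chunk, so the document selection has to be enforced as a metadata filter on the retrieved chunks. A minimal sketch of that filtering step (the chunk structure is an assumption about how such stores commonly tag sources, not this project's actual schema):

```python
def filter_by_source(chunks, allowed_sources):
    # Keep only chunks whose source document was explicitly selected;
    # everything else is dropped before being handed to the model.
    allowed = set(allowed_sources)
    return [c for c in chunks if c.get("source") in allowed]

# Example chunk list as such stores often represent it:
chunks = [
    {"source": "manual.pdf", "text": "how to configure"},
    {"source": "notes.txt", "text": "unrelated notes"},
]
selected = filter_by_source(chunks, ["manual.pdf"])
```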
-
Hi,
Basically the title. The intro suggests that openai-access can be replaced with locally running models (maybe with the oobabooga OpenAI API?). In any case, I can't seem to find instructions / env settings for …
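For projects that read the standard OpenAI environment variables, pointing them at a local OpenAI-compatible server usually looks like the following; the host, port, and path are assumptions about a typical local deployment (e.g. the oobabooga OpenAI-compatible extension), not documented settings of this project:

```shell
# Hypothetical local endpoint; adjust host/port to your server.
export OPENAI_API_BASE="http://127.0.0.1:5001/v1"
# Many clients require the key to be set even if the server ignores it.
export OPENAI_API_KEY="sk-local-placeholder"
```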
-
**Project description**
To make open-source large language models (LLMs) accessible, there are projects like [Ollama](https://ollama.ai) that make it almost trivial to download and run them locally on a consum…
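Ollama exposes a small local HTTP API (by default on port 11434), so "run locally" concretely means posting a prompt to it. A sketch of that call (the model name is illustrative):

```python
import json
from urllib import request

def generate_payload(model: str, prompt: str) -> dict:
    # Ollama's /api/generate endpoint takes the model name and prompt;
    # stream=False returns one JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(model: str, prompt: str) -> str:
    # Requires a running Ollama daemon at the default local address.
    body = json.dumps(generate_payload(model, prompt)).encode()
    req = request.Request(
        "http://127.0.0.1:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```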