-
### Bug Description
Installed Langflow as explained in the documentation (https://docs.langflow.org/getting-started-installation), with the only difference of creating a conda env, as I didn't want it to instal…
kha84 updated 2 weeks ago
-
### Steps to reproduce
- Clone the repository
- Create a new virtual environment: `python -m venv .venv`
- Activate the venv: `source .venv/bin/activate`
- Run `pip install 'litgpt[all]'`
- Run `litgpt do…
-
### System Info
Official Docker image, v2.0.4
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Rep…
-
Hi, I am trying to load-test vLLM on a single GPU with 20 concurrent requests. Each request passes through the LLM engine twice: once to rewrite the prompt, and once to generate the output.
Howe…
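The two-pass, 20-concurrent-request pattern described above can be sketched with `asyncio`. Here `call_engine` is a hypothetical stand-in for the engine call, not vLLM's actual API:

```python
import asyncio

async def call_engine(prompt: str) -> str:
    # Hypothetical stand-in for an async vLLM engine call.
    await asyncio.sleep(0)  # simulate I/O-bound generation
    return f"out({prompt})"

async def handle_request(i: int) -> str:
    # First pass: rewrite the prompt.
    rewritten = await call_engine(f"rewrite:{i}")
    # Second pass: generate the final output from the rewritten prompt.
    return await call_engine(rewritten)

async def main(n: int = 20) -> list[str]:
    # Launch all n requests concurrently, as in the load test above.
    return await asyncio.gather(*(handle_request(i) for i in range(n)))

results = asyncio.run(main())
```

With a real engine, each of the 20 requests holds the engine's request slots twice, which roughly doubles the effective concurrency the single GPU must absorb.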
-
### Issue with current documentation:
https://langchain-ai.github.io/langgraph/tutorials/customer-support/customer-support/#utility
Running the notebook throws the following error:
```
--…
-
Also part of the error:
```
  File "/mmfs1/scratch/anamaria/privateGPT2/privateGPT/privategpt/components/llm/llm_component.py", line 37, in __init
    logger.warning(
Message: 'Failed to download tokenizer…
```
-
It does not work.
-
Hi folks,
I have configured my application as shown here. I want to change the parameters of the model; could you please suggest how I can pass my own parameters here? I am trying to implement this in R…
-
### Your current environment
The output of `python collect_env.py`
(vLLM code copied from this PR, commit @84789334a, was used: https://github.com/vllm-project/vllm/pull/8574)
```text
Collecting…
-
Add the ability to pass custom headers to the client.
This is useful for LLM telemetry tools like [OpenPipe](https://openpipe.ai/) or [ObserveAPI](https://observeapi.ashishb.net/).
OpenAI & Anthro…