-
### Describe the bug
`interpreter --local`
:88: SyntaxWarning: "is" with a literal. Did you mean "=="?
▌ Open Interpreter is compatible with several local model providers.
[?] What one would yo…
-
### Your current environment
```text
Collecting environment information...
WARNING 07-23 19:11:42 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm.…
```
-
Traceback (most recent call last):
File "/root/anaconda3/lib/python3.11/site-packages/vllm/transformers_utils/config.py", line 30, in get_confi g
config = AutoConfig.from_pretra…
-
The newest drivers are in use; the system has a Ryzen 2700X CPU with 16 GB of RAM and a 16 GB Intel Arc A770 GPU, running Windows 11.
The instructions in the docs were followed precisely.
Upon attempting to execut…
-
This is highly speculative in terms of usefulness, and the UI would need to be considered carefully. The use case would be summarizing articles retrieved from the ZIM. Over time, it might be possible …
-
- [ ] [evidently/README.md at main · evidentlyai/evidently](https://github.com/evidentlyai/evidently/blob/main/README.md?plain=1)
## Evidently
…
-
# Proposed Feature
Add an efficient interface for computing generation probabilities for fixed prompt and completion pairs. For example:
```python
# ... load LLM or engine
prompt_completion_pairs = [
…
```
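A rough sketch of how one might approximate this today with vLLM's existing `prompt_logprobs` option; the model name, prompts, and token-slicing logic below are illustrative assumptions, not part of the proposal:

```python
# Hypothetical workaround: score fixed prompt/completion pairs by asking vLLM
# for the log-probabilities of the prompt tokens themselves.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any small model, for illustration only
tokenizer = llm.get_tokenizer()

prompt_completion_pairs = [
    ("The capital of France is", " Paris."),
    ("The capital of France is", " London."),
]

# prompt_logprobs=0 returns the log-prob of each actual prompt token;
# max_tokens=1 because we do not care about newly generated text.
params = SamplingParams(max_tokens=1, prompt_logprobs=0)
outputs = llm.generate([p + c for p, c in prompt_completion_pairs], params)

for (prompt, completion), out in zip(prompt_completion_pairs, outputs):
    # Assumes the prompt tokenizes identically with and without the completion
    # appended, which is not guaranteed for every tokenizer.
    n_prompt = len(tokenizer(prompt).input_ids)
    score = 0.0
    for token_id, logprobs in zip(out.prompt_token_ids[n_prompt:],
                                  out.prompt_logprobs[n_prompt:]):
        if logprobs is not None:
            score += logprobs[token_id].logprob
    print(f"{completion!r}: log P = {score:.3f}")
```

This re-encodes and re-runs the shared prompt for every completion, which is exactly the inefficiency a dedicated interface could avoid.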
-
I love the idea of using this plugin on an offline LLM instead of giving my data to the cloud. Are there any suggestions on where to look in this code and other resources to kludge something together …
-
Some users may need to send batch requests with several prompt/schema pairs. It is possible to do this with the vLLM server integration using `aiohttp`, and we should document this.
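A minimal sketch of what that might look like; the endpoint path, port, and payload fields below are assumptions for illustration, not the documented API:

```python
# Illustrative only: adjust the URL and payload shape to the actual server.
import asyncio
import aiohttp

PAIRS = [
    {"prompt": "Describe a user.",
     "schema": '{"type": "object", "properties": {"name": {"type": "string"}}}'},
    {"prompt": "Describe a city.",
     "schema": '{"type": "object", "properties": {"city": {"type": "string"}}}'},
]

async def generate(session: aiohttp.ClientSession, payload: dict) -> dict:
    # POST one prompt/schema pair; the server batches requests internally.
    async with session.post("http://localhost:8000/generate", json=payload) as resp:
        resp.raise_for_status()
        return await resp.json()

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # Fire all requests concurrently so vLLM can batch them together.
        results = await asyncio.gather(*(generate(session, p) for p in PAIRS))
    for pair, result in zip(PAIRS, results):
        print(pair["prompt"], "->", result)

if __name__ == "__main__":
    asyncio.run(main())
```

Sending the requests concurrently rather than sequentially is what lets the server's continuous batching actually help.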
-
I'm running the tutorial [vllm/offline_inference_with_prefix.py](https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_with_prefix.py) and measuring the generation times, again bel…
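For reference, a minimal way to time such runs yourself; the model, prompts, and the `enable_prefix_caching` flag usage here are illustrative and differ from the tutorial script in detail:

```python
# Illustrative timing harness, not the tutorial script itself.
import time
from vllm import LLM, SamplingParams

prefix = "You are an expert school principal. Draft a short admission letter for "
prompts = [prefix + name for name in ["Alice", "Bob", "Carol", "Dave"]]
params = SamplingParams(temperature=0.0, max_tokens=64)

def timed_generate(llm: LLM) -> float:
    start = time.perf_counter()
    llm.generate(prompts, params)
    return time.perf_counter() - start

# Without prefix caching (run each configuration in its own process to avoid
# holding two models in GPU memory at once).
baseline = LLM(model="facebook/opt-125m")
print(f"no prefix caching: {timed_generate(baseline):.2f}s")

# With prefix caching enabled (kwarg available in recent vLLM releases).
cached = LLM(model="facebook/opt-125m", enable_prefix_caching=True)
timed_generate(cached)            # warm-up pass fills the prefix cache
print(f"prefix caching:    {timed_generate(cached):.2f}s")
```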