-
Hi everyone,
I am currently researching uncertainty in large language models and have taken a keen interest in your UE benchmarking framework. Due to the nature of my access to LLMs, I can only i…
-
It connects automatically to Ollama on http://localhost:11434, which is great,
but it would be perfect if I were able to connect it to my local LM Studio server on http://localhost:1234.
The API's o…
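For concreteness, here is a minimal sketch of the kind of override I mean, using the openai Python package against LM Studio's OpenAI-compatible endpoint (`local-model` is a placeholder for whatever model LM Studio has loaded, and the `api_key` value is a dummy since the local server does not require a real key):
```python
from openai import OpenAI

# Sketch: point an OpenAI-compatible client at LM Studio instead of Ollama.
# LM Studio serves its OpenAI-compatible API under /v1 on port 1234 by default.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="local-model",  # placeholder: use the identifier of the model loaded in LM Studio
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```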
-
```
pocketsphinx/build/pocketsphinx_lm_convert -i etc/visemes.lm -o etc/visemes.lm.DMP
ERROR: "cmd_ln.c", line 146: Unknown argument: lw
ERROR: "cmd_ln.c", line 146: Unknown argument: wip
```
-
I am trying to run this setup:
```
lm_eval --model vllm \
--model_args pretrained="Qwen/Qwen2.5-0.5B-Instruct",tensor_parallel_size=2,dtype=auto,gpu_memory_utilization=0.8 \
--tasks bbh_…
```
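For reference, here is the single-GPU equivalent through the harness's Python API, assuming a recent lm-eval that exports `simple_evaluate` at the top level; the task name is illustrative, since the one I actually run is truncated above:
```python
import lm_eval

# Sketch: the same model through lm-evaluation-harness's Python API on one GPU
# (tensor_parallel_size omitted). Assumes lm-eval and vllm are installed.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=Qwen/Qwen2.5-0.5B-Instruct,dtype=auto,gpu_memory_utilization=0.8",
    tasks=["bbh_zeroshot"],  # illustrative task name
)
print(results["results"])
```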
-
The instructions aren't clear on how to run it against an LM Studio server.
-
Hi @hudson-ai!
### Concerning TypeAdapter constrained generation, here are some examples of the issue mentioned [here](https://github.com/guidance-ai/guidance/issues/1051#issuecomment-2427632185…
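To keep the examples self-contained, this is the shape of TypeAdapter I am passing in (pure pydantic v2; the guidance call it feeds into is in the linked comment):
```python
from typing import Annotated
from pydantic import Field, TypeAdapter

# A TypeAdapter over a constrained type: the object handed to guidance's
# JSON-constrained generation, which should limit output to ints in [0, 10].
ta = TypeAdapter(Annotated[int, Field(ge=0, le=10)])

print(ta.json_schema())       # roughly: {'maximum': 10, 'minimum': 0, 'type': 'integer'}
print(ta.validate_python(7))  # 7
```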
-
### Describe the issue:
I am unable to build numpy on an s390x host. I tried building numpy 2.0.1 and 2.1.3; both builds ran into the same compilation errors.
### Reproduce the code example:
```python
…
```
-
I can't get the endpoint to work properly with LM Studio. I have tried adding /v1 and /v1/chat/completions. Both http://localhost:1234 and http://localhost:1234/v1 return the same output. /v1/chat/com…
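For completeness, this is the request shape I would expect to work (a sketch with the requests package; the model name is a placeholder for whatever LM Studio has loaded):
```python
import requests

# Sketch: POST to the full OpenAI-compatible path; hitting the bare root or /v1
# with a GET is not a chat endpoint, which may explain the identical outputs.
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # placeholder for the loaded model's identifier
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=60,
)
print(resp.status_code)
print(resp.json())
```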
-
How do I run LM Studio on Linux from the console?
I have an AppImage on my server, but it cannot run without an X server session. Does LM Studio have a console API server at all?
I tried to run this command
…
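In case it helps others, here is how I would probe whether a headless server is actually up, assuming one was started (recent LM Studio builds reportedly ship an `lms` CLI whose `lms server start` runs the API server without a GUI; verify against your version):
```python
import requests

# Sketch: check LM Studio's local API server on the default port; /v1/models
# is the OpenAI-compatible route that lists loaded models.
resp = requests.get("http://localhost:1234/v1/models", timeout=5)
print(resp.status_code, resp.json())
```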
-
I am trying to convert a checkpoint produced by asynchronous torch_dist saving back to the original torch format, but using convert.py directly results in an error. Could there be an issue with my u…
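In case it clarifies what I am after, below is the generic PyTorch route I would expect to work, assuming the torch_dist checkpoint is a plain torch.distributed.checkpoint (DCP) directory underneath; paths are placeholders:
```python
from torch.distributed.checkpoint.format_utils import dcp_to_torch_save

# Sketch: convert a DCP checkpoint directory into a single torch.save file.
# Requires a recent PyTorch that provides format_utils; paths are placeholders.
dcp_to_torch_save("checkpoints/iter_0000100", "checkpoints/iter_0000100.pt")
```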