antonio-castellon opened 1 year ago
Is there a plan to support publishing to 0.0.0.0 instead of localhost?
+1.
Any updates on this? Definitely +1
I worked around this problem with Caddy. Download it anywhere and run it as a reverse proxy in front of LM Studio like this:
caddy reverse-proxy --from :12345 --to 127.0.0.1:1234
Then connect to the host computer on port 12345 as if it were the OpenAI API.
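As a quick sanity check from another machine (the IP below is just a placeholder for the LM Studio host, and this assumes the server exposes the usual /v1/models listing):

curl http://192.168.1.12:12345/v1/models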
There are lots of workarounds for this, like simple firewall rules on Linux, but it isn't something we should have to do when binding to 0.0.0.0 is such an easy change to implement.
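For anyone who wants the firewall route, here is a rough sketch of what it can look like with iptables; the interface name eth0 and the external port 12345 are assumptions, and DNAT to 127.0.0.1 also requires route_localnet on that interface:

sudo sysctl -w net.ipv4.conf.eth0.route_localnet=1
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 12345 -j DNAT --to-destination 127.0.0.1:1234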
+1
Just tested, and it works as expected. I'm using LM Studio on a computer with hostname (e.g.) compabc.local and IP address 192.168.1.12.
On another host (Ubuntu, macOS, or even Windows), refer to the LM Studio server as http://compabc.local:1234/v1 or http://192.168.1.12:1234/v1.
So just don't use http://localhost:1234/v1 when working remotely. Nothing has to change within LM Studio except running it as a server.
Make sure port 1234 and/or the hostname/IP are reachable on the private network.
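If the LM Studio host happens to be Linux with ufw, for example, that can be as simple as allowing the port, and then a quick curl from the remote machine confirms the server is reachable (hostname as in the example above):

sudo ufw allow 1234/tcp
curl http://compabc.local:1234/v1/models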
From my testing so far, the method that works relatively easily on Windows 10 is proxychains.
Here the config file path is specified manually; the LM Studio version is 0.2.20:
proxychains -f "%USERPROFILE%\proxychains-windows\proxychains.conf" -lv "%USERPROFILE%\AppData\Local\LM-Studio\app-0.2.20\LM Studio.exe"
It would be great if we could interact with other services that are not running locally, using LM Studio only as a GUI by default (of course, we would lose the ability to dynamically load and run all models locally, but it could be interesting in some cases).