nomic-ai / gpt4all

GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
https://nomic.ai/gpt4all
MIT License
70.35k stars 7.68k forks

Chat UI Server: Support listening on address other than localhost #1304

Open lesonquan opened 1 year ago

lesonquan commented 1 year ago

Issue you'd like to raise.

Hi all, could you please guide me on changing localhost:4891 to another IP address, like the PC's LAN IP (192.168.x.x:4891)? I've tried searching online but unfortunately couldn't find a solution. My ultimate goal is to access GPT4All from outside my network. Any assistance would be greatly appreciated. Thank you!

https://docs.gpt4all.io/gpt4all_chat.html#server-mode

# Note: this uses the legacy pre-1.0 `openai` package; the module-level
# `openai.api_base` / `openai.Completion` interface was removed in openai 1.x.
import openai

# Point the client at the local GPT4All server instead of OpenAI's API
openai.api_base = "http://localhost:4891/v1"
#openai.api_base = "https://api.openai.com/v1"

openai.api_key = "not needed for a local LLM"

# Set up the prompt and other parameters for the API request
prompt = "Who is Michael Jordan?"

# model = "gpt-3.5-turbo"
#model = "mpt-7b-chat"
model = "gpt4all-j-v1.3-groovy"

# Make the API request
response = openai.Completion.create(
    model=model,
    prompt=prompt,
    max_tokens=50,
    temperature=0.28,
    top_p=0.95,
    n=1,
    echo=True,
    stream=False
)

# Print the generated completion
print(response)
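For reference, the snippet above targets the pre-1.0 `openai` package. The same request can also be made with nothing but the standard library; a minimal sketch, assuming the chat UI's server mode is running on the default `localhost:4891` (the helper name `completion_request` is mine, not part of GPT4All):

```python
import json
import urllib.request

def completion_request(base_url, model, prompt, max_tokens=50, temperature=0.28):
    """Build an OpenAI-style POST to the server's /completions endpoint."""
    url = base_url.rstrip("/") + "/completions"
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Send it (requires the chat UI's server mode to be enabled):
# req = completion_request("http://localhost:4891/v1",
#                          "gpt4all-j-v1.3-groovy", "Who is Michael Jordan?")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Swapping `localhost` in the base URL for a LAN IP only helps once the server actually listens on that interface, which is the subject of this issue.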

Suggestion:

No response

cosmic-snow commented 1 year ago

Currently, the chat GUI's server mode is localhost only.

If you want to use a web API that's accessible over the network, maybe instead have a look at the API subproject, which uses Docker: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-api

lesonquan commented 1 year ago

Thanks @cosmic-snow for your response. When I checked the Docker setup, it also uses localhost for the server. Is there a config file where the host can be changed?
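A note on the Docker route: the listen address of a containerized server is usually controlled by the port mapping on `docker run` (or the `ports:` section of a compose file), not by a config file inside the image. Docker's `-p` flag binds to 0.0.0.0, i.e. all host interfaces, when no host IP is given. A sketch only, with the image name as a placeholder and port 4891 assumed from this thread:

```shell
# Publish the API port on all host interfaces (Docker's default
# when no host IP is given in the mapping):
docker run -p 4891:4891 <gpt4all-api-image>

# Or restrict it to one specific host interface:
docker run -p 192.168.1.10:4891:4891 <gpt4all-api-image>
```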

amichelis commented 1 year ago

Agreed, access from outside localhost would be amazing!

amichelis commented 1 year ago

I have to agree that this is very important, for many reasons.

Docker has several drawbacks. Firstly, it consumes a lot of memory.

The localhost-only API works only if the machine itself can run GPT4All. In my case, my Xeon processor was not capable of running it; however, I can send requests to a newer computer with a newer CPU. I'm not sure about the internals of GPT4All, but this issue seems quite simple to fix.

Let's be honest: gpt4all makes complex things easy (install and run). Can we keep it simple?

risharde commented 11 months ago

I need the server port exposed when I enable the Web API server as well.

erlebach commented 9 months ago

What is the latest on this issue? I am trying to get client/server to work (on my local system for now). Ideally, both client and server would be Python scripts started from the command line, with arguments that clearly specify options such as the port. Is this even possible? I have done this using llama.cpp directly. Thanks, Gordon.
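A command-line client along those lines is straightforward to sketch with only the standard library. The flag names and the `/v1/completions` path below mirror the OpenAI-style API the chat server exposes; nothing here is an official GPT4All script:

```python
import argparse
import json
import urllib.request

def build_parser():
    """CLI options: server address, port, model, and the prompt itself."""
    parser = argparse.ArgumentParser(description="Minimal GPT4All API client")
    parser.add_argument("--host", default="127.0.0.1", help="server address")
    parser.add_argument("--port", type=int, default=4891, help="server port")
    parser.add_argument("--model", default="gpt4all-j-v1.3-groovy")
    parser.add_argument("prompt", help="prompt text to complete")
    return parser

def main():
    args = build_parser().parse_args()
    url = f"http://{args.host}:{args.port}/v1/completions"
    body = json.dumps({"model": args.model, "prompt": args.prompt,
                       "max_tokens": 50}).encode("utf-8")
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    # Fails unless a server is actually listening at host:port
    with urllib.request.urlopen(req) as resp:
        print(json.dumps(json.load(resp), indent=2))

# To run as a script, call main(), e.g.:
#   python client.py --host 192.168.1.10 --port 4891 "Who is Michael Jordan?"
```

The server half is exactly what this issue is asking for; until the chat UI can bind to other interfaces, `--host` is only useful against a machine that forwards or proxies the port.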

kelvinq commented 8 months ago

@cebtenzzre Thanks for this new feature! Has this already been built and released? I couldn't find the env file in the latest macOS build or in main. Apologies in advance if I'm looking in the wrong place! :)

cebtenzzre commented 8 months ago

Thanks for this new feature! Has this already been built and released?

I think you misinterpreted the title of the issue that I closed.

kelvinq commented 8 months ago

Thanks for this new feature! Has this already been built and released?

I think you misinterpreted the title of the issue that I closed.

Yikes, you're right. Anyhow, I worked around this by using a tunnel and reverse proxy. Thanks @cebtenzzre
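For anyone else landing here: until the chat UI can bind to interfaces other than loopback, a tunnel or reverse proxy on the same machine is the usual workaround. A minimal stdlib-only sketch of the idea follows (a plain TCP relay; this is not the actual tool used above, and the function names are mine):

```python
import socket
import threading

def pipe(src, dst):
    """Relay bytes from src to dst until src reaches EOF."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)  # signal EOF downstream
        except OSError:
            pass

def serve_forward(listen_host, listen_port, target_host, target_port):
    """Accept connections on listen_host and relay each one to the target."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((listen_host, listen_port))
    srv.listen()
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# Expose the localhost-only chat server on the LAN, on a different
# outward-facing port (the chat UI itself already holds 4891 on loopback):
# serve_forward("0.0.0.0", 8000, "127.0.0.1", 4891)
```

Exposing the port beyond a trusted LAN also exposes an unauthenticated API, so anything like this belongs behind a VPN or an authenticating proxy.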

amichelis commented 1 week ago

Is this problem still not being addressed?