jjleng / sensei

Yet another open source Perplexity
https://www.heysensei.app
Apache License 2.0

Unable to get it working #2

Closed · arsaboo closed this 2 months ago

arsaboo commented 2 months ago

I tried to get this working but failed. I have Ollama installed as a regular app on my Mac. I updated the docker-compose file to add extra_hosts (I also had to change some ports):


version: '3.9'

services:
  frontend:
    volumes:
      - ./frontend/app:/app/app
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - 3015:3000
    depends_on:
      - backend
  backend:
    volumes:
      - ./backend:/app
    build:
      context: ./backend
      dockerfile: Dockerfile
    ports:
      - 8004:8000
    depends_on:
      - redis
    extra_hosts:
      - "host.docker.internal:host-gateway"
  redis:
    image: redis:alpine
  searxng:
    image: searxng/searxng:latest
    volumes:
      - ./ops/searxng:/etc/searxng
    ports:
      - 8081:8080
    environment:
      - SEARXNG_URL=http://localhost:8081/
      - SEARXNG_SECRET=SQRXTEUTa%SMuWDiF_kyH&,:EqBpTV
      - SEARXNG_REDIS_URL=redis://redis:6379/0
      - SEARXNG_SETTINGS_PATH=/etc/searxng/settings.yml
    depends_on:
      - redis

I see the UI, but get no search results. I see the following in the browser console:

[screenshot]

BTW, I strongly recommend consolidating all the env variables into one file at the root.

jjleng commented 2 months ago

My bad. I forgot to copy my local .env.development to .env.development.example. .env.development.example now looks like this:

LOGURU_LEVEL=DEBUG
SEARXNG_URL="http://searxng:8080"
REDIS_HOST="redis"

# Small model
SM_MODLE_URL="http://host.docker.internal:11434/v1/" # host.docker.internal rather than localhost
SM_MODEL="llama3:instruct"
SM_MODEL_API_KEY="whatever"

# Medium model
MD_MODLE_URL="http://host.docker.internal:11434/v1/" # same
MD_MODEL="command-r"
MD_MODEL_API_KEY="whatever"

arsaboo commented 2 months ago

Still not working for me:

[screenshot]


LOGURU_LEVEL=DEBUG
SEARXNG_URL="http://searxng:8081"
REDIS_HOST="redis"

# Small model
SM_MODLE_URL="http://host.docker.internal:11434"
SM_MODEL="llama3:8b-instruct-q5_0"
SM_MODEL_API_KEY="whatever"

# Medium model
MD_MODLE_URL="http://host.docker.internal:11434"
MD_MODEL="llama3:8b-instruct-q5_0"
MD_MODEL_API_KEY="whatever"

Docker compose:

services:
  frontend:
    volumes:
      - ./frontend/app:/app/app
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - 3015:3000
    depends_on:
      - backend
  backend:
    volumes:
      - ./backend:/app
    build:
      context: ./backend
      dockerfile: Dockerfile
    ports:
      - 8004:8000
    depends_on:
      - redis
    extra_hosts:
      - "host.docker.internal:host-gateway"
  redis:
    image: redis:alpine
  searxng:
    image: searxng/searxng:latest
    volumes:
      - ./ops/searxng:/etc/searxng
    ports:
      - 8081:8080
    environment:
      - SEARXNG_URL=http://localhost:8081/
      - SEARXNG_SECRET=SQRXTEUTa%SMuWDiF_kyH&,:EqBpTV
      - SEARXNG_REDIS_URL=redis://redis:6379/0
      - SEARXNG_SETTINGS_PATH=/etc/searxng/settings.yml
    depends_on:
      - redis
jjleng commented 2 months ago

I suspect you hit a CORS issue: https://github.com/jjleng/sensei/blob/7db5252d5172730b1455639aa2d6cb841fedaca7/backend/sensei_search/server.py#L12

The server assumes the client runs on port 3000. Since you changed the port, the browser blocked the request as a CORS policy violation. I will try to make the CORS policy work with any port in the dev environment.
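
For reference, here is a minimal sketch of a permissive dev-only policy, assuming a plain FastAPI app with the standard CORSMiddleware (Sensei's actual server.py may wire this up differently, e.g. through socket.io's own cors_allowed_origins setting):

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Dev only: allow any origin so the frontend can run on any port.
# A production config should pin this to the real frontend origin.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)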

Thanks for reporting!

jjleng commented 2 months ago

https://github.com/jjleng/sensei/pull/11 should have disabled CORS checks for the docker-compose setup.

arsaboo commented 2 months ago

Still not working:

[screenshot]

Do I need to change the env in frontend? This is what I have right now:

NEXT_PUBLIC_SOCKET_HOST=http://localhost:8000

jjleng commented 2 months ago

Ah yes, it should be 8004, because you are mapping the container's port 8000 to port 8004 on your host, and the browser connects from the host side:

  backend:
    volumes:
      - ./backend:/app
    build:
      context: ./backend
      dockerfile: Dockerfile
    ports:
      - 8004:8000
arsaboo commented 2 months ago

More reasons to put all the config variables in one place.

Anyway, it is still not working for me:

[screenshot]

Here's docker-compose:

services:
  frontend:
    volumes:
      - ./frontend/app:/app/app
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - 3015:3000
    depends_on:
      - backend
  backend:
    volumes:
      - ./backend:/app
    build:
      context: ./backend
      dockerfile: Dockerfile
    ports:
      - 8004:8000
    depends_on:
      - redis
    extra_hosts:
      - "host.docker.internal:host-gateway"
  redis:
    image: redis:alpine
  searxng:
    image: searxng/searxng:latest
    volumes:
      - ./ops/searxng:/etc/searxng
    ports:
      - 8081:8080
    environment:
      - SEARXNG_URL=http://localhost:8081/
      - SEARXNG_SECRET=SQRXTEUTa%SMuWDiF_kyH&,:EqBpTV
      - SEARXNG_REDIS_URL=redis://redis:6379/0
      - SEARXNG_SETTINGS_PATH=/etc/searxng/settings.yml
    depends_on:
      - redis

Frontend env:

NEXT_PUBLIC_SOCKET_HOST=http://localhost:8004

Backend env:

LOGURU_LEVEL=DEBUG
SEARXNG_URL="http://searxng:8081"
REDIS_HOST="redis"

# Small model
SM_MODLE_URL="http://host.docker.internal:11434"
SM_MODEL="llama3:8b-instruct-q5_0"
SM_MODEL_API_KEY="whatever"

# Medium model
MD_MODLE_URL="http://host.docker.internal:11434"
MD_MODEL="llama3:8b-instruct-q5_0"
MD_MODEL_API_KEY="whatever"

jjleng commented 2 months ago

Sorry for your frustration. I updated my local env to be the same as yours. Here are the changes you need to make:

Docker compose:

services:
  frontend:
    volumes:
      - ./frontend/app:/app/app
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - 3015:3000
    depends_on:
      - backend
  backend:
    volumes:
      - ./backend:/app
    build:
      context: ./backend
      dockerfile: Dockerfile
    ports:
      - 8004:8000
    depends_on:
      - redis
  redis:
    image: redis:alpine
  searxng:
    image: searxng/searxng:latest
    volumes:
      - ./ops/searxng:/etc/searxng
    ports:
      - 8081:8080
    environment:
      - SEARXNG_URL=http://localhost:8081/
      - SEARXNG_SECRET=SQRXTEUTa%SMuWDiF_kyH&,:EqBpTV
      - SEARXNG_REDIS_URL=redis://redis:6379/0
      - SEARXNG_SETTINGS_PATH=/etc/searxng/settings.yml
    depends_on:
      - redis

I removed the following from your original docker-compose (Docker Desktop on Mac already resolves host.docker.internal without it):

    extra_hosts:
      - "host.docker.internal:host-gateway"

Backend env:

LOGURU_LEVEL=DEBUG
SEARXNG_URL="http://searxng:8080" # Use 8080 not 8081 since we updated the above docker-compose
REDIS_HOST="redis"

# Small model
SM_MODLE_URL="http://host.docker.internal:11434/v1" # You need to add /v1. This is Ollama's OpenAI API endpoint
SM_MODEL="llama3:instruct"
SM_MODEL_API_KEY="whatever"

# Medium model
MD_MODLE_URL="http://host.docker.internal:11434/v1" # You need to add /v1. This is Ollama's OpenAI API endpoint
MD_MODEL="llama3:instruct"
MD_MODEL_API_KEY="whatever"

See my comments above; you need to add /v1 to the URLs.
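
If you want to sanity-check the /v1 endpoint from your Mac before wiring it into the backend, here is a quick sketch using the official openai Python client (the model name is whatever you have pulled into Ollama):

from openai import OpenAI

# Ollama serves an OpenAI-compatible API under /v1. The API key is
# ignored by Ollama, but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="whatever")

resp = client.chat.completions.create(
    model="llama3:instruct",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)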

Frontend env:

NEXT_PUBLIC_SOCKET_HOST=http://localhost:8004

Now open http://localhost:3015/ in your browser. However, my prompt is too long for llama3 8b to handle, so you may need to play with the prompts.
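
One thing worth trying if the prompt gets truncated (my suggestion here, not something Sensei configures for you): Ollama defaults to a 2048-token context window, and its native API lets you raise it per request via num_ctx, e.g.:

import requests

# Sketch against Ollama's native chat API; num_ctx raises the context
# window (llama3 8b supports up to 8192 tokens).
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3:instruct",
        "messages": [{"role": "user", "content": "Say hello"}],
        "options": {"num_ctx": 8192},
        "stream": False,
    },
)
print(resp.json()["message"]["content"])

Happy coding :)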

[Screenshot 2024-06-21 at 12:02:17 PM]

arsaboo commented 2 months ago

OK, some progress: at least it worked on the same device the containers are running on (localhost:3015). However, the results were not what I expected (but that is a separate issue):

[screenshot]

On any other computer on the network, it is not working:

[screenshot]

arsaboo commented 2 months ago

OK, so I changed NEXT_PUBLIC_SOCKET_HOST=http://localhost:8004 to NEXT_PUBLIC_SOCKET_HOST=http://192.168.2.162:8004 (the machine's LAN IP) and now it works on other computers on my network 🎉

Now I need to figure out how to get better results. Are you using any specific models that give better results?

jjleng commented 2 months ago

For the medium-sized model I use command-r. You need a good GPU for it; my Mac struggled a lot, so I rented a GPU on AWS.

jjleng commented 2 months ago

As I mentioned in the Reddit post, a lot of open-source LLMs have trouble with instruction following.

arsaboo commented 2 months ago

I have a Mac M2 Ultra, which works very well with other similar projects like Farfalle, Perplexica, etc.

In any case, I will close this issue for now.

arsaboo commented 2 months ago

Just an FYI:

With Farfalle:

[screenshot]

With Perplexica:

[screenshot]

jjleng commented 2 months ago

Just looked at Farfalle, really awesome! I did not know such a project existed. It looks like it uses a much shorter prompt, and the search context is shorter too: it uses the search summaries directly (https://github.com/rashadphz/farfalle/blob/main/src/backend/search/providers/searxng.py#L31-L38). Therefore it is going to generate a far shallower answer. Since the prompt is short, you have a lot more models to pick from. If you ask it about "hacker news today", you will see Farfalle generate something wrong.

The same goes for Perplexica: https://github.com/ItzCrazyKns/Perplexica/blob/476303f52bacca7c270f3b7ca86045ff5e1baaa7/src/agents/webSearchAgent.ts#L119-L129

Sensei, by contrast, passes five whole web pages to the LLM, which is a much longer context. That requires the LLM to be capable of following instructions and finding the needle in the haystack.
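
To illustrate the difference in context assembly (a hypothetical sketch; summary_context and full_page_context are stand-ins, not code from either project):

import httpx

def summary_context(results: list[dict]) -> str:
    # Farfalle/Perplexica style: concatenate short search snippets.
    return "\n\n".join(r["snippet"] for r in results[:10])

def full_page_context(results: list[dict]) -> str:
    # Sensei style: download the whole pages, producing a much longer
    # context that demands strong instruction following from the model.
    pages = [httpx.get(r["url"], timeout=10).text for r in results[:5]]
    return "\n\n".join(pages)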