c0sogi / LLMChat

A full-stack WebUI implementation of large language models, such as ChatGPT or LLaMA.
MIT License

Can't log in to web UI #42

Closed: oppokui closed this issue 10 months ago

oppokui commented 10 months ago

Has anyone used this? I launched it through the docker command, and there is a user/password login page in the left sidebar. I can't register a new user; it always reports an XMLHttpRequest error.

Then I went into the MySQL docker container, looked at the table info, and found the users/api_keys tables, so I inserted one row into the users table:

INSERT INTO users(status,email,password, marketing_agree, created_at, updated_at)
VALUES ('admin','oppokui@gmail.com','12341234', 1, now(), now() );

I can see the new user exists, but I still can't log in from the UI. Are there any detailed instructions?

c0sogi commented 10 months ago

When you see an XMLHttpRequest error, it means that your web application is sending requests to the wrong address. Can you tell us exactly what settings (host address, port) you ran the application with? By default, if you're running in Docker, you should connect to localhost:8000.

And in the .env file, API_ENV should be local in this case.
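
As a quick sanity check (my own sketch, not part of LLMChat), you can confirm the backend actually answers at the address the browser will use:

# Probe candidate base URLs and report which one the backend answers on.
# Assumes the "requests" package is installed (pip install requests).
import requests

for base in ("http://localhost:8000", "http://localhost:8001"):
    try:
        status = requests.get(base + "/", timeout=3).status_code
        print(f"{base} -> HTTP {status}")
    except requests.exceptions.ConnectionError:
        print(f"{base} -> connection refused")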

oppokui commented 10 months ago

I run most of the components in Docker, and only the API server outside on the host machine, via:

docker-compose -f docker-compose-local.yaml up
docker-compose -f docker-compose-local.yaml down api
python -m main

I access the host machine remotely through its IP, e.g. http://18.211.48.230:8001/

The .env:


# API_ENV can be "local", "test", "prod"
API_ENV="local"

# Port for Docker. If you run this app without Docker, this is ignored and 8001 is used.
PORT=8000

# Default LLM model for each chat.
DEFAULT_LLM_MODEL="gpt_3_5_turbo"

# Your MySQL DB info
MYSQL_DATABASE="traffic"
MYSQL_TEST_DATABASE="testing_db"
MYSQL_ROOT_PASSWORD="khuang"
MYSQL_USER="khuang"
MYSQL_PASSWORD="khuang"

# Your Redis DB info
REDIS_DATABASE="0"
REDIS_PASSWORD="khuang"

# Your JWT secret key
JWT_SECRET="khuang"

# Your OpenAI API key
OPENAI_API_KEY="sk-xxxxxx"

# Chatbot settings
# Summarize for chat: summarize messages longer than SUMMARIZATION_THRESHOLD tokens.
SUMMARIZE_FOR_CHAT=True
SUMMARIZATION_THRESHOLD=512

# Embedding text is split into chunks of EMBEDDING_TOKEN_CHUNK_SIZE tokens;
# EMBEDDING_TOKEN_CHUNK_OVERLAP is how many tokens consecutive chunks share
# (see the sketch after this listing).
EMBEDDING_TOKEN_CHUNK_SIZE=512
EMBEDDING_TOKEN_CHUNK_OVERLAP=128

# The shared vector collection name. This will be shared for all users.
QDRANT_COLLECTION="SharedCollection"

# If you want to set prefix or suffix for all prompt to LLM, set these.
GLOBAL_PREFIX=""
GLOBAL_SUFFIX=""

# If you want to use local embedding instead of OpenAI's Ada-002,
# set LOCAL_EMBEDDING_MODEL as "intfloat/e5-large-v2" or other huggingface embedding model repo.
# Warning: Local embedding needs a lot of computing resources!!!
LOCAL_EMBEDDING_MODEL=None

# Define these if you want to open production server with API_ENV="prod"
HOST_IP="0.0.0.0"
HOST_MAIN="OPTIONAL_YOUR_DOMAIN_HERE e.g. yourdomain.com; if you are running with API_ENV=prod, this is needed for TLS certificate registration"
HOST_SUB="OPTIONAL_YOUR_SUB_DOMAIN_HERE e.g. mobile.yourdomain.com"
MY_EMAIL="OPTIONAL_YOUR_EMAIL_HERE e.g. you@yourdomain.com; if you are running with API_ENV=prod, this is needed for TLS certificate registration"

# Not used.
AWS_ACCESS_KEY="OPTIONAL_IF_YOU_NEED"
AWS_SECRET_KEY="OPTIONAL_IF_YOU_NEED"
AWS_AUTHORIZED_EMAIL="OPTIONAL_IF_YOU_NEED"
SAMPLE_JWT_TOKEN="OPTIONAL_IF_YOU_NEED_FOR_TESTING e.g. Bearer XXXXX"
SAMPLE_ACCESS_KEY="OPTIONAL_IF_YOU_NEED_FOR_TESTING"
SAMPLE_SECRET_KEY="OPTIONAL_IF_YOU_NEED_FOR_TESTING"
KAKAO_RESTAPI_TOKEN="OPTIONAL_IF_YOU_NEED e.g. Bearer XXXXX"
WEATHERBIT_API_KEY="OPTIONAL_IF_YOU_NEED"
NASA_API_KEY="OPTIONAL_IF_YOU_NEED"

# For translation. If you don't need translation, you can ignore these.
PAPAGO_CLIENT_ID="OPTIONAL_FOR_TRANSLATION"
PAPAGO_CLIENT_SECRET="OPTIONAL_FOR_TRANSLATION"
GOOGLE_CLOUD_PROJECT_ID="OPTIONAL_FOR_TRANSLATION e.g. top-abcd-01234"
GOOGLE_TRANSLATE_API_KEY="OPTIONAL_FOR_TRANSLATION"
GOOGLE_TRANSLATE_OAUTH_ID="OPTIONAL_FOR_TRANSLATION"
GOOGLE_TRANSLATE_OAUTH_SECRET="OPTIONAL_FOR_TRANSLATION"
RAPIDAPI_KEY="OPTIONAL_FOR_TRANSLATION"
CUSTOM_TRANSLATE_URL="OPTIONAL_FOR_TRANSLATION"
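
As an aside on the EMBEDDING_TOKEN_CHUNK_SIZE and EMBEDDING_TOKEN_CHUNK_OVERLAP settings above, here is a minimal sketch of what chunking with overlap means (my own illustration, not LLMChat's actual code):

# Illustration only: split a token sequence into 512-token chunks where
# consecutive chunks share 128 tokens.
CHUNK_SIZE = 512      # EMBEDDING_TOKEN_CHUNK_SIZE
CHUNK_OVERLAP = 128   # EMBEDDING_TOKEN_CHUNK_OVERLAP

def chunk_tokens(tokens: list[int]) -> list[list[int]]:
    step = CHUNK_SIZE - CHUNK_OVERLAP  # advance 384 tokens per chunk
    return [tokens[i:i + CHUNK_SIZE]
            for i in range(0, max(len(tokens) - CHUNK_OVERLAP, 1), step)]

print([len(c) for c in chunk_tokens(list(range(1000)))])  # [512, 512, 232]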

The log output of "python -m main":

$ python -m main
- Loaded .env file successfully.
- API_ENV: local
- DOCKER_MODE: False
- Parsing function for function calling: control_browser
- Parsing function for function calling: control_web_page
- Parsing function for function calling: web_search
- Parsing function for function calling: vectorstore_search
[2023-09-19 01:26:10,840] SQLAlchemy:CRITICAL - Current DB connection of LocalConfig: localhost/traffic@khuang
Using openai embeddings
INFO:     Started server process [31215]
INFO:     Waiting for application startup.
[2023-09-19 01:26:12,154] ApiLogger:CRITICAL - ⚙️ Booting up...
[2023-09-19 01:26:12,154] ApiLogger:CRITICAL - MySQL DB connected!
[2023-09-19 01:26:12,157] ApiLogger:CRITICAL - Redis CACHE connected!
[2023-09-19 01:26:12,157] ApiLogger:CRITICAL - uvloop installed!
[2023-09-19 01:26:12,157] ApiLogger:CRITICAL - Llama CPP server monitoring started!
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)
[2023-09-19 01:26:12,159] ApiLogger:ERROR - Llama CPP server is not available
[2023-09-19 01:26:12,159] ApiLogger:CRITICAL - Starting Llama CPP server
🦙 llama.cpp DLL not found, building it...
🦙 Trying to build llama.cpp DLL: /data/ai/llmchat/repositories/llama_cpp/llama_cpp/build-llama-cpp-cublas.sh
-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.25.1") 
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE  
-- Found CUDAToolkit: /usr/local/cuda-12.2/include (found version "12.2.128") 
-- cuBLAS found
-- The CUDA compiler identification is NVIDIA 12.2.128
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda-12.2/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Using CUDA architectures: 52;61
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring done (3.7s)
-- Generating done (0.0s)
-- Build files have been written to: /data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/build
[  2%] Generating build details from Git
-- Found Git: /usr/bin/git (found version "2.25.1") 
[  2%] Built target BUILD_INFO
[  4%] Building C object CMakeFiles/ggml.dir/ggml.c.o
[  6%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda.cu.o
[  8%] Building C object CMakeFiles/ggml.dir/k_quants.c.o
[  8%] Built target ggml
[ 10%] Linking CUDA static library libggml_static.a
[ 10%] Built target ggml_static
[ 12%] Linking CUDA shared library libggml_shared.so
[ 12%] Built target ggml_shared
[ 14%] Building CXX object CMakeFiles/llama.dir/llama.cpp.o
/data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/llama.cpp: In function ‘void llama_sample_classifier_free_guidance(llama_context*, llama_token_data_array*, llama_context*, float, float)’:
/data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/llama.cpp:2208:51: warning: operation on ‘t_start_sample_us’ may be undefined [-Wsequence-point]
 2208 |     int64_t t_start_sample_us = t_start_sample_us = ggml_time_us();
      |                                 ~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~
[ 16%] Linking CXX shared library libllama.so
[ 16%] Built target llama
[ 18%] Building CXX object tests/CMakeFiles/test-quantize-fns.dir/test-quantize-fns.cpp.o
[ 20%] Linking CXX executable ../bin/test-quantize-fns
[ 20%] Built target test-quantize-fns
[ 22%] Building CXX object tests/CMakeFiles/test-quantize-perf.dir/test-quantize-perf.cpp.o
[ 24%] Linking CXX executable ../bin/test-quantize-perf
[ 24%] Built target test-quantize-perf
[ 26%] Building CXX object tests/CMakeFiles/test-sampling.dir/test-sampling.cpp.o
[ 28%] Linking CXX executable ../bin/test-sampling
[ 28%] Built target test-sampling
[ 30%] Building CXX object tests/CMakeFiles/test-tokenizer-0.dir/test-tokenizer-0.cpp.o
/data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/tests/test-tokenizer-0.cpp:19:2: warning: extra ‘;’ [-Wpedantic]
   19 | };
      |  ^
[ 32%] Linking CXX executable ../bin/test-tokenizer-0
[ 32%] Built target test-tokenizer-0
[ 34%] Building C object tests/CMakeFiles/test-grad0.dir/test-grad0.c.o
[ 36%] Linking C executable ../bin/test-grad0
[ 36%] Built target test-grad0
[ 38%] Building CXX object examples/CMakeFiles/common.dir/common.cpp.o
[ 38%] Built target common
[ 40%] Building CXX object examples/main/CMakeFiles/main.dir/main.cpp.o
[ 42%] Linking CXX executable ../../bin/main
[ 42%] Built target main
[ 44%] Building CXX object examples/quantize/CMakeFiles/quantize.dir/quantize.cpp.o
[ 46%] Linking CXX executable ../../bin/quantize
[ 46%] Built target quantize
[ 48%] Building CXX object examples/quantize-stats/CMakeFiles/quantize-stats.dir/quantize-stats.cpp.o
[ 51%] Linking CXX executable ../../bin/quantize-stats
[ 51%] Built target quantize-stats
[ 53%] Building CXX object examples/perplexity/CMakeFiles/perplexity.dir/perplexity.cpp.o
[ 55%] Linking CXX executable ../../bin/perplexity
[ 55%] Built target perplexity
[ 57%] Building CXX object examples/embedding/CMakeFiles/embedding.dir/embedding.cpp.o
[ 59%] Linking CXX executable ../../bin/embedding
[ 59%] Built target embedding
[ 61%] Building CXX object examples/save-load-state/CMakeFiles/save-load-state.dir/save-load-state.cpp.o
[ 63%] Linking CXX executable ../../bin/save-load-state
[ 63%] Built target save-load-state
[ 65%] Building CXX object examples/benchmark/CMakeFiles/benchmark.dir/benchmark-matmult.cpp.o
INFO:     218.241.231.66:15338 - "GET / HTTP/1.1" 200 OK
INFO:     218.241.231.66:15338 - "GET /flutter.js HTTP/1.1" 304 Not Modified
[ 67%] Linking CXX executable ../../bin/benchmark
[ 67%] Built target benchmark
[ 69%] Building CXX object examples/baby-llama/CMakeFiles/baby-llama.dir/baby-llama.cpp.o
INFO:     218.241.231.66:15338 - "GET /main.dart.js HTTP/1.1" 200 OK
/data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/examples/baby-llama/baby-llama.cpp: In function ‘int main(int, char**)’:
/data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/examples/baby-llama/baby-llama.cpp:1614:32: warning: variable ‘opt_params_adam’ set but not used [-Wunused-but-set-variable]
 1614 |         struct ggml_opt_params opt_params_adam = ggml_opt_default_params(GGML_OPT_ADAM);
      |                                ^~~~~~~~~~~~~~~
[ 71%] Linking CXX executable ../../bin/baby-llama
[ 71%] Built target baby-llama
[ 73%] Building CXX object examples/train-text-from-scratch/CMakeFiles/train-text-from-scratch.dir/train-text-from-scratch.cpp.o
INFO:     218.241.231.66:15338 - "GET /assets/FontManifest.json HTTP/1.1" 304 Not Modified
INFO:     218.241.231.66:15338 - "GET /assets/fonts/MaterialIcons-Regular.otf HTTP/1.1" 304 Not Modified
INFO:     218.241.231.66:29230 - "GET /assets/packages/cupertino_icons/assets/CupertinoIcons.ttf HTTP/1.1" 304 Not Modified
[ 75%] Linking CXX executable ../../bin/train-text-from-scratch
[ 75%] Built target train-text-from-scratch
[ 77%] Building CXX object examples/simple/CMakeFiles/simple.dir/simple.cpp.o
[ 79%] Linking CXX executable ../../bin/simple
[ 79%] Built target simple
[ 81%] Building CXX object examples/embd-input/CMakeFiles/embdinput.dir/embd-input-lib.cpp.o
[ 83%] Linking CXX shared library libembdinput.so
[ 83%] Built target embdinput
[ 85%] Building CXX object examples/embd-input/CMakeFiles/embd-input-test.dir/embd-input-test.cpp.o
[ 87%] Linking CXX executable ../../bin/embd-input-test
[ 87%] Built target embd-input-test
[ 89%] Building CXX object examples/server/CMakeFiles/server.dir/server.cpp.o
[ 91%] Linking CXX executable ../../bin/server
[ 91%] Built target server
[ 93%] Building CXX object pocs/vdot/CMakeFiles/vdot.dir/vdot.cpp.o
[ 95%] Linking CXX executable ../../bin/vdot
[ 95%] Built target vdot
[ 97%] Building CXX object pocs/vdot/CMakeFiles/q8dot.dir/q8dot.cpp.o
[100%] Linking CXX executable ../../bin/q8dot
[100%] Built target q8dot
cp: cannot stat '/data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/build/bin/Release/libllama.so': No such file or directory
🦙 Could not build llama.cpp DLL!
🦙 Trying to build llama.cpp DLL: /data/ai/llmchat/repositories/llama_cpp/llama_cpp/build-llama-cpp-default.sh
-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.25.1") 
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE  
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring done (0.4s)
-- Generating done (0.0s)
-- Build files have been written to: /data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/build
[  2%] Generating build details from Git
-- Found Git: /usr/bin/git (found version "2.25.1") 
[  2%] Built target BUILD_INFO
[  4%] Building C object CMakeFiles/ggml.dir/ggml.c.o
[  6%] Building C object CMakeFiles/ggml.dir/k_quants.c.o
[  6%] Built target ggml
[  8%] Linking C static library libggml_static.a
[  8%] Built target ggml_static
[ 10%] Linking C shared library libggml_shared.so
[ 10%] Built target ggml_shared
[ 12%] Building CXX object CMakeFiles/llama.dir/llama.cpp.o
/data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/llama.cpp: In function ‘void llama_sample_classifier_free_guidance(llama_context*, llama_token_data_array*, llama_context*, float, float)’:
/data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/llama.cpp:2208:51: warning: operation on ‘t_start_sample_us’ may be undefined [-Wsequence-point]
 2208 |     int64_t t_start_sample_us = t_start_sample_us = ggml_time_us();
      |                                 ~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~
[ 14%] Linking CXX shared library libllama.so
[ 14%] Built target llama
[ 16%] Building CXX object tests/CMakeFiles/test-quantize-fns.dir/test-quantize-fns.cpp.o
[ 18%] Linking CXX executable ../bin/test-quantize-fns
[ 18%] Built target test-quantize-fns
[ 20%] Building CXX object tests/CMakeFiles/test-quantize-perf.dir/test-quantize-perf.cpp.o
[ 22%] Linking CXX executable ../bin/test-quantize-perf
[ 22%] Built target test-quantize-perf
[ 25%] Building CXX object tests/CMakeFiles/test-sampling.dir/test-sampling.cpp.o
[ 27%] Linking CXX executable ../bin/test-sampling
[ 27%] Built target test-sampling
[ 29%] Building CXX object tests/CMakeFiles/test-tokenizer-0.dir/test-tokenizer-0.cpp.o
/data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/tests/test-tokenizer-0.cpp:19:2: warning: extra ‘;’ [-Wpedantic]
   19 | };
      |  ^
[ 31%] Linking CXX executable ../bin/test-tokenizer-0
[ 31%] Built target test-tokenizer-0
[ 33%] Building C object tests/CMakeFiles/test-grad0.dir/test-grad0.c.o
[ 35%] Linking C executable ../bin/test-grad0
[ 35%] Built target test-grad0
[ 37%] Building CXX object examples/CMakeFiles/common.dir/common.cpp.o
[ 37%] Built target common
[ 39%] Building CXX object examples/main/CMakeFiles/main.dir/main.cpp.o
[ 41%] Linking CXX executable ../../bin/main
[ 41%] Built target main
[ 43%] Building CXX object examples/quantize/CMakeFiles/quantize.dir/quantize.cpp.o
[ 45%] Linking CXX executable ../../bin/quantize
[ 45%] Built target quantize
[ 47%] Building CXX object examples/quantize-stats/CMakeFiles/quantize-stats.dir/quantize-stats.cpp.o
[ 50%] Linking CXX executable ../../bin/quantize-stats
[ 50%] Built target quantize-stats
[ 52%] Building CXX object examples/perplexity/CMakeFiles/perplexity.dir/perplexity.cpp.o
[ 54%] Linking CXX executable ../../bin/perplexity
[ 54%] Built target perplexity
[ 56%] Building CXX object examples/embedding/CMakeFiles/embedding.dir/embedding.cpp.o
[ 58%] Linking CXX executable ../../bin/embedding
[ 58%] Built target embedding
[ 60%] Building CXX object examples/save-load-state/CMakeFiles/save-load-state.dir/save-load-state.cpp.o
[ 62%] Linking CXX executable ../../bin/save-load-state
[ 62%] Built target save-load-state
[ 64%] Building CXX object examples/benchmark/CMakeFiles/benchmark.dir/benchmark-matmult.cpp.o
[ 66%] Linking CXX executable ../../bin/benchmark
[ 66%] Built target benchmark
[ 68%] Building CXX object examples/baby-llama/CMakeFiles/baby-llama.dir/baby-llama.cpp.o
/data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/examples/baby-llama/baby-llama.cpp: In function ‘int main(int, char**)’:
/data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/examples/baby-llama/baby-llama.cpp:1614:32: warning: variable ‘opt_params_adam’ set but not used [-Wunused-but-set-variable]
 1614 |         struct ggml_opt_params opt_params_adam = ggml_opt_default_params(GGML_OPT_ADAM);
      |                                ^~~~~~~~~~~~~~~
[ 70%] Linking CXX executable ../../bin/baby-llama
[ 70%] Built target baby-llama
[ 72%] Building CXX object examples/train-text-from-scratch/CMakeFiles/train-text-from-scratch.dir/train-text-from-scratch.cpp.o
[ 75%] Linking CXX executable ../../bin/train-text-from-scratch
[ 75%] Built target train-text-from-scratch
[ 77%] Building CXX object examples/simple/CMakeFiles/simple.dir/simple.cpp.o
[ 79%] Linking CXX executable ../../bin/simple
[ 79%] Built target simple
[ 81%] Building CXX object examples/embd-input/CMakeFiles/embdinput.dir/embd-input-lib.cpp.o
[ 83%] Linking CXX shared library libembdinput.so
[ 83%] Built target embdinput
[ 85%] Building CXX object examples/embd-input/CMakeFiles/embd-input-test.dir/embd-input-test.cpp.o
[ 87%] Linking CXX executable ../../bin/embd-input-test
[ 87%] Built target embd-input-test
[ 89%] Building CXX object examples/server/CMakeFiles/server.dir/server.cpp.o
[ 91%] Linking CXX executable ../../bin/server
[ 91%] Built target server
[ 93%] Building CXX object pocs/vdot/CMakeFiles/vdot.dir/vdot.cpp.o
[ 95%] Linking CXX executable ../../bin/vdot
[ 95%] Built target vdot
[ 97%] Building CXX object pocs/vdot/CMakeFiles/q8dot.dir/q8dot.cpp.o
[100%] Linking CXX executable ../../bin/q8dot
[100%] Built target q8dot
cp: cannot stat '/data/ai/llmchat/repositories/llama_cpp/vendor/llama.cpp/build/bin/Release/libllama.so': No such file or directory
🦙 Could not build llama.cpp DLL!
[2023-09-19 01:28:00,200] ApiLogger:WARNING - 🦙 Could not import llama-cpp-python repository: 🦙 Could not build llama.cpp DLL!
...trying to import installed llama-cpp package...
[2023-09-19 01:28:00,201] ||v1||:WARNING - Llama.cpp import error: Shared library with base name 'llama' not found
[2023-09-19 01:28:01,356] ||v1||:INFO - 🦙 Successfully imported exllama module!
[2023-09-19 01:28:01,432] ||v1||:INFO - 🦙 Successfully imported embeddings(Pytorch + Transformer) module!
[2023-09-19 01:28:01,432] ||v1||:WARNING - Sentence Encoder embedding import error: No module named 'tensorflow_hub'
[2023-09-19 01:28:01,465] ApiLogger:CRITICAL - 🦙 Llama.cpp server is running
c0sogi commented 10 months ago

Ok. That might be the problem.

Because the browser is actually sending its requests to localhost, it can't reach the server at 18.211.48.230.

Then, in app/utils/js_initializer.py, try changing this line:

to_url_local = f"{schema}://localhost:{config.port}"

to:

to_url_local = f"{schema}://18.211.48.230:{config.port}"

oppokui commented 10 months ago

After your suggestion, the email REST API request now reaches the server side.

But it returns a 500 status:

{
    "status": 500,
    "msg": "์ด ์—๋Ÿฌ๋Š” ์„œ๋ฒ„์ธก ์—๋Ÿฌ ์ž…๋‹ˆ๋‹ค. ์ž๋™์œผ๋กœ ๋ฆฌํฌํŒ… ๋˜๋ฉฐ, ๋น ๋ฅด๊ฒŒ ์ˆ˜์ •ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค.",
    "detail": "Internal Server Error",
    "code": "5009999"
}

In the backend terminal, I saw a lot of exceptions:

INFO:     218.241.231.66:11003 - "GET /main.dart.js HTTP/1.1" 304 Not Modified
[2023-09-19 02:12:10,422] ApiLogger:ERROR - {
    "url": "18.211.48.230/api/auth/login/email",
    "method": "POST",
    "statusCode": 500,
    "errorDetail": {
        "errorFunc": "hashpw",
        "location": "84 line in /home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/bcrypt/__init__.py",
        "raised": "InternalServerError",
        "msg": "\uc774 \uc5d0\ub7ec\ub294 \uc11c\ubc84\uce21 \uc5d0\ub7ec \uc785\ub2c8\ub2e4. \uc790\ub3d9\uc73c\ub85c \ub9ac\ud3ec\ud305 \ub418\uba70, \ube60\ub974\uac8c \uc218\uc815\ud558\uaca0\uc2b5\ub2c8\ub2e4.",
        "detail": "Internal Server Error"
    },
    "client": {
        "ip": "218.241.231.66",
        "id": null,
        "email": null
    },
    "processedTime(ms)": 4.19378,
    "datetimeUTC": "2023/09/19 02:12:10",
    "datetimeKST": "2023/09/19 11:12:10",
    "cookies": {
        "_xsrf": "2|60e02f0b|71d27dbbf5372a7293c0316320c04580|1693230125",
        "username-18-211-48-230-8888": "2|1:0|10:1694792532|27:username-18-211-48-230-8888|196:eyJ1c2VybmFtZSI6ICIyMzdkMTI4NWRlOWI0MTExOTVhZjcwNWNmMmU5ODU1OCIsICJuYW1lIjogIkFub255bW91cyBFaXJlbmUiLCAiZGlzcGxheV9uYW1lIjogIkFub255bW91cyBFaXJlbmUiLCAiaW5pdGlhbHMiOiAiQUUiLCAiY29sb3IiOiBudWxsfQ==|9e5aeb023fcd40b93c2afad30f83c36e52cb7b1f62316537856584ae32664105"
    },
    "headers": {
        "host": "18.211.48.230:8001",
        "connection": "keep-alive",
        "content-length": "51",
        "accept": "application/json",
        "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36",
        "content-type": "application/json; charset=utf-8",
        "origin": "http://18.211.48.230:8001",
        "referer": "http://18.211.48.230:8001/chat/",
        "accept-encoding": "gzip, deflate",
        "accept-language": "en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7",
        "cookie": "_xsrf=2|60e02f0b|71d27dbbf5372a7293c0316320c04580|1693230125; username-18-211-48-230-8888=\"2|1:0|10:1694792532|27:username-18-211-48-230-8888|196:eyJ1c2VybmFtZSI6ICIyMzdkMTI4NWRlOWI0MTExOTVhZjcwNWNmMmU5ODU1OCIsICJuYW1lIjogIkFub255bW91cyBFaXJlbmUiLCAiZGlzcGxheV9uYW1lIjogIkFub255bW91cyBFaXJlbmUiLCAiaW5pdGlhbHMiOiAiQUUiLCAiY29sb3IiOiBudWxsfQ==|9e5aeb023fcd40b93c2afad30f83c36e52cb7b1f62316537856584ae32664105\""
    },
    "query_params": {}
}
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/anyio/streams/memory.py", line 98, in receive
    return self.receive_nowait()
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/anyio/streams/memory.py", line 93, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/starlette/middleware/base.py", line 78, in call_next
    message = await recv_stream.receive()
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/anyio/streams/memory.py", line 118, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/data/ai/llmchat/app/middlewares/token_validator.py", line 123, in access_control
    response: Response = await call_next(
                         ^^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/starlette/middleware/base.py", line 84, in call_next
    raise app_exc
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/starlette/middleware/base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
               ^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/fastapi/routing.py", line 273, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/fastapi/routing.py", line 190, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/ai/llmchat/app/routers/auth.py", line 97, in login
    if not bcrypt.checkpw(
           ^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/bcrypt/__init__.py", line 91, in checkpw
    ret = hashpw(password, hashed_password)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/anaconda3/envs/python3.11/lib/python3.11/site-packages/bcrypt/__init__.py", line 84, in hashpw
    return _bcrypt.hashpass(password, salt)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Invalid salt
c0sogi commented 10 months ago

Clear your SQL table. The password column won't accept plain text; it stores a bcrypt hash, which is why your manually inserted row fails with "Invalid salt".
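
For reference, a minimal sketch of how to produce a value the login check will accept (assuming verification goes through bcrypt.checkpw, as the traceback above shows):

# Generate a bcrypt hash for manual insertion into users.password.
# A plaintext value like '12341234' makes bcrypt raise "Invalid salt".
import bcrypt

password = b"12341234"
hashed = bcrypt.hashpw(password, bcrypt.gensalt())
print(hashed.decode())  # e.g. $2b$12$... ; store this string instead

# Mirrors the check in app/routers/auth.py shown in the traceback:
assert bcrypt.checkpw(password, hashed)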

oppokui commented 10 months ago

Got it! When I register a new user through the UI, I can log in and see the chat UI. Cool! One suggestion for the remote-access case: get the host:port from the JS code itself.

change

to_url_local = f"{schema}://localhost:{config.port}"

to:

to_url_local = f"{schema}://\"+location.host+\""