frgfm opened this issue 8 months ago
Hi @frgfm
I want to work on this issue. Please assign it to me. By the way, what is the deadline?
Hey there @themujahidkhan :wave:
For sure, I was thinking about completing it by the end of next week! Would that be reasonable for you?
Hey @frgfm
Please assign it to me, and I'll take my best shot at completing it within that time frame.
Hey @frgfm
I've implemented a copy button for code blocks, but I'm running into many issues getting the backend server running; could you share a resource link or let me know how to fix this? I saw this backend API repo (link) and followed the steps in its README.md file, but that didn't help either. Can you list, in bullet points, how to run the backend?
Hey there :wave:
Yeah, sorry about this, I'm holding the documentation update PR for the release :sweat_smile: Long story short, you need three things: a `.env` file, a `docker-compose.yml` file, and a `docker compose` command to run them.

In the `.env`, put this (update the PAT; the rest can stay as is):
```
POSTGRES_DB=postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD='An0th3rDumm1PassW0rdz!'
SUPERADMIN_GH_PAT=your-github-pat
SUPERADMIN_LOGIN='JohnDoe'
SUPERADMIN_PWD='Dumm1PassW0rdz!'
GH_OAUTH_ID=your-github-oauth-app-id
GH_OAUTH_SECRET=your-github-oauth-app-secret
OLLAMA_MODEL='tinydolphin:1.1b-v2.8-q4_K_M'
```
And the `docker-compose.yml`:
```yaml
version: '3.7'

services:
  backend:
    image: quackai/contribution-api:latest
    command: uvicorn app.main:app --reload --host 0.0.0.0 --port 8050 --proxy-headers
    ports:
      - "8050:8050"
    environment:
      - POSTGRES_URL=postgresql+asyncpg://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db/${POSTGRES_DB}
      - OLLAMA_ENDPOINT=http://ollama:11434
      - OLLAMA_MODEL=${OLLAMA_MODEL}
      - SECRET_KEY=${SECRET_KEY}
      - SUPERADMIN_GH_PAT=${SUPERADMIN_GH_PAT}
      - SUPERADMIN_LOGIN=${SUPERADMIN_LOGIN}
      - SUPERADMIN_PWD=${SUPERADMIN_PWD}
      - GH_OAUTH_ID=${GH_OAUTH_ID}
      - GH_OAUTH_SECRET=${GH_OAUTH_SECRET}
      - SUPPORT_EMAIL=${SUPPORT_EMAIL}
      - DEBUG=false
    depends_on:
      db:
        condition: service_healthy
      ollama:
        condition: service_healthy

  ollama:
    image: ollama/ollama:0.1.29
    command: serve
    volumes:
      - "$HOME/.ollama:/root/.ollama"
    expose:
      - 11434
    healthcheck:
      test: ["CMD-SHELL", "ollama pull '${OLLAMA_MODEL}'"]
      interval: 5s
      timeout: 1m
      retries: 3
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - 5432
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    healthcheck:
      test: ["CMD-SHELL", "sh -c 'pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}'"]
      interval: 10s
      timeout: 3s
      retries: 3

volumes:
  postgres_data:
  ollama:
```
and then:

```shell
docker compose up -d
```

Now you can set your API endpoint in the extension to `http://localhost:8050`. If you don't have a GPU, comment out the `deploy` section of the docker compose. Let me know how it goes :grin:
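Once the containers are up, a quick probe from the host confirms the backend answers before you point the extension at it. A minimal stdlib sketch (the port matches the compose mapping above; `backend_url` and `probe` are illustrative helpers, not part of the project):

```python
import urllib.error
import urllib.request


def backend_url(host: str = "localhost", port: int = 8050) -> str:
    """Build the base URL the extension should be pointed at."""
    return f"http://{host}:{port}"


def probe(url: str, timeout: float = 5.0) -> bool:
    """Return True if an HTTP server answers at `url` (any status counts)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded (even a 404 means it's reachable).
        return True
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    base = backend_url()
    print(f"{base} reachable: {probe(base + '/docs')}")
```

If `probe` returns False while the container is "up", the process inside it likely crashed after start, which is what `docker compose logs` reveals.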
Hey @frgfm
Thanks for the quick response! BTW, what are `SUPERADMIN_LOGIN` & `SUPERADMIN_PWD`? Should I keep the values as is from the dummy `.env` you provided, or do I have to get them from somewhere?
You can keep the values as is. In dev mode, there's no need to update them; you only need the GitHub PAT :)
Understood. I followed the updated steps you shared, but unfortunately I'm unable to access the backend, and so unable to configure the endpoint.
Let's try to solve this efficiently:

```shell
docker compose up -d
docker compose logs
```

That should pinpoint the problem. If we can't easily debug it, let's move the discussion to the corresponding repo :+1:
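When the compose logs get noisy, filtering for failure markers narrows things down quickly. A small illustrative helper (assumes the Docker CLI is on PATH; `filter_errors` and `error_lines` are hypothetical names, not project code):

```python
import subprocess


def filter_errors(lines, markers=("Traceback", "ERROR", "Timeout")):
    """Keep only log lines that contain a failure marker."""
    return [ln for ln in lines if any(m in ln for m in markers)]


def error_lines(service: str):
    """Run `docker compose logs` for one service and keep the failures."""
    out = subprocess.run(
        ["docker", "compose", "logs", "--no-color", service],
        capture_output=True, text=True, check=False,
    ).stdout
    return filter_errors(out.splitlines())
```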
All three containers are up and running.
quack-companion-vscode-ollama-1
2024-03-14 18:02:43 time=2024-03-14T12:32:43.288Z level=INFO source=images.go:806 msg="total blobs: 6"
2024-03-14 18:02:43 time=2024-03-14T12:32:43.311Z level=INFO source=images.go:813 msg="total unused blobs removed: 0"
2024-03-14 18:02:43 time=2024-03-14T12:32:43.335Z level=INFO source=routes.go:1110 msg="Listening on [::]:11434 (version 0.1.29)"
2024-03-14 18:02:43 time=2024-03-14T12:32:43.337Z level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /tmp/ollama2068532240/runners ..."
2024-03-14 18:02:46 time=2024-03-14T12:32:46.612Z level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 rocm_v60000 cpu cuda_v11]"
2024-03-14 18:02:46 time=2024-03-14T12:32:46.612Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
2024-03-14 18:02:46 time=2024-03-14T12:32:46.613Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
2024-03-14 18:02:46 time=2024-03-14T12:32:46.614Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1 /usr/lib/wsl/drivers/nvami.inf_amd64_1c50bacc270a42bf/libnvidia-ml.so.1]"
2024-03-14 18:02:46 time=2024-03-14T12:32:46.631Z level=INFO source=gpu.go:82 msg="Nvidia GPU detected"
2024-03-14 18:02:46 time=2024-03-14T12:32:46.631Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
2024-03-14 18:02:46 time=2024-03-14T12:32:46.638Z level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 7.5"
2024-03-14 18:02:48 [GIN] 2024/03/14 - 12:32:48 | 200 | 28.6µs | 127.0.0.1 | HEAD "/"
2024-03-14 18:03:02 [GIN] 2024/03/14 - 12:33:02 | 200 | 14.060322718s | 127.0.0.1 | POST "/api/pull"
2024-03-14 18:03:07 [GIN] 2024/03/14 - 12:33:07 | 200 | 76.801µs | 127.0.0.1 | HEAD "/"
2024-03-14 18:03:08 [GIN] 2024/03/14 - 12:33:08 | 200 | 36.133948ms | 172.18.0.4 | GET "/api/tags"
2024-03-14 18:03:24 [GIN] 2024/03/14 - 12:33:24 | 200 | 17.518156122s | 127.0.0.1 | POST "/api/pull"
2024-03-14 18:03:24 [GIN] 2024/03/14 - 12:33:24 | 200 | 16.62510822s | 172.18.0.4 | POST "/api/pull"
2024-03-14 18:03:29 [GIN] 2024/03/14 - 12:33:29 | 200 | 35.5µs | 127.0.0.1 | HEAD "/"
2024-03-14 18:03:39 [GIN] 2024/03/14 - 12:33:39 | 200 | 9.350059648s | 127.0.0.1 | POST "/api/pull"
2024-03-14 18:03:44 [GIN] 2024/03/14 - 12:33:44 | 200 | 19.5µs | 127.0.0.1 | HEAD "/"
2024-03-14 18:03:50 [GIN] 2024/03/14 - 12:33:50 | 200 | 6.561658361s | 127.0.0.1 | POST "/api/pull"
2024-03-14 18:03:55 [GIN] 2024/03/14 - 12:33:55 | 200 | 20.3µs | 127.0.0.1 | HEAD "/"
2024-03-14 18:04:02 [GIN] 2024/03/14 - 12:34:02 | 200 | 6.837411112s | 127.0.0.1 | POST "/api/pull"
2024-03-14 18:04:07 [GIN] 2024/03/14 - 12:34:07 | 200 | 15.8µs | 127.0.0.1 | HEAD "/"
2024-03-14 18:04:13 [GIN] 2024/03/14 - 12:34:13 | 200 | 5.848962366s | 127.0.0.1 | POST "/api/pull"
2024-03-14 18:04:18 [GIN] 2024/03/14 - 12:34:18 | 200 | 15.6µs | 127.0.0.1 | HEAD "/"
2024-03-14 18:04:24 [GIN] 2024/03/14 - 12:34:24 | 200 | 5.927999608s | 127.0.0.1 | POST "/api/pull"
2024-03-14 18:04:29 [GIN] 2024/03/14 - 12:34:29 | 200 | 15.5µs | 127.0.0.1 | HEAD "/"
2024-03-14 18:04:35 [GIN] 2024/03/14 - 12:34:35 | 200 | 5.811429606s | 127.0.0.1 | POST "/api/pull"
2024-03-14 18:04:40 [GIN] 2024/03/14 - 12:34:40 | 200 | 18.9µs | 127.0.0.1 | HEAD "/"
2024-03-14 18:04:53 [GIN] 2024/03/14 - 12:34:53 | 200 | 13.067282922s | 127.0.0.1 | POST "/api/pull"
2024-03-14 18:04:58 [GIN] 2024/03/14 - 12:34:58 | 200 | 64.1µs | 127.0.0.1 | HEAD "/"
2024-03-14 18:05:06 [GIN] 2024/03/14 - 12:35:06 | 200 | 8.380227996s | 127.0.0.1 | POST "/api/pull"
2024-03-14 18:05:11 [GIN] 2024/03/14 - 12:35:11 | 200 | 20.6µs | 127.0.0.1 | HEAD "/"
2024-03-14 18:05:18 [GIN] 2024/03/14 - 12:35:18 | 200 | 6.093607765s | 127.0.0.1 | POST "/api/pull"
quack-companion-vscode-backend-1
2024-03-14 18:03:04 INFO: Will watch for changes in these directories: ['/app']
2024-03-14 18:03:04 INFO: Uvicorn running on http://0.0.0.0:8050 (Press CTRL+C to quit)
2024-03-14 18:03:04 INFO: Started reloader process [1] using WatchFiles
2024-03-14 18:03:08 INFO: Loading Ollama model...
2024-03-14 18:03:18 Process SpawnProcess-1:
2024-03-14 18:03:18 Traceback (most recent call last):
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 537, in _make_request
2024-03-14 18:03:18 response = conn.getresponse()
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/urllib3/connection.py", line 461, in getresponse
2024-03-14 18:03:18 httplib_response = super().getresponse()
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/http/client.py", line 1377, in getresponse
2024-03-14 18:03:18 response.begin()
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/http/client.py", line 320, in begin
2024-03-14 18:03:18 version, status, reason = self._read_status()
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/http/client.py", line 281, in _read_status
2024-03-14 18:03:18 line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/socket.py", line 704, in readinto
2024-03-14 18:03:18 return self._sock.recv_into(b)
2024-03-14 18:03:18 socket.timeout: timed out
2024-03-14 18:03:18
2024-03-14 18:03:18 The above exception was the direct cause of the following exception:
2024-03-14 18:03:18
2024-03-14 18:03:18 Traceback (most recent call last):
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 486, in send
2024-03-14 18:03:18 resp = conn.urlopen(
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 845, in urlopen
2024-03-14 18:03:18 retries = retries.increment(
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/urllib3/util/retry.py", line 470, in increment
2024-03-14 18:03:18 raise reraise(type(error), error, _stacktrace)
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/urllib3/util/util.py", line 39, in reraise
2024-03-14 18:03:18 raise value
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 791, in urlopen
2024-03-14 18:03:18 response = self._make_request(
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 539, in _make_request
2024-03-14 18:03:18 self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 371, in _raise_timeout
2024-03-14 18:03:18 raise ReadTimeoutError(
2024-03-14 18:03:18 urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='ollama', port=11434): Read timed out. (read timeout=10)
2024-03-14 18:03:18
2024-03-14 18:03:18 During handling of the above exception, another exception occurred:
2024-03-14 18:03:18
2024-03-14 18:03:18 Traceback (most recent call last):
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
2024-03-14 18:03:18 self.run()
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/multiprocessing/process.py", line 108, in run
2024-03-14 18:03:18 self._target(*self._args, **self._kwargs)
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
2024-03-14 18:03:18 target(sockets=sockets)
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/uvicorn/server.py", line 61, in run
2024-03-14 18:03:18 return asyncio.run(self.serve(sockets=sockets))
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/asyncio/runners.py", line 44, in run
2024-03-14 18:03:18 return loop.run_until_complete(main)
2024-03-14 18:03:18 File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/uvicorn/server.py", line 68, in serve
2024-03-14 18:03:18 config.load()
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/uvicorn/config.py", line 467, in load
2024-03-14 18:03:18 self.loaded_app = import_from_string(self.app)
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/uvicorn/importer.py", line 21, in import_from_string
2024-03-14 18:03:18 module = importlib.import_module(module_str)
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
2024-03-14 18:03:18 return _bootstrap._gcd_import(name[level:], package, level)
2024-03-14 18:03:18 File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
2024-03-14 18:03:18 File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
2024-03-14 18:03:18 File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
2024-03-14 18:03:18 File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
2024-03-14 18:03:18 File "<frozen importlib._bootstrap_external>", line 850, in exec_module
2024-03-14 18:03:18 File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
2024-03-14 18:03:18 File "/app/app/main.py", line 17, in <module>
2024-03-14 18:03:18 from app.api.api_v1.router import api_router
2024-03-14 18:03:18 File "/app/app/api/api_v1/router.py", line 8, in <module>
2024-03-14 18:03:18 from app.api.api_v1.endpoints import code, guidelines, login, repos, users
2024-03-14 18:03:18 File "/app/app/api/api_v1/endpoints/code.py", line 14, in <module>
2024-03-14 18:03:18 from app.services.ollama import ollama_client
2024-03-14 18:03:18 File "/app/app/services/ollama.py", line 196, in <module>
2024-03-14 18:03:18 ollama_client = OllamaClient(settings.OLLAMA_ENDPOINT, settings.OLLAMA_MODEL)
2024-03-14 18:03:18 File "/app/app/services/ollama.py", line 84, in __init__
2024-03-14 18:03:18 response = requests.post(f"{self.endpoint}/api/pull", json={"name": model_name, "stream": False}, timeout=10)
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/requests/api.py", line 115, in post
2024-03-14 18:03:18 return request("post", url, data=data, json=json, **kwargs)
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/requests/api.py", line 59, in request
2024-03-14 18:03:18 return session.request(method=method, url=url, **kwargs)
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 589, in request
2024-03-14 18:03:18 resp = self.send(prep, **send_kwargs)
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 703, in send
2024-03-14 18:03:18 r = adapter.send(request, **kwargs)
2024-03-14 18:03:18 File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 532, in send
2024-03-14 18:03:18 raise ReadTimeout(e, request=request)
2024-03-14 18:03:18 requests.exceptions.ReadTimeout: HTTPConnectionPool(host='ollama', port=11434): Read timed out. (read timeout=10)
quack-companion-vscode-db-1
2024-03-14 18:02:43 2024-03-14 12:32:43.018 UTC [1] LOG: starting PostgreSQL 15.6 on x86_64-pc-linux-musl, compiled by gcc (Alpine 13.2.1_git20231014) 13.2.1 20231014, 64-bit
2024-03-14 18:02:43 2024-03-14 12:32:43.018 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2024-03-14 18:02:43 2024-03-14 12:32:43.018 UTC [1] LOG: listening on IPv6 address "::", port 5432
2024-03-14 18:02:43 2024-03-14 12:32:43.037 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-03-14 18:02:43 2024-03-14 12:32:43.055 UTC [24] LOG: database system was shut down at 2024-03-14 09:56:36 UTC
2024-03-14 18:02:43 2024-03-14 12:32:43.074 UTC [1] LOG: database system is ready to accept connections
2024-03-14 18:02:42
2024-03-14 18:02:42 PostgreSQL Database directory appears to contain a database; Skipping initialization
2024-03-14 18:02:42
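The backend traceback above points at the crash: `app/services/ollama.py` POSTs to `/api/pull` with a hard `timeout=10`, and the Ollama logs show pulls routinely taking longer than 10 seconds, so the worker dies at import time. A hedged sketch of a more tolerant startup call, using only the stdlib (`pull_model` mirrors the failing request from the traceback; `with_retry` is a hypothetical helper, not the repo's actual code):

```python
import json
import time
import urllib.request


def pull_model(endpoint: str, model: str, timeout: float) -> None:
    """POST /api/pull, like the failing call in app/services/ollama.py."""
    req = urllib.request.Request(
        f"{endpoint}/api/pull",
        data=json.dumps({"name": model, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=timeout).read()


def with_retry(fn, attempts: int = 3, delay: float = 5.0):
    """Retry a flaky startup step instead of crashing the worker process."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)


# Illustrative usage: a generous timeout plus retries rides out a slow first pull.
# with_retry(lambda: pull_model("http://ollama:11434", "tinydolphin:1.1b-v2.8-q4_K_M", timeout=120))
```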
I think it's linked to Ollama pulling the model at boot. Try pulling the model manually:

```shell
docker compose exec -T ollama ollama pull tinydolphin:1.1b-v2.8-q4_K_M
```

Then restart everything:

```shell
docker compose down && docker compose up -d
```
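To confirm the pull actually landed before restarting, Ollama's `GET /api/tags` endpoint lists locally available models. A stdlib sketch (note: in the compose above, port 11434 is only `expose`d to the internal network, so you'd run this inside the container network or temporarily publish the port; `model_names` is an illustrative helper):

```python
import json
import urllib.request


def model_names(payload: dict) -> list:
    """Extract model names from an Ollama /api/tags response payload."""
    return [m["name"] for m in payload.get("models", [])]


def local_models(endpoint: str = "http://localhost:11434") -> list:
    """List models known to an Ollama server via GET /api/tags."""
    with urllib.request.urlopen(f"{endpoint}/api/tags", timeout=5) as resp:
        return model_names(json.load(resp))
```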
Everything is up and running, but this URL is not reachable: http://localhost:8050/
There are no errors in the docker logs either.
VS Code Ollama
2024-03-14 22:33:14 time=2024-03-14T17:03:14.732Z level=INFO source=images.go:806 msg="total blobs: 6"
2024-03-14 22:33:14 time=2024-03-14T17:03:14.773Z level=INFO source=images.go:813 msg="total unused blobs removed: 0"
2024-03-14 22:33:14 time=2024-03-14T17:03:14.800Z level=INFO source=routes.go:1110 msg="Listening on [::]:11434 (version 0.1.29)"
2024-03-14 22:33:14 time=2024-03-14T17:03:14.801Z level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /tmp/ollama1014305697/runners ..."
2024-03-14 22:33:18 time=2024-03-14T17:03:18.159Z level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [rocm_v60000 cuda_v11 cpu_avx2 cpu_avx cpu]"
2024-03-14 22:33:18 time=2024-03-14T17:03:18.159Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
2024-03-14 22:33:18 time=2024-03-14T17:03:18.159Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
2024-03-14 22:33:18 time=2024-03-14T17:03:18.161Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: [/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1 /usr/lib/wsl/drivers/nvami.inf_amd64_1c50bacc270a42bf/libnvidia-ml.so.1]"
2024-03-14 22:33:18 time=2024-03-14T17:03:18.177Z level=INFO source=gpu.go:82 msg="Nvidia GPU detected"
2024-03-14 22:33:18 time=2024-03-14T17:03:18.177Z level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
2024-03-14 22:33:18 time=2024-03-14T17:03:18.184Z level=INFO source=gpu.go:119 msg="CUDA Compute Capability detected: 7.5"
2024-03-14 22:33:39 time=2024-03-14T17:03:39.180Z level=INFO source=routes.go:843 msg="skipping file: registry.ollama.ai/library/tinydolphin:latest"
2024-03-14 22:33:19 [GIN] 2024/03/14 - 17:03:19 | 200 | 27.299µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:33:35 [GIN] 2024/03/14 - 17:03:35 | 200 | 15.789996109s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:33:39 [GIN] 2024/03/14 - 17:03:39 | 200 | 42.582775ms | 172.18.0.4 | GET "/api/tags"
2024-03-14 22:33:40 [GIN] 2024/03/14 - 17:03:40 | 200 | 16.3µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:33:51 [GIN] 2024/03/14 - 17:03:51 | 200 | 12.616661018s | 172.18.0.4 | POST "/api/pull"
2024-03-14 22:33:51 [GIN] 2024/03/14 - 17:03:51 | 200 | 11.767167883s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:33:56 [GIN] 2024/03/14 - 17:03:56 | 200 | 35.5µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:34:08 [GIN] 2024/03/14 - 17:04:08 | 200 | 11.298408272s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:34:13 [GIN] 2024/03/14 - 17:04:13 | 200 | 15.8µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:34:24 [GIN] 2024/03/14 - 17:04:24 | 200 | 11.103495854s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:34:29 [GIN] 2024/03/14 - 17:04:29 | 200 | 16µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:34:42 [GIN] 2024/03/14 - 17:04:42 | 200 | 13.03163242s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:34:47 [GIN] 2024/03/14 - 17:04:47 | 200 | 16µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:34:56 [GIN] 2024/03/14 - 17:04:56 | 200 | 8.795510037s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:35:01 [GIN] 2024/03/14 - 17:05:01 | 200 | 18.3µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:35:07 [GIN] 2024/03/14 - 17:05:07 | 200 | 6.634934724s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:35:12 [GIN] 2024/03/14 - 17:05:12 | 200 | 17.1µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:35:18 [GIN] 2024/03/14 - 17:05:18 | 200 | 6.085032001s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:35:24 [GIN] 2024/03/14 - 17:05:24 | 200 | 15.9µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:35:29 [GIN] 2024/03/14 - 17:05:29 | 200 | 5.941380987s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:35:34 [GIN] 2024/03/14 - 17:05:34 | 200 | 15.8µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:35:41 [GIN] 2024/03/14 - 17:05:41 | 200 | 6.682313414s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:35:46 [GIN] 2024/03/14 - 17:05:46 | 200 | 16.2µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:35:52 [GIN] 2024/03/14 - 17:05:52 | 200 | 5.921270241s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:35:57 [GIN] 2024/03/14 - 17:05:57 | 200 | 16.9µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:36:03 [GIN] 2024/03/14 - 17:06:03 | 200 | 5.951876925s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:36:08 [GIN] 2024/03/14 - 17:06:08 | 200 | 16.8µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:36:14 [GIN] 2024/03/14 - 17:06:14 | 200 | 5.934647566s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:36:19 [GIN] 2024/03/14 - 17:06:19 | 200 | 32.6µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:36:25 [GIN] 2024/03/14 - 17:06:25 | 200 | 5.898130207s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:36:30 [GIN] 2024/03/14 - 17:06:30 | 200 | 21.8µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:36:36 [GIN] 2024/03/14 - 17:06:36 | 200 | 6.222399809s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:36:41 [GIN] 2024/03/14 - 17:06:41 | 200 | 16.3µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:36:47 [GIN] 2024/03/14 - 17:06:47 | 200 | 5.914838755s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:36:52 [GIN] 2024/03/14 - 17:06:52 | 200 | 16.9µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:36:58 [GIN] 2024/03/14 - 17:06:58 | 200 | 5.926132729s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:37:03 [GIN] 2024/03/14 - 17:07:03 | 200 | 16.9µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:37:09 [GIN] 2024/03/14 - 17:07:09 | 200 | 6.032289047s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:37:14 [GIN] 2024/03/14 - 17:07:14 | 200 | 16.5µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:37:20 [GIN] 2024/03/14 - 17:07:20 | 200 | 6.105824756s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:37:25 [GIN] 2024/03/14 - 17:07:25 | 200 | 16.6µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:37:32 [GIN] 2024/03/14 - 17:07:32 | 200 | 6.158703255s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:37:37 [GIN] 2024/03/14 - 17:07:37 | 200 | 17.2µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:37:43 [GIN] 2024/03/14 - 17:07:43 | 200 | 6.359558093s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:37:48 [GIN] 2024/03/14 - 17:07:48 | 200 | 38.8µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:37:54 [GIN] 2024/03/14 - 17:07:54 | 200 | 5.999303699s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:37:59 [GIN] 2024/03/14 - 17:07:59 | 200 | 31.7µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:38:05 [GIN] 2024/03/14 - 17:08:05 | 200 | 5.830244051s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:38:10 [GIN] 2024/03/14 - 17:08:10 | 200 | 16.6µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:38:16 [GIN] 2024/03/14 - 17:08:16 | 200 | 6.069911053s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:38:21 [GIN] 2024/03/14 - 17:08:21 | 200 | 15.9µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:38:28 [GIN] 2024/03/14 - 17:08:28 | 200 | 6.639106059s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:38:33 [GIN] 2024/03/14 - 17:08:33 | 200 | 20.4µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:38:39 [GIN] 2024/03/14 - 17:08:39 | 200 | 6.04255753s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:38:44 [GIN] 2024/03/14 - 17:08:44 | 200 | 17.7µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:39:06 [GIN] 2024/03/14 - 17:09:06 | 200 | 22.294957251s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:39:11 [GIN] 2024/03/14 - 17:09:11 | 200 | 38.098µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:39:29 [GIN] 2024/03/14 - 17:09:29 | 200 | 17.805226736s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:39:34 [GIN] 2024/03/14 - 17:09:34 | 200 | 26.099µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:39:48 [GIN] 2024/03/14 - 17:09:48 | 200 | 13.665972156s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:39:53 [GIN] 2024/03/14 - 17:09:53 | 200 | 82.9µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:40:00 [GIN] 2024/03/14 - 17:10:00 | 200 | 6.852333908s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:40:05 [GIN] 2024/03/14 - 17:10:05 | 200 | 18.5µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:40:11 [GIN] 2024/03/14 - 17:10:11 | 200 | 6.107715692s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:40:16 [GIN] 2024/03/14 - 17:10:16 | 200 | 16.1µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:40:23 [GIN] 2024/03/14 - 17:10:23 | 200 | 6.650439721s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:40:28 [GIN] 2024/03/14 - 17:10:28 | 200 | 15.7µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:40:34 [GIN] 2024/03/14 - 17:10:34 | 200 | 6.239630271s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:40:39 [GIN] 2024/03/14 - 17:10:39 | 200 | 16.9µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:40:46 [GIN] 2024/03/14 - 17:10:46 | 200 | 6.648542688s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:40:51 [GIN] 2024/03/14 - 17:10:51 | 200 | 17.7µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:40:58 [GIN] 2024/03/14 - 17:10:58 | 200 | 7.426581435s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:41:03 [GIN] 2024/03/14 - 17:11:03 | 200 | 40.8µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:41:15 [GIN] 2024/03/14 - 17:11:15 | 200 | 11.441846115s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:41:20 [GIN] 2024/03/14 - 17:11:20 | 200 | 16.8µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:41:26 [GIN] 2024/03/14 - 17:11:26 | 200 | 6.337974323s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:41:31 [GIN] 2024/03/14 - 17:11:31 | 200 | 16.1µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:41:37 [GIN] 2024/03/14 - 17:11:37 | 200 | 5.742951022s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:41:42 [GIN] 2024/03/14 - 17:11:42 | 200 | 15.8µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:41:48 [GIN] 2024/03/14 - 17:11:48 | 200 | 6.593301257s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:41:53 [GIN] 2024/03/14 - 17:11:53 | 200 | 16.2µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:41:59 [GIN] 2024/03/14 - 17:11:59 | 200 | 5.984509867s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:42:04 [GIN] 2024/03/14 - 17:12:04 | 200 | 15.4µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:42:10 [GIN] 2024/03/14 - 17:12:10 | 200 | 5.937435282s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:42:15 [GIN] 2024/03/14 - 17:12:15 | 200 | 37.5µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:42:21 [GIN] 2024/03/14 - 17:12:21 | 200 | 6.068857319s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:42:26 [GIN] 2024/03/14 - 17:12:26 | 200 | 30.1µs | 127.0.0.1 | HEAD "/"
2024-03-14 22:42:32 [GIN] 2024/03/14 - 17:12:32 | 200 | 5.870468403s | 127.0.0.1 | POST "/api/pull"
2024-03-14 22:42:37 [GIN] 2024/03/14 - 17:12:37 | 200 | 26.5µs | 127.0.0.1 | HEAD "/"
VS Code DB
2024-03-14 22:33:13 2024-03-14 17:03:13.912 UTC [1] LOG: starting PostgreSQL 15.6 on x86_64-pc-linux-musl, compiled by gcc (Alpine 13.2.1_git20231014) 13.2.1 20231014, 64-bit
2024-03-14 22:33:13 2024-03-14 17:03:13.912 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2024-03-14 22:33:13 2024-03-14 17:03:13.912 UTC [1] LOG: listening on IPv6 address "::", port 5432
2024-03-14 22:33:13 2024-03-14 17:03:13.927 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-03-14 22:33:13 2024-03-14 17:03:13.941 UTC [24] LOG: database system was interrupted; last known up at 2024-03-14 14:38:32 UTC
2024-03-14 22:33:17 2024-03-14 17:03:17.459 UTC [24] LOG: database system was not properly shut down; automatic recovery in progress
2024-03-14 22:33:17 2024-03-14 17:03:17.468 UTC [24] LOG: redo starts at 0/1547850
2024-03-14 22:33:17 2024-03-14 17:03:17.468 UTC [24] LOG: invalid record length at 0/1547888: wanted 24, got 0
2024-03-14 22:33:17 2024-03-14 17:03:17.468 UTC [24] LOG: redo done at 0/1547850 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
2024-03-14 22:33:17 2024-03-14 17:03:17.497 UTC [22] LOG: checkpoint starting: end-of-recovery immediate wait
2024-03-14 22:33:17 2024-03-14 17:03:17.574 UTC [22] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.028 s, sync=0.008 s, total=0.085 s; sync files=2, longest=0.004 s, average=0.004 s; distance=0 kB, estimate=0 kB
2024-03-14 22:33:17 2024-03-14 17:03:17.588 UTC [1] LOG: database system is ready to accept connections
2024-03-14 22:33:13
2024-03-14 22:33:13 PostgreSQL Database directory appears to contain a database; Skipping initialization
2024-03-14 22:33:13
VS Code Backend
2024-03-14 22:33:36 INFO: Will watch for changes in these directories: ['/app']
2024-03-14 22:33:36 INFO: Uvicorn running on http://0.0.0.0:8050 (Press CTRL+C to quit)
2024-03-14 22:33:36 INFO: Started reloader process [1] using WatchFiles
2024-03-14 22:33:39 INFO: Loading Ollama model...
2024-03-14 22:33:49 Process SpawnProcess-1:
2024-03-14 22:33:49 Traceback (most recent call last):
2024-03-14 22:33:49 File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 537, in _make_request
2024-03-14 22:33:49 response = conn.getresponse()
2024-03-14 22:33:49 File "/usr/local/lib/python3.9/site-packages/urllib3/connection.py", line 461, in getresponse
2024-03-14 22:33:49 httplib_response = super().getresponse()
2024-03-14 22:33:49 File "/usr/local/lib/python3.9/http/client.py", line 1377, in getresponse
2024-03-14 22:33:49 response.begin()
2024-03-14 22:33:49 File "/usr/local/lib/python3.9/http/client.py", line 320, in begin
2024-03-14 22:33:49 version, status, reason = self._read_status()
2024-03-14 22:33:49 File "/usr/local/lib/python3.9/http/client.py", line 281, in _read_status
2024-03-14 22:33:49 line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
2024-03-14 22:33:49 File "/usr/local/lib/python3.9/socket.py", line 704, in readinto
2024-03-14 22:33:49 return self._sock.recv_into(b)
2024-03-14 22:33:49 socket.timeout: timed out
2024-03-14 22:33:49
2024-03-14 22:33:49 The above exception was the direct cause of the following exception:
2024-03-14 22:33:49
2024-03-14 22:33:49 Traceback (most recent call last):
2024-03-14 22:33:49 File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 486, in send
2024-03-14 22:33:49 resp = conn.urlopen(
2024-03-14 22:33:49 File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 845, in urlopen
2024-03-14 22:33:49 retries = retries.increment(
2024-03-14 22:33:49 File "/usr/local/lib/python3.9/site-packages/urllib3/util/retry.py", line 470, in increment
2024-03-14 22:33:49 raise reraise(type(error), error, _stacktrace)
2024-03-14 22:33:49 File "/usr/local/lib/python3.9/site-packages/urllib3/util/util.py", line 39, in reraise
2024-03-14 22:33:49 raise value
2024-03-14 22:33:49 File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 791, in urlopen
2024-03-14 22:33:49 response = self._make_request(
2024-03-14 22:33:49 File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 539, in _make_request
2024-03-14 22:33:49 self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
2024-03-14 22:33:49 File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 371, in _raise_timeout
2024-03-14 22:33:49 raise ReadTimeoutError(
2024-03-14 22:33:49 urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='ollama', port=11434): Read timed out. (read timeout=10)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/local/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.9/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/usr/local/lib/python3.9/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/local/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.9/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/usr/local/lib/python3.9/site-packages/uvicorn/config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/local/lib/python3.9/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/app/app/main.py", line 17, in <module>
    from app.api.api_v1.router import api_router
  File "/app/app/api/api_v1/router.py", line 8, in <module>
    from app.api.api_v1.endpoints import code, guidelines, login, repos, users
  File "/app/app/api/api_v1/endpoints/code.py", line 14, in <module>
    from app.services.ollama import ollama_client
  File "/app/app/services/ollama.py", line 196, in <module>
    ollama_client = OllamaClient(settings.OLLAMA_ENDPOINT, settings.OLLAMA_MODEL)
  File "/app/app/services/ollama.py", line 84, in __init__
    response = requests.post(f"{self.endpoint}/api/pull", json={"name": model_name, "stream": False}, timeout=10)
  File "/usr/local/lib/python3.9/site-packages/requests/api.py", line 115, in post
    return request("post", url, data=data, json=json, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 532, in send
    raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPConnectionPool(host='ollama', port=11434): Read timed out. (read timeout=10)
Well, the backend doesn't look like it's up and running, sorry about that! It fails to boot because it can't reach Ollama. Can you confirm that the model appears if you run:
docker compose exec -T ollama ollama list
If it does, try:
docker compose stop backend && docker compose start backend
You should then be able to navigate to http://localhost:8050/docs to confirm that the backend is running :crossed_fingers:
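For reference, the call that times out is the model pull the backend performs at startup (the `requests.post(.../api/pull, ...)` line in the traceback). Here is a minimal Python sketch to reproduce it outside the app with a more generous timeout; it assumes Ollama is reachable at `localhost:11434` from wherever you run it (the compose file only exposes the port on the internal network, so you may need to publish it or run this inside a container):

```python
import json

def build_pull_request(endpoint: str, model: str) -> tuple[str, bytes]:
    # Mirror the payload OllamaClient sends at startup (see the traceback):
    # POST {endpoint}/api/pull with {"name": model, "stream": false}
    url = f"{endpoint.rstrip('/')}/api/pull"
    body = json.dumps({"name": model, "stream": False}).encode()
    return url, body

# Usage (requires a reachable Ollama server):
#   import urllib.request
#   url, body = build_pull_request("http://localhost:11434", "tinydolphin:1.1b-v2.8-q4_K_M")
#   req = urllib.request.Request(url, data=body, headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req, timeout=600).status)
```

If this still times out with a 600s timeout, the pull itself is slow (or stuck), not the container networking.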
Yes, the model appears.
Now that's strange. Let's move to https://github.com/quack-ai/contribution-api/issues, as this really isn't related to the extension. Hopefully it's only a small local issue with your Docker setup :crossed_fingers:
Hey @themujahidkhan :wave: Just FYI, the day after my last message, I cleaned up the entire backend repo. So perhaps just pulling the latest image will do. Try this Docker Compose file:
version: '3.7'

services:
  ollama:
    image: ollama/ollama:0.1.29
    expose:
      - 11434
    volumes:
      - "$HOME/.ollama:/root/.ollama"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    command: serve
    healthcheck:
      test: ["CMD-SHELL", "ollama --help"]
      interval: 10s
      timeout: 5s
      retries: 3
  backend:
    image: quackai/contribution-api:latest
    command: uvicorn app.main:app --reload --host 0.0.0.0 --port 5050 --proxy-headers
    ports:
      - "5050:5050"
    environment:
      - POSTGRES_URL=postgresql+asyncpg://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db/${POSTGRES_DB}
      - OLLAMA_ENDPOINT=http://ollama:11434
      - OLLAMA_MODEL=${OLLAMA_MODEL}
      - SECRET_KEY=${SECRET_KEY}
      - SUPERADMIN_GH_PAT=${SUPERADMIN_GH_PAT}
      - SUPERADMIN_LOGIN=${SUPERADMIN_LOGIN}
      - SUPERADMIN_PWD=${SUPERADMIN_PWD}
      - GH_OAUTH_ID=${GH_OAUTH_ID}
      - GH_OAUTH_SECRET=${GH_OAUTH_SECRET}
      - SUPPORT_EMAIL=${SUPPORT_EMAIL}
      - DEBUG=false
    depends_on:
      db:
        condition: service_healthy
      ollama:
        condition: service_healthy
  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - 5432
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    healthcheck:
      test: ["CMD-SHELL", "sh -c 'pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}'"]
      interval: 10s
      timeout: 3s
      retries: 3

volumes:
  postgres_data:
  ollama:
And run the services with:
docker compose pull backend
docker compose up -d --wait ollama
docker compose exec -T ollama ollama pull tinydolphin:1.1b-v2.8-q4_K_M
docker compose up -d --wait
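Once the stack is up, a small polling sketch can confirm the backend answers. The `/docs` path is the interactive docs page mentioned earlier, and port 5050 matches the compose file above; both are assumptions if you changed the mapping:

```python
import time
import urllib.error
import urllib.request

def wait_for_backend(url: str, retries: int = 30, delay: float = 2.0) -> bool:
    """Poll `url` until it answers HTTP 200, or give up after `retries` attempts."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet; retry after a short pause
        time.sleep(delay)
    return False

# Usage once the stack is started:
#   wait_for_backend("http://localhost:5050/docs")  # True when the backend is healthy
```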
Hopefully the problem will be gone then! Let me know how it goes :)
🚀 Feature
We should add a copy button to code blocks.
Most LLM chat services have one; typically, this is how it looks in ChatGPT:
Motivation & pitch
Since the goal of the chat interface is to speed up development and remove bottlenecks for developers, it produces code quite often. We need to make that interaction more natural.
Alternatives
Here are some resources on how to do this (even though the result isn't visually appealing):
Additional context
No response