anibal-aguila opened 1 week ago
I have a similar issue. The deployment is stuck creating at 10%. I have deployed it on K8s.
Same but mine is stuck at 0%
I tried v0.6.313 releases for all components and it gets stuck at 100%
The upper half is workspace and the lower one is account.
MongoDB output
MinIO is created but empty
I checked the next day, and it has failed to create but the UI says creating.
I am having the same issue trying to run it on k8s. I tracked the error message down to line 203 of platform/server/workspace-service/src/service.ts and it seems to be from a failed fetch request from updateWorkspaceInfo on line 166. I don't know typescript well enough to get much further than that. It could be a set up issue or it could be a bug in the code, but something is happening at the fetch request.
@FireflyHacker I think the update fails because of the transactor pod. In the workspace deployment, the transactor URL should be corrected: it should be `ws://transactor` instead of `ws://transactor:3333`. The reason is that the transactor service is exposed on port 80 by default. Alternatively, you can change the transactor service port to 3333 (the same as the target port).
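A minimal sketch of the second option (making the Service port match the URL), assuming the transactor container listens on 3333; the selector labels are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: transactor
spec:
  selector:
    app: transactor      # assumed pod label
  ports:
    - port: 3333         # now matches ws://transactor:3333
      targetPort: 3333   # port the container listens on
```

With this in place the existing `ws://transactor:3333` URL resolves without editing the deployments.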
Also stuck at "creation in progress"... 100%
```yaml
# NOTE: parts of this file were lost when pasting; only the surviving keys are shown
services:
  mongodb:
    image: "mongo:4.4"
    container_name: mongodb
    ports:
      - 27017:27017
    restart: unless-stopped

  minio:
    image: "minio/minio"
    command: server /data --address ":9000" --console-address ":9001"
    volumes:
      - files:/data
    restart: unless-stopped

  elastic:
    image: "elasticsearch:7.14.2"
    command: |
      /bin/sh -c "./bin/elasticsearch-plugin list | grep -q ingest-attachment || yes | ./bin/elasticsearch-plugin install --silent ingest-attachment; /usr/local/bin/docker-entrypoint.sh eswrapper"
    environment:
      - http.cors.allow-origin=${CORS_ALLOW_ORIGIN}
    healthcheck:
      interval: 20s
      retries: 10
      test: curl -s http://localhost:9200/_cluster/health | grep -vq '"status":"red"'
    restart: unless-stopped

  account:
    image: hardcoreeng/account:${HULY_VERSION}
    environment:
      - ACCOUNT_PORT=${ACCOUNT_PORT}
    restart: unless-stopped

  workspace:
    image: hardcoreeng/workspace:${HULY_VERSION}
    environment:
      - NOTIFY_INBOX_ONLY=${NOTIFY_INBOX_ONLY}
    restart: unless-stopped

  front:
    image: hardcoreeng/front:${HULY_VERSION}
    environment:
      - LAST_NAME_FIRST=${LAST_NAME_FIRST}
    restart: unless-stopped

  collaborator:
    image: hardcoreeng/collaborator:${HULY_VERSION}
    environment:
      - STORAGE_CONFIG=minio|minio?accessKey=${MINIO_ACCESS_KEY}&secretKey=${MINIO_SECRET_KEY}
    restart: unless-stopped

  transactor:
    image: hardcoreeng/transactor:${HULY_VERSION}
    environment:
      - LAST_NAME_FIRST=${LAST_NAME_FIRST}
    restart: unless-stopped

  rekoni:
    image: hardcoreeng/rekoni-service:${HULY_VERSION}
    restart: unless-stopped

volumes:
  db:
  files:
  elastic:
  etcd:
```
```json
{
  "result": [
    {
      "workspace": "test222222222",
      "workspaceUrl": "test222222222",
      "version": { "major": 0, "minor": 0, "patch": 0 },
      "branding": "huly",
      "workspaceName": "test222222222",
      "disabled": true,
      "region": "",
      "mode": "pending-creation",
      "progress": 0,
      "createdOn": 1729005506907,
      "lastVisit": 1729005567288,
      "createdBy": "test2",
      "lastProcessingTime": 0,
      "attempts": 0,
      "workspaceId": "w-test2-test22222222-670e87c2-59534b17fb-ca746a"
    }
  ]
}
```
Here is the netstat output from the workspace container:
```
Proto Recv-Q Send-Q Local Address      Foreign Address  State
tcp        0      0 127.0.0.11:33615   0.0.0.0:*        LISTEN
tcp6       0      0 :::3334            :::*             LISTEN
udp        0      0 127.0.0.11:59903   0.0.0.0:*
```
@F04C You are using `SERVICE_PORT=3334` for the transactor, so in the account service you also need to set `TRANSACTOR_URL=ws://transactor:3334;ws://${SERVER_ADDRESS}:3334`, similar to the workspace service.
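In docker-compose terms, the account service would then carry something like the following (a sketch; the variable names follow the compose file pasted earlier in this thread):

```yaml
account:
  image: hardcoreeng/account:${HULY_VERSION}
  environment:
    # internal URL first, then the externally reachable one, separated by ';'
    - TRANSACTOR_URL=ws://transactor:3334;ws://${SERVER_ADDRESS}:3334
```

The key point is that the port in `TRANSACTOR_URL` must match the port the transactor actually listens on (3334 here, per the netstat output above).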
Thank you ❤️
@muradbozik You were right. I tried changing everything from service.huly.example to service.huly.domain.com in the yaml files, but that did not work. Changing all the ingress files to just the name of the service, and changing the config to just the external IP from my load balancer, seems to have resolved the issues (at least for v0.6.325). Oh, and I had to add a Postgres container to the k8s deployment.
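For anyone else hitting this, a rough sketch of what that ingress change amounts to; the service name and port here are assumptions based on this thread, not the project's official manifests:

```yaml
# Illustrative only: the backend references the bare in-cluster service name
# instead of a service.huly.example-style hostname.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: front
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: front     # just the service name, not a FQDN
                port:
                  number: 8080  # assumed front port
```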
Thank you for the advice!
Hi, using Traefik and the latest workspace image, the instance never starts..
Thanks in advance,