francescomiliani opened this issue 2 days ago (status: Open)
Hi @francescomiliani, do you see the same issue with all paladin nodes? Did you try http://localhost:31648/ui or http://localhost:31748/ui?
@francescomiliani are you able to access the JSON RPC API? Specifically, if you use the POST method against http://127.0.0.1:31548 (without a body or any other arguments), do you see the following?
{
  "jsonrpc": "2.0",
  "id": "1",
  "error": {
    "code": -32600,
    "message": "PD020700: Invalid JSON/RPC request data"
  }
}
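For reference, that check can also be run from a terminal. A minimal sketch, assuming the 31548 node port used in this thread:

# POST with an empty body; a healthy node should answer immediately with the
# PD020700 "Invalid JSON/RPC request data" error shown above rather than hanging.
curl -s -X POST http://127.0.0.1:31548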
Hi @hosie, thank you for your reply. Below are the screenshots taken from the Chrome browser.
Hi @gabriel-indik, thank you for your reply.
Here is the result, executed with Postman:
And here is a telnet against the suggested endpoint:
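For reference, that raw TCP reachability check looks roughly like this (host and port taken from this thread):

# If the node port is exposed, telnet connects; "connection refused" or a hang
# helps narrow down where the traffic is being dropped.
telnet 127.0.0.1 31548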
Current suspicion is that something is not quite right in the docker and/or kind networking layer. Port 31548 on localhost should be an open node port on your kind cluster, and the internal k8s networking should be routing that to your paladin node pod via the defined svc. If the port was not opened, we would expect connection refused rather than a hang.
If you could run docker ps, docker info, kind version and kubectl get svc -n paladin, it might show something. I would be particularly interested to see, from that output, which versions of docker and kind you have.
Another thing to try, just to rule out a problem with the container itself: if you exec into one node (e.g. kubectl exec -it paladin-node1-0 -- /bin/sh) and try to connect to another node with curl http://paladin-node2:8548, you should see:
{"jsonrpc":"2.0","id":"1","error":{"code":-32600,"message":"PD020700: Invalid JSON/RPC request data"}}
Hi @hosie, thanks again. Here are the commands and outputs:
docker ps
docker info
Client:
 Version:    25.0.3
 Context:    desktop-linux
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.12.1-desktop.4
    Path:     /Users/siaeasyshop/.docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.24.6-desktop.1
    Path:     /Users/siaeasyshop/.docker/cli-plugins/docker-compose
  debug: Get a shell into any image or container. (Docker Inc.)
    Version:  0.0.24
    Path:     /Users/siaeasyshop/.docker/cli-plugins/docker-debug
  dev: Docker Dev Environments (Docker Inc.)
    Version:  v0.1.0
    Path:     /Users/siaeasyshop/.docker/cli-plugins/docker-dev
  extension: Manages Docker extensions (Docker Inc.)
    Version:  v0.2.22
    Path:     /Users/siaeasyshop/.docker/cli-plugins/docker-extension
  feedback: Provide feedback, right in your terminal! (Docker Inc.)
    Version:  v1.0.4
    Path:     /Users/siaeasyshop/.docker/cli-plugins/docker-feedback
  init: Creates Docker-related starter files for your project (Docker Inc.)
    Version:  v1.0.1
    Path:     /Users/siaeasyshop/.docker/cli-plugins/docker-init
  sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
    Version:  0.6.0
    Path:     /Users/siaeasyshop/.docker/cli-plugins/docker-sbom
  scout: Docker Scout (Docker Inc.)
    Version:  v1.5.0
    Path:     /Users/siaeasyshop/.docker/cli-plugins/docker-scout

Server:
 Containers: 2
  Running: 2
  Paused: 0
  Stopped: 0
 Images: 1
 Server Version: 25.0.3
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: ae07eda36dd25f8a1b98dfbf587313b99c0190bb
 runc version: v1.1.12-0-g51d5e94
 init version: de40ad0
 Security Options:
  seccomp
   Profile: unconfined
  cgroupns
 Kernel Version: 6.6.16-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 3.825GiB
 Name: docker-desktop
 ID: f6216e93-9795-4347-bfc8-619ea2450ab9
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Experimental: false
 Insecure Registries:
  hubproxy.docker.internal:5555
  127.0.0.0/8
 Live Restore Enabled: false
WARNING: daemon is not using the default seccomp profile
kind version
kubectl get svc -n paladin
kubectl exec -it paladin-node1-0 -- /bin/sh followed by curl http://paladin-node2:8548/
Thanks again for your support
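One additional check, not suggested above but sometimes useful to separate a kind/Docker port-mapping problem from a problem inside the cluster (a sketch only; the service name paladin-node1 and the 8548 target port are assumptions based on this thread):

# Bypass the kind node port by forwarding a local port straight to the service,
# then repeat the JSON-RPC probe against the forwarded port.
kubectl port-forward -n paladin svc/paladin-node1 18548:8548
curl -s -X POST http://127.0.0.1:18548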
Thanks @francescomiliani. The only things I can see that are significantly different from my env are Server Version: 27.2.0, CPUs: 10, Total Memory: 15.6GiB and Architecture: aarch64.
I am pretty sure it should not need 10 CPUs and 15GiB of memory, but it may be worth increasing your resources in Docker Desktop to see if that makes a difference, and/or checking whether your Docker Desktop has any pending updates.
In the meantime, I'll see if I can find anyone else who has tested on an x86 Mac. Could you run uname -a to confirm the version and arch of your host OS?
Hi @hosie, here is my uname -a output:
Darwin MacBook-Pro-di-Sviluppo.local 21.2.0 Darwin Kernel Version 21.2.0: Sun Nov 28 20:28:54 PST 2021; root:xnu-8019.61.5~1/RELEASE_X86_64 x86_64
Should we update Docker, for instance, or could it be a problem related to kind?
@francescomiliani
I do know of one other person who has been using the same kind version as you without issue, but I do not know of anyone who has been using x86 on Mac.
How persistent is this problem for you? E.g. if you restart the pods (kubectl delete pod paladin-node1-0), or restart the kind cluster, or even delete it (kind delete cluster --name paladin) and follow the getting started instructions from scratch, do you still see the same issue?
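For reference, those escalating steps roughly correspond to the following (pod and cluster names as used earlier in this thread):

# 1) Restart a single paladin node pod; its controller recreates it
kubectl delete pod paladin-node1-0

# 2) If that does not help, tear the kind cluster down completely
kind delete cluster --name paladin

# 3) ...then re-run the getting started installation from scratch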
@hosie It seems we solved the problem 😎
We did the following steps:
1) Reinstalled kind from the binary (roughly as sketched below): we had originally installed kind via brew, so we uninstalled it and re-installed it by downloading the binary, moving it under /Applications, and updating the bash_profile. Here is the guide: kind
2) Updated Docker to: Docker version 27.3.1, build ce12230
Thank you for your support :)
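A minimal sketch of that binary install on an Intel Mac, assuming a recent kind release (the exact version, and the /Applications location used above, are specific to this report; the official guide normally places the binary in a directory already on PATH):

# Download the kind binary for macOS on x86_64 (substitute the release you want)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-darwin-amd64
chmod +x ./kind
# Move it under /Applications and add that directory to PATH,
# mirroring the steps described in the comment above
mv ./kind /Applications/kind
echo 'export PATH="/Applications:$PATH"' >> ~/.bash_profile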
That's great news @francescomiliani . Thank you for persisting with it.
What happened?
Hello, we experienced issues during the installation on macOS, which led to errors when accessing it via the web.
What did you expect to happen?
A correctly working installation that is accessible via the web UI.
How can we reproduce it (as minimally and precisely as possible)?
By installing Paladin on macOS with the version specified in the OS version section below.
Anything else we need to know?
Attached logs: logs.zip
OS version
Hello,
Installing the software on Windows using the following link, https://lf-decentralized-trust-labs.github.io/paladin/head/getting-started/installation/#outcome, we did not encounter any issues. However, on macOS, after the installation, we were unable to access it via the web. The configuration is: macOS Monterey Version 12.1, MacBook Pro (Retina, 13-inch, Early 2015), Processor: 2.9 GHz dual-core Intel Core i5, Memory: 8 GB 1867 MHz DDR3.
The logs file is attached.
The pod verification commands produce the following outputs:
kubectl get pods
kubectl get service
kubectl get scd
kubectl get reg
The web access remains pending indefinitely.
The same happens with curl:
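A hedged example of that check, using the node-port and /ui path pattern from this thread (the exact port for node1 is an assumption); on the affected macOS host both requests hang instead of returning:

# Browser-equivalent request for the UI
curl -v --max-time 10 http://localhost:31548/ui
# JSON-RPC probe; a working node returns the PD020700 error immediately
curl -v --max-time 10 -X POST http://localhost:31548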