mikhail-manuilov opened this issue 7 years ago
Probably a problem with the cluster secret used by each peer.
Note that this project is solely for running a number of automated tests on ipfs/ipfs-cluster and not for deploying any of them for real-world-use within kubernetes.
I understand kubernetes here is purely for testing purposes, but just want to clarify some stuff if it's possible.
Why don't the tests require the same secret across all peers?
I thought about using the same secret, but I ran into a strange issue: the /data/ipfs and /data/ipfs-cluster folders are reset each time the pod dies (and starts again). So I can't change the secret in service.json and restart. I looked into /usr/local/bin/start-daemons.sh; maybe it's purely an Azure problem, but files outside these two directories are not changed. I don't have this issue in Docker with two local volumes, one for each daemon.
Why don't the tests require the same secret across all peers?
They do; AFAIK they just run a custom container which ensures that.
Other than that, I am not sure why your /data folders are not persistent.
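For reference, one way to give every peer the same secret in Kubernetes is to store it in a Secret object and expose it to the container as an environment variable. This is only a sketch, assuming the image's start script (start-daemons.sh here) honours a CLUSTER_SECRET environment variable; that assumption is worth verifying against the actual script before relying on it.

# Sketch: a shared Secret holding the 32-byte hex cluster secret.
apiVersion: v1
kind: Secret
metadata:
  name: ipfs-cluster-secret
type: Opaque
stringData:
  # e.g. generated once with: od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n'
  cluster-secret: "<hex-encoded-32-byte-secret>"

Each peer's container spec would then reference it, for example:

      env:
      - name: CLUSTER_SECRET    # assumed to be read by the start script
        valueFrom:
          secretKeyRef:
            name: ipfs-cluster-secret
            key: cluster-secret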
Seems like I've found the root of my problem: the VOLUME directives in the ipfs/go-ipfs and ipfs/ipfs-cluster Dockerfiles. It looks like I need multiple volumes to run the pods, which is not expected behaviour at all.
I don't fully understand how VOLUME directives affect kubernetes, but maybe you want to open an issue and explain? We can fix the Dockerfiles if there's a way to improve them...
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: ipfs-cluster-bootstrapper
  labels:
    name: ipfs-cluster
    app: ipfs
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "5001"
    prometheus.io/path: "debug/metrics/prometheus"
spec:
  replicas: 1
  serviceName: ipfs-cluster-svc
  template:
    metadata:
      labels:
        name: ipfs-cluster
        role: bootstrapper
        app: ipfs
    spec:
      containers:
      - name: ipfs-cluster-bootstrapper
        image: "ipfs/ipfs-cluster:latest"
        command: ["/usr/local/bin/start-daemons.sh"]
        args:
        - --loglevel
        - debug
        - --debug
        ports:
        - containerPort: 4001
          name: "swarm"
          protocol: "TCP"
        - containerPort: 5001
          name: "api"
          protocol: "TCP"
        - containerPort: 9094
          name: "clusterapi"
          protocol: "TCP"
        - containerPort: 9095
          name: "clusterproxy"
          protocol: "TCP"
        - containerPort: 9096
          name: "cluster"
          protocol: "TCP"
        volumeMounts:
        - mountPath: /data/ipfs
          name: data-ipfs
        - mountPath: /data/ipfs-cluster
          name: data-ipfs-cluster
  volumeClaimTemplates:
  - metadata:
      annotations:
        volume.alpha.kubernetes.io/storage-class: default
      name: data-ipfs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
  - metadata:
      annotations:
        volume.alpha.kubernetes.io/storage-class: default
      name: data-ipfs-cluster
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
This is how to fix this behaviour: the StatefulSet NEEDS two volumes.
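As a side note, Kubernetes requires the serviceName referenced by a StatefulSet (ipfs-cluster-svc above) to be a headless Service governing its pods. A minimal sketch, with the port list copied from the container spec above and the selector assumed to match the pod template labels:

apiVersion: v1
kind: Service
metadata:
  name: ipfs-cluster-svc
  labels:
    app: ipfs
spec:
  clusterIP: None        # headless: gives each pod a stable DNS name
  selector:
    name: ipfs-cluster   # assumed to match the pod template labels above
  ports:
  - name: swarm
    port: 4001
  - name: api
    port: 5001
  - name: clusterapi
    port: 9094
  - name: clusterproxy
    port: 9095
  - name: cluster
    port: 9096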
@mikhail-manuilov, would you want to send in a pull request with the changes you're proposing? I'll be happy to look it over and approve it once I confirm it meets our requirements.
Since these kubernetes definition files are for testing purposes only, and the one posted above has been tested only in the Azure cloud, I'm not sure a pull request makes sense. Also, I suppose having two volumes for one container is no good; maybe the Dockerfiles for ipfs/go-ipfs and ipfs/ipfs-cluster should be changed instead. Since ipfs/ipfs-cluster uses FROM ipfs/go-ipfs, I suppose creating one VOLUME for /data in ipfs/go-ipfs and deleting VOLUME $IPFS_CLUSTER_PATH from the ipfs/ipfs-cluster Dockerfile will do the job.
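An alternative on the Kubernetes side, without touching either Dockerfile, is to back both directories with a single PersistentVolumeClaim and mount it twice using subPath. This is only a sketch of the parts of the StatefulSet above that would change, not something either project ships:

# In the container spec: one volume, mounted at two paths via subPath.
        volumeMounts:
        - mountPath: /data/ipfs
          name: data
          subPath: ipfs
        - mountPath: /data/ipfs-cluster
          name: data
          subPath: ipfs-cluster
# In the StatefulSet spec: a single claim template sized for both directories.
  volumeClaimTemplates:
  - metadata:
      annotations:
        volume.alpha.kubernetes.io/storage-class: default
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi

Whether this interacts cleanly with the images' VOLUME directives is worth testing; the Dockerfile change you describe may still be the cleaner long-term fix.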
Hello, I've created 4 ipfs-cluster nodes in kubernetes using the examples from here.
I also created a service to interconnect the nodes:
Then I ran a script to add peers to the bootstrap peer (as in init.sh).
This is the log of the bootstrap pod:
Tcpdump shows a normal TCP/IP flow: