git clone https://github.com/rucio/k8s-tutorial/
Install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
Install helm: https://helm.sh/docs/intro/install/
Install minikube if you do not have a pre-existing Kubernetes cluster: https://kubernetes.io/docs/tasks/tools/install-minikube/
NOTE: All following commands should be run from the top-level directory of this repository.
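To confirm the tools are available before continuing, you can check their versions (a quick sanity check; any reasonably recent version of each tool should work):
kubectl version --client   # kubectl client version
helm version               # helm version
minikube version           # minikube version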
You can skip this step if you have already set up a Kubernetes cluster.
Set up minikube using the provided setup script:
./scripts/setup-minikube.sh
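If you prefer to start the cluster by hand instead of using the script, a minimal sketch looks like this (the resource sizes are illustrative assumptions, not values taken from the script):
minikube start --cpus 4 --memory 8g   # start a local single-node cluster
minikube status                       # verify the cluster is up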
You can perform either an automatic deployment or a manual deployment, as documented below.
Automatic deployment:
./scripts/deploy-rucio.sh
Manual deployment: first add the required Helm chart repositories:
helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add rucio https://rucio.github.io/helm-charts
kubectl apply -k ./secrets
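You can verify that the secrets were created (the exact names depend on what the ./secrets kustomization defines, so the list will vary):
kubectl get secrets   # list the secrets created by the kustomization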
If you have done this step in a previous tutorial deployment on this cluster, the existing Postgres PersistentVolumeClaim must be deleted first. Check whether it exists:
kubectl get pvc data-postgres-postgresql-0
If the PVC exists, the command will return output like:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
data-postgres-postgresql-0 Bound ... 8Gi RWO standard <unset> 4s
If the PVC does not exist, the command will return this message:
Error from server (NotFound): persistentvolumeclaims "data-postgres-postgresql-0" not found
You can skip to the next section if the PVC does not exist. Otherwise, remove its finalizers and delete it:
kubectl patch pvc data-postgres-postgresql-0 -p '{"metadata":{"finalizers":null}}'
kubectl delete pvc data-postgres-postgresql-0
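For convenience, the check, patch, and deletion can be combined into one guarded snippet (a sketch assuming the default PVC name used above):
if kubectl get pvc data-postgres-postgresql-0 >/dev/null 2>&1; then
  kubectl patch pvc data-postgres-postgresql-0 -p '{"metadata":{"finalizers":null}}'
  kubectl delete pvc data-postgres-postgresql-0
fi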
Uninstall postgres if it is already installed:
helm uninstall postgres
Then install it:
helm install postgres bitnami/postgresql -f manifests/values-postgres.yaml
kubectl get pod postgres-postgresql-0
Once the Postgres setup is complete, you should see STATUS: Running.
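Instead of polling manually, you can block until the pod is ready (assuming the pod keeps the default name postgres-postgresql-0 used above):
kubectl wait --for=condition=Ready pod/postgres-postgresql-0 --timeout=300s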
Run the database initialization pod, follow its logs, and check its status:
kubectl apply -f manifests/init-pod.yaml
kubectl logs -f init
kubectl get pod init
Once the init container pod setup is complete, you should see STATUS: Completed.
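Similarly, you can wait for the init pod to finish instead of polling (this requires a kubectl recent enough to support jsonpath waits, roughly v1.23 or later):
kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/init --timeout=600s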
Install the Rucio server and wait for its rollout:
helm install server rucio/rucio-server -f manifests/values-server.yaml
kubectl rollout status deployment server-rucio-server
Deploy the XRootD storage servers:
kubectl apply -f manifests/xrd.yaml
Deploy the FTS database and wait for its rollout:
kubectl apply -f manifests/ftsdb.yaml
kubectl rollout status deployment fts-mysql
Deploy the FTS server and wait for its rollout:
kubectl apply -f manifests/fts.yaml
kubectl rollout status deployment fts-server
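To sanity-check the FTS server after the rollout, you can peek at its recent log output (the deployment name fts-server comes from the rollout command above):
kubectl logs deployment/fts-server --tail=20   # recent FTS server log lines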
Install the Rucio daemons:
helm install daemons rucio/rucio-daemons -f manifests/values-daemons.yaml
This command might take a few minutes.
If the helm installation fails, remove the previous failed installation before re-installing:
helm list # list all helm installations
helm delete $installation
Sometimes a leftover job also exists. You can easily remove it:
kubectl get jobs # get all jobs
kubectl delete jobs/$jobname
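Putting both cleanup steps together, a cautious reset before re-installing could look like this (a sketch; review what helm list and kubectl get jobs report before deleting anything):
helm list                     # confirm the name of the failed release
helm delete daemons           # 'daemons' is the release name used above
kubectl get jobs              # confirm which leftover jobs exist
kubectl delete jobs/<jobname> # delete the leftover job reported above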
Once the setup is complete, you can interact with Rucio via a client.
You can either run the provided script to showcase the usage of Rucio, or you can manually run the Rucio commands described in the Manual client usage section.
To run the scripted demonstration:
./scripts/use-rucio.sh
For manual usage, start the client container pod:
kubectl apply -f manifests/client.yaml
kubectl get pod client
Once the client container pod setup is complete, you should see STATUS: Running.
Open a shell in the client container:
kubectl exec -it client -- /bin/bash
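Inside the client container, you can first confirm that the client can authenticate against the server:
rucio whoami   # should report the account you are mapped to (root in this tutorial)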
Create the Rucio Storage Elements (RSEs):
rucio-admin rse add XRD1
rucio-admin rse add XRD2
rucio-admin rse add XRD3
Add the protocol definitions for the storage servers:
rucio-admin rse add-protocol --hostname xrd1 --scheme root --prefix //rucio --port 1094 --impl rucio.rse.protocols.gfal.Default --domain-json '{"wan": {"read": 1, "write": 1, "delete": 1, "third_party_copy_read": 1, "third_party_copy_write": 1}, "lan": {"read": 1, "write": 1, "delete": 1}}' XRD1
rucio-admin rse add-protocol --hostname xrd2 --scheme root --prefix //rucio --port 1094 --impl rucio.rse.protocols.gfal.Default --domain-json '{"wan": {"read": 1, "write": 1, "delete": 1, "third_party_copy_read": 1, "third_party_copy_write": 1}, "lan": {"read": 1, "write": 1, "delete": 1}}' XRD2
rucio-admin rse add-protocol --hostname xrd3 --scheme root --prefix //rucio --port 1094 --impl rucio.rse.protocols.gfal.Default --domain-json '{"wan": {"read": 1, "write": 1, "delete": 1, "third_party_copy_read": 1, "third_party_copy_write": 1}, "lan": {"read": 1, "write": 1, "delete": 1}}' XRD3
Point each RSE at the FTS server so it can handle transfers:
rucio-admin rse set-attribute --rse XRD1 --key fts --value https://fts:8446
rucio-admin rse set-attribute --rse XRD2 --key fts --value https://fts:8446
rucio-admin rse set-attribute --rse XRD3 --key fts --value https://fts:8446
Note that 8446 is the port exposed by the fts-server pod. You can view the ports opened by a pod with kubectl describe pod PODNAME.
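You can inspect the protocols and attributes configured above for each RSE (shown for XRD1; the same works for the others):
rucio-admin rse info XRD1   # shows protocols and attributes such as fts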
Set distances between the RSEs so transfers can be scheduled between any pair:
rucio-admin rse add-distance --distance 1 --ranking 1 XRD1 XRD2
rucio-admin rse add-distance --distance 1 --ranking 1 XRD1 XRD3
rucio-admin rse add-distance --distance 1 --ranking 1 XRD2 XRD1
rucio-admin rse add-distance --distance 1 --ranking 1 XRD2 XRD3
rucio-admin rse add-distance --distance 1 --ranking 1 XRD3 XRD1
rucio-admin rse add-distance --distance 1 --ranking 1 XRD3 XRD2
Grant the root account quota on all RSEs:
rucio-admin account set-limits root XRD1 -1
rucio-admin account set-limits root XRD2 -1
rucio-admin account set-limits root XRD3 -1
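A value of -1 means an unlimited quota. You can read the limits back to verify (shown for XRD1):
rucio-admin account get-limits root XRD1   # should report an unlimited quota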
Create a scope for the test data:
rucio-admin scope add --account root --scope test
Create four 10 MB files of random data:
dd if=/dev/urandom of=file1 bs=10M count=1
dd if=/dev/urandom of=file2 bs=10M count=1
dd if=/dev/urandom of=file3 bs=10M count=1
dd if=/dev/urandom of=file4 bs=10M count=1
Upload the files to the RSEs:
rucio upload --rse XRD1 --scope test file1
rucio upload --rse XRD1 --scope test file2
rucio upload --rse XRD2 --scope test file3
rucio upload --rse XRD2 --scope test file4
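After the uploads, each file should have exactly one replica on the RSE it was uploaded to. You can verify this per file:
rucio list-file-replicas test:file1   # should list a single replica on XRD1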
Group the files into datasets and a container:
rucio add-dataset test:dataset1
rucio attach test:dataset1 test:file1 test:file2
rucio add-dataset test:dataset2
rucio attach test:dataset2 test:file3 test:file4
rucio add-container test:container
rucio attach test:container test:dataset1 test:dataset2
rucio add-dataset test:dataset3
rucio attach test:dataset3 test:file4
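You can inspect the resulting hierarchy; list-content shows the direct children of a collection:
rucio list-content test:container   # should show dataset1 and dataset2
rucio list-content test:dataset1    # should show file1 and file2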
Create a replication rule that asks for one copy of the container on XRD3:
rucio add-rule test:container 1 XRD3
This command will output a rule ID, which can also be obtained via:
rucio list-rules test:container
rucio rule-info <rule_id>
As the daemons run with long sleep cycles (e.g. 30 seconds, 60 seconds) by default, this could take a while. You can monitor the output of the daemon containers to see what they are doing.
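For example, to follow a daemon while the rule is being satisfied (pod names vary per deployment, so look them up first):
kubectl get pods | grep daemons   # find the daemon pod names
kubectl logs -f <NAME>            # follow the log of one daemon pod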
Enable kubectl shell completion:
Bash: source <(kubectl completion bash)
Zsh: source <(kubectl completion zsh)
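To make completion permanent, append it to your shell profile (shown for Bash; adjust for Zsh accordingly):
echo 'source <(kubectl completion bash)' >> ~/.bashrc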
Other useful commands:
kubectl get pods                    # list pods in the default namespace
kubectl get pods --all-namespaces   # list pods in all namespaces
kubectl logs <NAME>                 # show the logs of a pod
kubectl logs -f <NAME>              # follow the logs of a pod
helm repo update                    # refresh the helm chart repositories
minikube stop                       # stop the minikube cluster