Closed: VC16dec closed this issue 6 years ago
@VC16dec
How should a config file look in the case of minio, i.e. is there a specific backupStorageProvider?
Please see https://github.com/heptio/ark/blob/v0.9.3/examples/minio/10-ark-config.yaml for an example config with minio.
Is there a specific bucket to be used for restic?
Please also see the example config above.
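As a rough sketch (the resticLocation line is an addition that's only needed if you use the restic integration, and both bucket names are placeholders that must actually exist in minio), a minimal Config for minio looks like:
apiVersion: ark.heptio.com/v1
kind: Config
metadata:
  namespace: heptio-ark
  name: default
backupStorageProvider:
  name: aws                        # minio is used through the S3-compatible "aws" provider
  bucket: ark                      # bucket for backup metadata/tarballs
  resticLocation: <restic-bucket>  # placeholder; separate bucket for restic data
  config:
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.heptio-ark.svc:9000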
Will it handle PV backup and restoration with minio?
No, minio is only used for backup storage. If you want to back up PVs, you'll need to specify an appropriate volumeStorageProvider. See https://github.com/heptio/ark/blob/adc29a2db0d5d3474ae426c4b28d2083f5b04afb/docs/support-matrix.md for more details.
Can we restore the backup to another server (it may not be in the same cluster)? That is, can we take a previously created backup tarball and restore it on a server where Ark is already installed?
If you point both Ark instances at the same minio bucket, yes. See https://heptio.github.io/ark/v0.9.0/use-cases for more details (especially restoreOnlyMode). We're also working on backing up to multiple locations and replicating backup data across locations for future releases.
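As a sketch (double-check the field name and placement against the Config docs for your version; the endpoint and bucket are placeholders), the Config on the second, restore-side cluster would point at the same bucket and enable restoreOnlyMode:
apiVersion: ark.heptio.com/v1
kind: Config
metadata:
  namespace: heptio-ark
  name: default
backupStorageProvider:
  name: aws
  bucket: ark                                    # same bucket the first cluster writes to
  config:
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://<shared-minio-endpoint>:9000   # placeholder; must be reachable from both clusters
restoreOnlyMode: true                            # this cluster only restores, it never writes backups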
Does Ark with minio support backup and restore of a glusterfs StorageClass?
You can use Ark's integration with restic to back up and restore gluster PersistentVolumes.
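For example (namespace, pod, and volume names are placeholders), you opt a pod's volumes into restic backups by annotating the pod:
kubectl -n <namespace> annotate pod/<pod-name> backup.ark.heptio.com/backup-volumes=<volume-name-in-pod-spec>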
Hi,
Thanks for the response above. I tried to back up a PV with restic on my local K8s deployment, but I'm still stuck on: error getting volume path on host: expected one matching path, got 0
1) Modified the Config, 2) added the annotation, 3) created the backup, but it did not work.
Config used:
apiVersion: ark.heptio.com/v1
kind: Config
metadata:
  namespace: heptio-ark
  name: default
volumeStorageProvider:
  name: restic
  bucket: restic-bucket
backupStorageProvider:
  name: aws
  bucket: ark
  resticLocation: my-restic-bucket
  config:
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.heptio-ark.svc:9000
ark restic repo get
NAME STATUS LAST MAINTENANCE
default Ready 2018-08-28 08:48:16 +0000 UTC
The PV I tried to back up (kubectl get pv --all-namespaces | grep -i appc):
bk1-appc-data0 1Gi RWO Retain Bound default/bk1-appc-data-bk1-appc-0 bk1-appc-data 44m
ark backup get test -o yaml
apiVersion: ark.heptio.com/v1
kind: Backup
metadata:
  creationTimestamp: 2018-08-28T08:48:13Z
  name: test
  namespace: heptio-ark
  resourceVersion: "25925200"
  selfLink: /apis/ark.heptio.com/v1/namespaces/heptio-ark/backups/test
  uid: 1905097a-aa9f-11e8-b76f-02b328358f45
spec:
  excludedNamespaces: null
  excludedResources: null
  hooks:
    resources: null
  includeClusterResources: null
  includedNamespaces:
  - '*'
  includedResources: null
  labelSelector:
    matchLabels:
      release: bk1
  ttl: 720h0m0s
status:
  completionTimestamp: 2018-08-28T08:48:17Z
  expiration: 2018-09-27T08:48:13Z
  phase: Failed
  startTimestamp: 2018-08-28T08:48:13Z
  validationErrors: null
  version: 1
  volumeBackups: null
kubectl -n heptio-ark get podvolumebackups -l ark.heptio.com/backup-name=test -o yaml
apiVersion: v1
items:
- apiVersion: ark.heptio.com/v1
  kind: PodVolumeBackup
  metadata:
    clusterName: ""
    creationTimestamp: 2018-08-28T08:48:16Z
    deletionGracePeriodSeconds: null
    deletionTimestamp: null
    generateName: test-
    generation: 0
    initializers: null
    labels:
      ark.heptio.com/backup-name: test
      ark.heptio.com/backup-uid: 1905097a-aa9f-11e8-b76f-02b328358f45
    name: test-qt79l
    namespace: heptio-ark
    ownerReferences:
    - apiVersion: ark.heptio.com/v1
      controller: true
      kind: Backup
      name: test
      uid: 1905097a-aa9f-11e8-b76f-02b328358f45
    resourceVersion: "25925197"
    selfLink: /apis/ark.heptio.com/v1/namespaces/heptio-ark/podvolumebackups/test-qt79l
    uid: 1afb642f-aa9f-11e8-b76f-02b328358f45
  spec:
    node: k8s-1
    pod:
      kind: Pod
      name: bk1-appc-0
      namespace: default
      uid: 8cdcd97f-aa9a-11e8-b76f-02b328358f45
    repoIdentifier: s3:http://minio.heptio-ark.svc:9000/my-restic-bucket/default
    tags:
      backup: test
      backup-uid: 1905097a-aa9f-11e8-b76f-02b328358f45
      ns: default
      pod: bk1-appc-0
      pod-uid: 8cdcd97f-aa9a-11e8-b76f-02b328358f45
      volume: bk1-appc-data
    volume: bk1-appc-data
  status:
    message: 'error getting volume path on host: expected one matching path, got 0'
    path: ""
    phase: Failed
    snapshotID: ""
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
@skriss do you remember what the issue(s) were the last time we ran into error getting volume path on host: expected one matching path, got 0?
Hi,
Maybe I am missing the step below. What needs to be done for this step of the restic integration: "Create a new bucket for restic to store its data in, and give the heptio-ark IAM user access to it, similarly to the main Ark bucket you've already set up."?
I think "bucket" is AWS terminology. Here I am using local storage; in the config I just set the restic location to my-restic-bucket by default. Do I need to create a link or mount some path? How will it map to my local Ubuntu server's storage?
@VC16dec what do you mean by local storage? Are you talking about minio?
By local I mean I am not trying this on any cloud-specific volume. The K8s cluster is deployed on an Ubuntu bare-metal server, and yes, minio is the backupStorageProvider used.
Did you create a bucket in minio called my-restic-bucket?
No, I didn't find any documentation related to that. Can you please tell me how to do that? And please also confirm the volumeStorageProvider config value.
@VC16dec if you take a look at https://github.com/heptio/ark/blob/master/examples/minio/00-minio-deployment.yaml#L89, this Job is what initially sets up minio for Ark (in our example). To create an additional bucket within minio to use for restic, you'd want to do something similar -- the easiest thing to do would be to use that exact Job spec again (you may have to change the name or delete the existing completed one if it's still there), but on L112, change the last part of the command from "...mb -p ark/ark" to "...mb -p ark/my-restic-bucket".
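For illustration, the changed entry in that Job's command list would look like this (using the my-restic-bucket name from your Config's resticLocation):
- "mc --config-folder=/config config host add ark http://minio:9000 minio minio123 && mc --config-folder=/config mb -p ark/my-restic-bucket"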
@VC16dec can you also kubectl describe the PV(s) that you're trying to back up, as well as the pod(s) that use that volume?
@VC16dec
And please also confirm the volumeStorageProvider config value.
If you're only trying to use restic, you don't need to configure this.
OK, thanks a lot for the quick support. For the additional bucket, I have to define the same Job again: the first one with the ark bucket, the second with my restic bucket. We need both, right?
I will test it and confirm.
@VC16dec correct, 2 separate buckets
Hi, I added the restic bucket too, but I still get the same error. The minio deployment:
apiVersion: batch/v1
kind: Job
metadata:
  namespace: heptio-ark
  name: minio-setup
  labels:
    component: minio
spec:
  template:
    metadata:
      name: minio-setup
    spec:
      restartPolicy: OnFailure
      volumes:
      - name: config
        emptyDir: {}
      containers:
      - name: mc
        image: minio/mc:latest
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - "mc --config-folder=/config config host add ark http://minio:9000 minio minio123 && mc --config-folder=/config mb -p ark/ark"
        - "mc --config-folder=/config config host add ark http://minio:9000 minio minio123 && mc --config-folder=/config mb -p ark/my-restic-bucket"
        volumeMounts:
        - name: config
          mountPath: "/config"
kubectl -n heptio-ark get podvolumebackups -l ark.heptio.com/backup-name=test5 -o yaml
apiVersion: v1
items:
- apiVersion: ark.heptio.com/v1
  kind: PodVolumeBackup
  metadata:
    clusterName: ""
    creationTimestamp: 2018-08-29T06:40:16Z
    deletionGracePeriodSeconds: null
    deletionTimestamp: null
    generateName: test5-
    generation: 0
    initializers: null
    labels:
      ark.heptio.com/backup-name: test5
      ark.heptio.com/backup-uid: 63c6216f-ab56-11e8-b76f-02b328358f45
    name: test5-pv4q7
    namespace: heptio-ark
    ownerReferences:
    - apiVersion: ark.heptio.com/v1
      controller: true
      kind: Backup
      name: test5
      uid: 63c6216f-ab56-11e8-b76f-02b328358f45
    resourceVersion: "26014276"
    selfLink: /apis/ark.heptio.com/v1/namespaces/heptio-ark/podvolumebackups/test5-pv4q7
    uid: 64005254-ab56-11e8-b76f-02b328358f45
  spec:
    node: k8s-1
    pod:
      kind: Pod
      name: bk1-appc-0
      namespace: default
      uid: 8cdcd97f-aa9a-11e8-b76f-02b328358f45
    repoIdentifier: s3:http://minio.heptio-ark.svc:9000/my-restic-bucket/default
    tags:
      backup: test5
      backup-uid: 63c6216f-ab56-11e8-b76f-02b328358f45
      ns: default
      pod: bk1-appc-0
      pod-uid: 8cdcd97f-aa9a-11e8-b76f-02b328358f45
      volume: bk1-appc-data
    volume: bk1-appc-data
  status:
    message: 'error getting volume path on host: expected one matching path, got 0'
    path: ""
    phase: Failed
    snapshotID: ""
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
kubectl describe pv bk1-appc-data
Name: bk1-appc-data0
Labels: app=bk1-appc
chart=appc-2.0.0
heritage=Tiller
name=bk1-appc
release=bk1
Annotations: pv.kubernetes.io/bound-by-controller=yes
StorageClass: bk1-appc-data
Status: Bound
Claim: default/bk1-appc-data-bk1-appc-0
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 1Gi
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /dockerdata-nfs/bk1/appc/mdsal0
HostPathType:
Events: <none>
kubectl describe -n default pod bk1-appc-0
Name: bk1-appc-0
Namespace: default
Node: k8s-1/10.53.202.92
Start Time: Tue, 28 Aug 2018 08:15:41 +0000
Labels: app=appc
controller-revision-hash=bk1-appc-65bc75fbbb
release=bk1
Annotations: backup.ark.heptio.com/backup-volumes=bk1-appc-data
kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"StatefulSet","namespace":"default","name":"bk1-appc","uid":"8cc7d57c-aa9a-11e8-b76f-02b328358f45","apiVers...
Status: Running
IP: 10.42.242.111
Controlled By: StatefulSet/bk1-appc
Init Containers:
appc-readiness:
Container ID: docker://ddd50ace44d79179e7c281123777fa4976a3d06098474f62db5312a4f1fd3046
Image: oomk8s/readiness-check:2.0.0
Image ID: docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
Port: <none>
Command:
/root/ready.py
Args:
--container-name
appc-db
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 28 Aug 2018 08:15:48 +0000
Finished: Tue, 28 Aug 2018 08:16:04 +0000
Ready: True
Restart Count: 0
Environment:
NAMESPACE: default (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9270k (ro)
Containers:
appc:
Container ID: docker://a536b22dc9e87e06feb12b3a4f68883d5462721456a2b018130756eb5a42fc36
Image: nexus3.onap.org:10001/onap/appc-image:1.4.0-SNAPSHOT-latest
Image ID: docker-pullable://nexus3.onap.org:10001/onap/appc-image@sha256:f8e234c1d87041e1fa30b1ffc48aaf7e2f758bb08acf6885c7d57c188dffa047
Ports: 8181/TCP, 1830/TCP
Command:
/opt/appc/bin/startODL.sh
State: Running
Started: Wed, 29 Aug 2018 06:42:57 +0000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 29 Aug 2018 06:28:49 +0000
Finished: Wed, 29 Aug 2018 06:42:55 +0000
Ready: False
Restart Count: 94
Readiness: exec [/opt/appc/bin/health_check.sh] delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'db-root-password' in secret 'bk1-appc'> Optional: false
SDNC_CONFIG_DIR: /opt/onap/appc/data/properties
APPC_CONFIG_DIR: /opt/onap/appc/data/properties
DMAAP_TOPIC_ENV: SUCCESS
ENABLE_AAF: false
ENABLE_ODL_CLUSTER: true
APPC_REPLICAS: 1
Mounts:
/etc/localtime from localtime (ro)
/opt/onap/appc/bin/health_check.sh from onap-appc-bin (rw)
/opt/onap/appc/bin/installAppcDb.sh from onap-appc-bin (rw)
/opt/onap/appc/bin/startODL.sh from onap-appc-bin (rw)
/opt/onap/appc/data/properties/aaa-app-config.xml from onap-appc-data-properties (rw)
/opt/onap/appc/data/properties/aaiclient.properties from onap-appc-data-properties (rw)
/opt/onap/appc/data/properties/appc.properties from onap-appc-data-properties (rw)
/opt/onap/appc/data/properties/dblib.properties from onap-appc-data-properties (rw)
/opt/onap/appc/data/properties/svclogic.properties from onap-appc-data-properties (rw)
/opt/onap/appc/svclogic/bin/showActiveGraphs.sh from onap-appc-svclogic-bin (rw)
/opt/onap/appc/svclogic/config/svclogic.properties from onap-appc-svclogic-config (rw)
/opt/onap/ccsdk/bin/installSdncDb.sh from onap-sdnc-bin (rw)
/opt/onap/ccsdk/bin/startODL.sh from onap-sdnc-bin (rw)
/opt/onap/ccsdk/data/properties/aaiclient.properties from onap-sdnc-data-properties (rw)
/opt/onap/ccsdk/data/properties/dblib.properties from onap-sdnc-data-properties (rw)
/opt/onap/ccsdk/data/properties/svclogic.properties from onap-sdnc-data-properties (rw)
/opt/onap/ccsdk/svclogic/bin/showActiveGraphs.sh from onap-sdnc-svclogic-bin (rw)
/opt/onap/ccsdk/svclogic/config/svclogic.properties from onap-sdnc-svclogic-config (rw)
/opt/opendaylight/current/daexim from bk1-appc-data (rw)
/opt/opendaylight/current/etc/org.ops4j.pax.logging.cfg from log-config (rw)
/var/log/onap from logs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9270k (ro)
filebeat-onap:
Container ID: docker://cbb8c7f8be436c30b5115ed7af2ca017c211306991f94d5087590bedbc954d0f
Image: docker.elastic.co/beats/filebeat:5.5.0
Image ID: docker-pullable://docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942
Port: <none>
State: Running
Started: Tue, 28 Aug 2018 08:25:19 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/usr/share/filebeat/data from data-filebeat (rw)
/usr/share/filebeat/filebeat.yml from filebeat-conf (rw)
/var/log/onap from logs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9270k (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
bk1-appc-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: bk1-appc-data-bk1-appc-0
ReadOnly: false
localtime:
Type: HostPath (bare host directory volume)
Path: /etc/localtime
HostPathType:
filebeat-conf:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: bk1-appc-filebeat
Optional: false
log-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: bk1-appc-logging-cfg
Optional: false
logs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
data-filebeat:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
onap-appc-data-properties:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: bk1-appc-onap-appc-data-properties
Optional: false
onap-appc-svclogic-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: bk1-appc-onap-appc-svclogic-config
Optional: false
onap-appc-svclogic-bin:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: bk1-appc-onap-appc-svclogic-bin
Optional: false
onap-appc-bin:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: bk1-appc-onap-appc-bin
Optional: false
onap-sdnc-data-properties:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: bk1-appc-onap-sdnc-data-properties
Optional: false
onap-sdnc-svclogic-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: bk1-appc-onap-sdnc-svclogic-config
Optional: false
onap-sdnc-svclogic-bin:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: bk1-appc-onap-sdnc-svclogic-bin
Optional: false
onap-sdnc-bin:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: bk1-appc-onap-sdnc-bin
Optional: false
default-token-9270k:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9270k
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 44m (x325 over 22h) kubelet, k8s-1 Readiness probe failed: APPC is not healthy.
++ ps -e
++ wc -l
++ grep startODL
+ startODL_status=1
++ /opt/opendaylight/current/bin/client bundle:list
++ grep Waiting
++ wc -l
+ waiting_bundles=0
++ /opt/opendaylight/current/bin/client system:start-level
+ run_level='Level 100'
+ '[' 'Level 100' == 'Level 100' ']'
+ '[' 1 -lt 1 ']'
+ echo APPC is not healthy.
+ exit 1
Warning Unhealthy 19m (x1172 over 22h) kubelet, k8s-1 (combined from similar events): Readiness probe failed: APPC is not healthy.
++ wc -l
++ grep startODL
++ ps -e
+ startODL_status=1
++ wc -l
++ grep Waiting
++ /opt/opendaylight/current/bin/client bundle:list
+ waiting_bundles=0
++ /opt/opendaylight/current/bin/client system:start-level
+ run_level='Level 100'
+ '[' 'Level 100' == 'Level 100' ']'
+ '[' 1 -lt 1 ']'
+ echo APPC is not healthy.
+ exit 1
Warning Unhealthy 14m (x171 over 22h) kubelet, k8s-1 Readiness probe failed: APPC is not healthy.
++ ps -e
++ grep startODL
++ wc -l
+ startODL_status=1
++ /opt/opendaylight/current/bin/client bundle:list
++ wc -l
++ grep Waiting
+ waiting_bundles=0
++ /opt/opendaylight/current/bin/client system:start-level
+ run_level='Level 100'
+ '[' 'Level 100' == 'Level 100' ']'
+ '[' 1 -lt 1 ']'
+ echo APPC is not healthy.
+ exit 1
Warning Unhealthy 4m (x5638 over 22h) kubelet, k8s-1 Readiness probe failed: APPC is not healthy.
++ ps -e
++ grep startODL
++ wc -l
+ startODL_status=1
++ /opt/opendaylight/current/bin/client bundle:list
++ grep Waiting
++ wc -l
+ waiting_bundles=0
++ /opt/opendaylight/current/bin/client system:start-level
+ run_level='Level 100'
+ '[' 'Level 100' == 'Level 100' ']'
+ '[' 1 -lt 1 ']'
+ echo APPC is not healthy.
+ exit 1
@VC16dec looking at the description of your PV:
Source:
Type: HostPath (bare host directory volume)
Ark does not support HostPath volumes for restic snapshotting. To snapshot a pod's volume, Ark has a pod running on every node (from a DaemonSet) with its own HostPath mount of /var/lib/kubelet/pods. When this pod receives a signal that it's time to perform a restic snapshot, it looks for the appropriate volume in its mount of /var/lib/kubelet/pods. This works for all volume types except HostPath. We decided that mounting the entire root filesystem of the node into this pod is a potential security concern (more so than the mount of /var/lib/kubelet/pods), and for this reason, we don't support snapshotting HostPath volumes with restic.
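Roughly, the relevant part of that DaemonSet looks like the sketch below (simplified, not the exact manifest we ship; the image, command, and in-container mount path are illustrative):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: restic
  namespace: heptio-ark
spec:
  selector:
    matchLabels:
      name: restic
  template:
    metadata:
      labels:
        name: restic
    spec:
      containers:
      - name: ark
        image: gcr.io/heptio-images/ark:latest   # illustrative tag
        command: ["/ark", "restic", "server"]    # illustrative; see the deployed manifest
        volumeMounts:
        - name: host-pods
          mountPath: /host_pods                  # illustrative container path
      volumes:
      - name: host-pods
        hostPath:
          path: /var/lib/kubelet/pods            # only this subtree of the host is mounted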
If you are interested in on-prem storage that Ark can snapshot, there are several other options besides HostPath dynamic provisioning, such as the new local volume type, PortWorx, and Rook/Ceph, to name a few.
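For instance, a local PersistentVolume looks something like the sketch below (name, capacity, StorageClass, path, and node are placeholders; check the Kubernetes docs for the feature's state in your cluster version):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: bk1-appc-data-local        # placeholder name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage  # placeholder StorageClass
  local:
    path: /dockerdata-nfs/bk1/appc/mdsal0   # directory or disk on the node
  nodeAffinity:                    # local PVs must be pinned to a node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-1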
HTH!
OK, got it.
Another query: I have a mysql container running and want to back it up using Ark. Does Ark support snapshots of the DB or point-in-time transactions, or only the volume mount of that DB?
Do I need to take the backup only when no transactions are taking place in the DB?
I'm happy to answer here for now, but in the future, I'd recommend posting the question to our Google Group - https://groups.google.com/forum/#!forum/heptio-ark.
We don't currently have any built-in integration with mysql. We have talked about the idea of "application/workload profiles" for Ark, with preconfigured actions for handling things like mysql/postgresql/etc, but we haven't implemented that yet.
Ark does support "hooks" that can execute before and after taking a snapshot, so you could perform actions such as flushing database buffers, writing a db snapshot to disk, freezing/unfreezing the filesystem, and so on. See https://heptio.github.io/ark/v0.9.0/hooks for more details on hooks.
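As a rough sketch (verify the exact annotation keys against the hooks doc linked above for the version you're running; the pod spec and mysqldump command are purely illustrative), a pre-backup hook can be declared with pod annotations like so:
apiVersion: v1
kind: Pod
metadata:
  name: mysql-0                                         # hypothetical pod name
  annotations:
    backup.ark.heptio.com/backup-volumes: mysql-data    # volume to back up with restic
    # Run in the "mysql" container before the volume is backed up; keys are
    # illustrative -- check them against the hooks documentation.
    pre.hook.backup.ark.heptio.com/container: mysql
    pre.hook.backup.ark.heptio.com/command: '["/bin/sh", "-c", "mysqldump -u root -p$MYSQL_ROOT_PASSWORD --all-databases > /var/lib/mysql/pre-backup-dump.sql"]'
    pre.hook.backup.ark.heptio.com/timeout: 5m
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    volumeMounts:
    - name: mysql-data
      mountPath: /var/lib/mysql
  volumes:
  - name: mysql-data
    persistentVolumeClaim:
      claimName: mysql-data                             # hypothetical claim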
Thanks a lot, Andy. I really appreciate the quick response. I will surely post further queries to the Google Group.
Happy to help! Is it ok to close this now?
Yes
What steps did you take and what happened: Ark with minio storage (a few queries):
1) How should a config file look in the case of minio, i.e. is there a specific backupStorageProvider? Is there a specific bucket to be used for restic?
2) Will it handle PV backup and restoration with minio?
3) Can we restore the backup to another server (it may not be in the same cluster)? That is, can we take a previously created backup tarball and restore it on a server where Ark is already installed?
4) Does Ark with minio support backup and restore of a glusterfs StorageClass?
Environment:
Ark version (use ark version): v0.9.0
Kubernetes version (use kubectl version): v1.8.10
Kubernetes installer & version: Rancher 1.6.14
Cloud provider or hardware configuration: 64GB RAM, Ubuntu 16.0.4, and OpenStack Pike
OS (e.g. from /etc/os-release): "Ubuntu 16.04.3 LTS"