CONABIO / kube_sipecam_playground

Jupyter notebooks to test kube_sipecam deployments in k8s
MIT License

Use kale to create kubeflow pipeline for MAD-Mex lc workflow #8

Closed: palmoreck closed this issue 4 years ago

palmoreck commented 4 years ago

As a first step, this will follow: 1_issue_5_pipeline_lc_MAD-Mex_pixel_wise.ipynb

using results processed in https://github.com/CONABIO/kube_sipecam_playground/projects/2

As a second step, we will examine what will be incorporated into the kubeflow pipeline described in 1_issue_5_basic_setup_in_AWS_for_MAD_Mex_classif_pipeline.ipynb

This second step will be tracked in a separate issue linked to milestone 4.

palmoreck commented 4 years ago

For the "Model fit & predict & register in geonode using Kale" task we will use:

https://github.com/CONABIO/kube_sipecam/tree/master/minikube_sipecam/deployments/geonode_conabio/hostpath_pv

Documentation of the Dockerfile is in: kube_sipecam/dockerfiles/geonode_conabio. Also add a rule to the security groups for port 30002.

Create /shared_volume/.geonode_conabio:

cat .geonode_conabio 
HOST_NAME="<ipv4 DNS of ec2>"
USER_GEOSERVER="super"
PASSWORD_GEOSERVER="duper"
PASSWORD_DB_GEONODE_DATA="geonode"
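The file above can be created in one step with a heredoc; a minimal sketch (here a local `shared_volume` directory stands in for `/shared_volume`, and all values are placeholders):

```shell
# Sketch: write the geonode credentials file (values are placeholders;
# "shared_volume" here stands in for /shared_volume).
mkdir -p shared_volume
cat > shared_volume/.geonode_conabio <<'EOF'
HOST_NAME="<ipv4 DNS of ec2>"
USER_GEOSERVER="super"
PASSWORD_GEOSERVER="duper"
PASSWORD_DB_GEONODE_DATA="geonode"
EOF
# keep the credentials readable only by the owner
chmod 600 shared_volume/.geonode_conabio
```

The quoted `'EOF'` delimiter prevents shell expansion, so the values land in the file verbatim.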

The following is just for reference:

import_raster --base_directory /shared_volume/land_cover_results/ --input_filename raster_landsat8_chiapas_madmex_31_clases_pixel_wise_54_-38.tif --region "Chiapas, Mexico, North America, Latin America" --name "Chiapas_lc_2017_landsat8_test" --title "Land cover Chiapas landsat8 2017 test" --abstract "Test" --key_words "Chiapas"

Use sld from:

https://github.com/CONABIO/geonode/blob/master/styles/madmex_31_classes.sld

Disk full error when creating the pipeline:

HTTP response headers: HTTPHeaderDict({'Date': 'Tue, 01 Sep 2020 18:12:22 GMT', 'Content-Length': '487', 'Content-Type': 'text/plain; charset=utf-8'})
HTTP response body: {"error_message":"Error creating pipeline: Create pipeline failed: InternalServerError: Failed to store b2fa5a70-cab4-4c89-8784-9c0cb118d1b4: Storage backend has reached its minimum free disk threshold. Please delete a few objects to proceed.","error_details":"Error creating pipeline: Create pipeline failed: InternalServerError: Failed to store b2fa5a70-cab4-4c89-8784-9c0cb118d1b4: Storage backend has reached its minimum free disk threshold. Please delete a few objects to proceed."}
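The response body is JSON; for logging, the human-readable part can be pulled out with standard tools. A sketch against a truncated copy of the body above:

```shell
# Sketch: extract "error_message" from the Kubeflow pipeline error body
# (body is a truncated copy of the response shown above).
body='{"error_message":"Error creating pipeline: Create pipeline failed: InternalServerError: Storage backend has reached its minimum free disk threshold. Please delete a few objects to proceed.","error_details":"..."}'
msg=$(printf '%s' "$body" | sed -n 's/.*"error_message":"\([^"]*\)".*/\1/p')
echo "$msg"
```

Where `jq` is available, `jq -r '.error_message'` is a more robust alternative to the `sed` extraction.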

Delete kubeflow (MAD-Mex and geonode deployments)

To free space:

minikube stop
minikube delete

Check disk usage and clean up:

docker system df
docker system prune --all --volumes
rm -r /root/.minikube/*
rm -r /root/.kube/*
rm -r /opt/kf-test
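The cleanup above can be wrapped in a defensive script that skips steps when a tool or path is absent; a sketch (the directory variables are assumptions mirroring the paths above):

```shell
# Sketch: cleanup with guards so each step is skipped when not applicable.
MINIKUBE_DIR="${MINIKUBE_DIR:-/root/.minikube}"   # assumed default paths
KUBE_DIR="${KUBE_DIR:-/root/.kube}"
KF_TEST_DIR="${KF_TEST_DIR:-/opt/kf-test}"

if command -v minikube >/dev/null 2>&1; then
  minikube stop || true
  minikube delete || true
fi
if command -v docker >/dev/null 2>&1; then
  docker system df || true
  # --force skips the interactive confirmation prompt
  docker system prune --all --volumes --force || true
fi
for d in "$MINIKUBE_DIR" "$KUBE_DIR" "$KF_TEST_DIR"; do
  if [ -e "$d" ]; then
    rm -rf "$d" && echo "removed $d"
  fi
done
echo "cleanup done"
```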

Start again (as root, from the root home directory):

CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/v1.0-branch/kfdef/kfctl_k8s_istio.v1.0.2.yaml"
source ~/.profile
chmod gou+wrx -R /opt/
mkdir -p ${KF_DIR}
#minikube start
cd /root && minikube start --driver=none
#kubeflow start
cd ${KF_DIR} && kfctl apply -V -f ${CONFIG_URI}
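`source ~/.profile` above is what supplies `KF_DIR`; on a machine without that profile, the assumed variables can be set explicitly. A sketch (the `KF_NAME` and directory layout are assumptions, not taken from the original setup):

```shell
# Sketch: restart sequence with the assumed environment made explicit.
export BASE_DIR="${BASE_DIR:-/opt/kf-test}"        # assumption
export KF_NAME="${KF_NAME:-kf-test-deployment}"    # assumption
export KF_DIR="${BASE_DIR}/${KF_NAME}"
export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/v1.0-branch/kfdef/kfctl_k8s_istio.v1.0.2.yaml"
mkdir -p "${KF_DIR}" 2>/dev/null || true
# the actual start/apply steps only make sense where the tools exist
if command -v minikube >/dev/null 2>&1; then
  (cd /root && minikube start --driver=none)
fi
if command -v kfctl >/dev/null 2>&1; then
  (cd "${KF_DIR}" && kfctl apply -V -f "${CONFIG_URI}")
fi
```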

If there are problems, use:

wget https://raw.githubusercontent.com/kubeflow/manifests/v1.0-branch/kfdef/kfctl_k8s_istio.v1.0.2.yaml
wget https://codeload.github.com/kubeflow/manifests/tar.gz/v1.0.2 -O v1.0.2.tar.gz
#in kfctl_k8s_istio.v1.0.2.yaml, change the uri at the end from:
  repos:
  - name: manifests
    uri: https://github.com/kubeflow/manifests/archive/v1.0.2.tar.gz
#to:
  repos:
  - name: manifests
    uri: file:///opt/kf-test/v1.0.2.tar.gz
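The manual edit above can be scripted with `sed`; a sketch (the stand-in file is created only so the substitution can be shown when the real yaml is absent):

```shell
# Sketch: switch the manifests uri in the kfdef to the local tarball.
cfg=kfctl_k8s_istio.v1.0.2.yaml
# stand-in with the relevant lines, for illustration when the real file is absent
[ -f "$cfg" ] || printf '  repos:\n  - name: manifests\n    uri: https://github.com/kubeflow/manifests/archive/v1.0.2.tar.gz\n' > "$cfg"
sed -i 's|uri: https://github.com/kubeflow/manifests/archive/v1.0.2.tar.gz|uri: file:///opt/kf-test/v1.0.2.tar.gz|' "$cfg"
grep 'uri:' "$cfg"
```

Using `|` as the `sed` delimiter avoids escaping the slashes in the URLs.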

ref: https://github.com/aws-samples/eks-workshop/issues/639

If there are problems with geonode (because the docker-compose stack was deleted), clone the repo again and redeploy geonode.