conu - python API for your containers
http://conu.readthedocs.io/en/latest/

OpenShift origin backend #255

Closed: rpitonak closed this 6 years ago

rpitonak commented 6 years ago

OpenShift tests are temporarily skipped because of CI.

I would like to hear feedback on naming, design, etc., which is why this is still WIP.

TomasTomecek commented 6 years ago

Playing with this right now and this is what I'm getting:

09:46:22.363 backend.py        INFO   Build application from local source in project myproject
Uploading directory "examples/openshift/standalone-test-app" as binary input for the build ...
build "app-q8q8-2" started
09:46:29.237 backend.py        INFO   Waiting for service to get ready
2018-08-11 09:46:29,265 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),)': /api/v1/services?watch=False
2018-08-11 09:46:29,265 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),)': /api/v1/services?watch=False
2018-08-11 09:46:29,265 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),)': /api/v1/services?watch=False
2018-08-11 09:46:29,280 WARNING Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),)': /api/v1/services?watch=False
2018-08-11 09:46:29,280 WARNING Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),)': /api/v1/services?watch=False
2018-08-11 09:46:29,280 WARNING Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),)': /api/v1/services?watch=False
2018-08-11 09:46:29,289 WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),)': /api/v1/services?watch=False
2018-08-11 09:46:29,289 WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),)': /api/v1/services?watch=False
2018-08-11 09:46:29,289 WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)'),)': /api/v1/services?watch=False
09:46:30.244 backend.py        INFO   Deleting app
09:46:31.055 backend.py        INFO   No resources found
09:46:31.055 backend.py        INFO
Traceback (most recent call last):
  File "docs/source/examples/openshift/openshift_s2i_local.py", line 22, in <module>
    expected_output="Hello World from standalone WSGI application!")
  File "/home/tt/g/conu/conu/backend/origin/backend.py", line 234, in wait_for_service
    app_name=app_name, expected_output=expected_output, expected_retval=True).run()
  File "/home/tt/g/conu/conu/utils/probes.py", line 63, in run
    return self._run()
  File "/home/tt/g/conu/conu/utils/probes.py", line 132, in _run
    raise result
urllib3.exceptions.MaxRetryError: None: Max retries exceeded with url: /api/v1/services?watch=False (Caused by None)

Certificates are stored in /var/lib/origin/openshift.local.config/master if you are using oc cluster up. To me this looks like a bug in urllib3.
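
FWIW, the client could probably be pointed at that CA instead of the verification failing. A minimal sketch, assuming the kubernetes python client's Configuration API and the default oc cluster up layout (the ca.crt filename and host are my guesses, untested):

from kubernetes import client

# assumption: CA bundle generated by `oc cluster up`
configuration = client.Configuration()
configuration.host = "https://localhost:8443"
configuration.ssl_ca_cert = "/var/lib/origin/openshift.local.config/master/ca.crt"
core_api = client.CoreV1Api(client.ApiClient(configuration))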

These are the changes I made locally:

diff --git a/conu/backend/k8s/client.py b/conu/backend/k8s/client.py
index 3e5535d..9a22cc5 100644
--- a/conu/backend/k8s/client.py
+++ b/conu/backend/k8s/client.py
@@ -19,6 +19,7 @@ singleton instances of kubernetes client
 """

 from kubernetes import client, config
+from kubernetes.client.api_client import ApiClient

 core_api = None
@@ -34,8 +35,10 @@ def get_core_api():
     global core_api

     if core_api is None:
+        api_client = ApiClient(header_name="Authorization",
+                               header_value="Bearer $OC_WHOAMI_DASH_T")
         config.load_kube_config()
-        core_api = client.CoreV1Api()
+        core_api = client.CoreV1Api(api_client=api_client)

     return core_api

Not sure that's how you should do this.
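
An alternative might be to inject the token through Configuration instead of a raw header. A sketch, assuming the client's api_key mechanism works for bearer tokens (the token value is a placeholder for the output of oc whoami -t, untested):

from kubernetes import client, config

config.load_kube_config()
configuration = client.Configuration()
# placeholder: paste the output of `oc whoami -t` here
configuration.api_key["authorization"] = "$OC_WHOAMI_DASH_T"
configuration.api_key_prefix["authorization"] = "Bearer"
core_api = client.CoreV1Api(client.ApiClient(configuration))

That way the host and CA settings picked up by load_kube_config stay intact.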

Unfortunately, when the command fails, no oc output is printed. oc start-build was also missing the -n option:

diff --git a/conu/backend/origin/backend.py b/conu/backend/origin/backend.py
index d139143..73905fb 100644
--- a/conu/backend/origin/backend.py
+++ b/conu/backend/origin/backend.py
@@ -129,19 +129,18 @@ class OpenshiftBackend(K8sBackend):
             logger.info("Creating new app in project %s" % project)

             try:
-                o = run_cmd(c, return_output=True)
-                logger.debug(o)
+                run_cmd(c)
             except subprocess.CalledProcessError as ex:
                 raise ConuException("oc new-app failed: %s" % ex)

             if os.path.isdir(source):
-                c = self._oc_command(["start-build"] + [name] + ["--from-dir=%s" % source])
+                c = self._oc_command(["-n"] + [project] + ["start-build"] +
+                                     [name] + ["--from-dir=%s" % source])

                 logger.info("Build application from local source in project %s" % project)

                 try:
-                    o = run_cmd(c, return_output=True)
-                    logger.debug(o)
+                    run_cmd(c)
                 except subprocess.CalledProcessError as ex:
                     raise ConuException("oc start-build failed: %s" % ex)

And finally make sure we have debug logs:

diff --git a/docs/source/examples/openshift/openshift_s2i_local.py b/docs/source/examples/openshift/openshift_s2i_local.py
index 0a771a2..4cc0ebc 100644
--- a/docs/source/examples/openshift/openshift_s2i_local.py
+++ b/docs/source/examples/openshift/openshift_s2i_local.py
@@ -3,7 +3,7 @@ from conu.backend.origin.backend import OpenshiftBackend
 from conu.backend.docker.backend import DockerBackend

 with OpenshiftBackend(logging_level=logging.DEBUG) as openshift_backend:
-    with DockerBackend() as backend:
+    with DockerBackend(logging_level=logging.DEBUG) as backend:
         # builder image
         python_image = backend.ImageClass("centos/python-36-centos7")

TomasTomecek commented 6 years ago

Also, this one is pretty annoying since it causes a syntax error on Python 3.7: https://github.com/kubernetes-client/python/commit/b10c7b6a175ab96291a6f74d68ea6027151f3b71
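
If I read that commit right, the generated code uses async as a parameter name, and async is a reserved keyword since Python 3.7, so the module does not even parse. A minimal illustration (the function is made up):

# "async" is a reserved keyword on Python 3.7+, so this is a SyntaxError
# at import time, before any code runs
def call_api(resource_path, method, async=False):
    pass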