ansible / ansible-container

DEPRECATED -- Ansible Container was a tool to build Docker images and orchestrate containers using only Ansible playbooks.
GNU Lesser General Public License v3.0

K8s Deploy playbook breaks #879

Closed: shayrybak closed this issue 6 years ago

shayrybak commented 6 years ago
ISSUE TYPE

Bug Report

container.yml
version: "2"
settings:
  conductor:
    # The Conductor container does the heavy lifting, and provides a portable
    # Python runtime for building your target containers. It should be derived
    # from the same distribution as you're building your target containers with.
    base: centos:7
    roles_path: 
      - ../../roles

  project_name: test

services: 
  sumologic:
    from: centos:7
    expose:
      - 514
    entrypoint: "/usr/bin/dumb-init"
    command: /opt/SumoCollector/collector console -- -t
    roles:
      - dumb_init
      - sumologic
    user: root
    k8s:
      state: present
      service:
        force: false
      deployment:
        force: false
      # routes:
      # - port: 8443
      #   tls:
      #   termination: passthrough
      #   force: false

registries: 
  k8s:
    url: https://127.0.0.1:8443
OS / ENVIRONMENT
Ansible Container, version 0.9.3rc0
Darwin, Shays-MBP.localdomain, 17.4.0, Darwin Kernel Version 17.4.0: Sun Dec 17 09:19:54 PST 2017; root:xnu-4570.41.2~1/RELEASE_X86_64, x86_64
2.7.10 (default, Jul 15 2017, 17:16:57)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.31)] /Users/shay/virtualenvs/ansible-container/bin/python
{
  "ContainersPaused": 0,
  "Labels": [],
  "CgroupDriver": "cgroupfs",
  "ContainersRunning": 0,
  "ContainerdCommit": {
    "Expected": "89623f28b87a6004d4b785663257362d1658a729",
    "ID": "89623f28b87a6004d4b785663257362d1658a729"
  },
  "InitBinary": "docker-init",
  "NGoroutines": 41,
  "Swarm": {
    "ControlAvailable": false,
    "NodeID": "",
    "Error": "",
    "RemoteManagers": null,
    "LocalNodeState": "inactive",
    "NodeAddr": ""
  },
  "LoggingDriver": "json-file",
  "OSType": "linux",
  "HttpProxy": "docker.for.mac.http.internal:3128",
  "Runtimes": {
    "runc": {
      "path": "docker-runc"
    }
  },
  "DriverStatus": [
    [
      "Backing Filesystem",
      "extfs"
    ],
    [
      "Supports d_type",
      "true"
    ],
    [
      "Native Overlay Diff",
      "true"
    ]
  ],
  "OperatingSystem": "Docker for Mac",
  "Containers": 63,
  "HttpsProxy": "docker.for.mac.http.internal:3129",
  "BridgeNfIp6tables": true,
  "MemTotal": 12575907840,
  "SecurityOptions": [
    "name=seccomp,profile=default"
  ],
  "Driver": "overlay2",
  "IndexServerAddress": "https://index.docker.io/v1/",
  "ClusterStore": "",
  "InitCommit": {
    "Expected": "949e6fa",
    "ID": "949e6fa"
  },
  "GenericResources": null,
  "Isolation": "",
  "SystemStatus": null,
  "OomKillDisable": true,
  "ClusterAdvertise": "",
  "SystemTime": "2018-02-05T11:56:55.203199078Z",
  "Name": "linuxkit-025000000001",
  "CPUSet": true,
  "RegistryConfig": {
    "AllowNondistributableArtifactsCIDRs": [],
    "Mirrors": [],
    "IndexConfigs": {
      "docker.io": {
        "Official": true,
        "Name": "docker.io",
        "Secure": true,
        "Mirrors": []
      }
    },
    "AllowNondistributableArtifactsHostnames": [],
    "InsecureRegistryCIDRs": [
      "127.0.0.0/8"
    ]
  },
  "DefaultRuntime": "runc",
  "ContainersStopped": 63,
  "NCPU": 4,
  "NFd": 22,
  "Architecture": "x86_64",
  "KernelMemory": true,
  "CpuCfsQuota": true,
  "Debug": true,
  "ID": "VDHN:PUWI:UIXZ:HPIN:RAQW:R66F:FMAN:2YIE:2K7A:BBMH:DRUZ:RTLV",
  "IPv4Forwarding": true,
  "KernelVersion": "4.9.60-linuxkit-aufs",
  "BridgeNfIptables": true,
  "NoProxy": "",
  "LiveRestoreEnabled": false,
  "ServerVersion": "17.12.0-ce",
  "CpuCfsPeriod": true,
  "ExperimentalBuild": true,
  "MemoryLimit": true,
  "SwapLimit": true,
  "Plugins": {
    "Volume": [
      "local"
    ],
    "Network": [
      "bridge",
      "host",
      "ipvlan",
      "macvlan",
      "null",
      "overlay"
    ],
    "Authorization": null,
    "Log": [
      "awslogs",
      "fluentd",
      "gcplogs",
      "gelf",
      "journald",
      "json-file",
      "logentries",
      "splunk",
      "syslog"
    ]
  },
  "Images": 76,
  "DockerRootDir": "/var/lib/docker",
  "NEventsListener": 2,
  "CPUShares": true,
  "RuncCommit": {
    "Expected": "b2567b37d7b75eb4cf325b77297b140ea686ce8f",
    "ID": "b2567b37d7b75eb4cf325b77297b140ea686ce8f"
  }
}
{
  "KernelVersion": "4.9.60-linuxkit-aufs",
  "Components": [
    {
      "Version": "17.12.0-ce",
      "Name": "Engine",
      "Details": {
        "KernelVersion": "4.9.60-linuxkit-aufs",
        "Os": "linux",
        "BuildTime": "2017-12-27T20:12:29.000000000+00:00",
        "ApiVersion": "1.35",
        "MinAPIVersion": "1.12",
        "GitCommit": "c97c6d6",
        "Arch": "amd64",
        "Experimental": "true",
        "GoVersion": "go1.9.2"
      }
    }
  ],
  "Arch": "amd64",
  "BuildTime": "2017-12-27T20:12:29.000000000+00:00",
  "ApiVersion": "1.35",
  "Platform": {
    "Name": ""
  },
  "Version": "17.12.0-ce",
  "MinAPIVersion": "1.12",
  "GitCommit": "c97c6d6",
  "Os": "linux",
  "Experimental": true,
  "GoVersion": "go1.9.2"
}
SUMMARY

After generating a deploy playbook and running it with ansible-playbook, I get an error saying the namespace doesn't exist.

STEPS TO REPRODUCE
ansible-container build
ansible-container --engine k8s deploy --local-images
ansible-playbook ansible-deployment/test.yml
EXPECTED RESULTS

A working playbook that deploys to K8s.

ACTUAL RESULTS
(ansible-container) Shays-MBP:deploy-test shay$ ansible-playbook ansible-deployment/test.yml -vvv
ansible-playbook 2.4.3.0
  config file = /Users/shay/projects/ansible-container/projects/deploy-test/ansible.cfg
  configured module search path = [u'/Users/shay/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/shay/virtualenvs/ansible-container/lib/python2.7/site-packages/ansible
  executable location = /Users/shay/virtualenvs/ansible-container/bin/ansible-playbook
  python version = 2.7.10 (default, Jul 15 2017, 17:16:57) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.31)]
Using /Users/shay/projects/ansible-container/projects/deploy-test/ansible.cfg as config file
 [WARNING]: Unable to parse /etc/ansible/hosts as an inventory source

 [WARNING]: No inventory was parsed, only implicit localhost is available

 [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAYBOOK: test.yml ****************************************************************************************************************************************************************************************************************************************************
1 plays in ansible-deployment/test.yml

PLAY [Manage the lifecycle of test on K8s] ****************************************************************************************************************************************************************************************************************************
META: ran handlers

TASK [ansible.kubernetes-modules : Install latest openshift client] ***************************************************************************************************************************************************************************************************
task path: /Users/shay/projects/ansible-container/projects/deploy-test/ansible-deployment/roles/ansible.kubernetes-modules/tasks/main.yml:4
skipping: [localhost] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [Create namespace test] ******************************************************************************************************************************************************************************************************************************************
task path: /Users/shay/projects/ansible-container/projects/deploy-test/ansible-deployment/test.yml:11
Using module file /Users/shay/projects/ansible-container/projects/deploy-test/ansible-deployment/roles/ansible.kubernetes-modules/library/k8s_v1_namespace.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: shay
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/shay/.ansible/tmp/ansible-tmp-1517831949.88-158153552996679 `" && echo ansible-tmp-1517831949.88-158153552996679="` echo /Users/shay/.ansible/tmp/ansible-tmp-1517831949.88-158153552996679 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/69/63n5td7x39nc0vt9y2m28fbw0000gn/T/tmp5tvTVm TO /Users/shay/.ansible/tmp/ansible-tmp-1517831949.88-158153552996679/k8s_v1_namespace.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/shay/.ansible/tmp/ansible-tmp-1517831949.88-158153552996679/ /Users/shay/.ansible/tmp/ansible-tmp-1517831949.88-158153552996679/k8s_v1_namespace.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/Users/shay/virtualenvs/ansible-container/bin/python /Users/shay/.ansible/tmp/ansible-tmp-1517831949.88-158153552996679/k8s_v1_namespace.py; rm -rf "/Users/shay/.ansible/tmp/ansible-tmp-1517831949.88-158153552996679/" > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "api_version": "v1",
    "changed": true,
    "invocation": {
        "module_args": {
            "annotations": null,
            "api_key": null,
            "cert_file": null,
            "context": null,
            "debug": false,
            "force": false,
            "host": null,
            "key_file": null,
            "kubeconfig": null,
            "labels": null,
            "name": "test",
            "namespace": null,
            "password": null,
            "resource_definition": null,
            "spec_finalizers": null,
            "src": null,
            "ssl_ca_cert": null,
            "state": "present",
            "username": null,
            "verify_ssl": null
        }
    },
    "namespace": {
        "api_version": "v1",
        "kind": "Namespace",
        "metadata": {
            "annotations": null,
            "cluster_name": null,
            "creation_timestamp": "2018-02-05T11:59:10Z",
            "deletion_grace_period_seconds": null,
            "deletion_timestamp": null,
            "finalizers": null,
            "generate_name": null,
            "generation": null,
            "labels": null,
            "name": "test",
            "namespace": null,
            "owner_references": null,
            "resource_version": "3095",
            "self_link": "/api/v1/namespaces/test",
            "uid": "fa04be80-0a6b-11e8-b9df-080027938cbf"
        },
        "spec": {
            "finalizers": [
                "kubernetes"
            ]
        },
        "status": {
            "phase": "Active"
        }
    },
    "request": {
        "kind": "Namespace",
        "metadata": {
            "name": "test"
        }
    }
}

TASK [Destroy the application by removing namespace test] *************************************************************************************************************************************************************************************************************
task path: /Users/shay/projects/ansible-container/projects/deploy-test/ansible-deployment/test.yml:17
Using module file /Users/shay/projects/ansible-container/projects/deploy-test/ansible-deployment/roles/ansible.kubernetes-modules/library/k8s_v1_namespace.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: shay
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/shay/.ansible/tmp/ansible-tmp-1517831951.93-233515082745084 `" && echo ansible-tmp-1517831951.93-233515082745084="` echo /Users/shay/.ansible/tmp/ansible-tmp-1517831951.93-233515082745084 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/69/63n5td7x39nc0vt9y2m28fbw0000gn/T/tmp5uUy77 TO /Users/shay/.ansible/tmp/ansible-tmp-1517831951.93-233515082745084/k8s_v1_namespace.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/shay/.ansible/tmp/ansible-tmp-1517831951.93-233515082745084/ /Users/shay/.ansible/tmp/ansible-tmp-1517831951.93-233515082745084/k8s_v1_namespace.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/Users/shay/virtualenvs/ansible-container/bin/python /Users/shay/.ansible/tmp/ansible-tmp-1517831951.93-233515082745084/k8s_v1_namespace.py; rm -rf "/Users/shay/.ansible/tmp/ansible-tmp-1517831951.93-233515082745084/" > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "api_version": "v1",
    "changed": true,
    "invocation": {
        "module_args": {
            "annotations": null,
            "api_key": null,
            "cert_file": null,
            "context": null,
            "debug": false,
            "force": false,
            "host": null,
            "key_file": null,
            "kubeconfig": null,
            "labels": null,
            "name": "test",
            "namespace": null,
            "password": null,
            "resource_definition": null,
            "spec_finalizers": null,
            "src": null,
            "ssl_ca_cert": null,
            "state": "absent",
            "username": null,
            "verify_ssl": null
        }
    },
    "namespace": {},
    "request": {
        "kind": "Namespace",
        "metadata": {
            "name": "test"
        }
    }
}

TASK [Create service] *************************************************************************************************************************************************************************************************************************************************
task path: /Users/shay/projects/ansible-container/projects/deploy-test/ansible-deployment/test.yml:23
Using module file /Users/shay/projects/ansible-container/projects/deploy-test/ansible-deployment/roles/ansible.kubernetes-modules/library/k8s_v1_service.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: shay
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/shay/.ansible/tmp/ansible-tmp-1517831960.1-112838041555533 `" && echo ansible-tmp-1517831960.1-112838041555533="` echo /Users/shay/.ansible/tmp/ansible-tmp-1517831960.1-112838041555533 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/69/63n5td7x39nc0vt9y2m28fbw0000gn/T/tmpvVgZlf TO /Users/shay/.ansible/tmp/ansible-tmp-1517831960.1-112838041555533/k8s_v1_service.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/shay/.ansible/tmp/ansible-tmp-1517831960.1-112838041555533/ /Users/shay/.ansible/tmp/ansible-tmp-1517831960.1-112838041555533/k8s_v1_service.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/Users/shay/virtualenvs/ansible-container/bin/python /Users/shay/.ansible/tmp/ansible-tmp-1517831960.1-112838041555533/k8s_v1_service.py; rm -rf "/Users/shay/.ansible/tmp/ansible-tmp-1517831960.1-112838041555533/" > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/var/folders/69/63n5td7x39nc0vt9y2m28fbw0000gn/T/ansible_Cb126z/ansible_modlib.zip/ansible/module_utils/k8s_common.py", line 251, in _create
    k8s_obj = self.helper.create_object(namespace, body=request_body)
  File "/Users/shay/virtualenvs/ansible-container/lib/python2.7/site-packages/openshift/helper/base.py", line 234, in create_object
    raise self.get_exception_class()(msg, status=exc.status)

fatal: [localhost]: FAILED! => {
    "changed": false,
    "error": 404,
    "invocation": {
        "module_args": {
            "annotations": null,
            "api_key": null,
            "cert_file": null,
            "context": null,
            "debug": false,
            "force": false,
            "host": null,
            "key_file": null,
            "kubeconfig": null,
            "labels": {
                "app": "test",
                "service": "sumologic"
            },
            "name": "sumologic",
            "namespace": "test",
            "password": null,
            "resource_definition": {
                "apiVersion": "v1",
                "kind": "Service",
                "metadata": {
                    "labels": {
                        "app": "test",
                        "service": "sumologic"
                    },
                    "name": "sumologic",
                    "namespace": "test"
                },
                "spec": {
                    "ports": [
                        {
                            "name": "port-514-tcp",
                            "port": 514,
                            "protocol": "TCP",
                            "targetPort": 514
                        }
                    ],
                    "selector": {
                        "app": "test",
                        "service": "sumologic"
                    }
                }
            },
            "spec_cluster_ip": null,
            "spec_deprecated_public_i_ps": null,
            "spec_external_i_ps": null,
            "spec_external_name": null,
            "spec_load_balancer_ip": null,
            "spec_load_balancer_source_ranges": null,
            "spec_ports": [
                {
                    "name": "port-514-tcp",
                    "port": 514,
                    "protocol": "TCP",
                    "targetPort": 514
                }
            ],
            "spec_selector": {
                "app": "test",
                "service": "sumologic"
            },
            "spec_session_affinity": null,
            "spec_type": null,
            "src": null,
            "ssl_ca_cert": null,
            "state": "present",
            "username": null,
            "verify_ssl": null
        }
    },
    "msg": "Failed to create object: namespaces \"test\" not found"
}
    to retry, use: --limit @/Users/shay/projects/ansible-container/projects/deploy-test/ansible-deployment/test.retry

PLAY RECAP ************************************************************************************************************************************************************************************************************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=1
shayrybak commented 6 years ago

In addition, I tried removing the task that deletes the namespace and creating the namespace manually. Then when I run the playbook I get:

(ansible-container) Shays-MBP:deploy-test shay$ ansible-playbook ansible-deployment/test.yml
 [WARNING]: Unable to parse /etc/ansible/hosts as an inventory source

 [WARNING]: No inventory was parsed, only implicit localhost is available

 [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [Manage the lifecycle of test on K8s] *****************************************************************************************************************************************************************

TASK [ansible.kubernetes-modules : Install latest openshift client] ****************************************************************************************************************************************
skipping: [localhost]

TASK [Create namespace test] *******************************************************************************************************************************************************************************
ok: [localhost]

TASK [Create service] **************************************************************************************************************************************************************************************
ok: [localhost]

TASK [Stop running containers by scaling replicas down to 0] ***********************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Exception: Error initializing AnsibleModuleHelper: {"message": "Error: Apps_v1beta1Deployment was not found in client.models. Did you specify the correct Kind and API Version?"}
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/var/folders/69/63n5td7x39nc0vt9y2m28fbw0000gn/T/ansible_G9_MMc/ansible_module_k8s_apps_v1beta1_deployment.py\", line 4751, in <module>\n    main()\n  File \"/var/folders/69/63n5td7x39nc0vt9y2m28fbw0000gn/T/ansible_G9_MMc/ansible_module_k8s_apps_v1beta1_deployment.py\", line 4742, in main\n    raise Exception(exc.message)\nException: Error initializing AnsibleModuleHelper: {\"message\": \"Error: Apps_v1beta1Deployment was not found in client.models. Did you specify the correct Kind and API Version?\"}\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 0}
    to retry, use: --limit @/Users/shay/projects/ansible-container/projects/deploy-test/ansible-deployment/test.retry

PLAY RECAP *************************************************************************************************************************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=1
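
For reference, the manual namespace creation mentioned above can be sketched as a standalone task using the same generated k8s_v1_namespace module that appears in the logs. This is a minimal sketch; connection parameters such as kubeconfig or context are left at their defaults here and may need to be set for your cluster:

- name: Create namespace test manually
  k8s_v1_namespace:
    # Same module arguments shown in the verbose log output above
    name: test
    state: present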
j00bar commented 6 years ago

The generated playbook contains all parts of your app's lifecycle, separated by tags. So the appropriate command to run would be, for example, ansible-playbook test.yml --tags=start.
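
A minimal sketch of what that tag separation looks like in the generated playbook (task names are taken from the log output above; the exact generated structure and tag names may differ):

- name: Create namespace test
  k8s_v1_namespace:
    name: test
    state: present
  tags:
    # Selected by: ansible-playbook test.yml --tags=start
    - start

- name: Destroy the application by removing namespace test
  k8s_v1_namespace:
    name: test
    state: absent
  tags:
    # Selected by: ansible-playbook test.yml --tags=destroy
    - destroy

Running with --tags=start then executes only the start-phase tasks, instead of creating and destroying the namespace back to back as in the output above.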

shayrybak commented 6 years ago

@j00bar, thanks for that; I missed that part. Can you look at my previous comment? I get that error even after using --tags start.

j00bar commented 6 years ago

Hi @shayrybak -

Does this error still occur with the latest version of Ansible Container, relying on the newest python-openshift bindings? Thanks!
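
For anyone hitting the Apps_v1beta1Deployment error, upgrading the bindings can be sketched as an Ansible task, assuming the bindings ship as the openshift package on PyPI (which is what the generated "Install latest openshift client" task in the logs manages):

- name: Install the latest openshift client
  pip:
    # Pulls the newest python-openshift bindings into the active virtualenv
    name: openshift
    state: latest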

shayrybak commented 6 years ago

This was fixed with the python-openshift update. Thanks!