jeff1evesque / machine-learning

Web-interface + rest API for classification and regression (https://jeff1evesque.github.io/machine-learning.docs)

Replace puppet 'docker' with containers #2935

Closed. jeff1evesque closed this issue 6 years ago.

jeff1evesque commented 7 years ago

Our intention with our docker containers was to use them for unit testing our application as a whole, while also testing the validity of the puppet scripts used to build our development environment. However, this premise is no longer valid, since we implement a separate docker puppet environment, which is beginning to diverge from the vagrant environment. This means our docker containers no longer check the validity of the puppet logic used to build our development environment. And since the requirements of docker and vagrant are not always a one-to-one relationship, we won't always be able to reuse the exact same puppet script(s) between the vagrant and docker puppet environments.

Additionally, running puppet in docker is flawed in the same way as #2932. Therefore, we will eliminate the puppet implementation within the docker containers used for unit testing. This means we'll remove the docker puppet environment entirely, create one dockerfile for each puppet module defined in our vagrant puppet environment, and adjust our .travis.yml accordingly.

jeff1evesque commented 7 years ago

We'll need to decide whether to write custom python scripts, or RUN bash commands, that parse our current settings.yaml and packages.yaml, and reference the corresponding attributes.
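
For example, a minimal sketch of the RUN approach, assuming python and pyyaml exist in the base image; the 'general' and 'port' keys are illustrative only, not confirmed attributes of our settings.yaml:

## hypothetical dockerfile RUN body: extract an attribute from settings.yaml
## at build time ('general' and 'port' are assumed keys, for illustration)
port=$(python -c "import yaml; print(yaml.safe_load(open('settings.yaml'))['general']['port'])")
echo "configured port: $port"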

jeff1evesque commented 7 years ago

Another fact supporting the divergence between the two puppet environments was the requirement of installing nodejs from a 4.x repository for the docker environment, while the vagrant environment was able to remain the same, using the 5.x repository.

jeff1evesque commented 6 years ago

We'll look into replacing vagrant with rancher's integration with docker swarm.

jeff1evesque commented 6 years ago

We need to look into implementing rancher compose configuration files.

jeff1evesque commented 6 years ago

We may not need cygwin. However, all distros will need wget to successfully run our current install_rancher script. Therefore, we may need to adjust the language in our current README.md.
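
As a sketch, install_rancher could guard for this dependency up front (the exact message text is illustrative):

## fail early if wget is missing, since the download steps depend on it
if ! command -v wget > /dev/null 2>&1; then
    echo 'error: wget is required to run install_rancher' >&2
    exit 1
fi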

jeff1evesque commented 6 years ago

The following is our current $ACCESS json string:

{
    "id": "1c31",
    "type": "apiKey",
    "links": {
        "self": "http:\/\/192.168.99.100:8080\/v1\/projects\/1a5\/apikeys\/1c31",
        "account": "http:\/\/192.168.99.100:8080\/v1\/projects\/1a5\/apikeys\/1c31\/account",
        "images": "http:\/\/192.168.99.100:8080\/v1\/projects\/1a5\/apikeys\/1c31\/images",
        "instances": "http:\/\/192.168.99.100:8080\/v1\/projects\/1a5\/apikeys\/1c31\/instances",
        "certificate": "http:\/\/192.168.99.100:8080\/v1\/projects\/1a5\/apikeys\/1c31\/certificate"
    },
    "actions": {
        "activate": "http:\/\/192.168.99.100:8080\/v1\/projects\/1a5\/apikeys\/1c31\/?action=activate",
        "remove": "http:\/\/192.168.99.100:8080\/v1\/projects\/1a5\/apikeys\/1c31\/?action=remove",
        "deactivate": "http:\/\/192.168.99.100:8080\/v1\/projects\/1a5\/apikeys\/1c31\/?action=deactivate"
    },
    "baseType": "credential",
    "name": "jeff1evesque",
    "state": "registering",
    "accountId": "1a5",
    "created": "2018-02-23T03:20:15Z",
    "createdTS": 1519356015000,
    "description": null,
    "kind": "apiKey",
    "publicValue": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "removed": null,
    "secretValue": "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy",
    "transitioning": "yes",
    "transitioningMessage": "In Progress",
    "transitioningProgress": null,
    "uuid": "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz"
}

However, upon running our install_rancher, we get the following traceback error:

$ ./install_rancher
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   635    0   635    0     0   1941      0 --:--:-- --:--:-- --:--:--  2035
100 3076k  100 3076k    0     0  4290k      0 --:--:-- --:--:-- --:--:-- 11.3M
Archive:  rancher-compose-windows-amd64-v0.12.5-rc1.zip
  inflating: rancher-compose.exe
Windows (MINGW*) installation for rancher-compose, depends on
'choco', and 'python'.

[Y]: install
[N]: do not install

Proceed with installation: y
Getting latest version of the Chocolatey package for download.
Getting Chocolatey from https://chocolatey.org/api/v2/package/chocolatey/0.10.8
.
Extracting C:\my\local\path\to\chocolatey\chocInstall\chocolatey.zip to C:\my\local\path\to\chocolatey\chocInstall...
Installing chocolatey on this machine
Creating ChocolateyInstall as an environment variable (targeting 'Machine')
  Setting ChocolateyInstall to 'C:\ProgramData\chocolatey'
WARNING: It's very likely you will need to close and reopen your shell
  before you can use choco.
Restricting write permissions to Administrators
We are setting up the Chocolatey package repository.
The packages themselves go to 'C:\ProgramData\chocolatey\lib'
  (i.e. C:\ProgramData\chocolatey\lib\yourPackageName).
A shim file for the command line goes to 'C:\ProgramData\chocolatey\bin'
  and points to an executable in 'C:\ProgramData\chocolatey\lib\yourPackageName'.

Creating Chocolatey folders if they do not already exist.

WARNING: You can safely ignore errors related to missing log files when
  upgrading from a version of Chocolatey less than 0.9.9.
  'Batch file could not be found' is also safe to ignore.
  'The system cannot find the file specified' - also safe.
Chocolatey (choco.exe) is now ready.
You can call choco from anywhere, command line or powershell by typing choco.
Run choco /? for a list of functions.
You may need to shut down and restart powershell and/or consoles
 first prior to using choco.
Ensuring chocolatey commands are on the path
Ensuring chocolatey.nupkg is in the lib folder
Chocolatey v0.10.8
Installing the following packages:
python
By installing you accept licenses for the packages.
python v3.6.4 already installed. Forcing reinstall of version '3.6.4'.
 Please use upgrade if you meant to upgrade to a new version.
Progress: Downloading python 3.6.4... 100%

python v3.6.4 (forced) [Approved]
python package files install completed. Performing other installation steps.
 The install of python was successful.
  Software install location not explicitly set, could be in package or
  default install location if installer.

Chocolatey installed 1/1 packages.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
db609306037b56a8c5689bc9afcc0cf0802f17c405049136ff15e0bc12840d86
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint trusting_kowalevski (0256f1ce7fad0a6b7290ff4cd8e1cbc808ad264f215cf6872f6d35847ba8cf99): Bind for 0.0.0.0:8080 failed: port is already allocated.
Note: Unnecessary use of -X or --request, POST is already inferred.
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 192.168.99.100 (192.168.99.100) port 8080 (#0)
> POST /v1/projects/1a5/apikey HTTP/1.1
> Host: 192.168.99.100:8080
> User-Agent: curl/7.49.1
> Accept: application/json
> Content-Type: application/json
> Content-Length: 123
>
} [123 bytes data]
* upload completely sent off: 123 out of 123 bytes
100   123    0     0  100   123      0    608 --:--:-- --:--:-- --:--:--   608
< HTTP/1.1 201 Created
< Content-Type: application/json; charset=utf-8
< Date: Fri, 23 Feb 2018 01:37:10 GMT
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Server: Jetty(9.2.11.v20150529)
< Set-Cookie: PL=rancher;Path=/
< X-Api-Account-Id: 1a5
< X-Api-Client-Ip: 192.168.99.1
< X-Api-Schemas: http://192.168.99.100:8080/v1/projects/1a5/schemas
< X-Api-User-Id: 1a1
< X-Rancher-Version: v1.6.14
< Transfer-Encoding: chunked
<
{ [1179 bytes data]
100  1290    0  1167  100   123   3402    358 --:--:-- --:--:-- --:--:--  3402
* Connection #0 to host 192.168.99.100 left intact
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AttributeError: 'dict' object has no attribute 'name'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AttributeError: 'dict' object has no attribute 'secretValue'
Access and secret generated in ./rancher-auth.txt
./install_rancher: line 120: rancher: command not found
ERRO[0000] Failed to find the compose file: docker-compose.development

FATA[0000] Failed to read project: open docker-compose.development: The system cannot find the file specified.
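
The two AttributeError lines above suggest the script's inline python reads the parsed JSON with attribute syntax (d.name, d.secretValue); a sketch of the dict-subscript form that should work instead, assuming the $ACCESS string is piped on stdin:

## a dict parsed from json must be indexed with subscripts, not attributes
ACCESS_KEY=$(echo "$ACCESS" | python -c "import json, sys; print(json.load(sys.stdin)['publicValue'])")
SECRET_KEY=$(echo "$ACCESS" | python -c "import json, sys; print(json.load(sys.stdin)['secretValue'])")
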
jeff1evesque commented 6 years ago

We are now receiving the following traceback error:

$ ./install_rancher
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   622    0   622    0     0   2213      0 --:--:-- --:--:-- --:--:--  2347
100 3605k  100 3605k    0     0  5135k      0 --:--:-- --:--:-- --:--:-- 17.1M
Archive:  rancher-windows-amd64-v0.6.7.zip
  inflating: rancher.exe
Unable to find image 'rancher/server:latest' locally
latest: Pulling from rancher/server
bae382666908: Pull complete
29ede3c02ff2: Pull complete
da4e69f33106: Pull complete
8d43e5f5d27f: Pull complete
b0de1abb17d6: Pull complete
422f47db4517: Pull complete
79d37de643ce: Pull complete
69d13e08a4fe: Pull complete
2ddfd3c6a2b7: Pull complete
bc433fed3823: Pull complete
b82e188df556: Pull complete
dae2802428a4: Pull complete
a6247572ea3c: Pull complete
884c916ebae4: Pull complete
85517c9c5365: Pull complete
02dded9fe690: Pull complete
fd9f433c3bc6: Pull complete
44d91b3fea45: Pull complete
0d463387dfeb: Pull complete
60753c4d26f0: Pull complete
a003892966fe: Pull complete
Digest: sha256:42441f0128fae4d72d51f92de2049392427d462356282a46f28434332967c7e4
Status: Downloaded newer image for rancher/server:latest
d405c4f6e9f8a4859a8dc23af3e86a0b28ed5d502d48e0343455b6cfee04c775
Note: Unnecessary use of -X or --request, POST is already inferred.
* timeout on name lookup is not supported
*   Trying 172.17.0.2...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:20 --:--:--     0
* connect to 172.17.0.2 port 9595 failed: Timed out
* Failed to connect to 172.17.0.2 port 9595: Timed out
* Closing connection 0
curl: (7) Failed to connect to 172.17.0.2 port 9595: Timed out
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python36\lib\json\__init__.py", line 299, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "C:\Python36\lib\json\__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "C:\Python36\lib\json\decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Python36\lib\json\decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 2 column 1 (char 1)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python36\lib\json\__init__.py", line 299, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "C:\Python36\lib\json\__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "C:\Python36\lib\json\decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Python36\lib\json\decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 2 column 1 (char 1)
FATA[0000] Get /v2-beta/projects/1a5/schemas/schemas: unsupported protocol scheme ""
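
The JSONDecodeError above follows from the curl timeout: the response body was empty, so json.load had nothing to parse. As a sketch, a minimal guard would check the curl exit status before parsing (the endpoint shown is illustrative):

## only parse the response when curl actually succeeded
if RESPONSE=$(curl -sf "$RANCHER_URL/v2-beta/apikeys"); then
    echo "$RESPONSE" | python -c "import json, sys; print(json.load(sys.stdin)['id'])"
else
    echo 'rancher server not reachable yet' >&2
fi
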
jeff1evesque commented 6 years ago

After an initial pass with our install_rancher script, we have the following error traceback:

$ ./install_rancher
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   622    0   622    0     0   2347      0 --:--:-- --:--:-- --:--:--  2658
100 3605k  100 3605k    0     0  5247k      0 --:--:-- --:--:-- --:--:-- 5247k
Archive:  rancher-windows-amd64-v0.6.7.zip
  inflating: rancher.exe
Unable to find image 'rancher/server:latest' locally
latest: Pulling from rancher/server
bae382666908: Pull complete
29ede3c02ff2: Pull complete
da4e69f33106: Pull complete
8d43e5f5d27f: Pull complete
b0de1abb17d6: Pull complete
422f47db4517: Pull complete
79d37de643ce: Pull complete
69d13e08a4fe: Pull complete
2ddfd3c6a2b7: Pull complete
bc433fed3823: Pull complete
b82e188df556: Pull complete
dae2802428a4: Pull complete
a6247572ea3c: Pull complete
884c916ebae4: Pull complete
85517c9c5365: Pull complete
02dded9fe690: Pull complete
fd9f433c3bc6: Pull complete
44d91b3fea45: Pull complete
0d463387dfeb: Pull complete
60753c4d26f0: Pull complete
a003892966fe: Pull complete
Digest: sha256:42441f0128fae4d72d51f92de2049392427d462356282a46f28434332967c7e4
Status: Downloaded newer image for rancher/server:latest
a5557ca03637550f3e168a9e0248b8471896b92823b4430702ab0ddd1531acc5
Note: Unnecessary use of -X or --request, POST is already inferred.
* timeout on name lookup is not supported
*   Trying fe80::dc80:445a:1eb0:5bdb...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
*   Trying 192.168.56.1...
* connect to fe80::dc80:445a:1eb0:5bdb port 8080 failed: Connection refused
*   Trying fe80::244f:c951:8eff:ad6b...
* connect to 192.168.56.1 port 8080 failed: Connection refused
*   Trying 192.168.99.1...
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
* connect to fe80::244f:c951:8eff:ad6b port 8080 failed: Connection refused
*   Trying fe80::91a1:b263:6265:4aa6...
* connect to 192.168.99.1 port 8080 failed: Connection refused
*   Trying 207.142.80.10...
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
* connect to fe80::91a1:b263:6265:4aa6 port 8080 failed: Connection refused
*   Trying fe80::833:f19:3071:aff5...
* connect to 207.142.80.10 port 8080 failed: Connection refused
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0
* connect to fe80::833:f19:3071:aff5 port 8080 failed: Connection refused
*   Trying 2002:cf8e:500a::cf8e:500a...
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
* connect to 2002:cf8e:500a::cf8e:500a port 8080 failed: Connection refused
*   Trying 2001:0:5cf2:8c15:833:f19:3071:aff5...
  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0
* connect to 2001:0:5cf2:8c15:833:f19:3071:aff5 port 8080 failed: Connection refused
* Failed to connect to  port 8080: Connection refused
* Closing connection 0
curl: (7) Failed to connect to  port 8080: Connection refused
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python36\lib\json\__init__.py", line 299, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "C:\Python36\lib\json\__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "C:\Python36\lib\json\decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Python36\lib\json\decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 2 column 1 (char 1)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python36\lib\json\__init__.py", line 299, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "C:\Python36\lib\json\__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "C:\Python36\lib\json\decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Python36\lib\json\decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 2 column 1 (char 1)
FATA[0000] Get http://:8080/v2-beta/projects/1a5/schemas/schemas: dial tcp :8080: connectex: The requested address is not valid in its context.

The error indicates that the rancher server had not started, causing the corresponding curl to fail. At the same time, the browser does not render the http://192.168.99.100:8080 address. However, if we wait 10 minutes, the rancher server starts, and the browser is able to load the latter address. Similarly, if we rerun our install_rancher script, it seems to run without error:

$ ./install_rancher
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   622    0   622    0     0   2101      0 --:--:-- --:--:-- --:--:--  2347
100 3605k  100 3605k    0     0  4918k      0 --:--:-- --:--:-- --:--:-- 4918k
Archive:  rancher-windows-amd64-v0.6.7.zip
  inflating: rancher.exe
Error response from daemon: Cannot kill container: rancher: No such container: rancher
Error: No such container: rancher

Windows docker implementation, requires DockerToolbox, which
creates a 'default' container to manage docker. To proceed,
rancher configurations will reflect port '8080'.

C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: Conflict. The container name "/default" is already in use by container "484961ad9611685d18117cbaf219c9bdc898af2e523cf834b1828f644ac5c2cd". You have to remove (or rename) that container to be able to reuse that name.
See 'C:\Program Files\Docker Toolbox\docker.exe run --help'.
Note: Unnecessary use of -X or --request, POST is already inferred.
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 192.168.99.100 (192.168.99.100) port 8080 (#0)
> POST //v2-beta/apikeys HTTP/1.1
> Host: 192.168.99.100:8080
> User-Agent: curl/7.49.1
> Accept: application/json
> Content-Type: application/json
> Content-Length: 236
>
} [236 bytes data]
* upload completely sent off: 236 out of 236 bytes
< HTTP/1.1 201 Created
< Content-Type: application/json; charset=utf-8
< Date: Tue, 27 Feb 2018 04:39:11 GMT
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Server: Jetty(9.2.11.v20150529)
< Set-Cookie: PL=rancher;Path=/
< X-Api-Account-Id: 1a1
< X-Api-Client-Ip: 192.168.99.1
< X-Api-Schemas: http://192.168.99.100:8080/v2-beta/schemas
< X-Api-User-Id: 1a1
< X-Rancher-Version: v1.6.14
< Content-Length: 1106
<
{ [1106 bytes data]
100  1342  100  1106  100   236   5882   1255 --:--:-- --:--:-- --:--:--  5882
* Connection #0 to host 192.168.99.100 left intact
1st5
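
One way to handle the slow server start described above, sketched here against an assumed /ping endpoint, is to poll rancher before issuing any API requests:

## poll rancher until the API answers, instead of curling immediately
until curl -sf 'http://192.168.99.100:8080/ping' > /dev/null; do
    echo 'rancher server has not started: retrying in 30s'
    sleep 30
done
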
jeff1evesque commented 6 years ago

520ada8: our rancher docker containers will be predicated on containers from dockerhub, rather than manually building each container from a local dockerfile. This should speed up development builds. However, our CI tests will still be able to test the latest builds, if any of the dockerfiles are changed.

jeff1evesque commented 6 years ago

066e5b7: the README.md was a bit too verbose regarding the installation of rancher. Instead, we migrated the corresponding content into its own designated documentation, which, when compiled, is autogenerated at https://jeff1evesque.github.io/machine-learning.docs. As we continue developing this issue, the corresponding documentation can be adjusted respectively.

jeff1evesque commented 6 years ago

The following steps will be required to proceed:

It seems we have forgotten to find the rancher cli command to add the necessary host(s).

jeff1evesque commented 6 years ago

We ran our install_rancher script:

$ ./install_rancher
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   622    0   622    0     0   2213      0 --:--:-- --:--:-- --:--:--  2347
100 3605k  100 3605k    0     0  4719k      0 --:--:-- --:--:-- --:--:-- 4719k
Archive:  rancher-windows-amd64-v0.6.7.zip
  inflating: rancher.exe

Windows docker implementation, requires DockerToolbox, which
creates a 'default' container to manage docker. To proceed,
rancher configurations will reflect port '8080'.

Unable to find image 'rancher/server:stable' locally
stable: Pulling from rancher/server
bae382666908: Pull complete
29ede3c02ff2: Pull complete
da4e69f33106: Pull complete
8d43e5f5d27f: Pull complete
b0de1abb17d6: Pull complete
422f47db4517: Pull complete
79d37de643ce: Pull complete
69d13e08a4fe: Pull complete
2ddfd3c6a2b7: Pull complete
bc433fed3823: Pull complete
b82e188df556: Pull complete
dae2802428a4: Pull complete
a6247572ea3c: Pull complete
884c916ebae4: Pull complete
85517c9c5365: Pull complete
02dded9fe690: Pull complete
fd9f433c3bc6: Pull complete
44d91b3fea45: Pull complete
0d463387dfeb: Pull complete
60753c4d26f0: Pull complete
a003892966fe: Pull complete
Digest: sha256:42441f0128fae4d72d51f92de2049392427d462356282a46f28434332967c7e4
Status: Downloaded newer image for rancher/server:stable
7d632fef55aa0cc627bce3fa85006cbda8974a976dbf0d17590e2dbad2ccd5e3
Note: Unnecessary use of -X or --request, POST is already inferred.
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* connect to 192.168.99.100 port 8080 failed: Connection refused
* Failed to connect to 192.168.99.100 port 8080: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.99.100 port 8080: Connection refused

Rancher server has not started. Attempting to obtain
access + secret key, from rancher in 30s.

Note: Unnecessary use of -X or --request, POST is already inferred.
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* connect to 192.168.99.100 port 8080 failed: Connection refused
* Failed to connect to 192.168.99.100 port 8080: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.99.100 port 8080: Connection refused

Rancher server has not started. Attempting to obtain
access + secret key, from rancher in 30s.

Note: Unnecessary use of -X or --request, POST is already inferred.
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* connect to 192.168.99.100 port 8080 failed: Connection refused
* Failed to connect to 192.168.99.100 port 8080: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.99.100 port 8080: Connection refused

Rancher server has not started. Attempting to obtain
access + secret key, from rancher in 30s.

Note: Unnecessary use of -X or --request, POST is already inferred.
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 192.168.99.100 (192.168.99.100) port 8080 (#0)
> POST //v2-beta/apikeys HTTP/1.1
> Host: 192.168.99.100:8080
> User-Agent: curl/7.49.1
> Accept: application/json
> Content-Type: application/json
> Content-Length: 272
>
} [272 bytes data]
* upload completely sent off: 272 out of 272 bytes
< HTTP/1.1 201 Created
< Content-Type: application/json; charset=utf-8
< Date: Thu, 08 Mar 2018 03:03:38 GMT
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Server: Jetty(9.2.11.v20150529)
< Set-Cookie: PL=rancher;Path=/
< X-Api-Account-Id: 1a1
< X-Api-Client-Ip: 192.168.99.1
< X-Api-Schemas: http://192.168.99.100:8080/v2-beta/schemas
< X-Api-User-Id: 1a1
< X-Rancher-Version: v1.6.14
< Content-Length: 1106
<
{ [1106 bytes data]
100  1378  100  1106  100   272   4157   1022 --:--:-- --:--:-- --:--:--  4157
* Connection #0 to host 192.168.99.100 left intact
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 8080 (#0)
> POST //v2-beta/projects/1a5/registrationTokens HTTP/1.1
> Host: 192.168.99.100:8080
> User-Agent: curl/7.49.1
> Accept: application/json
> Content-Type: application/json
>
< HTTP/1.1 201 Created
< Content-Type: application/json; charset=utf-8
< Date: Thu, 08 Mar 2018 03:03:38 GMT
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Server: Jetty(9.2.11.v20150529)
< Set-Cookie: PL=rancher;Path=/
< X-Api-Account-Id: 1a5
< X-Api-Client-Ip: 192.168.99.1
< X-Api-Schemas: http://192.168.99.100:8080/v2-beta/projects/1a5/schemas
< X-Api-User-Id: 1a1
< X-Rancher-Version: v1.6.14
< Content-Length: 1168
<
{"id":"1c3","type":"registrationToken","links":{"self":"http:\/\/192.168.99.100:
8080\/v2-beta\/projects\/1a5\/registrationtokens\/1c3","account":"http:\/\/192.1
68.99.100:8080\/v2-beta\/projects\/1a5\/registrationtokens\/1c3\/account","image
s":"http:\/\/192.168.99.100:8080\/v2-beta\/projects\/1a5\/registrationtokens\/1c
3\/images","instances":"http:\/\/192.168.99.100:8080\/v2-beta\/projects\/1a5\/re
gistrationtokens\/1c3\/instances"},"actions":{"activate":"http:\/\/192.168.99.10
0:8080\/v2-beta\/projects\/1a5\/registrationtokens\/1c3\/?action=activate","remo
ve":"http:\/\/192.168.99.100:8080\/v2-beta\/projects\/1a5\/registrationtokens\/1
c3\/?action=remove","deactivate":"http:\/\/192.168.99.100:8080\/v2-beta\/project
s\/1a5\/registrationtokens\/1c3\/?action=deactivate"},"baseType":"credential","n
ame":null,"state":"registering","accountId":"1a5","command":null,"created":"2018
-03-08T03:03:38Z","createdTS":1520478218000,"description":null,"image":null,"kin
d":"registrationToken","registrationUrl":null,"removed":null,"token":null,"trans
itioning":"yes","transitioningMessage":"In Progress","transitioningProgress":nul
l,"uuid":"18c18df5-bcf3-47ff-abba-b2123bed86a9"}* Connection #0 to host 192.168.
99.100 left intact
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 192.168.99.100 (192.168.99.100) port 8080 (#0)
> GET //v2-beta/projects/1a5/registrationTokens HTTP/1.1
> Host: 192.168.99.100:8080
> User-Agent: curl/7.49.1
> Accept: application/json
> Content-Type: application/json
>
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=utf-8
< Date: Thu, 08 Mar 2018 03:03:39 GMT
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Server: Jetty(9.2.11.v20150529)
< Set-Cookie: PL=rancher;Path=/
< Vary: Accept-Encoding, User-Agent
< X-Api-Account-Id: 1a5
< X-Api-Client-Ip: 192.168.99.1
< X-Api-Schemas: http://192.168.99.100:8080/v2-beta/projects/1a5/schemas
< X-Api-User-Id: 1a1
< X-Rancher-Version: v1.6.14
< Transfer-Encoding: chunked
<
{ [2476 bytes data]
100  3739    0  3739    0     0  21865      0 --:--:-- --:--:-- --:--:-- 23967
* Connection #0 to host 192.168.99.100 left intact
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 192.168.99.100 (192.168.99.100) port 8080 (#0)
> GET //v2-beta/projects/1a5/registrationTokens HTTP/1.1
> Host: 192.168.99.100:8080
> User-Agent: curl/7.49.1
> Accept: application/json
> Content-Type: application/json
>
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=utf-8
< Date: Thu, 08 Mar 2018 03:03:41 GMT
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Server: Jetty(9.2.11.v20150529)
< Set-Cookie: PL=rancher;Path=/
< Vary: Accept-Encoding, User-Agent
< X-Api-Account-Id: 1a5
< X-Api-Client-Ip: 192.168.99.1
< X-Api-Schemas: http://192.168.99.100:8080/v2-beta/projects/1a5/schemas
< X-Api-User-Id: 1a1
< X-Rancher-Version: v1.6.14
< Transfer-Encoding: chunked
<
{ [3627 bytes data]
100  3615    0  3615    0     0  28920      0 --:--:-- --:--:-- --:--:-- 28920
* Connection #0 to host 192.168.99.100 left intact
Unable to find image 'rancher/agent:v1.2.9' locally
v1.2.9: Pulling from rancher/agent
b3e1c725a85f: Pull complete
6a710864a9fc: Pull complete
d0ac3b234321: Pull complete
87f567b5cf58: Pull complete
063e24b217c4: Pull complete
d0a3f58caef0: Pull complete
16914729cfd3: Pull complete
dc5c21984c5b: Pull complete
d7e8f9784b20: Pull complete
Digest: sha256:c21255ac4d94ffbc7b523f870f2aea5189b68fa3d642800adb4774aab4748e66
Status: Downloaded newer image for rancher/agent:v1.2.9
1st5

However, even though our MLStack correctly has 5 services:

services

Each of the 5 services lacks any containers:

no-container

Furthermore, towards the end of our install_rancher, we attempted to define a rancher host:

        ## register host with rancher
        docker run \
            --rm \
            --privileged \
            -v /var/run/docker.sock:/var/run/docker.sock \
            -v /var/lib/rancher:/var/lib/rancher \
            rancher/agent:v1.2.9 \
            "$TOKEN"

However, no hosts have been created, as indicated at the top of each of the above screenshots:

add-host

jeff1evesque commented 6 years ago

The following can be reviewed to help facilitate the development of install_rancher:

jeff1evesque commented 6 years ago

After executing our install_rancher, we notice the following containers:

$ docker ps -a
CONTAINER ID        IMAGE                   COMMAND                  CREATED                  STATUS                         PORTS                              NAMES
9c9447cff621        rancher/agent:v1.2.9    "/run.sh run"             28 seconds ago      Restarting (1) 8 seconds ago                                      rancher-agent
35d948a9691c        rancher/server:stable   "/usr/bin/entry /usr"   About an hour ago   Up About an hour               3306/tcp, 0.0.0.0:8080->8080/tcp   default
jeff1evesque commented 6 years ago

We temporarily added the following echo in our install_rancher:

...
    if [ "$TOKEN" ]; then
        echo '========================================'
        echo "$RANCHER_URL/v1/scripts/$TOKEN"
        echo '========================================'

        ## register host with rancher
        docker run \
            --rm \
            --privileged \
            -v /var/run/docker.sock:/var/run/docker.sock \
            -v /var/lib/rancher:/var/lib/rancher \
            rancher/agent:v1.2.9 \
            "$RANCHER_URL/v1/scripts/$TOKEN"
...

Then, we attempted a fresh rancher install:

$ ./install_rancher
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   622    0   622    0     0   2213      0 --:--:-- --:--:-- --:--:--  2347
100 3605k  100 3605k    0     0  4529k      0 --:--:-- --:--:-- --:--:-- 4529k
Archive:  rancher-windows-amd64-v0.6.7.zip
  inflating: rancher.exe

Windows docker implementation, requires DockerToolbox, which
creates a 'default' container to manage docker. To proceed,
rancher configurations will reflect port '8080'.

Unable to find image 'rancher/server:stable' locally
stable: Pulling from rancher/server
bae382666908: Pull complete
29ede3c02ff2: Pull complete
da4e69f33106: Pull complete
8d43e5f5d27f: Pull complete
b0de1abb17d6: Pull complete
422f47db4517: Pull complete
79d37de643ce: Pull complete
69d13e08a4fe: Pull complete
2ddfd3c6a2b7: Pull complete
bc433fed3823: Pull complete
b82e188df556: Pull complete
dae2802428a4: Pull complete
a6247572ea3c: Pull complete
884c916ebae4: Pull complete
85517c9c5365: Pull complete
02dded9fe690: Pull complete
fd9f433c3bc6: Pull complete
44d91b3fea45: Pull complete
0d463387dfeb: Pull complete
60753c4d26f0: Pull complete
a003892966fe: Pull complete
Digest: sha256:42441f0128fae4d72d51f92de2049392427d462356282a46f28434332967c7e4
Status: Downloaded newer image for rancher/server:stable
ca064677260f441d8231db7c6ce25f773fd629ed9147edc731bbac24f6da947a
Note: Unnecessary use of -X or --request, POST is already inferred.
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* connect to 192.168.99.100 port 8080 failed: Connection refused
* Failed to connect to 192.168.99.100 port 8080: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.99.100 port 8080: Connection refused

Rancher server has not started. Attempting to obtain
access + secret key, from rancher in 30s.

Note: Unnecessary use of -X or --request, POST is already inferred.
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* connect to 192.168.99.100 port 8080 failed: Connection refused
* Failed to connect to 192.168.99.100 port 8080: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.99.100 port 8080: Connection refused

Rancher server has not started. Attempting to obtain
access + secret key, from rancher in 30s.

Note: Unnecessary use of -X or --request, POST is already inferred.
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* connect to 192.168.99.100 port 8080 failed: Connection refused
* Failed to connect to 192.168.99.100 port 8080: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.99.100 port 8080: Connection refused

Rancher server has not started. Attempting to obtain
access + secret key, from rancher in 30s.

Note: Unnecessary use of -X or --request, POST is already inferred.
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 192.168.99.100 (192.168.99.100) port 8080 (#0)
> POST /v2-beta/apikeys HTTP/1.1
> Host: 192.168.99.100:8080
> User-Agent: curl/7.49.1
> Accept: application/json
> Content-Type: application/json
> Content-Length: 272
>
} [272 bytes data]
* upload completely sent off: 272 out of 272 bytes
100   272    0     0  100   272      0   1454 --:--:-- --:--:-- --:--:--  1454
< HTTP/1.1 201 Created
< Content-Type: application/json; charset=utf-8
< Date: Sat, 10 Mar 2018 16:10:53 GMT
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Server: Jetty(9.2.11.v20150529)
< Set-Cookie: PL=rancher;Path=/
< X-Api-Account-Id: 1a1
< X-Api-Client-Ip: 192.168.99.1
< X-Api-Schemas: http://192.168.99.100:8080/v2-beta/schemas
< X-Api-User-Id: 1a1
< X-Rancher-Version: v1.6.14
< Content-Length: 1106
<
{ [1106 bytes data]
100  1378  100  1106  100   272   1967    483 --:--:-- --:--:-- --:--:--  1967
* Connection #0 to host 192.168.99.100 left intact
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 8080 (#0)
> POST /v2-beta/projects/1a5/registrationTokens HTTP/1.1
> Host: 192.168.99.100:8080
> User-Agent: curl/7.49.1
> Accept: application/json
> Content-Type: application/json
>
< HTTP/1.1 201 Created
< Content-Type: application/json; charset=utf-8
< Date: Sat, 10 Mar 2018 16:10:53 GMT
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Server: Jetty(9.2.11.v20150529)
< Set-Cookie: PL=rancher;Path=/
< X-Api-Account-Id: 1a5
< X-Api-Client-Ip: 192.168.99.1
< X-Api-Schemas: http://192.168.99.100:8080/v2-beta/projects/1a5/schemas
< X-Api-User-Id: 1a1
< X-Rancher-Version: v1.6.14
< Content-Length: 1168
<
{"id":"1c3","type":"registrationToken","links":{"self":"http:\/\/192.168.99.100:
8080\/v2-beta\/projects\/1a5\/registrationtokens\/1c3","account":"http:\/\/192.1
68.99.100:8080\/v2-beta\/projects\/1a5\/registrationtokens\/1c3\/account","image
s":"http:\/\/192.168.99.100:8080\/v2-beta\/projects\/1a5\/registrationtokens\/1c
3\/images","instances":"http:\/\/192.168.99.100:8080\/v2-beta\/projects\/1a5\/re
gistrationtokens\/1c3\/instances"},"actions":{"activate":"http:\/\/192.168.99.10
0:8080\/v2-beta\/projects\/1a5\/registrationtokens\/1c3\/?action=activate","remo
ve":"http:\/\/192.168.99.100:8080\/v2-beta\/projects\/1a5\/registrationtokens\/1
c3\/?action=remove","deactivate":"http:\/\/192.168.99.100:8080\/v2-beta\/project
s\/1a5\/registrationtokens\/1c3\/?action=deactivate"},"baseType":"credential","n
ame":null,"state":"registering","accountId":"1a5","command":null,"created":"2018
-03-10T16:10:53Z","createdTS":1520698253000,"description":null,"image":null,"kin
d":"registrationToken","registrationUrl":null,"removed":null,"token":null,"trans
itioning":"yes","transitioningMessage":"In Progress","transitioningProgress":nul
l,"uuid":"80746b51-cbe7-401a-9a91-b1406c45c4e8"}* Connection #0 to host 192.168.
99.100 left intact
* timeout on name lookup is not supported
*   Trying 192.168.99.100...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 192.168.99.100 (192.168.99.100) port 8080 (#0)
> GET /v2-beta/projects/1a5/registrationTokens HTTP/1.1
> Host: 192.168.99.100:8080
> User-Agent: curl/7.49.1
> Accept: application/json
> Content-Type: application/json
>
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=utf-8
< Date: Sat, 10 Mar 2018 16:10:54 GMT
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Server: Jetty(9.2.11.v20150529)
< Set-Cookie: PL=rancher;Path=/
< Vary: Accept-Encoding, User-Agent
< X-Api-Account-Id: 1a5
< X-Api-Client-Ip: 192.168.99.1
< X-Api-Schemas: http://192.168.99.100:8080/v2-beta/projects/1a5/schemas
< X-Api-User-Id: 1a1
< X-Rancher-Version: v1.6.14
< Transfer-Encoding: chunked
<
{ [3627 bytes data]
100  3615    0  3615    0     0  21017      0 --:--:-- --:--:-- --:--:-- 21017
* Connection #0 to host 192.168.99.100 left intact
========================================
http://192.168.99.100:8080/v1/scripts/158B4FACE558A3EC2740:1514678400000:iX22YoCOQBXC5K6f8jgCoZVBeYw
========================================
Unable to find image 'rancher/agent:v1.2.9' locally
v1.2.9: Pulling from rancher/agent
b3e1c725a85f: Pull complete
6a710864a9fc: Pull complete
d0ac3b234321: Pull complete
87f567b5cf58: Pull complete
063e24b217c4: Pull complete
d0a3f58caef0: Pull complete
16914729cfd3: Pull complete
dc5c21984c5b: Pull complete
d7e8f9784b20: Pull complete
Digest: sha256:c21255ac4d94ffbc7b523f870f2aea5189b68fa3d642800adb4774aab4748e66
Status: Downloaded newer image for rancher/agent:v1.2.9

INFO: Running Agent Registration Process, CATTLE_URL=http://192.168.99.100:8080/v1
INFO: Attempting to connect to: http://192.168.99.100:8080/v1
INFO: http://192.168.99.100:8080/v1 is accessible
INFO: Inspecting host capabilities
INFO: Boot2Docker: true
INFO: Host writable: false
INFO: Token: xxxxxxxx
INFO: Running registration
INFO: Printing Environment
INFO: ENV: CATTLE_ACCESS_KEY=1E99787AEF20947CC54A
INFO: ENV: CATTLE_HOME=/var/lib/cattle
INFO: ENV: CATTLE_REGISTRATION_ACCESS_KEY=registrationToken
INFO: ENV: CATTLE_REGISTRATION_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_URL=http://192.168.99.100:8080/v1
INFO: ENV: DETECTED_CATTLE_AGENT_IP=172.17.0.1
INFO: ENV: RANCHER_AGENT_IMAGE=rancher/agent:v1.2.9
INFO: Launched Rancher Agent: c85e5afd27749592d292ef8f7a7974ea1101089729bc9022c8f3c311d5f3254e
1st5

We notice the following containers were created and are running:

$ docker ps -a
CONTAINER ID        IMAGE                   COMMAND                  CREATED         STATUS                          PORTS                              NAMES
c85e5afd2774        rancher/agent:v1.2.9    "/run.sh run"            2 minutes ago   Restarting (1) 24 seconds ago                                      rancher-agent
ca064677260f        rancher/server:stable   "/usr/bin/entry /usr"    4 minutes ago   Up 4 minutes                    3306/tcp, 0.0.0.0:8080->8080/tcp   default

However, 3-5 minutes after running the above script, we still notice rancher has not detected a host:

no-host

Note: the following rancher command in our install_rancher:

...
        ## register host with rancher
        docker run \
            --rm \
            --privileged \
            -v /var/run/docker.sock:/var/run/docker.sock \
            -v /var/lib/rancher:/var/lib/rancher \
            rancher/agent:v1.2.9 \
            "$RANCHER_URL/v1/scripts/$TOKEN"
...

was taken from the rancher web-interface:

rancher-command

jeff1evesque commented 6 years ago

We attempted to acquire more information via docker logs rancher-agent.

jeff1evesque commented 6 years ago

Per rancher's related boot2docker issue, I attempted to install an older boot2docker:

$ docker-machine create -d virtualbox --virtualbox-boot2docker-url=https://github.com/boot2docker/boot2docker/releases/download/v17.09.1-ce/boot2docker.iso rancher
Running pre-create checks...
(rancher) Boot2Docker URL was explicitly set to "https://github.com/boot2docker/boot2docker/releases/download/v17.09.1-ce/boot2docker.iso" at create time, so Docker Machine cannot upgrade this machine to the latest version.
Creating machine...
(rancher) Boot2Docker URL was explicitly set to "https://github.com/boot2docker/boot2docker/releases/download/v17.09.1-ce/boot2docker.iso" at create time, so Docker Machine cannot upgrade this machine to the latest version.
(rancher) Downloading C:\Users\jeff1evesque\.docker\machine\cache\boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v17.09.1-ce/boot2docker.iso...
(rancher) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
(rancher) Creating VirtualBox VM...
(rancher) Creating SSH key...
(rancher) Starting the VM...
(rancher) Check network to re-create if needed...
(rancher) Windows might ask for the permission to configure a dhcp server. Sometimes, such confirmation window is minimized in the taskbar.
(rancher) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: C:\Program Files\Docker Toolbox\docker-machine.exe env rancher
$ docker ps -a
CONTAINER ID        IMAGE                   COMMAND                  CREATED         STATUS                          PORTS                              NAMES
9daf8973b4a0        rancher/agent:v1.2.9    "/run.sh run"            About an hour ago   Restarting (1) 45 seconds ago                                      rancher-agent
ca064677260f        rancher/server:stable   "/usr/bin/entry /usr"   3 hours ago        Up 3 hours                      3306/tcp, 0.0.0.0:8080->8080/tcp   default

Then, we reran a portion of install_rancher, by manually running the suggested docker command to install a respective rancher host:

sudo docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.9 http://192.168.99.100:8080/v1/scripts/158B4FACE558A3EC2740:1514678400000:iX22YoCOQBXC5K6f8jgCoZVBeYw

However, after 20-30 minutes, the browser still doesn't indicate that a rancher host was created:

no-host

jeff1evesque commented 6 years ago

Now, I attempted to run the following:

$ docker-machine.exe env rancher
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_CERT_PATH="C:\Users\jeff1evesque\.docker\machine\machines\rancher"
export DOCKER_MACHINE_NAME="rancher"
export COMPOSE_CONVERT_WINDOWS_PATHS="true"
# Run this command to configure your shell:
# eval $("C:\Program Files\Docker Toolbox\docker-machine.exe" env rancher)
$ eval $("C:\Program Files\Docker Toolbox\docker-machine.exe" env rancher)

Followed by attempting to add the rancher host manually:

$ docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.9 http://192.168.99.100:8080/v1/scripts/158B4FACE558A3EC2740:1514678400000:iX22YoCOQBXC5K6f8jgCoZVBeYw
Unable to find image 'rancher/agent:v1.2.9' locally
v1.2.9: Pulling from rancher/agent
b3e1c725a85f: Pull complete
6a710864a9fc: Pull complete
d0ac3b234321: Pull complete
87f567b5cf58: Pull complete
063e24b217c4: Pull complete
d0a3f58caef0: Pull complete
16914729cfd3: Pull complete
dc5c21984c5b: Pull complete
d7e8f9784b20: Pull complete
Digest: sha256:c21255ac4d94ffbc7b523f870f2aea5189b68fa3d642800adb4774aab4748e66
Status: Downloaded newer image for rancher/agent:v1.2.9

INFO: Running Agent Registration Process, CATTLE_URL=http://192.168.99.100:8080/v1
INFO: Attempting to connect to: http://192.168.99.100:8080/v1
INFO: http://192.168.99.100:8080/v1 is accessible
INFO: Inspecting host capabilities
INFO: Boot2Docker: true
INFO: Host writable: false
INFO: Token: xxxxxxxx
INFO: Running registration
INFO: Printing Environment
INFO: ENV: CATTLE_ACCESS_KEY=67D1F4B27FE8801C4CC4
INFO: ENV: CATTLE_HOME=/var/lib/cattle
INFO: ENV: CATTLE_REGISTRATION_ACCESS_KEY=registrationToken
INFO: ENV: CATTLE_REGISTRATION_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_URL=http://192.168.99.100:8080/v1
INFO: ENV: DETECTED_CATTLE_AGENT_IP=192.168.99.101
INFO: ENV: RANCHER_AGENT_IMAGE=rancher/agent:v1.2.9
INFO: Launched Rancher Agent: 9f6122184039ec5dfd77251e0a5a6ed90cf83cc529561a1f68ef1003ed8138a2

We see the following containers running:

$ docker ps -a
CONTAINER ID        IMAGE                  COMMAND             CREATED    STATUS              PORTS               NAMES
9f6122184039        rancher/agent:v1.2.9   "/run.sh run"       About a minute ago   Up About a minute                       rancher-agent

Additionally, our browser (~2 minutes later) indicates rancher is configured with a host:

base-1 mariadb-2

For now, we'll need to transition the above manual commands to our install_rancher script.

jeff1evesque commented 6 years ago

Additionally, after running the above manual commands, our stack now displays containers:

stack

containers

jeff1evesque commented 6 years ago

3fd9b8a: our current install_rancher now automates the above manual commands. More specifically, a rancher host, containing a stack of multiple docker containers (each pulled from dockerhub), is created. However, our containers seem not to be in a satisfactory state (similar to above). So, we'll need to fix our respective *.dockerfiles, and push them to dockerhub, to ensure a future rancher installation provisions the application with functional containers. Then, we'll need to know the respective IP address for our local application via the web browser.

jeff1evesque commented 6 years ago

f4581bf: we implement interpolated variables in an attempt to use an absolute path, per #rancher IRC:

2:02:37 PM RancherBot3: Rancher is a multi-host system. The client (UI or CLI) talks to the API, which tells the agent to do things, which tells docker on the hosts to do things. There is no shell, there is no "current directory", and the hosts don't know what directory you are sitting in on your laptop. A mount relative to the current directory (./) has no meaning.
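
A sketch of the corresponding interpolation, assuming install_rancher exports CWD before invoking rancher-compose, so the compose file's ${CWD} volume paths resolve to absolute paths:

## export an absolute path for the compose file to interpolate, since
## rancher has no notion of a current directory
export CWD=$(pwd)
rancher-compose -f docker-compose.development.yml up -d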

jeff1evesque commented 6 years ago

Currently, only the mariadb container is active:

containers

Note: our host rebooted due to updates. As a result, rancher is not running immediately off a fresh install, but rather after a host reboot. However, rancher produces very similar results to a fresh install (the previous attempt indicates all containers were stopped).

Note: the output of docker logs default > docker-logs--default.txt can be fully reviewed within the corresponding docker-logs--default.txt.

jeff1evesque commented 6 years ago

We temporarily removed the volumes directives from our docker-compose.development.yml:

$ git diff docker-compose.development.yml
diff --git a/docker-compose.development.yml b/docker-compose.development.yml
index d782681..20cd091 100644
--- a/docker-compose.development.yml
+++ b/docker-compose.development.yml
@@ -3,8 +3,6 @@ services:
   mariadb:
     image: jeff1evesque/ml-mariadb:0.7
     network_mode: bridge
-    volumes:
-    - "${CWD}/docker_storage_mariadb:/var/lib/mariadb"
   webserver:
     image: jeff1evesque/ml-webserver:0.7
     network_mode: bridge
@@ -14,13 +12,9 @@ services:
   mongodb:
     image: jeff1evesque/ml-mongodb:0.7
     network_mode: bridge
-    volumes:
-    - "${CWD}/docker_storage_mongodb:/var/lib/mongodb"
   redis:
     image: jeff1evesque/ml-redis:0.7
     network_mode: bridge
-    volumes:
-    - "${CWD}/docker_storage_redis:/var/lib/redis"
   base:
     image: jeff1evesque/ml-base:0.7
     network_mode: bridge

Then, we completely removed rancher + docker, followed by rerunning our install_rancher. We noticed that our redis container started, and remained in an Active state. However, the other containers continuously cycled from Unhealthy: In Progress, to either Updating Active: (Need to restart service reconcile), or Updating Active: In progress, then Active, and finally back to Unhealthy, where the cycle repeats.

containers

jeff1evesque commented 6 years ago

After removing rancher + docker, then rerunning install_rancher, we experience the same behavior as above.

Note: the corresponding docker-logs--default.txt can be further reviewed.

jeff1evesque commented 6 years ago

We manually load the following custom docker-compose.yml into rancher:

version: '2'
services:
  mariadb:
    labels:
      io.rancher.container.start_once: 'true'
    image: jeff1evesque/ml-mariadb:0.7
    network_mode: bridge
  mariadb2:
    labels:
      io.rancher.container.start_once: 'true'
    image: mariadb
    network_mode: bridge
    ports:
    - 5000:8080/tcp
    - 6000:9090/tcp
  redis:
    labels:
      io.rancher.container.start_once: 'true'
    image: jeff1evesque/ml-redis:0.7
    network_mode: bridge
  base:
    labels:
      io.rancher.container.start_once: 'true'
    image: jeff1evesque/ml-base:0.7
    network_mode: bridge

manual-load

We notice that our mariadb2 contrib container is loaded into rancher:

mariadb-2

Then, about 5-10 minutes later, the mariadb2 container status changed to stopped:

started-once

jeff1evesque commented 6 years ago

Our earlier attempts to update our containers were predicated on the following syntax:

sudo docker build -f default.dockerfile -t ml-base .
sudo docker run --name base -d ml-base
docker commit -m "replace 'ml-default' with 'ml-base'" -a 'jeff1evesque' base jeff1evesque/ml-base:0.7
docker login
docker push jeff1evesque/ml-base

We more than likely implemented the following variation when committing the respective changes:

docker commit -m "xxxx" -a 'jeff1evesque' base jeff1evesque/ml-yyyy:0.7

Note: the above base should mirror (i.e. be replaced by) exactly the yyyy value.


The following is more appropriate syntax when updating our current containers:

## update base
docker build -f base.dockerfile -t ml-base .
docker run --name base -d ml-base
docker commit -m "update 'base'" -a 'jeff1evesque' base jeff1evesque/ml-base:0.7

## update mongodb
docker build -f mongodb.dockerfile -t ml-mongodb .
docker run --name mongodb -d ml-mongodb
docker commit -m "update 'mongodb'" -a 'jeff1evesque' mongodb jeff1evesque/ml-mongodb:0.7

## update mariadb
docker build -f mariadb.dockerfile -t ml-mariadb .
docker run --name mariadb -d ml-mariadb
docker commit -m "update 'mariadb'" -a 'jeff1evesque' mariadb jeff1evesque/ml-mariadb:0.7

## update redis
docker build -f redis.dockerfile -t ml-redis .
docker run --name redis -d ml-redis
docker commit -m "update 'redis'" -a 'jeff1evesque' redis jeff1evesque/ml-redis:0.7

## update webserver
docker build -f webserver.dockerfile -t ml-webserver .
docker run --name webserver -d ml-webserver
docker commit -m "update 'webserver'" -a 'jeff1evesque' webserver jeff1evesque/ml-webserver:0.7

## push changes to dockerhub
docker login
docker push jeff1evesque/ml-base:0.7
docker push jeff1evesque/ml-mongodb:0.7
docker push jeff1evesque/ml-mariadb:0.7
docker push jeff1evesque/ml-redis:0.7
docker push jeff1evesque/ml-webserver:0.7
jeff1evesque commented 6 years ago

5368b9d: it is likely that our above dockerhub commits were not performed correctly. However, it is more likely that our puppet configurations for our new rancher development environment were not provisioned correctly, on the premise of our puppet hiera configurations. Additionally, we'll need to comb through, and determine whether to completely remove the vagrant_implement key, since it's become obsolete, with no purpose; or, to collapse the vagrant and docker puppet environments, which corresponds to collapsing the corresponding hiera/, and hiera/test/hiera/ directories. These changes would impose respective implications on the application's factory.py, since we may no longer need conditional flask attributes.

jeff1evesque commented 6 years ago

If we decide to collapse, and reduce our syntax into the docker puppet environment, we may need to check our current unit tests, and eliminate conditional hiera loading.

jeff1evesque commented 6 years ago

d6424d7: we need to figure out how to distinguish between the web and api nginx reverse proxies. More specifically, we need to define two reverse proxies. However, our docker implementation only provisions one proxy. So, we'll need to determine how to intelligently define an environment variable from docker, passed into puppet; or, we need to determine how to conditionally define an nginx instance (i.e. either a web or api instance), based on a host property (i.e. hostname).

Additionally, we need to adjust our hiera yaml definition to account for the following keys:

$hiera         = lookup('reverse_proxy')
$nginx_type    = $hiera['reverse_proxy_type']
[...]
$reverse_proxy = $hiera['reverse_proxy_web']
jeff1evesque commented 6 years ago

57d678a, 280dfcd, e941232: we attempted to answer the above statement regarding the web and api reverse proxies. Our attempt involved defining a yaml file for each of the reverse proxies, named after the corresponding docker host. This likely meant our docker-compose.development.yaml needed a respective hostname for each service listed. However, we made the assumption that the --name flag within our unit tests will define the corresponding hostname in the docker container. Otherwise, we'll have to add a corresponding -h flag, to indicate the hostname for each respective docker run service, as sketched below.
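
A sketch of the -h variant, reusing the webserver image from our compose file (the hostname value is an assumption):

## give the container an explicit hostname, so a hostname-named hiera
## yaml can resolve inside the docker container
docker run --name webserver -h webserver -d jeff1evesque/ml-webserver:0.7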

jeff1evesque commented 6 years ago

Since the builds will be predicated on docker containers maintained by rancher, the requirement to generate ssh keys is no longer needed. So, this information, along with the puppet documentation, can be removed.

jeff1evesque commented 6 years ago

After a fresh install_rancher execution, our mariadb, mongodb, and redis containers take about 3-5 minutes to attain an Active state, which they then retain for at least another 10 minutes (possibly longer):

[screenshot: partial-active]

However, our webserver has the following error logs:

Traceback (most recent call last):
  File "app.py", line 35, in <module>
    app = create_app()
  File "/var/machine-learning/factory.py", line 133, in create_app
    backupCount=5
  File "/usr/lib/python2.7/logging/handlers.py", line 117, in __init__
    BaseRotatingHandler.__init__(self, filename, mode, encoding, delay)
  File "/usr/lib/python2.7/logging/handlers.py", line 64, in __init__
    logging.FileHandler.__init__(self, filename, mode, encoding, delay)
  File "/usr/lib/python2.7/logging/__init__.py", line 903, in __init__
    StreamHandler.__init__(self, self._open())
  File "/usr/lib/python2.7/logging/__init__.py", line 928, in _open
    stream = open(self.baseFilename, self.mode)
IOError: [Errno 2] No such file or directory: '/vagrant/log/webserver/flask.log'
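
The handler path comes from the vagrant-era hiera values; until those are collapsed, a hypothetical stopgap would be to satisfy the path at run time:

## stopgap sketch: bind-mount a writable host directory at the vagrant-era
## log path, so the RotatingFileHandler can open flask.log at boot
docker run --name webserver -v /tmp/weblog:/vagrant/log/webserver -d ml-webserver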

Additionally, our nginx-xxx container has the following error log:

[screenshot: nginx-error]

jeff1evesque commented 6 years ago

Our current webserver build has the following error:

[...UNRELATED-TRACE-OMITTED...]
Start_sass/Exec[sass]/returns: /usr/lib/node_modules/node-sass/lib/binding.js:13
Start_sass/Exec[sass]/returns:       throw new Error(errors.unsupportedEnvironment());
Start_sass/Exec[sass]/returns:       ^
Start_sass/Exec[sass]/returns:
Start_sass/Exec[sass]/returns: Error: Node Sass does not yet support your current environment: Linux 64-bit with Unsupported runtime (57)
Start_sass/Exec[sass]/returns: For more information on which environments are supported please see:
Start_sass/Exec[sass]/returns: https://github.com/sass/node-sass/releases/tag/v4.5.0
Start_sass/Exec[sass]/returns:     at module.exports (/usr/lib/node_modules/node-sass/lib/binding.js:13:13)
Start_sass/Exec[sass]/returns:     at Object.<anonymous> (/usr/lib/node_modules/node-sass/lib/index.js:14:35)
Start_sass/Exec[sass]/returns:     at Module._compile (module.js:652:30)
Start_sass/Exec[sass]/returns:     at Object.Module._extensions..js (module.js:663:10)
Start_sass/Exec[sass]/returns:     at Module.load (module.js:565:32)
Start_sass/Exec[sass]/returns:     at tryModuleLoad (module.js:505:12)
Start_sass/Exec[sass]/returns:     at Function.Module._load (module.js:497:3)
Start_sass/Exec[sass]/returns:     at Module.require (module.js:596:17)
Start_sass/Exec[sass]/returns:     at require (internal/module.js:11:18)
Start_sass/Exec[sass]/returns:     at Object.<anonymous> (/usr/lib/node_modules/node-sass/bin/node-sass:11:10)
Error: './sass /var/machine-learning' returned 1 instead of one of [0]
Error: /Stage[main]/Compiler::Start_sass/Exec[sass]/returns: change from notrun to 0 failed: './sass /var/machine-learning' returned 1 instead of one of [0]
[...UNRELATED-TRACE-OMITTED...]
jeff1evesque commented 6 years ago

Our webserver indicates an Active state in rancher. However, our nginx containers still exhibit the same errors as before, since we haven't successfully pushed the corresponding image to dockerhub:

[screenshot: nginx-error]

jeff1evesque commented 6 years ago

c931411: we previously intended to use the nginx puppet contrib module. However, our custom nginx module has the same name. We will need to rebuild our base image, then reattempt to build nginx:

$ docker build -f nginx.dockerfile -t ml-nginx-web .
Sending build context to Docker daemon  85.23MB
Step 1/6 : FROM ml-base
 ---> bf1567411f3c
Step 2/6 : ENV ROOT_PROJECT /var/machine-learning
 ---> Running in c10b5d2b7205
Removing intermediate container c10b5d2b7205
 ---> 2ae35e053983
Step 3/6 : ENV ENVIRONMENT docker
 ---> Running in 992d2f4a8da6
Removing intermediate container 992d2f4a8da6
 ---> 3bdd71f54fa6
Step 4/6 : ENV ENVIRONMENT_DIR $ROOT_PROJECT/puppet/environment/$ENVIRONMENT
 ---> Running in 6a07d1b1e562
Removing intermediate container 6a07d1b1e562
 ---> 9cab04fb0091
Step 5/6 : RUN /opt/puppetlabs/bin/puppet apply $ENVIRONMENT_DIR/modules/nginx/manifests/init.pp --modulepath=$ENVIRONMENT_DIR/modules_contrib:$ENVIRONMENT_DIR/modules --confdir=$ROOT_PROJECT
 ---> Running in 9460b1d2f6af
Warning: ModuleLoader: module 'nginx' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
   (file & line not available)
Warning: Class 'nginx' is already defined at /var/machine-learning/puppet/environment/docker/modules/nginx/manifests/init.pp:4; cannot redefine at /var/machine-learning/puppet/environment/docker/modules_contrib/nginx/manifests/init.pp:28
Error: Evaluation Error: Error while evaluating a Function Call, Could not find class ::nginx::ssl for 9460b1d2f6af.localdomain at /var/machine-learning/puppet/environment/docker/modules/nginx/manifests/init.pp:9:5 on node 9460b1d2f6af.localdomain
The command '/bin/sh -c /opt/puppetlabs/bin/puppet apply $ENVIRONMENT_DIR/modules/nginx/manifests/init.pp --modulepath=$ENVIRONMENT_DIR/modules_contrib:$ENVIRONMENT_DIR/modules --confdir=$ROOT_PROJECT' returned a non-zero code: 1
jeff1evesque commented 6 years ago

We'll likely need to implement the following:

## update nginx-web
docker build -f nginx.dockerfile -t ml-nginx-web .
docker run --hostname nginx-web --name nginx-web -d ml-nginx-web
docker commit -m "update 'nginx-web'" -a 'jeff1evesque' nginx-web jeff1evesque/ml-nginx-web:0.7

## update nginx-api
docker build -f nginx.dockerfile -t ml-nginx-api .
docker run --hostname nginx-api --name nginx-api -d ml-nginx-api
docker commit -m "update 'nginx-api'" -a 'jeff1evesque' nginx-api jeff1evesque/ml-nginx-api:0.7

This means we'll need to create two separate dockerhub repositories, instead of the single ml-nginx.
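
Following the earlier push convention, that presumably amounts to:

## push both reverse proxy images to their respective repositories
docker push jeff1evesque/ml-nginx-web:0.7
docker push jeff1evesque/ml-nginx-api:0.7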

jeff1evesque commented 6 years ago

Our ml-nginx-api container built successfully, while failing to run:

$ docker build --build-arg NGINX_NAME=nginx-api -f nginx.dockerfile -t ml-nginx-api .
Sending build context to Docker daemon   85.3MB
Step 1/7 : FROM ml-base
 ---> af5ef1e4f378
Step 2/7 : ENV ROOT_PROJECT /var/machine-learning
 ---> Using cache
 ---> 437bcbde9d3c
Step 3/7 : ENV ENVIRONMENT docker
 ---> Using cache
 ---> 5dac8027af75
Step 4/7 : ENV ENVIRONMENT_DIR $ROOT_PROJECT/puppet/environment/$ENVIRONMENT
 ---> Using cache
 ---> d3ccfdc4beae
Step 5/7 : ARG NGINX_NAME
 ---> Using cache
 ---> 5a04c4da95e5
Step 6/7 : RUN echo $(grep $(hostname) /etc/hosts | cut -f1) ${NGINX_NAME} >> /etc/hosts &&     echo ${NGINX_NAME} > /etc/hostname     /opt/puppetlabs/bin/puppet apply $ENVIRONMENT_DIR/modules/nginx/manifests/init.pp --modulepath=$ENVIRONMENT_DIR/modules_contrib:$ENVIRONMENT_DIR/modules --confdir=$ROOT_PROJECT
 ---> Using cache
 ---> 9884e4d80659
Step 7/7 : ENTRYPOINT ["nginx"]
 ---> Using cache
 ---> e4367e7fe023
Successfully built e4367e7fe023
Successfully tagged ml-nginx-api:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
$ docker run --name nginx-api -d ml-nginx-api
e466e7506edb6cb703a47f2d98cfa6d1a17f53e7b0e568c12fc6bb64322b9d8f
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"nginx\": executable file not found in $PATH": unknown.
jeff1evesque commented 6 years ago

We are having problems installing node-sass in our ml-sass container. Specifically, the following declaration results in our puppet agent stalling indefinitely:

    ## packages: install general packages (npm)
    package { "node-sass@${version_node_sass}":
        ensure   => 'present',
        provider => 'npm',
        require  => Class['nodejs'],
    }

However, the following exec:

    exec { "node-sass@${version_node_sass}":
        command     => "npm install -g node-sass@${version_node_sass}",
        path        => '/usr/bin',
        require     => Class['nodejs'],
    }

results in the following error:

Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: /usr/bin/node-sass -> /usr/lib/node_modules/node-sass/bin/node-sass
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns:
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: > node-sass@4.8.3 install /usr/lib/node_modules/node-sass
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: > node scripts/install.js
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns:
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: npm ERR! file sh
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: npm ERR! path sh
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: npm ERR! code ELIFECYCLE
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: npm ERR! errno ENOENT
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: npm ERR! syscall spawn sh
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: npm ERR! node-sass@4.8.3 install: `node scripts/install.js`
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: npm ERR! spawn sh ENOENT
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: npm ERR!
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: npm ERR! Failed at the node-sass@4.8.3 install script.
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns:
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: npm ERR! A complete log of this run can be found in:
Notice: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: npm ERR!     /root/.npm/_logs/2018-04-02T22_31_45_401Z-debug.log
Error: 'npm install -g node-sass@4.8.3' returned 1 instead of one of [0]
Error: /Stage[main]/Package::Webcompilers/Exec[node-sass@4.8.3]/returns: change from notrun to 0 failed: 'npm install -g node-sass@4.8.3' returned 1 instead of one of [0]
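
The npm ERR! syscall spawn sh ENOENT is consistent with the exec's restricted path: sh lives in /bin, which the resource never exposes to npm's lifecycle scripts. A hedged fix, untested here, would widen the resource's path attribute to ['/bin', '/usr/bin'], equivalent to:

## expose /bin (where sh lives) alongside /usr/bin to the npm install scripts
PATH=/bin:/usr/bin npm install -g node-sass@4.8.3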

If we attempt to relocate the logic into webserver.dockerfile:

RUN npm install -g node-sass@4.8.3

Then, node-sass produces the following trace:

Step 15/15 : RUN npm install -g node-sass@4.8.3
 ---> Running in b11b0eeb42a5
/usr/bin/node-sass -> /usr/lib/node_modules/node-sass/bin/node-sass

> node-sass@4.8.3 install /usr/lib/node_modules/node-sass
> node scripts/install.js

Unable to save binary /usr/lib/node_modules/node-sass/vendor/linux-x64-57 : { Error: EACCES: permission denied, mkdir '/usr/lib/node_modules/node-sass/vendor'
    at Object.fs.mkdirSync (fs.js:885:18)
    at sync (/usr/lib/node_modules/node-sass/node_modules/mkdirp/index.js:71:13)
    at Function.sync (/usr/lib/node_modules/node-sass/node_modules/mkdirp/index.js:77:24)
    at checkAndDownloadBinary (/usr/lib/node_modules/node-sass/scripts/install.js:114:11)
    at Object.<anonymous> (/usr/lib/node_modules/node-sass/scripts/install.js:157:1)
    at Module._compile (module.js:652:30)
    at Object.Module._extensions..js (module.js:663:10)
    at Module.load (module.js:565:32)
    at tryModuleLoad (module.js:505:12)
    at Function.Module._load (module.js:497:3)
  errno: -13,
  code: 'EACCES',
  syscall: 'mkdir',
  path: '/usr/lib/node_modules/node-sass/vendor' }

> node-sass@4.8.3 postinstall /usr/lib/node_modules/node-sass
> node scripts/build.js

Building: /usr/bin/node /usr/lib/node_modules/node-sass/node_modules/node-gyp/bin/node-gyp.js rebuild --verbose --libsass_ext= --libsass_cflags= --libsass_ldflags= --libsass_library=
gyp info it worked if it ends with ok
gyp verb cli [ '/usr/bin/node',
gyp verb cli   '/usr/lib/node_modules/node-sass/node_modules/node-gyp/bin/node-gyp.js',
gyp verb cli   'rebuild',
gyp verb cli   '--verbose',
gyp verb cli   '--libsass_ext=',
gyp verb cli   '--libsass_cflags=',
gyp verb cli   '--libsass_ldflags=',
gyp verb cli   '--libsass_library=' ]
gyp info using node-gyp@3.6.2
gyp info using node@8.11.1 | linux | x64
gyp verb command rebuild []
gyp verb command clean []
gyp verb clean removing "build" directory
gyp verb command configure []
gyp verb check python checking for Python executable "python2" in the PATH
gyp verb `which` succeeded python2 /usr/bin/python2
gyp verb check python version `/usr/bin/python2 -c "import platform; print(platform.python_version());"` returned: "2.7.6\n"
gyp verb get node dir no --target version specified, falling back to host node version: 8.11.1
gyp verb command install [ '8.11.1' ]
gyp verb install input version string "8.11.1"
gyp verb install installing version: 8.11.1
gyp verb install --ensure was passed, so won't reinstall if already installed
gyp WARN EACCES user "undefined" does not have permission to access the dev dir "/root/.node-gyp/8.11.1"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/usr/lib/node_modules/node-sass/.node-gyp"
gyp verb tmpdir == cwd automatically will remove dev files after to save disk space
gyp verb command install [ '8.11.1' ]
gyp verb install input version string "8.11.1"
gyp verb install installing version: 8.11.1
gyp verb install --ensure was passed, so won't reinstall if already installed
gyp verb install version not already installed, continuing with install 8.11.1
gyp verb ensuring nodedir is created /usr/lib/node_modules/node-sass/.node-gyp/8.11.1
gyp WARN EACCES user "undefined" does not have permission to access the dev dir "/usr/lib/node_modules/node-sass/.node-gyp/8.11.1"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/usr/lib/node_modules/node-sass/.node-gyp"
gyp verb tmpdir == cwd automatically will remove dev files after to save disk space
[...REMAINING-TRACE-OMITTED...]
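
The EACCES portion of the trace (note the gyp WARN EACCES user "undefined" lines) is consistent with npm dropping root privileges while running the node-sass lifecycle scripts; a commonly cited workaround, which we have not verified in this build, is the --unsafe-perm flag:

## keep root privileges for npm lifecycle scripts, so scripts/install.js can
## write under /usr/lib/node_modules
npm install -g --unsafe-perm node-sass@4.8.3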
jeff1evesque commented 6 years ago

Our nginx reverse proxy build currently has failed dependencies:

root@trusty64:/vagrant# docker build --build-arg TYPE=api --build-arg RUN=false --build-arg VHOST=machine-learning-api.com --build-arg HOST_PORT=9090 --build-arg LISTEN_PORT=6000 --build-arg WEBSERVER_PORT=6001 -f nginx.dockerfile -t ml-nginx-api .
Sending build context to Docker daemon  79.26MB
Step 1/17 : FROM ml-base
 ---> 07907ffc22d4
Step 2/17 : ENV ENVIRONMENT docker
 ---> Running in 6430361668e4
Removing intermediate container 6430361668e4
 ---> b4901c2eafc6
Step 3/17 : ENV ROOT_PROJECT /var/machine-learning
 ---> Running in 25ffd94f3e64
Removing intermediate container 25ffd94f3e64
 ---> f056f92da413
Step 4/17 : ENV PUPPET /opt/puppetlabs/bin/puppet
 ---> Running in 1ef772256c89
Removing intermediate container 1ef772256c89
 ---> b8d3bb29a8a9
Step 5/17 : ENV ROOT_PUPPET /etc/puppetlabs
 ---> Running in 95b53189602c
Removing intermediate container 95b53189602c
 ---> 76514eaada15
Step 6/17 : ENV MODULES $ROOT_PUPPET/code/modules
 ---> Running in 33d888bb8525
Removing intermediate container 33d888bb8525
 ---> 86d77d8c909e
Step 7/17 : ENV CONTRIB_MODULES $ROOT_PUPPET/code/modules_contrib
 ---> Running in b73ba15bb26e
Removing intermediate container b73ba15bb26e
 ---> db14054e7566
Step 8/17 : ARG RUN
 ---> Running in a768fdbfb727
Removing intermediate container a768fdbfb727
 ---> c6dfcd0a704e
Step 9/17 : ARG TYPE
 ---> Running in 0712e88fd6f9
Removing intermediate container 0712e88fd6f9
 ---> 27e71d0121a5
Step 10/17 : ARG VHOST
 ---> Running in b6169010cc90
Removing intermediate container b6169010cc90
 ---> 25c846e1dc03
Step 11/17 : ARG HOST_PORT
 ---> Running in c4cb471e707b
Removing intermediate container c4cb471e707b
 ---> 43e30f923539
Step 12/17 : ARG LISTEN_PORT
 ---> Running in b078d7a9ae30
Removing intermediate container b078d7a9ae30
 ---> b044ec7e858c
Step 13/17 : ARG WEBSERVER_PORT
 ---> Running in 22f6ffb6e709
Removing intermediate container 22f6ffb6e709
 ---> 357feca549ec
Step 14/17 : COPY hiera $ROOT_PROJECT/hiera
 ---> e8378c8b1f75
Step 15/17 : COPY puppet/environment/$ENVIRONMENT/modules/reverse_proxy $ROOT_PUPPET/code/modules/reverse_proxy
 ---> e1cf6fe1eba1
Step 16/17 : RUN $PUPPET apply -e "class { reverse_proxy:     run            => '$RUN',     type           => '$TYPE',     vhost          => '$VHOST',     host_port      => '$HOST_PORT',     listen_port    => $LISTEN_PORT,     webserver_port => '$WEBSERVER_PORT', } " --modulepath=$CONTRIB_MODULES:$MODULES --confdir=$ROOT_PUPPET/puppet
 ---> Running in db62de16ced2
Warning: Unknown variable: 'nginx_version'. at /etc/puppetlabs/code/modules/reverse_proxy/manifests/params.pp:34:27
Warning: Unknown variable: '::reverse_proxy::params::country'. at /etc/puppetlabs/code/modules/reverse_proxy/manifests/init.pp:15:23
Warning: Unknown variable: '::reverse_proxy::params::org'. at /etc/puppetlabs/code/modules/reverse_proxy/manifests/init.pp:16:23
Warning: Unknown variable: '::reverse_proxy::params::state'. at /etc/puppetlabs/code/modules/reverse_proxy/manifests/init.pp:17:23
Warning: Unknown variable: '::reverse_proxy::params::locality'. at /etc/puppetlabs/code/modules/reverse_proxy/manifests/init.pp:18:23
Warning: Unknown variable: '::reverse_proxy::params::unit'. at /etc/puppetlabs/code/modules/reverse_proxy/manifests/init.pp:19:23
Warning: Unknown variable: '::reverse_proxy::params::bit'. at /etc/puppetlabs/code/modules/reverse_proxy/manifests/init.pp:20:23
Warning: Unknown variable: '::reverse_proxy::params::days'. at /etc/puppetlabs/code/modules/reverse_proxy/manifests/init.pp:21:23
Notice: Compiled catalog for db62de16ced2.localdomain in environment production in 1.06 seconds
Notice: /Stage[main]/Reverse_proxy::Config/File[/root/build]/ensure: created
Notice: /Stage[main]/Reverse_proxy::Config/File[/root/build/ssl-nginx-api]/ensure: defined content as '{md5}e52e89122155173008fed4ff2355eaf6'
Notice: /Stage[main]/Reverse_proxy::Config/Exec[create-certificate-api]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Nginx::Package::Debian/Apt::Source[nginx]/Apt::Key[Add key: 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from Apt::Source nginx]/Apt_key[Add key: 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from Apt::Source nginx]/ensure: created
Notice: /Stage[main]/Nginx::Package::Debian/Apt::Source[nginx]/Apt::Setting[list-nginx]/File[/etc/apt/sources.list.d/nginx.list]/ensure: defined content as '{md5}38dd3bfbe3d46866a7fc46a8eba7a763'
Notice: /Stage[main]/Apt::Update/Exec[apt_update]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Nginx::Package::Debian/Package[nginx]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/conf.stream.d]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/conf.mail.d]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/var/nginx]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/var/log/nginx]/owner: owner changed 'root' to 'www-data'
Notice: /Stage[main]/Nginx::Config/File[/var/log/nginx]/group: group changed 'root' to 'adm'
Notice: /Stage[main]/Nginx::Config/File[/var/log/nginx]/mode: mode changed '0755' to '0750'
Notice: /Stage[main]/Nginx::Config/File[/var/nginx/client_body_temp]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/var/nginx/proxy_temp]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/sites-available]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/sites-enabled]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/streams-enabled]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/streams-available]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/nginx.conf]/content: content changed '{md5}f7984934bd6cab883e1f33d5129834bb' to '{md5}4ced68fa346e9856741b81d28261fcef'
Error: Could not set 'file' on ensure: No such file or directory @ dir_s_mkdir - /etc/nginx/conf.d/http:/localhost-upstream.conf20180404-5-1pr1vr6.lock
Error: Could not set 'file' on ensure: No such file or directory @ dir_s_mkdir - /etc/nginx/conf.d/http:/localhost-upstream.conf20180404-5-1pr1vr6.lock
Wrapped exception:
No such file or directory @ dir_s_mkdir - /etc/nginx/conf.d/http:/localhost-upstream.conf20180404-5-1pr1vr6.lock
Error: /Stage[main]/Reverse_proxy::Config/Nginx::Resource::Upstream[http://localhost]/Concat[/etc/nginx/conf.d/http://localhost-upstream.conf]/File[/etc/nginx/conf.d/http://localhost-upstream.conf]/ensure: change from absent to file failed: Could not set 'file' on ensure: No such file or directory @ dir_s_mkdir - /etc/nginx/conf.d/http:/localhost-upstream.conf20180404-5-1pr1vr6.lock
Notice: /Stage[main]/Reverse_proxy::Config/Nginx::Resource::Server[machine-learning-api.com]/Concat[/etc/nginx/sites-available/machine-learning-api.com.conf]/File[/etc/nginx/sites-available/machine-learning-api.com.conf]/ensure: defined content as '{md5}ebe4bbf34d1e3d93f747eb64471a6a92'
Notice: /Stage[main]/Reverse_proxy::Config/Nginx::Resource::Server[machine-learning-api.com]/File[machine-learning-api.com.conf symlink]/ensure: created
Notice: /Stage[main]/Nginx::Service/Service[nginx]: Dependency File[/etc/nginx/conf.d/http://localhost-upstream.conf] has failures: true
Warning: /Stage[main]/Nginx::Service/Service[nginx]: Skipping because of failed dependencies
Notice: Applied catalog in 21.09 seconds
Removing intermediate container db62de16ced2
 ---> ef402b451d80
Step 17/17 : CMD ["/bin/sh", "-c", "nginx"]
 ---> Running in fb6ef4e53531
Removing intermediate container fb6ef4e53531
 ---> fbddc9bc4c91
jeff1evesque commented 6 years ago

We are now receiving the following error for our nginx-xxx container:

nginx: [emerg] upstream "localhost" may not have port 9090 in /etc/nginx/sites-enabled/machine-learning-api.com.conf:23

The corresponding machine-learning-api.com.conf needs further review:

root@nginx-api:/# cat /etc/nginx/sites-enabled/machine-learning-api.com.conf
# MANAGED BY PUPPET
server {
  listen       *:6000 ssl;

  server_name  machine-learning-api.com;

  ssl on;

  ssl_certificate           /etc/puppetlabs/puppet/ssl/certs/machine-learning-api.com_api.crt;
  ssl_certificate_key       /etc/puppetlabs/puppet/ssl/private_keys/machine-learning-api.com_api.key;
  ssl_session_cache         shared:SSL:10m;
  ssl_session_timeout       5m;
  ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers               ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS;
  ssl_prefer_server_ciphers on;

  index  index.html index.htm index.php;

  access_log            /var/log/nginx/puppet_access.log combined;
  error_log             /var/log/nginx/puppet_error.log;

  location / {
    proxy_pass            https://localhost:9090;
    proxy_read_timeout    90s;
    proxy_connect_timeout 90s;
    proxy_send_timeout    90s;
    proxy_set_header      Host $host;
    proxy_set_header      X-Real-IP $remote_addr;
    proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header      Proxy "";
  }
}
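
For context, nginx raises the emerg because proxy_pass points at localhost, which matches the separately defined upstream name, and nginx forbids appending a port when the host resolves to a named upstream. Candidate fixes can be validated in place, assuming the container stays up:

## validate the rendered configuration without restarting nginx
docker exec nginx-api nginx -t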
jeff1evesque commented 6 years ago

Docker does not reliably honor upstart init scripts, so we'll need to refactor several of our existing services.
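
In practice, this means each service must run as the container's foreground process; for nginx, for example, daemon mode is disabled at launch (as the later builds do):

## run nginx in the foreground, as the container's PID 1
nginx -g 'daemon off;'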

jeff1evesque commented 6 years ago

Our puppet RUN statements within mongodb.dockerfile execute without error. However, the successive statement, which attempts to provision mongodb users, fails:

[...PRECEDING-TRACE-OMITTED...]
Notice: Applied catalog in 8.24 seconds
Removing intermediate container fb7d4fb87096
 ---> 5034f0fecfd8
Step 14/16 : RUN /usr/bin/mongod --fork --config /etc/mongod.conf &&     cd /root/build &&     ./create-mongodb-users
 ---> Running in c721d3c3beb1
about to fork child process, waiting until server is ready for connections.
forked process: 7
child process started successfully, parent exiting
2018-04-08T23:27:33.760-0400 E QUERY    [thread1] Error: couldn't add user: not authorized on admin to execute command { createUser: "authenticated", pwd: "xxx", roles: [ "readWrite", "userAdmin", "dbAdmin", { role: "readWrite", db: "dataset" }, { role: "userAdmin", db: "dataset" }, { role: "dbAdmin", db: "dataset" } ], digestPassword: false, writeConcern: { w: "majority", wtimeout: 5000.0 } } :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype.createUser@src/mongo/shell/db.js:1267:15
@(shell eval):1:1

The command '/bin/sh -c /usr/bin/mongod --fork --config /etc/mongod.conf &&     cd /root/build &&     ./create-mongodb-users' returned a non-zero code: 252
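
One plausible explanation, an assumption not yet verified against our mongod.conf, is that authorization is enabled and the localhost exception has already been consumed, so only an existing admin user may run createUser. In that case, create-mongodb-users would need to create a root user first, then authenticate:

## hypothetical: create the first admin via the localhost exception, then
## authenticate for every subsequent createUser call
mongo admin --eval 'db.createUser({user: "admin", pwd: "xxx", roles: [{role: "root", db: "admin"}]})'
mongo admin -u admin -p xxx --eval 'db.getSiblingDB("dataset").getUsers()'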
jeff1evesque commented 6 years ago

Our current webserver docker build failed to get a layer:

root@trusty64:/vagrant# docker build --build-arg PORT=5001 --build-arg TYPE=web -f webserver.dockerfile -t jeff1evesque/ml-webserver-web:0.7 .
Sending build context to Docker daemon   75.8MB
Step 1/24 : FROM jeff1evesque/ml-sklearn:0.7
 ---> 30291c16b6b2
Step 2/24 : ENV ENVIRONMENT docker
 ---> Using cache
 ---> 0cafe93bde7f
Step 3/24 : ENV PUPPET /opt/puppetlabs/bin/puppet
 ---> Using cache
 ---> 388c89cf6309
Step 4/24 : ENV ROOT_PROJECT /var/machine-learning
 ---> Using cache
 ---> 4ea31b62a20d
Step 5/24 : ENV ROOT_PUPPET /etc/puppetlabs
 ---> Using cache
 ---> 75af636fe2a3
Step 6/24 : ENV MODULES $ROOT_PUPPET/code/modules
 ---> Using cache
 ---> 4d12fcdc7560
Step 7/24 : ENV CONTRIB_MODULES $ROOT_PUPPET/code/modules_contrib
 ---> Using cache
 ---> e06b7b6d0c29
Step 8/24 : ARG PORT
 ---> Using cache
 ---> 8ae3eb456f84
Step 9/24 : ARG TYPE
 ---> Using cache
 ---> 5a6650c96301
Step 10/24 : RUN mkdir -p $ROOT_PROJECT/interface
 ---> Running in 7f1af60dfe86
Removing intermediate container 7f1af60dfe86
 ---> 22229a80d5da
Step 11/24 : RUN mkdir -p $ROOT_PUPPET/brain
 ---> Running in a135a26a2412
Removing intermediate container a135a26a2412
 ---> ad66dbf2dd54
Step 12/24 : RUN mkdir -p $ROOT_PUPPET/test
 ---> Running in 47cd07e18e05
Removing intermediate container 47cd07e18e05
 ---> 796275d66173
Step 13/24 : COPY log $ROOT_PROJECT/log
 ---> c5891fedb486
Step 14/24 : COPY interface $ROOT_PROJECT/interface
 ---> a3b6792fb125
Step 15/24 : COPY hiera $ROOT_PROJECT/hiera
 ---> 89d16112dc1f
Step 16/24 : COPY brain $ROOT_PROJECT/brain
failed to export image: failed to create image: failed to get layer sha256:d62e933c1a2f16571e91de48d5d23ebff208a85a75a97c1a7c03a310b0385d00: layer does not exist
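
A failed to get layer error usually points at a corrupted local build cache rather than the dockerfile itself; a hedged recovery path is to prune dangling state, then rebuild without the cache:

## clear dangling build state, then rebuild from scratch
docker system prune
docker build --no-cache --build-arg PORT=5001 --build-arg TYPE=web -f webserver.dockerfile -t jeff1evesque/ml-webserver-web:0.7 .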
jeff1evesque commented 6 years ago

Our corresponding webserver build succeeds:

root@trusty64:/vagrant# docker build -f dockerfile/webserver.dockerfile -t jeff1evesque/ml-webserver:0.7 .
Sending build context to Docker daemon  75.86MB
Step 1/13 : FROM jeff1evesque/ml-sklearn:0.7
 ---> 07888e7ae1d9
Step 2/13 : ENV ENVIRONMENT docker
 ---> Running in 65c7076a4780
Removing intermediate container 65c7076a4780
 ---> 33c43a5ade0a
Step 3/13 : ENV PUPPET /opt/puppetlabs/bin/puppet
 ---> Running in 1c2464a88775
Removing intermediate container 1c2464a88775
 ---> de069252ad47
Step 4/13 : ENV ROOT_PROJECT /var/machine-learning
 ---> Running in 47801912d3c1
Removing intermediate container 47801912d3c1
 ---> 6bbc84351b57
Step 5/13 : ENV ROOT_PUPPET /etc/puppetlabs
 ---> Running in ddec4b867044
Removing intermediate container ddec4b867044
 ---> aff7aaa3eba4
Step 6/13 : ENV MODULES $ROOT_PUPPET/code/modules
 ---> Running in d57d7a4fd9c4
Removing intermediate container d57d7a4fd9c4
 ---> f83f6c3f3213
Step 7/13 : ENV CONTRIB_MODULES $ROOT_PUPPET/code/modules_contrib
 ---> Running in 97079a076eac
Removing intermediate container 97079a076eac
 ---> 51291ff2473c
Step 8/13 : RUN mkdir -p $ROOT_PROJECT/interface $ROOT_PUPPET/brain $ROOT_PUPPET/test
 ---> Running in bcd4b392083a
Removing intermediate container bcd4b392083a
 ---> 46c19deb8b00
Step 9/13 : COPY log interface hiera brain test app.py factory.py __init__.py $ROOT_PROJECT/
 ---> 2b6b88dac341
Step 10/13 : COPY puppet/environment/$ENVIRONMENT/modules/webserver $ROOT_PUPPET/code/modules/webserver
 ---> c68b1b9dca78
Step 11/13 : RUN $PUPPET apply -e "class { webserver:     run => false, } " --modulepath=$CONTRIB_MODULES:$MODULES --confdir=$ROOT_PUPPET/puppet
 ---> Running in ebdd217e2704
Warning: Unknown variable: 'python::params::provider'. at /etc/puppetlabs/code/modules_contrib/python/manifests/init.pp:56:25
Warning: Unknown variable: 'python::params::source'. at /etc/puppetlabs/code/modules_contrib/python/manifests/init.pp:61:25
Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules
   (file & line not available)
Notice: Compiled catalog for ebdd217e2704.localdomain in environment production in 0.32 seconds
Notice: /Stage[main]/Webserver::Install/Package[pytest-cov]/ensure: created
Notice: /Stage[main]/Webserver::Install/Package[pyyaml]/ensure: created
Notice: /Stage[main]/Webserver::Install/Package[redis]/ensure: created
Notice: /Stage[main]/Mysql::Client::Install/Package[mysql_client]/ensure: created
Notice: /Stage[main]/Mysql::Bindings::Python/Package[python-mysqldb]/ensure: created
Notice: /Stage[main]/Webserver::Install/Package[gunicorn]/ensure: created
Notice: /Stage[main]/Webserver::Config/File[/var/machine-learning/log/webserver]/ensure: created
Notice: /Stage[main]/Webserver::Config/File[/var/machine-learning/log/application]/ensure: created
Notice: /Stage[main]/Webserver::Config/File[/var/machine-learning/log/application/error]/ensure: created
Notice: /Stage[main]/Webserver::Config/File[/var/machine-learning/log/application/warning]/ensure: created
Notice: /Stage[main]/Webserver::Config/File[/var/machine-learning/log/application/info]/ensure: created
Notice: /Stage[main]/Webserver::Config/File[/var/machine-learning/log/application/debug]/ensure: created
Notice: /Stage[main]/Webserver::Config/File[/var/machine-learning/entrypoint]/ensure: defined content as '{md5}f1fa84127e44ece43f6db7b6e200fd47'
Notice: Applied catalog in 25.86 seconds
Removing intermediate container ebdd217e2704
 ---> 14506301c373
Step 12/13 : WORKDIR $ROOT_PROJECT
Removing intermediate container 9deb5b893eb8
 ---> 904afe8f7815
Step 13/13 : ENTRYPOINT ["./entrypoint"]
 ---> Running in ca07164da6c9
Removing intermediate container ca07164da6c9
 ---> 4ae21c24f3de
Successfully built 4ae21c24f3de
Successfully tagged jeff1evesque/ml-webserver:0.7

However, running the container fails:

root@trusty64:/vagrant# docker run --hostname webserver-api --name webserver-api -it jeff1evesque/ml-webserver:0.7 0.0.0.0 6001 6 api
[2018-04-11 18:29:37 +0000] [5] [INFO] Starting gunicorn 19.7.1
[2018-04-11 18:29:37 +0000] [5] [INFO] Listening at: http://0.0.0.0:6001 (5)
[2018-04-11 18:29:37 +0000] [5] [INFO] Using worker: sync
[2018-04-11 18:29:37 +0000] [10] [INFO] Booting worker with pid: 10
[2018-04-11 18:29:37 +0000] [10] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 578, in spawn_worker
    worker.init_process()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 126, in init_process
    self.load_wsgi()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 135, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 65, in load
    return self.load_wsgiapp()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/util.py", line 352, in import_app
    __import__(module)
  File "/var/machine-learning/factory.py", line 19, in <module>
    from brain.cache.session import RedisSessionInterface
ImportError: No module named brain.cache.session
[2018-04-11 18:29:37 +0000] [10] [INFO] Worker exiting (pid: 10)
[2018-04-11 18:29:37 +0000] [13] [INFO] Booting worker with pid: 13
[2018-04-11 18:29:37 +0000] [13] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 578, in spawn_worker
    worker.init_process()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 126, in init_process
    self.load_wsgi()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 135, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 65, in load
    return self.load_wsgiapp()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/util.py", line 352, in import_app
    __import__(module)
  File "/var/machine-learning/factory.py", line 19, in <module>
    from brain.cache.session import RedisSessionInterface
ImportError: No module named brain.cache.session
[2018-04-11 18:29:37 +0000] [13] [INFO] Worker exiting (pid: 13)
Traceback (most recent call last):
  File "/usr/local/bin/gunicorn", line 11, in <module>
    sys.exit(run())
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 74, in run
    WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/base.py", line 203, in run
    super(Application, self).run()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/base.py", line 72, in run
    Arbiter(self).run()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 231, in run
    self.halt(reason=inst.reason, exit_status=inst.exit_status)
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 344, in halt
    self.stop()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 393, in stop
    time.sleep(0.1)
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 244, in handle_chld
    self.reap_workers()
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 524, in reap_workers
    raise HaltServer(reason, self.WORKER_BOOT_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>

This is likely because our flask application is wired up to connect its corresponding redis client to a redis-server endpoint. (It is also worth noting that the multi-source COPY in step 9/13 copies each directory's contents rather than the directories themselves, so brain/ may have been flattened into $ROOT_PROJECT/, which alone would explain the missing brain.cache.session module.) If we determine that the redis dependency exists, then it may be suitable to invoke our gunicorn webserver via our install_rancher script. Specifically, after all our containers have been installed and running, we can implement a corresponding docker exec to kick off the gunicorn processes, along the lines of the sketch below.
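
A sketch of that docker exec (reusing the entrypoint arguments from the run above):

## hypothetical: once redis (and friends) are up, invoke the webserver's
## entrypoint inside the already-running container
docker exec -d webserver-api ./entrypoint 0.0.0.0 6001 6 api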

jeff1evesque commented 6 years ago

On second thought, the rancher-compose.yml can be configured to auto-restart. That way, once all other container dependencies have reached a stable running state, our webserver instances will run, provided the corresponding logic and syntax are sufficient.

jeff1evesque commented 6 years ago

Our nginx containers now build successfully:

root@trusty64:/vagrant# docker build --build-arg TYPE=api --build-arg VHOST=machine-learning-api.com --build-arg HOST_PORT=9090 --build-arg LISTEN_PORT=6000 --build-arg MEMBERS='localhost:6001' -f dockerfile/nginx.dockerfile -t jeff1evesque/ml-nginx-api:0.7 .
Sending build context to Docker daemon   75.9MB
Step 1/16 : FROM jeff1evesque/ml-base:0.7
 ---> b41e451df1a8
Step 2/16 : ENV ENVIRONMENT docker
 ---> Running in f1e337fac3b2
Removing intermediate container f1e337fac3b2
 ---> b0b3c9ec372b
Step 3/16 : ENV ROOT_PROJECT /var/machine-learning
 ---> Running in eef5a9c8dd62
Removing intermediate container eef5a9c8dd62
 ---> 602979c38ee3
Step 4/16 : ENV PUPPET /opt/puppetlabs/bin/puppet
 ---> Running in bebed3eef1db
Removing intermediate container bebed3eef1db
 ---> 3fbff5edd99b
Step 5/16 : ENV ROOT_PUPPET /etc/puppetlabs
 ---> Running in 41dfcae32649
Removing intermediate container 41dfcae32649
 ---> d9ac8aa5d39e
Step 6/16 : ENV MODULES $ROOT_PUPPET/code/modules
 ---> Running in c040b9379b5e
Removing intermediate container c040b9379b5e
 ---> 9541ea8ebfa0
Step 7/16 : ENV CONTRIB_MODULES $ROOT_PUPPET/code/modules_contrib
 ---> Running in a88e476d1a77
Removing intermediate container a88e476d1a77
 ---> c097b5967146
Step 8/16 : ARG TYPE
 ---> Running in 6a7208006c1a
Removing intermediate container 6a7208006c1a
 ---> 111ae1e49515
Step 9/16 : ARG VHOST
 ---> Running in f2d6b461bfc0
Removing intermediate container f2d6b461bfc0
 ---> 0f200f61a8d9
Step 10/16 : ARG HOST_PORT
 ---> Running in b8a6b83cddd7
Removing intermediate container b8a6b83cddd7
 ---> b80d95f62c26
Step 11/16 : ARG LISTEN_PORT
 ---> Running in 8918625cffb9
Removing intermediate container 8918625cffb9
 ---> 6cc27b786742
Step 12/16 : ARG MEMBERS
 ---> Running in 5c3d53af59ac
Removing intermediate container 5c3d53af59ac
 ---> 1c89b793f435
Step 13/16 : COPY hiera $ROOT_PROJECT/hiera
 ---> 129f7a0bcfda
Step 14/16 : COPY puppet/environment/$ENVIRONMENT/modules/reverse_proxy $ROOT_PUPPET/code/modules/reverse_proxy
 ---> 58f7b02b5cdb
Step 15/16 : RUN $PUPPET apply -e "class { reverse_proxy:     run            => 'false',     type           => '$TYPE',     vhost          => '$VHOST',     host_port      => '$HOST_PORT',     listen_port    => $LISTEN_PORT,     members        => '$MEMBERS', } " --modulepath=$CONTRIB_MODULES:$MODULES --confdir=$ROOT_PUPPET/puppet
 ---> Running in 21df1ebdef4f
Notice: Compiled catalog for 21df1ebdef4f.localdomain in environment production in 0.97 seconds
Notice: /Stage[main]/Reverse_proxy::Config/File[/root/build]/ensure: created
Notice: /Stage[main]/Reverse_proxy::Config/File[/root/build/ssl-nginx-api]/ensure: defined content as '{md5}7a86fc5ed782849b9193931493247bac'
Notice: /Stage[main]/Reverse_proxy::Config/Exec[create-certificate-api]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Nginx::Package::Debian/Apt::Source[nginx]/Package[apt-transport-https]/ensure: created
Notice: /Stage[main]/Nginx::Package::Debian/Apt::Source[nginx]/Apt::Key[Add key: 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from Apt::Source nginx]/Apt_key[Add key: 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62 from Apt::Source nginx]/ensure: created
Notice: /Stage[main]/Nginx::Package::Debian/Apt::Source[nginx]/Apt::Setting[list-nginx]/File[/etc/apt/sources.list.d/nginx.list]/ensure: defined content as '{md5}38dd3bfbe3d46866a7fc46a8eba7a763'
Notice: /Stage[main]/Apt::Update/Exec[apt_update]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Nginx::Package::Debian/Package[nginx]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/conf.stream.d]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/conf.mail.d]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/var/nginx]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/var/log/nginx]/owner: owner changed 'root' to 'www-data'
Notice: /Stage[main]/Nginx::Config/File[/var/log/nginx]/group: group changed 'root' to 'adm'
Notice: /Stage[main]/Nginx::Config/File[/var/log/nginx]/mode: mode changed '0755' to '0750'
Notice: /Stage[main]/Nginx::Config/File[/var/nginx/client_body_temp]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/var/nginx/proxy_temp]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/sites-available]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/sites-enabled]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/streams-enabled]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/streams-available]/ensure: created
Notice: /Stage[main]/Nginx::Config/File[/etc/nginx/nginx.conf]/content: content changed '{md5}f7984934bd6cab883e1f33d5129834bb' to '{md5}4ced68fa346e9856741b81d28261fcef'
Notice: /Stage[main]/Reverse_proxy::Config/Nginx::Resource::Upstream[localhost-api]/Concat[/etc/nginx/conf.d/localhost-api-upstream.conf]/File[/etc/nginx/conf.d/localhost-api-upstream.conf]/ensure: defined content as '{md5}9d816855afea8b4ae8aa0a53eab363f2'
Notice: /Stage[main]/Reverse_proxy::Config/Nginx::Resource::Server[machine-learning-api.com]/Concat[/etc/nginx/sites-available/machine-learning-api.com.conf]/File[/etc/nginx/sites-available/machine-learning-api.com.conf]/ensure: defined content as '{md5}5741d41bd72167a2019b475597b780c6'
Notice: /Stage[main]/Reverse_proxy::Config/Nginx::Resource::Server[machine-learning-api.com]/File[machine-learning-api.com.conf symlink]/ensure: created
Notice: /Stage[main]/Nginx::Service/Service[nginx]: Triggered 'refresh' from 1 events
Notice: Applied catalog in 22.65 seconds
Removing intermediate container 21df1ebdef4f
 ---> ce5e9f303a06
Step 16/16 : CMD ["nginx", "-g", "daemon off;"]
 ---> Running in 79c91aec7e04
Removing intermediate container 79c91aec7e04
 ---> fd338e5e9fd8
Successfully built fd338e5e9fd8
Successfully tagged jeff1evesque/ml-nginx-api:0.7

As well as successfully runs detached:

root@trusty64:/vagrant# docker run --hostname nginx-api --name nginx-api -d jeff1evesque/ml-nginx-api:0.7
dce059ff6bf1ef30af80908a9afcbe788a33f59d0ce0ebc06b3b6fedb14f90cc
root@trusty64:/vagrant# docker ps -a
CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS               NAMES
dce059ff6bf1        jeff1evesque/ml-nginx-api:0.7   "nginx -g 'daemon of…"   3 seconds ago       Up 2 seconds                            nginx-api
root@trusty64:/vagrant# docker exec -it nginx-api /bin/bash
root@nginx-api:/# ps -e
  PID TTY          TIME CMD
    1 ?        00:00:00 nginx
    5 ?        00:00:00 nginx
    6 pts/0    00:00:00 bash
   19 pts/0    00:00:00 ps
root@nginx-api:/# top
top - 23:18:21 up 14:59,  0 users,  load average: 0.09, 0.30, 0.36
Tasks:   4 total,   1 running,   3 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.8 us,  0.0 sy,  0.0 ni, 99.2 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   1016464 total,   644396 used,   372068 free,    58712 buffers
KiB Swap:  1048572 total,     9248 used,  1039324 free.   250320 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
   20 root      20   0   19864   2388   2076 R  0.8  0.2   0:00.02 top
    1 root      20   0   41912   5584   4756 S  0.0  0.5   0:00.00 nginx
    5 www-data  20   0   42344   3052   1972 S  0.0  0.3   0:00.00 nginx
    6 root      20   0   18184   3248   2772 S  0.0  0.3   0:00.00 bash
jeff1evesque commented 6 years ago

We need to work through the following docker images:

Then, we can determine whether our webserver will run when other container dependencies are running, as well as determine the proper syntax to tie together all our containers within our docker-compose.yml; see the sketch below.
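
Once the image set is settled, the wiring can be validated end-to-end with compose (a sketch; the file name follows the earlier comment, and the webserver service name is an assumption):

## bring the full stack up detached, then follow the webserver logs
docker-compose -f docker-compose.development.yaml up -d
docker-compose -f docker-compose.development.yaml logs -f webserver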

jeff1evesque commented 6 years ago

Our sass container builds successfully; installing node-sass as the unprivileged node user, under its own NPM_CONFIG_PREFIX, sidesteps the earlier privilege-drop and /usr/lib/node_modules permission failures:

root@trusty64:/vagrant# docker build -f dockerfile/sass.dockerfile -t jeff1evesque/ml-sass:0.7 .
Sending build context to Docker daemon  75.99MB
Step 1/10 : FROM node:9
 ---> aa3e171e4e95
Step 2/10 : USER node
 ---> Running in 4f94a970ca7a
Removing intermediate container 4f94a970ca7a
 ---> 509a1ae81fce
Step 3/10 : ENV ROOT_PROJECT /var/machine-learning
 ---> Running in dcb68d87add6
Removing intermediate container dcb68d87add6
 ---> b67648eb3063
Step 4/10 : COPY src/scss $ROOT_PROJECT/src/scss
 ---> 390b2ff27e38
Step 5/10 : COPY interface/static/css $ROOT_PROJECT/interface/static/css
 ---> ea2c86612137
Step 6/10 : RUN mkdir /home/node/.npm-global;     chown -R node:node /home/node/.npm-global
 ---> Running in bc3ac2774b1e
Removing intermediate container bc3ac2774b1e
 ---> d2d40d6142cd
Step 7/10 : ENV PATH=/home/node/.npm-global/bin:$PATH
 ---> Running in cdc4ae9e4d87
Removing intermediate container cdc4ae9e4d87
 ---> 2d9c6cafde34
Step 8/10 : ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
 ---> Running in eb60f5a70abd
Removing intermediate container eb60f5a70abd
 ---> 2488c151fc78
Step 9/10 : RUN npm install -g node-sass
 ---> Running in f98670359e7a
/home/node/.npm-global/bin/node-sass -> /home/node/.npm-global/lib/node_modules/node-sass/bin/node-sass

> node-sass@4.8.3 install /home/node/.npm-global/lib/node_modules/node-sass
> node scripts/install.js

Downloading binary from https://github.com/sass/node-sass/releases/download/v4.8.3/linux-x64-59_binding.node
Download complete
Binary saved to /home/node/.npm-global/lib/node_modules/node-sass/vendor/linux-x64-59/binding.node
Caching binary to /home/node/.npm/node-sass/4.8.3/linux-x64-59_binding.node

> node-sass@4.8.3 postinstall /home/node/.npm-global/lib/node_modules/node-sass
> node scripts/build.js

Binary found at /home/node/.npm-global/lib/node_modules/node-sass/vendor/linux-x64-59/binding.node
Testing binary
Binary is fine
+ node-sass@4.8.3
added 187 packages in 7.931s
Removing intermediate container f98670359e7a
 ---> 6c1bb90bf982
Step 10/10 : CMD node-sass --watch ${ROOT_PROJECT}/src/scss --output ${ROOT_PROJECT}/interface/static/css
 ---> Running in 0aa0b21670e8
Removing intermediate container 0aa0b21670e8
 ---> 20feb3d61774
Successfully built 20feb3d61774
Successfully tagged jeff1evesque/ml-sass:0.7

As well as running as a detached container:

root@trusty64:/vagrant# docker run --name sass -it jeff1evesque/ml-sass:0.7
^Croot@trusty64:/vagrant# docker run --name sass -d jeff1evesque/ml-sass:0.7
docker: Error response from daemon: Conflict. The container name "/sass" is already in use by container "5bd17af56bafb95d754da637b2805e444cf851c73d642431213d39ddc9430fe2". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
root@trusty64:/vagrant# docker ps -a
CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS                        PORTS               NAMES
5bd17af56baf        jeff1evesque/ml-sass:0.7   "/bin/sh -c 'node-sa…"   51 seconds ago      Exited (130) 12 seconds ago                       sass
root@trusty64:/vagrant# docker rm sass
sass
root@trusty64:/vagrant#
root@trusty64:/vagrant#
root@trusty64:/vagrant# docker run --name sass -d jeff1evesque/ml-sass:0.7
d2850ce0daa4c96015ea016212fe2f9f8b029762c99d7559c8108a4751be8d22
root@trusty64:/vagrant# docker ps -a
CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES
d2850ce0daa4        jeff1evesque/ml-sass:0.7   "/bin/sh -c 'node-sa…"   3 seconds ago       Up 2 seconds                            sass
root@trusty64:/vagrant# docker exec -it sass /bin/bash
node@d2850ce0daa4:/$ ps -e
  PID TTY          TIME CMD
    1 ?        00:00:00 sh
    5 ?        00:00:00 node-sass
   15 pts/0    00:00:00 bash
   21 pts/0    00:00:00 ps
jeff1evesque commented 6 years ago

We'll try to eliminate the webcompilers from puppet enforcement, to simplify the docker build (i.e. the dockerfiles). Additionally, staging and production environments would likely not have these puppet modules or manifests, since the corresponding docker containers would be unlikely to exist in those environments. It would make more sense to have some kind of triggered pipeline, where assets are compiled, then deployed upon approval.

jeff1evesque commented 6 years ago

dc0477d: we removed uglifyjs, since minifying javascript assets in our development environment unnecessarily complicates the development workflow.