smebberson / docker-alpine

Docker containers running Alpine Linux and s6 for process management. Solid, reliable containers.
MIT License

alpine-nodejs: Dynamic loading not supported #52

Closed: polarathene closed this 8 years ago

polarathene commented 8 years ago

I'm compiling a native library for OpenZWave and using the node package node-openzwave-shared. The source seems to compile fine, but when the node package runs process.dlopen() it throws the error Dynamic loading not supported. This is apparently due to Node.js being built as a static binary? (See this comment.)

The suggestion in the linked comment is to remove the --fully-static flag. I'm currently using alpine-nodejs but would like to move to the consul variants at a later stage. Any chance of removing the flag, or of having a variant that allows this?

The issue seems to be described on the nodejs wiki too.
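
For reference, a quick way to check whether the node binary in an image is statically linked (this assumes ldd is available in the image, as it is on stock Alpine):

docker run --rm smebberson/alpine-nodejs sh -c 'ldd "$(which node)"'
# ldd reports a fully static binary as not being a dynamic program; such a
# binary cannot process.dlopen() native add-ons, which is the error above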

smebberson commented 8 years ago

I'll look at removing it; I'll work up a :dev tagged version of the image now.

polarathene commented 8 years ago

I'm not too familiar with static vs dynamic compiling. Dynamic linking may affect performance a bit; perhaps there is a benchmark/profile comparing static and dynamically compiled nodejs? I came across some advantages/disadvantages of dynamic linking here. It might be worth a short description/notice in the README about the difference, for developers who might not be aware of it.

smebberson commented 8 years ago

Yeah, to be honest, neither am I. I'm not sure if there is a benchmark available; I can't seem to find one. From reading that page on the Node.js wiki, it looks like a fully static build is a non-standard thing and might not allow all Node.js modules on npm to work.

If this test goes well, I think I'll remove the tag permanently so that these images work in more situations. Just doing a test build now :)

smebberson commented 8 years ago

@polarathene, sorry it took a while, but you should be able to use the :dl tag now (smebberson/alpine-nodejs:dl) to preview this container sans the --fully-static flag.

Let me know how you get on.

polarathene commented 8 years ago

@smebberson Seems to be working great now :) Thanks!

A similar Alpine/Node image is mhart/alpine-node; they provide static compilation via a base tag and note the difference to users (such as when needing to compile native binaries for a project requiring dynamic linking). It might be good to have similar documentation in the README if you offer both static and dynamic variants.

Will the other images, such as the consul and nginx combinations, get this change as well?

smebberson commented 8 years ago

@polarathene, I'm glad it works now.

I'm not sure if I'll add this as standard or as an optional tag. I'll definitely add some information to the README.

Other images such as the nginx combinations and Consul will follow suit. I was planning on deprecating the nginx-nodejs combo - were you planning on using that?

I'll reopen this to track it until this change is made permanent.

@matthewvalimaki, @ncornag any input on this?

polarathene commented 8 years ago

I'm more than likely going to go with separate nginx and nodejs images; that seems the better approach, right? Offering the extra combination image, rather than an example of the two images in use together, adds a bit of confusion about where to start (I'm new to Docker). As a new user I'm keeping it simple with just a nodejs image and will probably add an nginx one next. I'm not sure if I should go straight to the consul variants or get it working without consul first. I haven't used consul yet either, but I've been eager to from what I've read.

smebberson commented 8 years ago

@polarathene, yeah, I think having Nginx running in its own container, even if just to proxy requests, is a better setup. The examples in this repository show you how to do this easily too.
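
Roughly, that setup looks like this (the network, container, and image names here are just placeholders, and the nginx config is up to you):

# Run the app and the proxy as separate containers on a shared network
docker network create app-net
docker run -d --name app --net app-net my-node-app
# nginx.conf would proxy_pass to http://app:<your app port>; the config
# path inside the nginx image may differ from the standard one used here
docker run -d --name web --net app-net -p 80:80 \
    -v "$(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro" smebberson/alpine-nginx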

While going straight to the Consul variants adds some extra overhead to your learning and prototyping, ultimately it's the best approach. Service discovery with Docker gets pretty tricky, pretty quickly, without it.

Good luck ;)

matthewvalimaki commented 8 years ago

@smebberson I only use the consul-based images. Nginx and node run in separate containers, mainly because of separation of concerns and because I do not see a use for the combination. However, based on Docker Hub stats, the non-consul images are being used more than the consul ones, so based on that I'd keep what you have.

smebberson commented 8 years ago

@matthewvalimaki, sorry, I think I might have miscommunicated. What are your thoughts on the static issue? Are you in favour of keeping the images fully static, or is moving to a more standard non-static setup okay with you?

matthewvalimaki commented 8 years ago

@smebberson oh haha no problem. I'm glad I gave my thoughts though :)

I do not have an opinion on this, unfortunately.

ceymard commented 8 years ago

I do plan to keep using nodejs and nginx in the same container, simply because I like having my whole app bundled (especially when serving static files) rather than separated.

I like the idea of separation of concerns, but since the concern here is "serving my application" I don't see the point in splitting things further. I particularly want to avoid situations where one container was updated and the other wasn't, which simplifies update scenarios.

ceymard commented 8 years ago

Sorry, I realised I was a bit off topic there.

smebberson commented 8 years ago

@polarathene, I'm moving away from --fully-static with all nodejs variants of these images. I just want to make sure that I'm covering your usage scenario and that this will build properly. Are you able to send through a copy of your Dockerfile and relevant package.json bits and pieces so I can recreate and test?

polarathene commented 8 years ago

@smebberson I am not able to provide a Dockerfile for the specific project I was working with at present. However, I've recently come back to working with Docker and Alpine on something similar. The previous project used node-openzwave-shared; I'm now trying to get node-red with the package node-red-contrib-openzwave (which depends on node-openzwave-shared) to build with Alpine. I'll be happy to share the Dockerfile for that once it successfully compiles :)

The present issue is node-gyp not being happy about permissions. I'm not sure what best practice for installing packages with npm is; I believe the issue is due to using the -g flag. Should I be using an npm/node user with appropriate permissions? Redirect the global packages location to a user's home directory?

There was another issue with installing serialport, but if you specify the version, such as serialport@4.0.1, it compiles fine. I'm not sure if that issue is specific to Alpine images.

I've also read suggestions that native dependencies should be compiled in a build container and the compiled dependency then moved into the production container; others just do cleanup and remove the packages needed to compile the dependencies. I've also read that the latter method, if not done right in the Dockerfile, affects image size because deleted files still exist in earlier layers?
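
Roughly, the difference I mean is something like this (the package names and the npm step are just placeholders):

# Split RUNs: the build packages stay in the image, because the layer that
# added them is unchanged by the later `apk del`
RUN apk add --no-cache gcc g++ make python
RUN npm install
RUN apk del gcc g++ make python

# Single RUN: the packages never end up in a committed layer
RUN apk add --no-cache --virtual .build-deps gcc g++ make python \
    && npm install \
    && apk del .build-deps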

polarathene commented 8 years ago

I have the following Dockerfile currently compiling:

FROM smebberson/alpine-nodejs:dl

# coreutils is for `fmt`
# eudev-dev is required for libudev.h (open-zwave/cpp/hidapi/linux/hid.c) and `pkg-config`
# findutils is for additional `find` features
# linux-headers is required for linux/hidraw.h (open-zwave/cpp/hidapi/linux/hid.c)
RUN apk add --no-cache make gcc g++ python \
                       eudev-dev \
                       coreutils \
                       findutils \
                       linux-headers \
                       su-exec

RUN adduser -D -s /bin/false app
ENV APP=/app

# Install OpenZWave
# Some distro may require this or similar
# ENV LD_LIBRARY_PATH /usr/local/lib64

# TODO: Get project files via git
COPY open-zwave-1.4 $APP/src/open-zwave/
WORKDIR $APP/src/open-zwave/
RUN make && make install

# To avoid `npm install -g` issues, it's recommended to set these env vars and provide a location to store the global packages
ENV NPM_CONFIG_PREFIX="${APP}/.npm-global" NPM_PACKAGES="${APP}/.npm-global"
ENV PATH="$NPM_PACKAGES/bin:$PATH"
# Could just use the user's home directory instead?
RUN mkdir -p "${APP}/.npm-global"
RUN chown -R app:app $APP

# For some reason this file has permissions of 600 unlike the others which are 644
# node-gyp will error when using binding.gyp due to the wrong permissions
RUN chmod 644 /usr/lib/pkgconfig/libopenzwave.pc

# Install as non-root user to avoid node-gyp errors when installing globally as "nobody" user
RUN su-exec app npm install -g \
                            node-red \
                            node-red-contrib-knx \
                            node-red-contrib-openzwave

# node-red provides its service on this port
EXPOSE 1880

# Start node-red when container is run
# Full path /app/.npm-global/bin/node-red
CMD ["node-red"]

I'll update that tomorrow to compile from the GitHub project rather than a local copy, which should give you a test case?
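
For reference, this is roughly how I build and run it (the image tag and the Z-Wave controller's device path are just examples; adjust for your hardware):

docker build -t node-red-openzwave .
docker run -d -p 1880:1880 --device /dev/ttyACM0 node-red-openzwave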

polarathene commented 8 years ago

@smebberson I've been playing around with the Dockerfile for a bit. It works fine with both the dl and 6.0.0 tags, but fails on the latest tag for some reason.

FROM smebberson/alpine-nodejs:latest

ENV APP_USER="app" \
    APP_HOME="/home/app"
ENV OPENZWAVE_SRC="${APP_HOME}/open-zwave-src" \
# To avoid `npm install -g` issues, it's recommended to set these env vars and provide a location to store the global packages
    NPM_CONFIG_PREFIX="${APP_HOME}/.npm-global"
ENV PATH="${NPM_CONFIG_PREFIX}/bin:$PATH"

RUN adduser -D -s /bin/false $APP_USER

# make g++ gcc python to build
# coreutils is for `fmt`
# eudev-dev is required for libudev.h (open-zwave/cpp/hidapi/linux/hid.c) and `pkg-config`
# findutils is for additional `find` features that the npm package node-openzwave-shared uses
# linux-headers is required for linux/hidraw.h (open-zwave/cpp/hidapi/linux/hid.c)
# openssl for git clone to use https
# su-exec to run commands as the non-root $APP_USER (needed for npm install -g)
RUN apk add --no-cache --virtual .build-dependencies \
        make \
        g++ \
        gcc \
        openssl \
        python \
        # open-zwave dependencies
        linux-headers \
        # node-openzwave-shared(node-red-contrib-openzwave) dependencies
        coreutils \
        findutils \
    # eudev-dev is required to run node-red/open-zwave, su-exec for running as $APP_USER.
    && apk add --no-cache \
        eudev-dev \
        su-exec \
    # Build/Install Open-ZWave
    && mkdir -p $OPENZWAVE_SRC && cd $OPENZWAVE_SRC \
    && git clone https://github.com/OpenZWave/open-zwave.git $OPENZWAVE_SRC \
    && make && make install \
    # Install node-red and OpenZWave npm packages globally as $APP_USER
    && cd $APP_HOME \
    && su-exec $APP_USER mkdir -p "${NPM_CONFIG_PREFIX}" \
    && su-exec $APP_USER npm install -g \
        node-red \
        node-red-contrib-openzwave \
    # Cleanup. This was all chained into one RUN to avoid layers
    # caching deleted content (which bloats image size)
    && apk del .build-dependencies \
    && rm -rf $OPENZWAVE_SRC

# node-red service uses this port
EXPOSE 1880

# Start node-red as $APP_USER when container is run.
# Full path: /home/app/.npm-global/bin/node-red
CMD ["node-red"]

This is the output when it fails to load the serial and openzwave plugins for node-red:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 30-resolver: executing... 
[cont-init.d] 30-resolver: exited 0.
[cont-init.d] 40-resolver: executing... 
[cont-init.d] 40-resolver: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.

Welcome to Node-RED
===================

1 Aug 02:36:13 - [info] Node-RED version: v0.14.6
1 Aug 02:36:13 - [info] Node.js  version: v6.2.2
1 Aug 02:36:13 - [info] Linux 4.6.4-1-default x64 LE
1 Aug 02:36:13 - [info] Loading palette nodes
1 Aug 02:36:18 - [warn] ------------------------------------------------------
1 Aug 02:36:18 - [warn] [rpi-gpio] Info : Ignoring Raspberry Pi specific node
1 Aug 02:36:18 - [warn] [serialport] Error: Dynamic loading not supported
1 Aug 02:36:18 - [warn] [zwave] Error: Dynamic loading not supported
1 Aug 02:36:18 - [warn] ------------------------------------------------------
1 Aug 02:36:18 - [info] Settings file  : /root/.node-red/settings.js
1 Aug 02:36:18 - [info] User directory : /root/.node-red
1 Aug 02:36:18 - [info] Flows file     : /root/.node-red/flows_linux-7f6v.json
1 Aug 02:36:18 - [info] Creating new flow file
1 Aug 02:36:18 - [info] Starting flows
1 Aug 02:36:18 - [info] Started flows
1 Aug 02:36:18 - [info] Server now running at http://127.0.0.1:1880/

This is the output, with no problems, on the dl and 6.0.0 tags:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 30-resolver: executing... 
[cont-init.d] 30-resolver: exited 0.
[cont-init.d] 40-resolver: executing... 
[cont-init.d] 40-resolver: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.

Welcome to Node-RED
===================

1 Aug 06:26:44 - [info] Node-RED version: v0.14.6
1 Aug 06:26:44 - [info] Node.js  version: v6.3.0
1 Aug 06:26:44 - [info] Linux 4.6.4-1-default x64 LE
1 Aug 06:26:44 - [info] Loading palette nodes
1 Aug 06:26:50 - [warn] ------------------------------------------------------
1 Aug 06:26:50 - [warn] [rpi-gpio] Info : Ignoring Raspberry Pi specific node
1 Aug 06:26:50 - [warn] ------------------------------------------------------
1 Aug 06:26:50 - [info] Settings file  : /root/.node-red/settings.js
1 Aug 06:26:50 - [info] User directory : /root/.node-red
1 Aug 06:26:50 - [info] Flows file     : /root/.node-red/flows_linux-7f6v.json
1 Aug 06:26:50 - [info] Creating new flow file
1 Aug 06:26:50 - [info] Starting flows
1 Aug 06:26:50 - [info] Started flows
1 Aug 06:26:50 - [info] Server now running at http://127.0.0.1:1880/

There was a bunch of JS stack traces and a warning, but I believe that's specific to the npm package; nothing wrong with your image :)

smebberson commented 8 years ago

@polarathene, awesome. I'm glad it's all working with the latest setup. I'll move the Consul builds away from static now too, and :latest will now work for you as well.

Thanks for the Dockerfiles, I'll keep those as test cases.

sabrehagen commented 8 years ago

In c873e629df54eb25a94960e9984c168abf641b6f you removed the --fully-static flag from the alpine-nodejs image; however, the flag is still present in the current alpine-consul-nodejs Dockerfile.

I assume the change was meant to be propagated to the alpine-consul-nodejs image too. If so, removing the --fully-static flag and updating to the latest version of Node.js could be done at the same time. I'd offer a pull request, but I'm not clear on your internal version to Node.js version mapping scheme.
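
Roughly, the change I have in mind is something like this (the exact configure invocation in that Dockerfile may differ; this is only a sketch):

-    ./configure --prefix=/usr --fully-static && \
+    ./configure --prefix=/usr && \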