Applications are packed into docker images to be pulled and run by Hilbert.
Current top folders:

| Folder | Description |
|---|---|
| `helpers` | shell scripts shared between images |
| `images` | our base and application images |
NOTE: some applications may need further services (applications) running in the background.
Start from an existing `Dockerfile` (e.g. from `chrome/`) and change it according to your needs (see below), together with the corresponding `Makefile` and `docker-compose.yml`.
`Makefile`:

- `make pull` will try to pull the desired base image
- `make` will try to (re-)build your image (currently no build arguments are supported)
- `make check` will try to run the default command within your image
- `make prune` will clean up dangling docker images (left over after rebuilding images)

NOTE: `make` needs to be installed only on your development host.
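The targets above are thin wrappers around plain docker commands; the sketch below shows a rough correspondence (the image names are hypothetical placeholders, not the repository's actual defaults):

```shell
#!/bin/sh
# Rough correspondence between the make targets and docker commands.
# BASE and IMAGE are placeholders for the values each per-application
# Makefile actually uses.
BASE="hilbert/baseimage"
IMAGE="hilbert/chrome"

echo "make pull  -> docker pull ${BASE}"                              # fetch the base image
echo "make       -> docker build -t ${IMAGE} ."                       # (re-)build the image
echo "make check -> docker run --rm ${IMAGE}"                         # run the default command
echo "make prune -> docker rmi \$(docker images -f dangling=true -q)" # drop dangling images
```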
`Dockerfile` (for an example see `chrome/Dockerfile`):

- Specify your contact email via `MAINTAINER`.
- Choose a proper base image among the already existing ones (see above) via `FROM`.

NOTE: currently we base our images on `hilbert/baseimage`, built on top of `phusion/baseimage:0.9.18`, which in turn is based on `ubuntu:14.04` and contains a useful launcher wrapper (`/sbin/my_init`).

NOTE: we share docker images as much as possible by choosing the closest possible base image to start a new image from.
- Install the required packages (e.g. see `chrome/Dockerfile`) via `RUN`.

NOTE: one may also need to add signing keys and package repositories.

NOTE: it may be necessary to update the repository caches before installing some packages. Also do not forget to clean up afterwards.

NOTE: the best way to install something is `RUN update.sh && install.sh YOUR_PACKAGE && clean.sh`.
- Add files via `ADD` or `COPY`.

NOTE: use `ADD URL_TO_FILE FILE_NAME_IN_IMAGE` to add something from the network at build time.

NOTE: use `COPY local_file1 local_file2 ... PATH_IN_IMAGE/` to copy local files (located alongside your `Dockerfile`) into the image (with owner `root` and the same file permissions).
- Run further set-up steps via `RUN`. NOTE: only previously installed/added (into the image) executables can be run.
- Defaults may be specified in the `Dockerfile` and later overridden at run time.

NOTE: there is no need to put run-time specifications inside the `Dockerfile` (e.g. `EXPOSE`, `PORT`, `ENTRYPOINT`, `CMD`, etc.), as they will be overridden at run time via `docker-compose`.
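Putting the steps above together, a minimal `Dockerfile` might look as follows. This is only a sketch: the package and file names are hypothetical, while `update.sh`, `install.sh`, and `clean.sh` refer to the shared helper scripts mentioned above.

```dockerfile
# Hypothetical sketch of a new application image, following the steps above.
FROM hilbert/baseimage
MAINTAINER your.name@example.com

# install required packages in a single layer, cleaning up caches afterwards
RUN update.sh && install.sh YOUR_PACKAGE && clean.sh

# copy a local start-up script (placed next to this Dockerfile) into the image
COPY start.sh /usr/local/bin/start.sh

# no EXPOSE/ENTRYPOINT/CMD here: run-time settings come from docker-compose.yml
```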
TODO: create `docker-compose.yml` for all images.

Use `make` to build the image.

`Makefile` and `docker-compose.yml`:
What can be specified at run time:

- your docker image
- the default command
- environment variables to be passed to the executed command
- exposed (and redirected) ports
- mounted devices
- mounted volumes (local and docker's logical volumes)
- restart policy, e.g. `"on-failure:5"` (see https://blog.codeship.com/ensuring-containers-are-always-running-with-dockers-restart-policy/)
- labels attached to the running container (e.g. `is_top_app=0` for a background service and `is_top_app=1` for the top-front GUI application)
- working directory
- mode of execution: not privileged in most cases
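These run-time settings typically end up in the per-application `docker-compose.yml`. A minimal sketch follows; the service, image, command, and device names are hypothetical, not taken from the repository:

```yaml
myapp:
  image: hilbert/myapp                 # your docker image
  command: /usr/local/bin/start.sh     # default command
  environment:
    - APP_MODE=demo                    # environment variables
  ports:
    - "8080:80"                        # exposed (redirected) ports
  devices:
    - /dev/snd                         # mounted devices
  volumes:
    - /tmp/.X11-unix:/tmp/.X11-unix    # mounted volumes
  restart: "on-failure:5"              # restart policy
  labels:
    is_top_app: "1"                    # top-front GUI application
  working_dir: /home/user              # working directory
  privileged: false                    # not privileged in most cases
```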
See for example `mng/docker-compose.yml` or `mng/Makefile`.
`setup.sh`: pull or build the necessary starting images (`hilbert/*`).

Previously our images were available via a different tag in the `malex984/dockapp` repository (https://registry.hub.docker.com/u/malex984/dockapp/).

Run (and change) `setup.sh` in order to pull the base image and build the starting images.
We assume that the host Linux system runs the docker service.

`hilbert/dummy` contains `customize.sh`, which performs customizations to the running `:dummy` container; these customization changes can then be detected with `docker diff` and archived together (e.g. as `/tmp/OGL.tgz`) for later use by `hilbert/base/setup_ogl.sh`.

`:up/customize.sh`: customize each libGL-needing image (e.g. `:x11` and `:test` by default for now):
Running `:up/customize.sh` on such a host will enable one to detect known hardware or kernel modules (e.g. VirtualBox Guest Additions or the NVidia driver) in order to localize/customize some starting images (e.g. `:test.nv.340.76` or `:x11.vb.4.3.26`), which will then be tagged with local names (e.g. `test:latest` or `x11:latest`).
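The customization flow above can be sketched roughly as follows. All names and versions here are illustrative stand-ins; the actual logic lives in `customize.sh` and `hilbert/base/setup_ogl.sh`.

```shell
#!/bin/sh
# Sketch only: derive a localized tag from a detected driver and print the
# docker commands involved. Driver name/version are hypothetical examples.
DRIVER="nv"            # e.g. NVidia driver detected on the host
VERSION="340.76"
LOCALIZED="test.${DRIVER}.${VERSION}"   # localized starting image tag

echo "docker run --name dummy_c hilbert/dummy customize.sh"  # apply the changes
echo "docker diff dummy_c"                                   # detect what changed
echo "(archive the changed files, e.g. as /tmp/OGL.tgz, for setup_ogl.sh)"
echo "docker tag ${LOCALIZED} test:latest"                   # give it the local name
```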
We assume the host system to be fully pre-configured (with all necessary kernel modules installed and loaded). Therefore we avoid installing/building kernel modules inside a docker container (e.g. using `dkms`).

`runme.sh`: launch the demo prototype application.
The shell script `runme.sh` is supposed to be the demo entry point. Using the host docker, it runs the `main` image (or its alteration, if available), which contains a glue-together script `main.sh` that then takes over control (!) of the host system (the docker service and `/dev`).

`main.sh` (and its helpers, e.g. `run.sh` and `sv.sh`) is the only piece that is supposed to be aware of docker!

The glue script proposes a choice menu (e.g. via `hilbert/menu/menu.sh`), which exits with some return code; depending on it, the glue script takes some action or quits the main infinite loop. It may start an X11 server if your host was not already running one, in order to switch the host monitor into graphical mode.
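The menu-driven main loop described above can be sketched as follows. This is a simplified stand-in, not the real `main.sh`: `run_menu` and `start_app` are hypothetical stubs replacing the actual docker invocations, so the loop terminates on its own.

```shell
#!/bin/sh
# Sketch of the glue loop: show a menu, then act on its exit code.
# CHOICES simulates the user's menu selections, ending with 0 (Quit).
CHOICES="2 3 0"

run_menu() {
    # real version: docker run ... hilbert/menu menu.sh
    set -- $CHOICES
    rc=$1; shift
    CHOICES="$*"
    return "$rc"
}

start_app() {
    # real version: launch the docker image selected via the menu
    STARTED="$STARTED $1"
}

STARTED=""
while true; do
    run_menu; rc=$?
    [ "$rc" -eq 0 ] && break   # Quit chosen: leave the infinite loop
    start_app "$rc"            # otherwise take the corresponding action
done
echo "started:$STARTED"
```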
NOTE: please don't do that while using the host monitor in text mode, since at the moment `menu.sh` is only suitable for console/text mode (but we are working on a GUI alternative). It is better to do that via SSH. The menu then offers some action (e.g. `Test`) or `Quit`.

Troubleshooting: you may encounter `xterm: Error 32, errno 2: No such file or directory` (reason: `get_pty: not enough ptys`).
It seems that something clears the permissions on `/dev/pts/ptmx` in the course of docker mounting `/dev` or containers using it. Since this problem happens rarely, it may be related to an unexpected `docker rm -vf` of a running container with an allocated pty. The following may also be related:

Quick fix: `sudo chmod a+rw /dev/pts/ptmx`
NOTE: what about `/dev/ptmx`?
According to http://stackoverflow.com/a/29546560: if your machine had a kernel update but you haven't restarted yet, then docker freaks out like that.
Clean-up commands:

```shell
# remove dangling images
docker rmi $(docker images -f "dangling=true" -q)

# remove old malex984/dockapp images
docker images | grep malex984/dockapp | awk '{ print $1 ":" $2 }' | xargs docker rmi -f

# remove hilbert/ images
docker images | grep 'hilbert/' | awk '{ print $1 ":" $2 }' | xargs docker rmi -f

# remove locally tagged images and all containers
docker rmi -f x11 test dummy
docker ps -aq | xargs docker rm -fv
```
This project is licensed under the Apache v2 license. See also Notice.