Closed — m273d15 closed this issue 7 years ago
I tried it on Ubuntu 16.04, kernel version 4.8.0, 64-bit.
Steps:
mv .env.default .env
(the default file contains a dummy configuration)
sudo docker-compose up
(sudo is only needed if you configured Docker for root)
I was wrong! This does not work as I thought! We still need to find the right port and host!
I tried it on Windows 7, 64-Bit and got the following error message:
$ docker-compose up
ERROR: Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
Therefore I've created a new machine using docker-machine create swp (but I'm not sure if this is correct). Then I've executed docker-machine start swp, configured the environment variables using docker-machine env swp, and also executed eval $("C:\Program Files\Docker Toolbox\docker-machine.exe" env swp).
Now docker-compose up gave me this error:
$ docker-compose up
Recreating eidfuswp_static_1 ... error
Recreating eidfuswp_db_1 ... error
ERROR: for eidfuswp_db_1 Cannot start service db: failed to initialize logging driver: Unix syslog delivery error
ERROR: for db Cannot start service db: failed to initialize logging driver: Unix syslog delivery error
ERROR: for static Cannot start service static: failed to initialize logging driver: Unix syslog delivery error
ERROR: Encountered errors while bringing up the project.
I couldn't resolve this issue, but after commenting out all logging-related lines within docker-compose.yml it's now possible to execute docker-compose up. However, I didn't understand how to access the "homepage" and configure the .env file properly.
This setup description was moved to a wiki page.
I had some issues getting the example running on Windows 7 64bit, too.
First of all, I installed the Docker Toolbox. After that, as @baris1892 already mentioned, one has to create a docker machine:
docker-machine create swp
docker-machine start swp
docker-machine env swp
eval $("C:\Program Files\Docker Toolbox\docker-machine.exe" env swp)
Then, I had some issues with Windows vs. Linux line endings, so I would generally recommend disabling automatic line-ending conversion before cloning the repo:
git config --global core.autocrlf false
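To be sure the setting actually took effect, the value can be read back right after setting it (a minimal sketch of the same command pair):

```shell
# Disable automatic CRLF conversion globally, then read the value
# back to confirm it is stored in the global git config.
git config --global core.autocrlf false
git config --global core.autocrlf
```

The second command should print `false`; if it prints nothing or `true`, the setting was not applied.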
The original repo contained some Linux symbolic links which did not work on Windows. I created a new branch windows_fix that deletes them, and I added a new line in www/conf/sites-available/envreplace.sh to copy the files manually:
cp $DIR/*.conf $DIR/../sites-enabled
Facing the same problem as @baris1892, I had to activate syslogd on the docker machine. One can do this as follows:
docker-machine ssh swp
syslogd -n &
exit
For testing it can be more comfortable to use Docker's internal logging mechanism. This can be achieved by deleting all logging-related lines from docker-compose.yml. I added an example docker-compose.logging.yml to the branch windows_fix. Logs can then be retrieved by calling something like this:
docker container logs eidfuswp_www_1 | tail
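For reference, a service entry that uses Docker's built-in json-file driver instead of syslog looks roughly like this (a sketch in compose file format v2 syntax; older v1 files use log_driver/log_opt instead, and the size limits here are illustrative values, not taken from this project):

```yaml
services:
  www:
    logging:
      driver: json-file    # Docker's default driver; output stays readable via `docker container logs`
      options:
        max-size: "10m"    # rotate the log file after 10 MB (illustrative)
        max-file: "3"      # keep at most 3 rotated files (illustrative)
```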
To use the example it is necessary to have some self-signed certificates (or real ones). They can be created by executing the following lines and confirming every prompt:
openssl req -newkey rsa:2048 -nodes -keyout MAIN.key -x509 -days 365 -out MAIN.cer
openssl req -newkey rsa:2048 -nodes -keyout www.key -x509 -days 365 -out www.cer
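To skip the interactive prompts entirely, the certificate subject can be passed on the command line instead (a sketch; the CN values below are assumptions and should match whatever domain you configure in .env):

```shell
# Generate self-signed certificates non-interactively; -subj replaces
# the interactive questions (the CN values are placeholders).
openssl req -newkey rsa:2048 -nodes -keyout MAIN.key -x509 -days 365 \
    -out MAIN.cer -subj "/CN=eid.com"
openssl req -newkey rsa:2048 -nodes -keyout www.key -x509 -days 365 \
    -out www.cer -subj "/CN=www.eid.com"
```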
Following this question, it is necessary to save the certificates (and in general all files which should be mounted) within the user directory, e.g. C:/Users/[User]/certs/..
With the certificates at the right place, the .env file has to be configured properly. I chose eid.com as domain name. The IPv4 prefix also has to be changed depending on the network configuration (e.g. by looking at ifconfig (Linux) or ipconfig (Windows)).
BOILERPLATE_DOMAIN=eid.com
# network
BOILERPLATE_IPV4_16PREFIX=192.168
BOILERPLATE_IPV6_SUBNET=bade:affe:dead:beef:b011::/80
BOILERPLATE_IPV6_ADDRESS=bade:affe:dead:beef:b011:0642:ac10:0080
# certificates
BOILERPLATE_WWW_CERTS=/c/Users/keller/certs
# API-related
BOILERPLATE_API_SECRETKEY=1234
BOILERPLATE_DB_PASSWORD=1234
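Instead of reading the prefix off ifconfig by hand, it can also be derived from the machine's first reported address (a sketch, assuming hostname -I is available and prints the primary address first, as on most Linux systems):

```shell
# Keep only the first two octets of the first reported address,
# e.g. "192.168.0.128 10.0.0.5" -> "192.168".
IPV4_16PREFIX=$(hostname -I | cut -d' ' -f1 | cut -d. -f1,2)
echo "BOILERPLATE_IPV4_16PREFIX=$IPV4_16PREFIX"
```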
This is a workaround and should be replaced soon. The problem is that Django does not allow requests which do not contain the domain name (https://192.168.0.128 is refused). Therefore we manipulate the hosts file of our docker machine by adding the following line:
192.168.0.128 eid.com
This redirects the traffic for eid.com to the IP without DNS.
Run docker-compose up --build to build and start the containers.
After logging into our docker machine the following should work now:
docker@swp:~$ wget eid.com
Connecting to eid.com (192.168.0.128:80)
Connecting to eid.com (192.168.0.128:443)
index.html 100% |***************************| 79 0:00:00 ETA
docker@swp:~$ cat index.html
<!doctype html>
<html>
<body>
<h1>yo hey</h1>
</body>
</html>
The example page can be accessed from our docker machine but not from outside (meaning the Windows host). It is possible to use an SSH tunnel to forward the requests to Django, but Django refuses them as the domain name is wrong. A solution would be to configure the network to map to the host-only adapter of the docker machine with proper DNS. Any help and ideas are appreciated.
I hope I did not forget anything and you will be able to run the example on Windows now. See #2.
The webpage can be accessed from the host by adding the following line to C:/Windows/System32/drivers/etc/hosts:
192.168.99.101 eid.com
where the IP address is the IP address of the docker machine (obtained via docker-machine ip swp). It only works after restarting the browser. This is a workaround until we find a better solution than editing the hosts file.
@BenjaminKeller: Thanks for your detailed instructions!
Unfortunately, running docker-compose up --build gave me the following error:
ERROR: for aa690276ca80_0_eidfuswp_www_1 Cannot start service www: Invalid address 192.168.0.128: It does not belong to any of this network's subnets
However, docker-machine ip swp gave me 192.168.99.100. The BOILERPLATE_IPV4_16PREFIX should be the same as yours, I think...
@baris1892 Try to build first:
docker-compose build
Then start all containers:
docker-compose start
Then post the log of the www container here, please:
docker container logs eidfuswp_www_1 | tail
Connect to your docker machine swp and provide the output of ifconfig.
The problem is very confusing, especially as your docker machine IP is in the same subnet as the required IP. Did you try to set up a new docker machine and rebuild? (I am assuming that you are using the code of branch windows_fix.)
Yes, I'm using the branch windows_fix (and the content of docker-compose.logging.yml). I've recreated the machine and now the error is gone, however something is still wrong regarding the hosts file. I've added 192.168.99.100 eid.com, where the IP was obtained through docker-machine ip swp, but when executing ping eid.com it still connects to the real eid.com page:
docker@swp:~$ ping eid.com
PING eid.com (45.33.14.247): 56 data bytes
ifconfig outputs the following: https://pastebin.com/LmJyFLSd
ServerFault is your friend. Regarding the hosts file, the most common mistake is too much whitespace between IP and domain. Be sure that the format is like
<ip><tab><domain><enter>
If that does not work, try the solutions of the mentioned link. As this is a specific problem, we might also consider solving it in a private session.
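One way to guarantee the single-tab separator is to generate the line rather than type it (a sketch; append the output to the hosts file with sufficient rights, and substitute your own docker machine IP):

```shell
# printf emits exactly one tab between IP and domain plus a trailing
# newline, matching the <ip><tab><domain><enter> format.
printf '%s\t%s\n' '192.168.99.100' 'eid.com'
```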
@BenjaminKeller: I've recreated the whole machine again (using your updated develop branch), but now I'm getting an error related to the SSL certificates:
www_1 | mkdir: cannot create directory '/etc/nginx/sites-available/../sites-enabled': File exists
www_1 | 2017/08/01 08:10:29 [emerg] 17#17: BIO_new_file("/etc/ssl/private/MAIN.cer") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/ssl/private/MAIN.cer','r') error:2006D080:BIO routines:BIO_new_file:no such file)
www_1 | nginx: [emerg] BIO_new_file("/etc/ssl/private/MAIN.cer") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/ssl/private/MAIN.cer','r') error:2006D080:BIO routines:BIO_new_file:no such file)
The location of the certificates in .env is:
BOILERPLATE_WWW_CERTS=/c/Users/Baris/certs
So it seems that the four generated cert files (in /c/Users/Baris/certs) are not copied to the needed destination (/etc/ssl/private). Any ideas on how to resolve this?
docker-compose.yml looks fine I think:
www:
[...]
volumes:
- ${BOILERPLATE_WWW_CERTS}:/etc/ssl/private:ro
I have two guesses:
1. Try to manually copy the certs to the directory /etc/ssl/private on your docker machine via ssh.
2. Adapt your .env file:
BOILERPLATE_WWW_CERTS=C:\Users\Baris\certs
Or try to copy them (I don't know how).
@baris1892 There was also a typo in the certificate generation. See the updated version in the wiki.
I tried to run the project as you can see in the code box below.
My result is that I see the expected result in the browser, with the restriction that SSL does not work properly and the browser warns me (this could be a misconfiguration of my browser and certificates). The strange problem is that the api container does not boot successfully since it is unable to access the db host. Therefore the container starts and exits with code 1 in a loop.
Is someone familiar with this problem?
function createCert {
NAME_PREFIX=$1
openssl req -newkey rsa:2048 -nodes -keyout $NAME_PREFIX.key -x509 -days 365 -out $NAME_PREFIX.cer
}
function addToEnv {
echo "$1" >> .env
}
CERT_DIR=/etc/ssl/certs
echo Remove existing setup files
rm -f .env {www,MAIN}.{cer,key} 2> /dev/null
echo Shutdown compose
sudo docker-compose down
echo Create .env file
addToEnv "BOILERPLATE_DOMAIN=eid.de"
addToEnv "BOILERPLATE_IPV4_16PREFIX=$(hostname -I | cut -d' ' -f 1 | cut -d. -f 1,2)"
addToEnv "BOILERPLATE_IPV6_SUBNET=bade:affe:dead:beef:b011::/80"
addToEnv "BOILERPLATE_IPV6_ADDRESS=bade:affe:dead:beef:b011:0642:ac10:0080"
addToEnv "BOILERPLATE_WWW_CERTS=$CERT_DIR"
addToEnv "BOILERPLATE_API_SECRETKEY=$(date | md5sum)"
sleep 1
addToEnv "BOILERPLATE_DB_PASSWORD=$(date | md5sum)"
echo Create ssl certificates
createCert MAIN
createCert www
sudo cp {MAIN,www}.{cer,key} "$CERT_DIR/"
sudo chown root:root $CERT_DIR/{MAIN,www}.{cer,key}
echo Start compose
sudo docker-compose up --build
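A side note on the script above: $(date | md5sum) embeds md5sum's trailing " -" field separator into the .env value and is derived from the predictable current timestamp. A cleaner variant (a sketch, assuming openssl is installed) would be:

```shell
# openssl rand prints a bare hex string: no trailing " -" suffix and
# not derived from the clock, so it is usable directly as a secret.
SECRET=$(openssl rand -hex 16)
echo "BOILERPLATE_API_SECRETKEY=$SECRET"
```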
The certificates are self-signed, so browsers are not expected to accept them without a warning. If your api has problems accessing the db, try to build it completely from scratch and be sure that all volumes were deleted:
docker-compose down
docker volume ls
docker volume rm <volume>
After that, it should work fine. Otherwise, provide the log files, please.
Nice, docker volume rm $(docker volume ls -q) solved the issue. I think docker volume rm eidfuswp_db_mysql would be enough. Thank you!
I added a setup script (setup.sh, commit ba9c446fe59ecd634a5cb4b94c96475a7f0bb0d2) to the linux_dev branch in order to simplify and unify the setup process on Linux.
On Windows 10 the Hello World program is running, too. In the wiki I added a "Troubleshooting" section with some hints. Thanks for the detailed instructions!
Works on Boot Camp Windows 10 on a 2012 Mac as well.
Works for all participants, therefore closed.
In order to start the project, it would be useful if everybody were able to run the hello world example.