amato-gianluca / docker-wims

Docker image for WIMS (Web Interactive Multipurpose Server)

Cannot enter to administration #2

jpahullo commented 1 week ago

Hi,

Thanks a lot for your work. It works like a charm, which is not easy to achieve with a tool like WIMS.

Some time ago I tried your image and could easily enter the administration by (commands sketched right after this list):

  1. adding "manager_site=IP" to log/wims.conf and making sure it is 0600 wims:wims;
  2. creating a 0600 wims:wims log/.wimspass with some sample content like 1234, just for a local deployment.
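
For anyone reading along, a minimal sketch of those two steps run inside the container (e.g. via docker exec -it <container> bash); the IP and the password 1234 are placeholders for your own values:

# append the manager_site line and create the permanent password file
echo "manager_site=IP" >> /home/wims/log/wims.conf
echo "1234" > /home/wims/log/.wimspass
# make both files 0600 and owned by wims:wims
chown wims:wims /home/wims/log/wims.conf /home/wims/log/.wimspass
chmod 0600 /home/wims/log/wims.conf /home/wims/log/.wimspass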

But just today I downloaded your latest image and I can no longer enter the administration menu, following the same steps:

root@wims:/home/wims/log# ls -lah
total 20K
drwxr-xr-x 2 root root 4.0K Jun 30 20:04 .
drwxr-xr-x 1 wims wims 4.0K May 12 11:08 ..
-rw------- 1 wims wims    5 Jun 30 20:04 .wimspass
-rw------- 1 wims wims   66 Jun 30 20:04 wims.conf
root@wims:/home/wims/log# 

I always see this page:

[screenshot]

and then this page, after entering the permanent password from log/.wimspass:

[screenshot]

I also checked that no single-use password has been generated:

root@wims:/home/wims/tmp/log# ls -lah
total 24K
drwx------ 1 wims wims 4.0K Jun 30 20:03 .
drwxr-xr-x 1 wims wims 4.0K May 12 11:12 ..
srwxr-xr-x 1 wims wims    0 Jun 30 20:03 .wimslogd
-rw-r--r-- 1 wims wims    0 Jun 30 20:09 lastclean
-rw-r--r-- 1 wims wims    0 Jun 30 20:03 wimslogd.err
-rw-r--r-- 1 wims wims   42 Jun 30 20:03 wimslogd.out
-rw-r--r-- 1 wims wims    2 Jun 30 20:03 wimslogd.pid
root@wims:/home/wims/tmp/log# 

I do not know if it is something related to the docker image or to the WIMS service.

Have you logged in as administrator on WIMS recently? If so, what did you do to enter as admin?

The last time, I could enter the administration to check the supported software, versions and so on. But not with the latest image.

Thanks a lot for your time.

Jordi

jpahullo commented 1 week ago

By the way, the IP to set in wims.conf has to be the host IP from the container's point of view:

[screenshot]

In my case, Apache started with IP 172.24.0.2, so the host IP is 172.24.0.1.

root@wims:/home/wims/log# cat wims.conf 
threshold1=1200
threshold2=2400
manager_site=127.0.0.1,172.24.0.1
root@wims:/home/wims/log# 
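
In case it helps, a sketch of two ways to find that gateway address (it assumes the iproute2 tools are available inside the container; the network name wims_default is only a guess based on the compose project):

# inside the container: the default gateway is the host as seen from the container
ip route show default | awk '{print $3}'
# or, from the host, inspect the compose network (name is an assumption)
docker network inspect wims_default --format '{{(index .IPAM.Config 0).Gateway}}'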

In addition, compared to the example docker-compose.yml, I had to remove the 127.0.0.1 from the port mapping, leaving just 10000:80.

jpahullo commented 1 week ago

services:
  app:
    image: amatogianluca/wims
    security_opt:
      - seccomp:unconfined
    hostname: wims
    restart: always
    volumes:
      - ./wims:/home/wims/log:Z
    ports:
      - 10000:80

jpahullo commented 1 week ago

Or... how do you enter the WIMS administration to check that everything works as expected when you define the docker image?

Thanks a lot!

Jordi

amato-gianluca commented 1 week ago

Hi Jordi, I am working on this issue, which is rather bizarre: everything works fine if I run the image using Docker Desktop, but I have exactly the same problem as you with the standard system-wide Docker installation.

amato-gianluca commented 1 week ago

May I ask you:

  1. what the OS on the host system is (Linux Fedora 40 in my case)?
  2. whether the problem persists when you downgrade the wims image to version 4.24-1 (it does for me)? A sketch of both checks follows this list.
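
For reference, a quick sketch of how both checks could be done (the image tag is the one mentioned above; the commands are standard Docker/Compose usage):

# 1. host OS
cat /etc/os-release
# 2. set image: amatogianluca/wims:4.24-1 in docker-compose.yml, then
docker compose pull
docker compose up -d
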
jpahullo commented 1 week ago

Hi @amato-gianluca ,

Thanks for the quick feedback.

I am using Linux (Ubuntu 23.04) and the system-wide Docker installation, with docker compose as a Docker plugin:

$ docker info
Client: Docker Engine - Community
 Version:    24.0.7
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.11.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.21.0
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

[...]

Let me check if downgrading to 4.24-1 solves the issue.

Thanks a lot,

Jordi

jpahullo commented 1 week ago

Hi again @amato-gianluca ,

I tried (I think) what you asked before.

I edited my local docker-compose.yml like this:

services:
  app:
    image: amatogianluca/wims:4.24-1
    security_opt:
      - seccomp:unconfined
    hostname: wims
    restart: always
    #volumes:
    #  - ./wims:/home/wims/log:Z
    ports:
      - 10000:80

I commented out the volumes part to ensure there is nothing inherited between trials.

I have to say that there is no luck yet:

[screenshot]

Any idea? What can I do now?

Thanks for your support,

Jordi

jpahullo commented 1 week ago

The permissions of the files are as follows:

wims@wims:~/log$ ls -lah
total 80K
drwx------  8 wims wims 4.0K Jul  1 16:57 .
drwxr-xr-x  1 wims wims 4.0K Jul  1 16:54 ..
-rw-r--r--  1 wims wims  577 Jul  5  2021 .developers.template
-rw-------  1 wims wims    7 Jul  1 16:54 .wimspass
-rw-------  1 wims wims  435 Jul  1 16:57 access.log
drwxr-xr-x  3 wims wims 4.0K Jul  1 16:55 account
drwx------ 54 wims wims 4.0K Jul  1 16:55 classes
drwx------  4 wims wims 4.0K Jul  1 16:55 forums
-rw-r--r--  1 wims wims  196 Mar  7  2018 front.phtml.template
-rw-r--r--  1 wims wims  207 Aug 28  2015 manager_msg.phtml.template
drwxr-xr-x  2 wims wims 4.0K Jun 27  2022 modules
-rw-r--r--  1 wims wims  664 Sep 11  2013 motd.phtml.template
drwxr-xr-x  2 wims wims 4.0K Jul  1 16:55 referer
-rw-------  1 wims wims   37 Jul  1 16:54 referer.log
-rw-------  1 wims wims  116 Jul  1 16:54 session.log
drwxr-xr-x  2 wims wims 4.0K Jun 27  2022 stat
-rw-r--r--  1 wims wims    4 Sep 18  2022 update-version
-rw-------  1 wims wims   66 Jul  1 16:53 wims.conf
-rw-r--r--  1 wims wims  211 Feb  1  2019 wims.conf.access.template
wims@wims:~/log$ 

and the content of the two files:

wims@wims:~/log$ cat .wimspass 
123456
wims@wims:~/log$ cat wims.conf
threshold1=1200
threshold2=2400
manager_site=127.0.0.1,172.22.0.1
wims@wims:~/log$ 

It may be helpful.

Jordi

amato-gianluca commented 5 days ago

Hi Jordi, I've finally found the root of the problem. I am explaining it in detail, as a form of documentation.

The problem lies in the way the wims.cgi script accesses the log/.wimspass file. It does so by setting its real user id to that of the wims user and calling a shell script. The problem is that:

  1. in the image, the wims user is created by adduser with the default uid and gid, i.e. 1000;
  2. the setup restricts that user to at most 1024 processes;
  3. on Linux this limit (RLIMIT_NPROC) is counted per real uid across the whole kernel, and since the container shares the host's kernel and, without user namespaces, the host's uids, every host process running under uid 1000 counts toward it as well.

Now, 1000 is most likely the uid of the first (non-root, non-system) user in the host system, so you are probably logged in with the same uid. However, in any desktop system, 1024 processes is ridiculously low (each thread actually counts as a different process), and you are probably using far more than 1024 processes. Therefore, when wims.cgi in the container tries to launch the script that is supposed to read the log/.wimspass file, it fails silently.
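
One way to check whether this is what is happening is to count, on the host, how many threads already belong to uid 1000 (the limit is enforced per real uid, and every thread counts); a small sketch assuming the usual procps ps:

# on the host: count the threads (LWPs) whose real uid is 1000
ps -L -U 1000 --no-headers | wc -l

If the result is close to or above 1024, the script launched by wims.cgi cannot be spawned.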

Now, let's see the possible solutions:

  1. In the host system, use a different user with a different uid.
  2. Use user namespaces to separate the uids in the container from the uids in the host system (https://docs.docker.com/engine/security/userns-remap/).
  3. Build a new docker image yourself, using a different uid and gid for the wims user. It should be enough, in the Dockerfile of this repository, to add the options --uid xyz --gid xyz to the line adduser --disabled-password --gecos '' wims (a sketch follows this list).
  4. Wait for my new image that will fix the problem (but I want to think carefully about what to do), and possibly speak to the WIMS developers.
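
As a concrete sketch of option 3 (the value 5000 is just an arbitrary example of an unused uid/gid), the user creation in the Dockerfile would become something like:

# create the group first, then the wims user with a fixed uid and gid (5000 is only an example)
addgroup --gid 5000 wims
adduser --disabled-password --gecos '' --uid 5000 --gid 5000 wims
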
jpahullo commented 3 days ago

Hi @amato-gianluca ,

Thanks for the feedback. I did not answer before because I wanted to test case 3) on our image (based on your work). I created the group first; both gid and uid are 10000.

But with that change, I cannot even activate the button to enter the administration section (the one that brings us to the page with the field for the manager password).

Something incredible.

Now I have other priorities and I will be disconnected a bit from this issue.

Once I have time to address this issue, I will be back.

However, any feedback on its evolution and resolution will be very welcome.

Thanks a lot for your work.

Once I conclude our work, I will try to share some changes and improvements with you, just in case you want to include them in your image.

Thanks again,

Jordi

amato-gianluca commented 2 days ago

I have updated the docker image. The new version 4.26-1 should fix this problem by simply not setting a custom limit on the number of processes.
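
For anyone landing on this issue, a sketch of the upgrade (assuming the docker-compose.yml shown earlier in this thread, with the image line changed to the new tag):

# set image: amatogianluca/wims:4.26-1 in docker-compose.yml, then
docker compose pull
docker compose up -d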