Hi,
I have a system with another web service using ports 80/443. Can I deploy motley-cue on the same system? I am looking in particular at this line: https://github.com/dianagudu/motley_cue/blob/v0.7.0/etc/gunicorn.conf.py#L12-L19
Thanks!
Hi,
Yes, you can change the port where motley-cue is running. If you installed the motley-cue package using apt or any other package manager, it also pulled in nginx as a dependency and created a site file in /etc/nginx/sites-enabled/nginx.motley_cue. You can configure the port there, restart nginx, and it should work.
The line you mentioned in gunicorn.conf contains the default settings, but the packaged configuration uses the BIND variable (meaning gunicorn listens on a socket, not a port, and sits behind the nginx proxy). This is set in /etc/motley_cue/motley_cue.env.
Check out this page in the docs for how this works: https://motley-cue.readthedocs.io/en/latest/running.html#gunicorn
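For reference, the packaged setup looks roughly like this (a sketch, not the exact shipped files; the proxy details may differ between versions):
# /etc/nginx/sites-enabled/nginx.motley_cue (sketch)
server {
    listen 8080;
    listen [::]:8080;
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/motley_cue/motley-cue.sock;
    }
}
# /etc/motley_cue/motley_cue.env (sketch)
BIND="unix:/run/motley_cue/motley-cue.sock"
After changing the port, restart nginx and check that e.g. curl http://localhost:<port>/info answers (assuming /info is reachable without a token).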
Hi,
Thanks for your quick answer!
I tried again today to configure motley-cue on a fresh VM, running on port 8181, and when I try to log into the VM with mccli it asks me for a password:
mccli ssh <ip> --oidc <account> --mc-endpoint http://<ip>:8181
Password:
However, I expect to log into the VM without providing a password.
Here are the errors in the log files on the VM:
$ sudo grep ERROR -r /var/log/motley_cue/
/var/log/motley_cue/motley_cue.gunicorn.error:[2024-09-23 06:48:35 +0000] [3581] [ERROR] Worker (pid:3601) was sent SIGTERM!
/var/log/motley_cue/motley_cue.gunicorn.error:[2024-09-23 06:48:35 +0000] [3581] [ERROR] Worker (pid:3600) was sent SIGTERM!
/var/log/motley_cue/motley_cue.gunicorn.error:[2024-09-23 06:52:10 +0000] [5171] [ERROR] Worker (pid:5176) was sent SIGTERM!
/var/log/motley_cue/motley_cue.gunicorn.error:[2024-09-23 06:52:10 +0000] [5171] [ERROR] Worker (pid:5177) was sent SIGTERM!
/var/log/motley_cue/motley_cue.gunicorn.error:[2024-09-23 06:52:23 +0000] [6209] [ERROR] Worker (pid:6217) was sent SIGTERM!
/var/log/motley_cue/motley_cue.gunicorn.error:[2024-09-23 06:52:23 +0000] [6209] [ERROR] Worker (pid:6214) was sent SIGTERM!
/var/log/motley_cue/feudal.log:[2024-09-23 06:54:20,905] { ...er/backend/hooks.py:15 } ERROR - Hook 'post_create' not implemented
/var/log/motley_cue/mapper.log:[2024-09-23 06:54:20] [ldf_adapter] ERROR - Hook 'post_create' not implemented
Is this expected?
Hi,
the error regarding the post_create script is not an issue; I assume you don't configure one, which is perfectly fine. I'd be more interested in the logs before the [ERROR] Worker (pid:3601) was sent SIGTERM! line. Could you also send those?
Usually when you get prompted for 'Password' instead of 'Access Token', something likely went wrong with the PAM module. Did you also update the motley-cue port in /etc/pam.d/pam-ssh-oidc-config.ini? Check out the docs here.
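For reference, the shipped default in that file points at port 8080; after moving nginx to 8181, the section should read something like:
[user_verification]
local = false
verify_endpoint = http://localhost:8181/verify_user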
Thanks!
These are the log messages:
[2024-09-23 06:44:54 +0000] [3581] [INFO] Starting gunicorn 22.0.0
[2024-09-23 06:44:54 +0000] [3581] [INFO] Listening at: unix:/run/motley_cue/motley-cue.sock (3581)
[2024-09-23 06:44:54 +0000] [3581] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2024-09-23 06:44:54 +0000] [3600] [INFO] Booting worker with pid: 3600
[2024-09-23 06:44:54 +0000] [3601] [INFO] Booting worker with pid: 3601
[2024-09-23 06:44:57 +0000] [3601] [INFO] Started server process [3601]
[2024-09-23 06:44:57 +0000] [3601] [INFO] Waiting for application startup.
[2024-09-23 06:44:57 +0000] [3601] [INFO] Application startup complete.
[2024-09-23 06:44:57 +0000] [3600] [INFO] Started server process [3600]
[2024-09-23 06:44:57 +0000] [3600] [INFO] Waiting for application startup.
[2024-09-23 06:44:57 +0000] [3600] [INFO] Application startup complete.
[2024-09-23 06:48:35 +0000] [3581] [INFO] Handling signal: term
[2024-09-23 06:48:35 +0000] [3601] [INFO] Shutting down
[2024-09-23 06:48:35 +0000] [3601] [INFO] Error while closing socket [Errno 9] Bad file descriptor
[2024-09-23 06:48:35 +0000] [3600] [INFO] Shutting down
[2024-09-23 06:48:35 +0000] [3600] [INFO] Error while closing socket [Errno 9] Bad file descriptor
[2024-09-23 06:48:35 +0000] [3601] [INFO] Waiting for application shutdown.
[2024-09-23 06:48:35 +0000] [3601] [INFO] Application shutdown complete.
[2024-09-23 06:48:35 +0000] [3601] [INFO] Finished server process [3601]
[2024-09-23 06:48:35 +0000] [3581] [ERROR] Worker (pid:3601) was sent SIGTERM!
[2024-09-23 06:48:35 +0000] [3600] [INFO] Waiting for application shutdown.
[2024-09-23 06:48:35 +0000] [3600] [INFO] Application shutdown complete.
[2024-09-23 06:48:35 +0000] [3600] [INFO] Finished server process [3600]
[2024-09-23 06:48:35 +0000] [3581] [ERROR] Worker (pid:3600) was sent SIGTERM!
[2024-09-23 06:48:35 +0000] [3581] [INFO] Shutting down: Master
Then it seems to repeat the same operations several times.
This is the content of /etc/pam.d/pam-ssh-oidc-config.ini:
[user_verification]
; if local is set to false then user verification is based upon verify_endpoint.
; This could be the motley-cue endpoint
local = false
http://localhost:8181/verify_user
I also checked:
sudo grep 'ChallengeResponseAuthentication' /etc/ssh/sshd_config
ChallengeResponseAuthentication yes
Hi,
the config looks good. There should be more informative logs in /var/log/motley_cue/mapper.log, or even /var/log/motley_cue/feudal.log. Could you check there?
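For example, while you attempt the login from another terminal:
sudo tail -f /var/log/motley_cue/mapper.log /var/log/motley_cue/feudal.log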
Thanks, Diana
Everything looks correct in both log files except for the errors reported above.
/var/log/motley_cue/feudal.log:[2024-09-23 06:54:20,905] { ...er/backend/hooks.py:15 } ERROR - Hook 'post_create' not implemented
/var/log/motley_cue/mapper.log:[2024-09-23 06:54:20] [ldf_adapter] ERROR - Hook 'post_create' not implemented
In mapper.log I see that motley-cue correctly fetches my access token, queries the issuer config, gets my userinfo, maps my entitlements to local groups, and assigns me a local user name. feudal.log contains a subset of similar messages.
Should I check for a particular entry?
So there is no call to the verify_user endpoint that fails? It sounds like everything related to motley_cue works fine; it's rather a PAM & SSHD config issue. You should check that PAM works as expected irrespective of motley_cue.
Note that depending on your Linux distribution and sshd version, ChallengeResponseAuthentication has been replaced with KbdInteractiveAuthentication, which also has to be set to yes. You should also have:
PasswordAuthentication no
UsePAM yes
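One way to check the effective values that sshd actually uses (and to apply changes after editing) is, e.g.:
sudo sshd -T | grep -iE 'kbdinteractive|challengeresponse|usepam|passwordauthentication'
sudo systemctl restart ssh    # the service may be named sshd depending on the distro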
Does ssh <your_user_name>@<ip> prompt for an Access Token?
So there is no call to the verify_user endpoint that fails?
Should this be reflected in a log message? sudo grep 'verify_user' -r /var/log/ returns nothing.
On the other hand, I have this config:
$ sudo grep -E 'PasswordAuthentication|UsePAM|KbdInteractiveAuthentication|ChallengeResponseAuthentication' /etc/ssh/sshd_config
KbdInteractiveAuthentication yes
UsePAM yes
PasswordAuthentication no
ChallengeResponseAuthentication yes
Does ssh <your_user_name>@<ip> prompt for an Access Token?
This is a VM configured with cloud-init, and my public SSH key is added that way. Therefore, I can access the VM via ssh <username>@<ip> without being asked for a password.
Should this be reflected in a log message? sudo grep 'verify_user' -r /var/log/ returns nothing.
Yes, but that means something fails before the call to verify_user.
This is a VM configured with cloud-init and my public SSH key is added that way. Therefore, I can access the VM via ssh <username>@<ip> without being asked for a password.
I meant the local user that motley-cue creates for you.
I meant the local user that motley-cue creates for you.
When I try that, I still get asked for a password.
To add more context, I am using Ansible with this role and this playbook:
- hosts: all
  become: yes
  gather_facts: yes
  roles:
    - role: "grycap.motley_cue"
      ssh_oidc_other_vos_name: cloud.egi.eu
      ssh_oidc_other_vos_role: auditor

- hosts: all
  become: yes
  gather_facts: yes
  tasks:
    - name: Disable default site in nginx
      ansible.builtin.file:
        path: /etc/nginx/sites-enabled/default
        state: absent
    - name: Move motley-cue to a different port (nginx)
      ansible.builtin.lineinfile:
        path: /etc/nginx/sites-available/nginx.motley_cue
        regexp: ".*listen 8080;$"
        line: " listen 8181;"
    - name: Move motley-cue to a different port (pam-ssh-oidc)
      ansible.builtin.lineinfile:
        path: /etc/pam.d/pam-ssh-oidc-config.ini
        search_string: "http://localhost:8080/verify_user"
        line: http://localhost:8181/verify_user
    - name: Restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
        enabled: yes
    - name: Restart motley-cue
      ansible.builtin.service:
        name: motley-cue
        state: restarted
        enabled: yes
Some more context: I am using a VM with Ubuntu 22.04 for this test.
This is strange, everything actually looks fine at first glance. Does this happen with all the Linux distributions you tried? Maybe we have some issues with the packaging of pam-ssh-oidc for specific distros.
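It might also help to compare the installed PAM module version across your VMs, e.g. on Debian/Ubuntu (assuming the package is named pam-ssh-oidc):
dpkg -s pam-ssh-oidc | grep -i version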
Starting from scratch with a fresh VM, it looks like these tasks in the playbook above cause the issue:
- name: Move motley-cue to a different port (nginx)
  ansible.builtin.lineinfile:
    path: /etc/nginx/sites-available/nginx.motley_cue
    regexp: ".*listen 8080;$"
    line: " listen 8181;"
- name: Move motley-cue to a different port (pam-ssh-oidc)
  ansible.builtin.lineinfile:
    path: /etc/pam.d/pam-ssh-oidc-config.ini
    search_string: "http://localhost:8080/verify_user"
    line: http://localhost:8181/verify_user
- name: Restart motley-cue
  ansible.builtin.service:
    name: motley-cue
    state: restarted
    enabled: yes
Is there a way to increase the log level to find more details of what might be failing?
For motley-cue, change log_level to e.g. DEBUG in /etc/motley_cue/motley_cue.conf. The PAM module doesn't offer more logging, unfortunately.
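For example, a sketch assuming the packaged config layout, where the option sits in the [mapper] section:
# /etc/motley_cue/motley_cue.conf
[mapper]
log_level = DEBUG
followed by sudo systemctl restart motley-cue to apply it.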
For the first step in the playbook, you probably want to change the port to 8181 for IPv6 as well, i.e. listen [::]:8181;.
For the second step, the full line should be verify_endpoint = http://localhost:8181/verify_user.
Aha!
Up until now I was replacing http://localhost:8080/verify_user with http://localhost:8181/verify_user in /etc/pam.d/pam-ssh-oidc-config.ini, but it didn't work until I used:
verify_endpoint = http://localhost:8181/verify_user
so the verify_endpoint key was missing.
It works now, many thanks!
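For the record, the task ends up like this; the key point is that line must carry the full config line, not just the URL:
- name: Move motley-cue to a different port (pam-ssh-oidc)
  ansible.builtin.lineinfile:
    path: /etc/pam.d/pam-ssh-oidc-config.ini
    search_string: "http://localhost:8080/verify_user"
    # 'line' replaces the whole matched line, so it must include the key
    line: "verify_endpoint = http://localhost:8181/verify_user"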
Great! I also only noticed that late, in my last comment. The original file does contain the verify_endpoint = part; I assumed your script was doing a one-to-one replacement of the URL, not of the full line.