Seems the official image as shipped is currently not usable as an OpenID Connect Provider, at least not for production purposes (which the Satosa documentation does state, in these generic terms). This could be made more explicit in the Satosa documentation, and deployment could be made easier, by including the (optional) Python packages for redis/mongodb support in the official satosa docker image.
Satosa as OP fails at some point (depending on flows/usage, maybe not always, but often) with:

after having issued that same `<authz-code>` to the client only moments before, when more than one gunicorn worker process and the default in-memory storage backend are being used. (Most likely the incoming authz code is processed by another worker that doesn't know about it: workers run separate copies of the application code and do not share memory.)
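This is not SATOSA/pyop code, but a minimal Python sketch of the underlying problem: a dict in one process is copied into a child process, not shared with it, so a code "issued" in one worker is unknown to any other worker.

```python
# Hypothetical sketch (not SATOSA/pyop code): process-local storage means an
# authorization code "issued" in one worker is invisible to another worker.
import multiprocessing

ISSUED_CODES = {}  # stands in for the default in-memory authz-code store


def issue_code(code):
    ISSUED_CODES[code] = "session-data"


def redeem_in_other_worker(code, result):
    # Runs in a separate process: it has its own copy of ISSUED_CODES,
    # not a shared view of the parent's dict.
    result.put(code in ISSUED_CODES)


def other_worker_knows_code(code):
    # Use fork explicitly so the child gets a memory *copy* at start time,
    # mirroring how gunicorn forks its workers on Linux.
    ctx = multiprocessing.get_context("fork")
    result = ctx.Queue()
    worker = ctx.Process(target=redeem_in_other_worker, args=(code, result))
    worker.start()
    issue_code(code)  # "issue" the code in this process only
    worker.join()
    return result.get()


if __name__ == "__main__":
    print(other_worker_knows_code("abc123"))  # False: the other process never saw it
```

With a shared backend (redis/mongodb) both processes would consult the same store and the lookup would succeed.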
That issue is not made completely clear by the documentation, IMO:

The OP example config says that the in-memory storage backend is the default and that configuring anything else is optional:
> supported storage backends:
> This configuration is optional.
> By default, the in-memory storage is used.
The documentation for the OP frontend plugin does call out that in-memory storage is "not suitable for production use", but remains opaque as to why/how specifically it is unsuitable, or what even constitutes "production use". I.e., it is not explicit about which deployment details will trigger problems here:

> db_uri: [...] if it's not specified all data will only be stored in-memory (not suitable for production use).
Following the gunicorn docs you'd always end up with more than 1 worker process, even with only a single CPU core available:
> Generally we recommend (2 x $num_cores) + 1
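Spelling that recommendation out: even in the worst case of a single-core host it yields three workers, i.e. three separate in-memory stores.

```shell
# Gunicorn's suggested worker count: (2 x $num_cores) + 1.
# On a real host you'd use num_cores=$(nproc); 1 is the minimum case here.
num_cores=1
workers=$(( 2 * num_cores + 1 ))
echo "$workers"   # 3 -> three worker processes, none sharing memory
```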
The official image does not include the optional redis or pymongo packages which would allow use of a suitable storage backend by simply setting `db_uri`.

(FWIW, I could not quickly find out how the `stateless` backend is supposed to be configured, so I haven't tried that.)
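For illustration, here is a hedged sketch of what "simply setting `db_uri`" would look like in the OIDC frontend plugin config once the packages are present; the filename, host, port and database number are placeholders, and the `db_uri` key is per the documentation quoted above:

```yaml
# Fragment of the OIDC frontend plugin config
# (e.g. plugins/frontends/openid_connect_frontend.yaml; filename illustrative).
config:
  # Shared storage backend so all gunicorn workers see the same
  # authorization codes, tokens, etc. Requires the optional
  # pyop[redis] (or pyop[mongo]) packages in the image.
  db_uri: redis://redis.example.org:6379/0
```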
## Possible resolutions
1. Amend the Satosa documentation to be explicit about just how the in-memory backend is "not suitable for production use", e.g. by including something about multiple worker processes. (Let me know if you want me to file another issue about this in the satosa project.)
2. Include `pyop[redis]` and `pyop[mongo]` in the official Satosa docker image by adding `pip install --no-cache-dir pyop[redis] pyop[mongo]` to the Dockerfile(s).
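Until then, deployers can layer the packages on top of the official image themselves; a minimal sketch (base image name/tag illustrative, pick whatever you actually deploy):

```dockerfile
# Hypothetical deployer-side workaround until the official image ships these.
FROM satosa:latest
# Quote the extras so they survive shell globbing.
RUN pip install --no-cache-dir 'pyop[redis]' 'pyop[mongo]'
```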
Including code for redis/mongodb in the official satosa docker image would make it fully usable in this regard without deployers having to create (and, going forward, update/rebuild) their own image. It would also spare them from having to debug this issue and ultimately learn how to work around it with a custom(ised) image.