Closed peppelinux closed 2 years ago
Here is a brief description of the main characteristics of this implementation.
Anyone can migrate their oidcop configuration to SATOSA, from flask_op, django-oidc-op, or anything else, without any particular effort. Looking at the example configuration, we see that `config.op.server_info`
is a standard SATOSA configuration with the only addition of the following customizations, needed by SATOSA for interoperability. These are:
```yaml
authentication:
  user:
    acr: urn:oasis:names:tc:SAML:2.0:ac:classes:InternetProtocolPassword
    class: satosa.frontends.oidcop.user_authn.SatosaAuthnMethod
userinfo:
  class: satosa.frontends.oidcop.user_info.SatosaOidcUserInfo
```
`authentication` inherits from `oidcop.user_authn.user.UserAuthnMethod`
and overloads the two methods involved in user authentication and verification. These tasks are handled by SATOSA in its authentication backends.
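The overloading pattern could be sketched like this. The base class below is a simplified stand-in for `oidcop.user_authn.user.UserAuthnMethod`, and the method names and return values are illustrative assumptions, not the real oidcop API:

```python
# Hypothetical sketch: a SATOSA-specific authn method subclasses the
# oidcop base class and turns authentication into a pass-through,
# because SATOSA's own backends already did the work.


class UserAuthnMethod:
    """Stand-in for oidcop.user_authn.user.UserAuthnMethod."""

    def __call__(self, **kwargs):
        raise NotImplementedError

    def verify(self, *args, **kwargs):
        raise NotImplementedError


class SatosaAuthnMethod(UserAuthnMethod):
    """Delegates user authentication to SATOSA's authentication backends."""

    def __call__(self, **kwargs):
        # No login page to render: SATOSA has already authenticated the
        # user through one of its backends before the frontend is hit.
        return {}

    def verify(self, *args, **kwargs):
        # Verification is likewise a no-op from oidcop's point of view;
        # the SATOSA frontend injects the already-verified identity.
        return True
```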
`userinfo` inherits from `oidcop.user_info.UserInfo`
and provides a way to store users' claims when they come from the backend. The claims are stored in the session database (currently MongoDB) and later fetched by the userinfo endpoint (and also by the token endpoint, to optionally include them in the id_token claims).
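The claims flow described above could look roughly like this. A plain dict stands in for the MongoDB session database, and the class and method names are illustrative, not the real `satosa.frontends.oidcop.user_info` API:

```python
# Minimal sketch of the claims flow: store claims when they arrive from
# the SATOSA backend, fetch them later at the userinfo/token endpoints.


class SatosaOidcUserInfo:
    def __init__(self, db):
        self.db = db  # session storage (MongoDB in the real deployment)

    def store_claims(self, user_id, claims):
        # Called when the claims come from the SATOSA backend.
        self.db[user_id] = claims

    def __call__(self, user_id, client_id=None):
        # Called by the userinfo endpoint (and by the token endpoint,
        # to optionally copy claims into the id_token).
        return self.db.get(user_id, {})


db = {}
userinfo = SatosaOidcUserInfo(db)
userinfo.store_claims("alice", {"email": "alice@example.org"})
assert userinfo("alice") == {"email": "alice@example.org"}
```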
MongoDB is the storage backend; below are some brief notes for a demo setup. The interface to the SATOSA oidcop storage is `satosa.frontends.oidcop.storage.base.SatosaOidcStorage`,
which has three methods.
`satosa.frontends.oidcop.storage.mongo.Mongodb`
overloads them to perform I/O operations on MongoDB.
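The interface/overload relationship might be sketched as follows. The three method names are hypothetical placeholders (the source does not list them), and a dict stands in for MongoDB so the sketch is self-contained:

```python
# Illustrative sketch: an abstract storage base class whose I/O methods
# are overloaded by a MongoDB-backed subclass. Method names are guesses
# for illustration only, not the real SatosaOidcStorage signatures.
from abc import ABC, abstractmethod


class SatosaOidcStorage(ABC):
    @abstractmethod
    def store_session_to_db(self, sid, session_dump):
        ...

    @abstractmethod
    def load_session_from_db(self, sid):
        ...

    @abstractmethod
    def get_claims_from_sid(self, sid):
        ...


class Mongodb(SatosaOidcStorage):
    """In the real frontend these methods talk to MongoDB; a dict
    stands in here so the example runs standalone."""

    def __init__(self):
        self._sessions = {}

    def store_session_to_db(self, sid, session_dump):
        self._sessions[sid] = session_dump

    def load_session_from_db(self, sid):
        return self._sessions.get(sid)

    def get_claims_from_sid(self, sid):
        return (self._sessions.get(sid) or {}).get("claims", {})
```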
At this time the storage logic is based on the oidcop session_manager load/dump/flush methods. Each time a request is handled by an endpoint, the oidcop session manager loads the session definitions from the storage; only the ones strictly related to the request are loaded into oidcop's in-memory storage.
~Due to this fact, this implementation MUST be improved to properly detect requests to the introspection/exchange/registration endpoints.~
It's transactional: it loads data from MongoDB into oidcop's in-memory engine before handling a request, and dumps the session data back to MongoDB before flushing the in-memory storage and sending the response. Each worker does its own.
Workers cannot grow the data concurrently across many concurrent requests, by design: each worker loads and updates the data of a specific session on its own.
This approach CAN'T be used in an asyncio-based deployment, but I can't see any weakness with standalone workers.
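The per-request cycle described above can be sketched like this. All names are illustrative, and dicts stand in for both MongoDB and oidcop's in-memory session manager:

```python
# Sketch of the "transactional" cycle: load only the session related to
# the request, handle it, dump it back, then flush the in-memory store.

external_db = {}  # stands in for MongoDB
in_memory = {}    # stands in for oidcop's in-memory storage


def handle_request(sid, request):
    # 1. load: only the session strictly related to this request
    in_memory[sid] = external_db.get(sid, {"grants": []})

    # 2. handle: the endpoint mutates the in-memory session
    in_memory[sid]["grants"].append(request)

    # 3. dump: persist the session before responding
    external_db[sid] = in_memory[sid]

    # 4. flush: each worker clears its own in-memory state
    response = dict(in_memory[sid])
    in_memory.clear()
    return response


handle_request("alice;;client_1", "authorization")
handle_request("alice;;client_1", "token")
assert external_db["alice;;client_1"]["grants"] == ["authorization", "token"]
assert in_memory == {}
```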
Consider the case where we have 2 threads holding 2 different client sessions for the same key (user_id;;client_id). The doubt is that, in this draft implementation, one of them will write its changes to the db first and the other will overwrite them.
Differently, in this implementation we have this behaviour:
the same client, with a different browser/device, produces a different session. We do not have a relation to any other user_id;;client_id, because the dumped session contains everything it needs; another session will not touch the previous one.
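This point can be illustrated with a toy example. Keys and structure are illustrative only: each browser/device login by the same client yields its own self-contained session dump under its own key, so concurrent sessions never touch each other:

```python
# Two independent sessions for the same (user_id, client_id) pair,
# stored under distinct session ids in a dict standing in for MongoDB.

db = {}


def dump_session(session_id, user_id, client_id, grants):
    # The dump carries everything it needs; it holds no reference to
    # other sessions of the same user/client pair.
    db[session_id] = {
        "user_id": user_id,
        "client_id": client_id,
        "grants": grants,
    }


dump_session("sid-browser", "alice", "client_1", ["code-abc"])
dump_session("sid-mobile", "alice", "client_1", ["code-xyz"])

# The two sessions coexist; neither overwrote the other.
assert db["sid-browser"]["grants"] == ["code-abc"]
assert db["sid-mobile"]["grants"] == ["code-xyz"]
```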
This implementation definitely:
To get relations between many sessions instead, the storage engine would have to be queried with its native lookup methods.
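Such a native lookup could look roughly like this. A list of dicts stands in for the MongoDB collection; with pymongo the equivalent would be a query like `collection.find({"user_id": "alice"})`:

```python
# Sketch: fetching every session related to one user by querying the
# storage engine directly, instead of following per-session links.

sessions = [
    {"sid": "sid-browser", "user_id": "alice", "client_id": "client_1"},
    {"sid": "sid-mobile", "user_id": "alice", "client_id": "client_1"},
    {"sid": "sid-other", "user_id": "bob", "client_id": "client_1"},
]

alice_sessions = [s for s in sessions if s["user_id"] == "alice"]
assert [s["sid"] for s in alice_sessions] == ["sid-browser", "sid-mobile"]
```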
@c00kiemon5ter moved to https://github.com/UniversitaDellaCalabria/SATOSA-oidcop
Added to the docs under external contributions: https://github.com/IdentityPython/SATOSA/blob/master/doc/README.md#external-contributions