Closed Gumbee closed 2 years ago
The strategy calls either passport's `.fail` or `.error`; in your case it's `.fail`. You're only providing part of the data, e.g. the RPError instance's message and stack trace would tell you more.
My guess is that timestamp validations are failing because of an incorrectly set system time. I can't say without more details; you may start by setting a clock skew tolerance of 60 seconds.
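In openid-client, clock skew tolerance is set per client via the `custom.clock_tolerance` symbol. A minimal sketch, assuming a `client` (an openid-client `Client` instance) has already been constructed:

```javascript
const { custom } = require('openid-client');

// Allow up to 60 seconds of clock skew when validating time-based
// claims (iat/exp/nbf) on received JWTs. `client` is an existing
// Client instance created elsewhere.
client[custom.clock_tolerance] = 60;
```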
That being said, it's an RPError, ergo an assertion failure, and not a bug.
Found the issue. The Kubernetes deployment was using localhost as the JWT issuer; providing the correct issuer solved the problem. Thank you for your quick reply!
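For illustration, a sketch of constructing the client against the externally reachable issuer instead of localhost. The URL and credentials are placeholders, not the reporter's actual values:

```javascript
import { Issuer } from 'openid-client';

// The discovery URL must be the issuer identifier the provider advertises,
// i.e. the value that ends up in the `iss` claim of issued JWTs. Inside a
// Kubernetes cluster, a localhost issuer is only valid from the provider pod
// itself, so `iss` validation fails everywhere else.
const issuer = await Issuer.discover('https://auth.example.com'); // placeholder URL
const client = new issuer.Client({
  client_id: 'my-client',                        // placeholder
  client_secret: 'my-secret',                    // placeholder
  redirect_uris: ['https://app.example.com/cb'], // placeholder
  response_types: ['code'],
});
```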
Describe the bug
When using node-openid-client with passport and node-oidc-provider as the provider, passport fails to authenticate (the verify callback seemingly is never even called), and the callback of passport's authenticate receives data of the form `{"name": "RPError", "jwt": "somejwtdata"}`. This only occurs when the applications are deployed to production (a Kubernetes cluster on Google Cloud). Locally everything works, even when running the applications locally in the production environment.
Snippet
Expected behaviour
Inside the passport authenticate callback, `user` should be populated with the correct user, and `info` should not contain an error name and a JWT when running in production.
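For context, a hypothetical Express route showing where such an object surfaces: when passport.authenticate is given a custom callback, a strategy's `this.fail(info)` delivers the failure as the third argument (`info`) while `user` stays falsy. The route path and the strategy name `'oidc'` are assumptions, not taken from the report:

```javascript
app.get('/cb', (req, res, next) => {
  passport.authenticate('oidc', (err, user, info) => {
    if (err) return next(err); // the strategy called this.error(err)
    if (!user) {
      // The strategy called this.fail(info); in the reported case `info`
      // is the RPError-shaped object, e.g. { name: 'RPError', jwt: '...' }.
      console.error(info);
      return res.status(401).end();
    }
    req.logIn(user, (e) => (e ? next(e) : res.redirect('/')));
  })(req, res, next);
});
```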
Environment:
Additional context