jessylenne / keycloak-event-listener-http

Apache License 2.0

KeyCloak v17 - Breaking changes #3

Open CesarD opened 2 years ago

CesarD commented 2 years ago

Hello. Keycloak's new version, v17, has been released, and with it come breaking changes to how custom providers are deployed in Keycloak: https://www.keycloak.org/migration/migrating-to-quarkus#_migrating_custom_providers

What would be the new way to configure the event listener for it to work on the new v17?

Thanks!

CesarD commented 2 years ago

cc @darrensapalo In case you can help on this as well 🙏🏼

jessylenne commented 2 years ago

Hello @CesarD, and thank you for your patience. I'll look into it in the coming days. Have you found a solution yourself in the meantime?

CesarD commented 2 years ago

No, sorry, I'm still using Keycloak <17 versions because of this and other reasons... Thank you!

darrensapalo commented 2 years ago

Hi there @CesarD , unfortunately I haven't had the time to tinker with this. I'll send updates here when I get to work on it.

ianwelsh commented 2 years ago

FWIW, I was able to get this working in Keycloak 18. After running `make`, I copied `event-listener-http-jar-with-dependencies.jar` to `/opt/keycloak/providers/`.

Config values can be set either as environment variables or as options on the `kc.sh start` command:

```
--spi-events-listener-http-server-uri=something                  or  KC_SPI_EVENTS_LISTENER_HTTP_SERVER_URI=
--spi-events-listener-http-exclude-events=something,something2   or  KC_SPI_EVENTS_LISTENER_HTTP_EXCLUDE_EVENTS=
```

etc.

sr258 commented 1 year ago

Closed in #6.

CesarD commented 1 year ago

One question regarding this: does it notify events after the transaction has been committed in Keycloak, or does it happen before the commit? I ask mainly to know whether the data being pushed has already been recorded in KC's DB, or whether it could still be rolled back (for whatever reason), which would leave the system feeding from this listener in an inconsistent state.

sr258 commented 1 year ago

@CesarD I'm not sure, to be honest. I've just started working with Keycloak and needed the plugin to work for my use case, so I'm not very deep into Keycloak's internals. But since the plugin hooks into the logging mechanism, I'd suppose it happens after the transactions are committed and a rollback is no longer possible. It's just a guess, though.

CesarD commented 1 year ago

Hmmm, I think it all runs inside a transaction, per the following documentation (this may not be the latest version of the docs, but it's the most recent one I found): https://www.keycloak.org/docs-api/18.0/javadocs/org/keycloak/events/EventListenerProvider.html

Specifically:

> Implementors can leverage the fact that the onEvent and onAdminEvent are run within a running transaction. Hence, if the event processing uses JPA, it can insert event details into a table, and the whole transaction including the event is either committed or rolled back. However if transaction processing is not an option, e.g. in the case of log files, it is recommended to hook onto transaction after the commit is complete via the KeycloakTransactionManager.enlistAfterCompletion(KeycloakTransaction) method, so that the events are stacked in memory and only written to the file after the original transaction completes successfully.

Which implies that it would be better to hook onto the transaction and push the events only after they have effectively been committed to Keycloak's DB... After all, this is an HTTP listener, so we could never include extra details in the event anyway.
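Just to illustrate what I mean, here's a rough, untested sketch of how a listener could buffer events and only push them after the commit, based purely on that javadoc (the class name and `send()` are placeholders, not code from this repo):

```java
import java.util.ArrayList;
import java.util.List;

import org.keycloak.events.Event;
import org.keycloak.events.EventListenerProvider;
import org.keycloak.events.admin.AdminEvent;
import org.keycloak.models.KeycloakSession;
import org.keycloak.models.KeycloakTransaction;

public class AfterCommitHttpEventListener implements EventListenerProvider {

    private final List<Event> pending = new ArrayList<>();

    public AfterCommitHttpEventListener(KeycloakSession session) {
        // The hook's commit() only runs once the main Keycloak transaction
        // has completed successfully, so the event data is in the DB by then.
        session.getTransactionManager().enlistAfterCompletion(new KeycloakTransaction() {
            private boolean active;
            private boolean rollbackOnly;

            @Override public void begin() { active = true; }

            @Override public void commit() {
                pending.forEach(AfterCommitHttpEventListener.this::send);
                pending.clear();
            }

            // If the main transaction rolls back, drop the buffered events
            // instead of notifying the webhook about data that was never saved.
            @Override public void rollback() { pending.clear(); }

            @Override public void setRollbackOnly() { rollbackOnly = true; }
            @Override public boolean getRollbackOnly() { return rollbackOnly; }
            @Override public boolean isActive() { return active; }
        });
    }

    @Override
    public void onEvent(Event event) {
        pending.add(event); // buffer instead of firing the webhook immediately
    }

    @Override
    public void onEvent(AdminEvent event, boolean includeRepresentation) {
        // admin events could be buffered the same way (omitted for brevity)
    }

    private void send(Event event) {
        // placeholder for the existing HTTP push logic
    }

    @Override
    public void close() { }
}
```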

sr258 commented 1 year ago

You're absolutely right!

However, I don't know how you can connect the event with a transaction. None of the getters of the event object are helpful here.

At the moment, I'm content with how things are, however. I think that it's very difficult to have 100% consistency between databases if they are connected like this. Even if you fire the webhook after the transaction has been committed, there's still a chance that the second database transaction (the one created by the webhook listener) will fail, that there's a network error, or that the webhook service is unreachable for some other reason (a server restart or something like that). In these cases you will have inconsistent data between Keycloak and your other service. My hunch is that an aborted transaction in Keycloak is less likely than an error happening because of these other error sources combined.

I think you could achieve a higher chance of consistency if you:

a) queue events in memory as explained by the docs
b) specify that the webhook returns a proper status code that reflects whether the event was processed correctly
c) implement a retry mechanism for HTTP requests if the webhook call failed
d) implement some kind of failure logging and alerting when it's impossible to call the webhook successfully after several attempts, so that you can resolve issues manually

I think that the easiest way to do b)-d) would be to move to a message queue like RabbitMQ instead of making HTTP webhook calls.
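If you stay with plain HTTP webhook calls, a naive sketch of c) and d) could look something like this (the class name, attempt count, and backoff policy are just illustrative assumptions on my part):

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RetryingWebhookSender {

    private static final int MAX_ATTEMPTS = 3;

    private final HttpClient client = HttpClient.newHttpClient();

    /** Returns true if the webhook acknowledged the event with a 2xx status. */
    public boolean send(String webhookUri, String eventJson) {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                HttpRequest request = HttpRequest.newBuilder(URI.create(webhookUri))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(eventJson))
                        .build();
                HttpResponse<Void> response =
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                if (response.statusCode() / 100 == 2) {
                    return true; // b) the webhook confirmed it processed the event
                }
            } catch (IOException e) {
                // network error or unreachable service: fall through and retry
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
            try {
                Thread.sleep(1000L * attempt); // c) simple linear backoff between retries
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        // d) every attempt failed: log it so the gap can be resolved manually
        System.err.println("Webhook delivery failed after " + MAX_ATTEMPTS
                + " attempts: " + webhookUri);
        return false;
    }
}
```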

For me it's fine if I get an occasional inconsistency between Keycloak data and the other service, as I only use the events for back-channel user provisioning. The front-channel login also transmits the user data, so I can live with the very unlikely case of inconsistent data. Of course failed deprovisioning is not so great from a privacy rights perspective...

jessylenne commented 1 year ago

Thanks to all of you for your answers and research!

I agree with all the points above, and can only add that, yes, you can't be 100% sure that either the listener or the webhook will do its job perfectly.

In my projects, I needed 100% accuracy between KC and my app, so I implemented a task in my app that fetches KC's events API to retrieve all events since its last iteration and re-syncs all the data again. It's not ideal, but it adds a safety net.
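Roughly like this, if it helps anyone (untested sketch against Keycloak's admin events endpoint, `GET /admin/realms/{realm}/events`; the base URL, realm name, and how you obtain the admin token are specific to your setup, and the realm must have event storage enabled):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EventResyncTask {

    private static final String BASE_URL = "http://localhost:8080"; // assumed Keycloak URL
    private static final String REALM = "myrealm";                  // assumed realm name
    private static final int PAGE_SIZE = 100;

    private final HttpClient client = HttpClient.newHttpClient();

    public void resync(String adminAccessToken) throws Exception {
        int first = 0;
        while (true) {
            // Page through the realm's event log with the first/max parameters.
            HttpRequest request = HttpRequest.newBuilder(URI.create(
                            BASE_URL + "/admin/realms/" + REALM
                                    + "/events?first=" + first + "&max=" + PAGE_SIZE))
                    .header("Authorization", "Bearer " + adminAccessToken)
                    .GET()
                    .build();
            String json = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            if (json.isEmpty() || json.equals("[]")) {
                break; // no more events to process
            }
            reconcile(json);
            first += PAGE_SIZE;
        }
    }

    private void reconcile(String json) {
        // Application-specific: parse the JSON array of events and
        // re-sync the corresponding data into the local database.
    }
}
```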

CesarD commented 1 year ago

Yeah, my idea was that because the events don't carry the entire set of data I might need for some resource (User, Role, etc.), I could pull it through Keycloak's API, effectively fetching everything I need... But since the information hasn't been committed to the DB yet at the moment the webhook is triggered, I was only pulling the old data.

I understand and agree that one can't get perfect consistency everywhere, but I do think that's up to the system consuming the data, not the one emitting it; the emitter should at least make sure it emits what has actually been recorded, not something that hasn't been committed yet. If I had to hook this up on my side, I would most likely have the endpoint publish a message on a queue: the webhook then succeeds every single time, it's decoupled from any other logic, and I can be sure Keycloak has effectively recorded all the data if I ever need to pull it from its API.
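For example, the webhook endpoint could do nothing but persist the event to a durable queue and return 200, along these lines (a sketch using RabbitMQ's Java client; the broker host and queue name are made up):

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class EventQueuePublisher {

    public static void publish(String eventJson) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker location
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Durable queue + persistent messages, so events survive a broker restart.
            channel.queueDeclare("keycloak-events", true, false, false, null);
            channel.basicPublish("", "keycloak-events",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    eventJson.getBytes(StandardCharsets.UTF_8));
        }
    }
}
```

(In a real service you would of course keep the connection open rather than recreate it per event.)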

I have a few systems that would benefit from something like this, as I need some users' information on my backend for business purposes, and Keycloak and my backend must stay in sync for users to be correctly represented in both. So far I've been forced to update Keycloak only by sending data from my system through KC's API... If this worked, I could let users modify things in KC and have that data pushed towards my API, so the whole setup behaves more reactively.