BetterCloud / vault-java-driver

Zero-dependency Java client for HashiCorp's Vault
https://bettercloud.github.io/vault-java-driver/

Ensure writes to VaultConfig and SslConfig are guaranteed to be visible to other threads #228

Open adyang opened 4 years ago

adyang commented 4 years ago


Hi Team,

This is just a minor change to ensure the variables are visible to other threads (as per the Java Memory Model: https://docs.oracle.com/javase/specs/jls/se8/html/jls-8.html#jls-8.3.1.4).
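For reference, a minimal, self-contained sketch (not code from this library) of the visibility hazard the JLS section addresses: with a plain field, the reader thread is not guaranteed to ever observe the write made by the main thread; declaring the field `volatile` restores that guarantee.

```java
public class VisibilitySketch {
    // Without "volatile" here, the reader loop below is allowed to spin forever,
    // because the main thread's write need never become visible to it.
    private static volatile boolean ready = false;

    public static void main(String[] args) {
        Thread reader = new Thread(() -> {
            while (!ready) {
                // busy-wait until the main thread's write becomes visible
            }
            System.out.println("Reader observed ready = true");
        });
        reader.start();
        ready = true; // write by the main thread; visible because the field is volatile
    }
}
```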

While I understand that the vault client is mostly not designed to be multi-threaded, there is a use case where this might still be useful:

  1. Consider the task of token renewal for a long-running service that uses the vault client at irregular intervals.
  2. To ensure the vault token remains valid regardless of the interval, one needs to schedule the renewal call periodically. In Java this means starting a background scheduler thread (either manually, via a scheduled executor, or through framework support).
  3. As a result, the vault client will be used by at least one other thread (the main thread plus the scheduler thread), so more than one thread will access the fields in VaultConfig/SslConfig.
  4. Even assuming the VaultConfig is not mutated after construction on the main thread (and that initializing calls such as setting the retries are made only once), there is currently no guarantee that the scheduler thread will see the values set by the main thread (see the sketch after this list).
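A minimal sketch of the scenario, assuming the driver's usual builder-style construction and Auth#renewSelf; the address, token source, and renewal interval below are placeholders:

```java
import com.bettercloud.vault.Vault;
import com.bettercloud.vault.VaultConfig;
import com.bettercloud.vault.VaultException;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TokenRenewalExample {
    public static void main(String[] args) throws VaultException {
        // Main thread: build the config once and never mutate it afterwards.
        VaultConfig config = new VaultConfig()
                .address("https://vault.example.com:8200")   // placeholder address
                .token(System.getenv("VAULT_TOKEN"))
                .build();
        Vault vault = new Vault(config);

        // Background scheduler thread: periodically renew the token so it stays
        // valid no matter how irregularly the main thread uses the client.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                vault.auth().renewSelf();
            } catch (VaultException e) {
                e.printStackTrace();
            }
        }, 0, 30, TimeUnit.MINUTES);

        // The main thread keeps using the same client at irregular intervals.
        // Without volatile fields in VaultConfig/SslConfig, the scheduler thread
        // has no JMM guarantee of seeing the values written above by the main thread.
    }
}
```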

The fix makes all the variables volatile, which ensures that every thread sees the most recent write to each field (there can still be race conditions if the variables are mutated concurrently by different threads, but that is not an issue for this use case).
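Illustratively (a simplified stand-in, not the library's actual source), the change amounts to declaring the config fields volatile:

```java
// Simplified stand-in for a builder-style config class; field names are illustrative only.
public class ExampleConfig {
    // Declaring the fields volatile guarantees that values written while the
    // config is built on the main thread are visible to any thread reading them later.
    private volatile String address;
    private volatile int maxRetries;

    public ExampleConfig address(final String address) {
        this.address = address;   // written once on the main thread
        return this;
    }

    public ExampleConfig maxRetries(final int maxRetries) {
        this.maxRetries = maxRetries;
        return this;
    }

    public String getAddress() {
        return address;           // may be read from the scheduler thread
    }

    public int getMaxRetries() {
        return maxRetries;
    }
}
```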

I think this is an easy fix for a use case that is likely quite common for long-running services. The problem is also hard to detect, as different architectures and different JVMs might behave differently.

While I would adopt the one-vault-per-thread approach wherever possible, sharing the client across threads seems unavoidable for this use case, hence the fix.

Thanks for the time and for creating this Library!