steven-sheehy opened this issue 7 months ago
This is already addressed by the JDK team's EA build. Relying on that avoids surprises and is a better long-term solution.
That's cool to see and would definitely help long term, but unfortunately my company only uses LTS versions of Java so that'd mean we'd have to wait until Java 25 (or whatever it is) to be released.
If you read the thread, you’ll see that VTs still have serious problems and must be used very carefully for now. You might not want to use them until 25 when hopefully they’ll be less error prone. Libraries trying to support them early might not really help; imho it’s best to wait.
The summary here is that addressing the issue with object monitor pinning is great, but the hoorays may be short-lived as the spotlight moves to other cases where carriers are pinned: specifically, native frames due to resolving references to classes in the constant pool and the resulting class loading, or class initializers. There are some ideas around this that may provide some relief in these cases. We had to shake out issues with object monitors first.
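For anyone who hasn't followed the loom-dev discussion, here is a minimal, invented sketch of the object monitor pinning being described, assuming JDK 21. The class name is mine, and the sleep is only there to force blocking while a monitor is held; running with -Djdk.tracePinnedThreads=full should print the pinned stack trace.

```java
// Minimal illustrative sketch (JDK 21): a virtual thread that blocks while
// inside a synchronized block cannot unmount, so it holds on to its carrier.
public class MonitorPinningDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() -> {
            synchronized (LOCK) {        // monitor frame on the stack => pinned
                try {
                    Thread.sleep(100);   // blocking here keeps the carrier occupied
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        vt.join();
    }
}
```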
Thanks for the info @ben-manes. Given that information, would you say that it's better to keep the current implementation as-is?
+1, thanks.
One other note: We did a little work to improve the memory usage of Suppliers.memoize recently. I would suspect that a switch to ReentrantLock would more than undo the gains from that. And the reason that we did the optimization is that it appeared to matter at least a little to our fleet in aggregate, so I'd expect a switch to ReentrantLock to hurt in a small but measurable way. [edit: We have subsequently made changes that offset those gains, but using ReentrantLock would increase size even further.]
There is probably a way to write our own even more efficient implementation of Suppliers.memoize, which would use LockSupport.park and friends directly. (I'd played around with something similar for Dagger in my internal cl/209143332, as has another developer in cl/448147583.) That could not only be more compact but also avoid making waiters take turns in receiving the produced value. It hasn't been clearly worth the effort, but it's possible that virtual threads will change the calculus.
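Those internal CLs aren't visible here, so purely as a hypothetical sketch of the shape such an implementation could take (all names are invented, and handling of a delegate that throws is omitted): the winning thread computes the value and then releases every waiter at once with LockSupport.unpark, instead of waiters taking turns acquiring a monitor.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.LockSupport;
import java.util.function.Supplier;

// Rough illustrative sketch of a park-based memoizing supplier.
final class ParkingMemoizingSupplier<T> implements Supplier<T> {
    private final AtomicReference<Supplier<T>> delegate;
    private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();
    private volatile boolean done;
    private volatile T value;

    ParkingMemoizingSupplier(Supplier<T> delegate) {
        this.delegate = new AtomicReference<>(delegate);
    }

    @Override
    public T get() {
        if (done) {
            return value;
        }
        Supplier<T> d = delegate.getAndSet(null);
        if (d != null) {
            // This thread won the race: compute, publish, then wake every waiter at once.
            value = d.get();
            done = true;
            for (Thread t; (t = waiters.poll()) != null; ) {
                LockSupport.unpark(t);
            }
        } else {
            // Another thread is computing: register and park until the value is published.
            waiters.add(Thread.currentThread());
            while (!done) {
                LockSupport.park(this);
            }
        }
        return value;
    }
}
```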
Whatever we decide for now, it's possible that we'll revisit as we get more experience with virtual threads. (If we encounter the level of problems that Ben Manes passes along, then we may not get that experience anytime soon :))
Oh, also: As the Java platform develops more broadly, we may also see alternatives to Suppliers.memoize become available there, reducing the need for Guava's version. See, for example, https://openjdk.org/jeps/8312611 or even https://openjdk.org/jeps/8209964.
API(s)
Suppliers.memoize()
How do you want it to be improved?
Change it from synchronized to ReentrantLock.
Why do we need it to be improved?
Suppliers.memoize() internally uses the double-checked locking idiom with the synchronized keyword. Using synchronized with virtual threads can cause thread pinning and slow down performance.
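To make the request concrete, a simplified sketch of the proposed shape follows; this is not Guava's actual source, and the class name is invented. The only change from the classic synchronized double-checked pattern is that the second check happens under a ReentrantLock, which a virtual thread can block on without pinning its carrier on JDK 21.

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Hypothetical sketch, not Guava's source: double-checked lazy initialization
// guarded by a ReentrantLock instead of a synchronized block.
final class LockBasedMemoizingSupplier<T> implements Supplier<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private volatile boolean initialized;
    private Supplier<T> delegate;   // cleared once the value has been computed
    private T value;

    LockBasedMemoizingSupplier(Supplier<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public T get() {
        if (!initialized) {              // first check, no lock
            lock.lock();                 // virtual threads block here without pinning
            try {
                if (!initialized) {      // second check under the lock
                    value = delegate.get();
                    delegate = null;     // allow the delegate to be garbage collected
                    initialized = true;  // volatile write publishes `value`
                }
            } finally {
                lock.unlock();
            }
        }
        return value;                    // visible via the volatile read of `initialized`
    }
}
```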
Example
Current Behavior
Functions properly but causes performance degradation when used with virtual threads on Java 21
Desired Behavior
Performs optimally with virtual threads.
Concrete Use Cases
In our virtual-thread-enabled REST API, we added lazy loading using Suppliers.memoize() around some of our database interactions to improve performance, but ended up seeing worse performance than going directly to the database.
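For illustration only, a hypothetical sketch of that pattern (class and method names are invented):

```java
import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;
import java.util.List;

// Hypothetical use-case sketch: reference data is loaded lazily once and then
// shared by all request-handling virtual threads.
class ReferenceData {
    private final Supplier<List<String>> currencyCodes;

    ReferenceData(CurrencyDao dao) {
        // First get() runs the query; later calls return the cached list.
        this.currencyCodes = Suppliers.memoize(dao::findCurrencyCodes);
    }

    List<String> currencyCodes() {
        return currencyCodes.get();
    }

    // Minimal stand-in for the real data-access layer.
    interface CurrencyDao {
        List<String> findCurrencyCodes();
    }
}
```

Under load, concurrent first calls to get() all contend on the memoizer's internal synchronized block, which is where the pinning shows up.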
Checklist
[X] I agree to follow the code of conduct.
[X] I have read and understood the contribution guidelines.
[X] I have read and understood Guava's philosophy, and I strongly believe that this proposal aligns with it.