Closed: dyolcekaj closed this 1 month ago
Hi @dyolcekaj, thank you for the detailed description and patch. It's appreciated.
At a first glance, the added verbosity seems to be worth the improvement. I'll give it a more thorough review as soon as I can.
Filed internally as 255748.
Hi @cwaldren-ld,
Thanks again for considering this change. Just following up on the internal discussion; I'm curious whether a decision has been made to accept or reject this change.
Thanks!
Hi @dyolcekaj, this looks good to me. I need to make some (unrelated) changes in this repo first, but then I'll work on getting this merged in.
(I rebased the branch, just FYI in case you pull.)
Hi @dyolcekaj, thanks again for this. It went out as 3.2.0 and will be pulled into the Go SDK shortly (hopefully).
Requirements
Describe the solution you've provided
Reduce memory allocations for some collection types by avoiding the slow and expensive JSON marshal->unmarshal path. Extend the list of type assertions to cover slices and maps of some primitive types.
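To make the idea concrete, here is a minimal sketch of the technique (not the actual patch; the `copyCollection` helper is hypothetical, and it assumes the v3 `ldvalue` builder API):

```go
package main

import (
	"fmt"

	"github.com/launchdarkly/go-sdk-common/v3/ldvalue"
)

// copyCollection is a hypothetical illustration of the technique: type-assert
// common collection types and build the Value directly, only falling back to
// CopyArbitraryValue (which may take the FromJSONMarshal path) when no fast
// path matches.
func copyCollection(v any) ldvalue.Value {
	switch c := v.(type) {
	case []string:
		b := ldvalue.ArrayBuild()
		for _, item := range c {
			b.Add(ldvalue.String(item))
		}
		return b.Build()
	case []int:
		b := ldvalue.ArrayBuild()
		for _, item := range c {
			b.Add(ldvalue.Int(item))
		}
		return b.Build()
	case map[string]string:
		b := ldvalue.ObjectBuild()
		for k, item := range c {
			b.Set(k, ldvalue.String(item))
		}
		return b.Build()
	default:
		// No fast path: fall back to the existing generic copy.
		return ldvalue.CopyArbitraryValue(v)
	}
}

func main() {
	fmt.Println(copyCollection([]string{"a", "b", "c"}).JSONString())
}
```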
Additional context
While memory profiling a production application, we noticed that more than 50% of our memory allocations were happening during `ldvalue.CopyArbitraryValue`. We use feature flags heavily, and for any HTTP request it is typical to evaluate 2-8 feature flags. The cause of the high memory usage was a string slice with several hundred items being passed through `ldvalue.CopyArbitraryValue` on most `ldcontext.Builder` instantiations. This was happening in a general-purpose context-building helper that copies attribute values from a `map[string]any`; if any of those values are themselves a `[]string` or similar, we hit the `FromJSONMarshal` slow path.

To partially solve the problem, we have written a wrapper around this SDK with more or less the same code as I am submitting here. The reduction in memory allocation is especially noticeable when copying `[]string`, but in all cases there is a fairly significant improvement in latency and total memory usage.

Although this approach is verbose, alternatives that use generic helper functions don't save on allocations, and using reflection to determine the slice or map element types does have some benefit (especially for slices) but still incurs a high number of allocations for large maps. You can see an implementation of this change using reflection here.
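For illustration only, the kind of general-purpose helper described above might look like this sketch (the function name and shape are assumptions, not code from our service):

```go
package main

import (
	"github.com/launchdarkly/go-sdk-common/v3/ldcontext"
	"github.com/launchdarkly/go-sdk-common/v3/ldvalue"
)

// buildContext is a hypothetical general-purpose helper: every attribute value
// goes through ldvalue.CopyArbitraryValue, so a []string with several hundred
// items takes the JSON marshal->unmarshal path on every context build.
func buildContext(key string, attrs map[string]any) ldcontext.Context {
	b := ldcontext.NewBuilder(key)
	for name, value := range attrs {
		b.SetValue(name, ldvalue.CopyArbitraryValue(value))
	}
	return b.Build()
}

func main() {
	_ = buildContext("user-123", map[string]any{
		"groups": []string{"billing", "reports", "admin"}, // []string hits the slow path
	})
}
```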
Here are benchmarks from my machine showing the improvement for these specific use cases. The command for all of them is:
`go test -benchmem '-run=^$$' -bench="CollectionCopy*" ./ldvalue`
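For reference, a benchmark matched by that pattern might look roughly like this sketch (the benchmark name and input size are assumptions, not the actual benchmarks from the patch):

```go
package ldvalue_test

import (
	"testing"

	"github.com/launchdarkly/go-sdk-common/v3/ldvalue"
)

// BenchmarkCollectionCopyStringSlice is a hypothetical benchmark in the style
// matched by -bench="CollectionCopy*": it measures allocations when copying a
// large []string into an ldvalue.Value.
func BenchmarkCollectionCopyStringSlice(b *testing.B) {
	input := make([]string, 500)
	for i := range input {
		input[i] = "some-attribute-value"
	}
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = ldvalue.CopyArbitraryValue(input)
	}
}
```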
Before:
After:
Reflection-based benchmark:
Benchstat:
Regardless of your decision to accept this change, I appreciate your time.