Closed: jgraettinger closed this pull request 8 years ago.
Overall this seems reasonable. I am a bit concerned with people not understanding the functionality and filing bug reports due to misuse. It also seems to add a small amount of overhead in the benchmarks, but that might be unavoidable.

Longer term, I would like to implement some sort of caching strategy, although that might make more sense in a higher-level library like geoip2 rather than in this library.
> I am a bit concerned with people not understanding the functionality and filing bug reports due to misuse.
I've refactored and extended the documentation of LookupOffset & Decode to cover the behavior and the motivating use case. Let me know your thoughts.
> It also seems to add a small amount of overhead in the benchmarks, but that might be unavoidable.
Here's what I'm seeing now:
```
master:
BenchmarkMaxMindDB-4      50000  33877 ns/op
BenchmarkCountryCode-4   500000   2225 ns/op

offset-lookups:
BenchmarkMaxMindDB-4      50000  35602 ns/op
BenchmarkCountryCode-4   500000   2224 ns/op
```
> Longer term, I would like to implement some sort of caching strategy, although that might make more sense in a higher-level library like geoip2 rather than in this library.
Agreed, I don't think that's appropriate in maxminddb-golang itself. Caching requires too much knowledge of the record type being extracted, and IMO it's a feature of maxminddb that it doesn't impose such knowledge.
Looks great. Thanks! :+1:
Expose database offsets, and the decoding of records at those offsets, as a first-class concept via LookupOffset & Decode. Allow decoded records to refer indirectly to nested structures by introducing special handling for the uintptr type: a field of this type is decoded by storing the database offset of the record it refers to, rather than the record itself.
Reflection is expensive, many database records ("continent", "country", etc) are embarrassingly cacheable, and database offsets make for excellent caching keys. These methods allow clients to trivially identify and re-use structures which have previously been extracted.