Java utility methods for geohashing.
Features

* `GeoHash.encodeHash`
* `GeoHash.decodeHash`
* `GeoHash.adjacentHash`, works on borders including the poles too
* `GeoHash.neighbours`
* `GeoHash.hashLengthToCoverBoundingBox`
* `GeoHash.coverBoundingBox`
* `GeoHash.heightDegrees` and `GeoHash.widthDegrees`
* encode and decode `long` values from geohashes (`Base32.encodeBase32` and `Base32.decodeBase32`)
* fast (millions of `GeoHash.encodeHash` calls per second on an I7, single thread)
* Status: production, available on Maven Central
Maven site reports, including javadoc, are here.
Add this to your pom:
```xml
<dependency>
    <groupId>com.github.davidmoten</groupId>
    <artifactId>geo</artifactId>
    <version>VERSION_HERE</version>
</dependency>
```
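As a minimal sketch of the calls listed above (assuming `decodeHash` returns a `LatLong` with `getLat()`/`getLon()`, per the library's javadoc; the coordinates are arbitrary):

```java
import com.github.davidmoten.geo.GeoHash;
import com.github.davidmoten.geo.LatLong;

public class Example {
    public static void main(String[] args) {
        // encode a latitude/longitude to a geohash of length 6
        String hash = GeoHash.encodeHash(-25.382708, -49.265506, 6);

        // decode back to the centre of the hash's cell
        LatLong centre = GeoHash.decodeHash(hash);
        System.out.println(hash + " -> " + centre.getLat() + "," + centre.getLon());

        // the hashes bordering a hash (works at the poles too)
        System.out.println(GeoHash.neighbours(hash));
    }
}
```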
Databases of events at specific times occurring at specific places on the earth's surface are likely to be queried in terms of ranges of time and position. One such query is a bounding box query involving a time range and position constraint defined by a bounding lat-long box.
The challenge is to make your database run these queries quickly.
Some databases either do not support, or suffer significant performance degradation on, queries that place inequality conditions on more than one variable over a large dataset.
For example, a search for all ship reports within a time range and within a bounding box could be achieved with a range condition on time AND a range condition on latitude AND a range condition on longitude. This type of query can perform badly on many database types, SQL and NoSQL. On Google App Engine Datastore, for instance, only one variable with inequality conditions is allowed per query; this is a sensible restriction to meet its scalability guarantees.
The bounding box query with a time range can be rewritten using geohashes so that only one variable is subject to a range condition: time. The method is:

* store a geohash of the position with each record (optionally stored as a `long`, using `Base32.decodeBase32` to convert a hash to a long)
* calculate a set of geohashes that wholly covers the bounding box
* perform the query using equality conditions on the geohashes, e.g. `(startTime <= t < finishTime) and (hash3='drt' or hash3='dr2')`
* filter the results to just those records whose positions are within the bounding box
The last step is necessary because the set of geohashes contains the bounding box but may be larger than it.
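A sketch of the middle steps, assuming `coverBoundingBox` returns a `Coverage` exposing `getHashes()` and `getHashLength()`, and that a `hashN` column is stored per hash length:

```java
import java.util.stream.Collectors;

import com.github.davidmoten.geo.Coverage;
import com.github.davidmoten.geo.GeoHash;

public class BoundingBoxQuery {
    public static void main(String[] args) {
        // hypothetical bounding box: top-left corner then bottom-right corner
        Coverage coverage = GeoHash.coverBoundingBox(42.0, -74.0, 41.0, -73.0);
        int n = coverage.getHashLength();
        // equality-only disjunction over the covering hashes
        String hashClause = coverage.getHashes().stream()
                .map(h -> "hash" + n + "='" + h + "'")
                .collect(Collectors.joining(" or "));
        System.out.println("(startTime <= t and t < finishTime) and (" + hashClause + ")");
    }
}
```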
So how long should the hashes be that we cover the bounding box with? This depends on your aims, which might include minimizing one or more of: CPU, URL fetch time, financial cost, total data transferred from the datastore, database load, 2nd-tier load, or a heap of other possible metrics.
Calling `GeoHash.coverBoundingBox` with just the bounding points and no additional parameters returns hashes of a length such that the number of hashes is as many as possible but less than or equal to `GeoHash.DEFAULT_MAX_HASHES` (12). You can control `maxHashes` explicitly by calling `GeoHash.coverBoundingBoxMaxHashes`.
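For example, to cap the cover at 700 hashes (a sketch; the corner order is top-left then bottom-right, matching `coverBoundingBox`):

```java
import com.github.davidmoten.geo.Coverage;
import com.github.davidmoten.geo.GeoHash;

public class MaxHashes {
    public static void main(String[] args) {
        // allow the cover to use up to 700 hashes instead of the default 12
        Coverage coverage =
                GeoHash.coverBoundingBoxMaxHashes(42.0, -74.0, 41.0, -73.0, 700);
        System.out.println(coverage.getHashes().size()
                + " hashes of length " + coverage.getHashLength());
    }
}
```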
As a quick example, take a bounding box proportioned more or less like a screen, with Schenectady NY and Hartford CT in the USA at the corners. Here are the hash counts for different hash lengths, where `m` is the size in square degrees of the total hashed area and `a` is the area of the bounding box:

| length | numHashes | m/a |
| -----: | --------: | ---: |
| 1 | 1 | 1694 |
| 2 | 1 | 53 |
| 3 | 4 | 6.6 |
| 4 | 30 | 1.6 |
| 5 | 667 | 1.08 |
| 6 | 20227 | 1.02 |
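A sketch of how such a table can be generated, assuming the fixed-length overload `coverBoundingBox(..., length)` and a `Coverage.getRatio()` giving m/a (the corner coordinates below are approximate):

```java
import com.github.davidmoten.geo.Coverage;
import com.github.davidmoten.geo.GeoHash;

public class HashCounts {
    public static void main(String[] args) {
        // approximate corners: Schenectady NY (top-left), Hartford CT (bottom-right)
        double topLeftLat = 42.81, topLeftLon = -73.94;
        double bottomRightLat = 41.77, bottomRightLon = -72.67;
        for (int length = 1; length <= 6; length++) {
            Coverage coverage = GeoHash.coverBoundingBox(
                    topLeftLat, topLeftLon, bottomRightLat, bottomRightLon, length);
            System.out.println(length + "\t" + coverage.getHashes().size()
                    + "\t" + coverage.getRatio());
        }
    }
}
```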
Only testing against your database, preferably with real-life data, will determine the optimal `maxHashes` value. In the benchmarks section below, a test with the H2 database found that query time was optimal with `maxHashes` around 700. I doubt that this would be the case for many other databases.
A rigorous exploration of this topic would be fun to do or see. Let me know if you've done it or have a link and I'll update this page!
This is the relationship between a hash of length n and its height and width in degrees:
First define this function:

```
parity(n) = 0 if n is even, otherwise 1
```

Then

```
width  = 180 / 2^((5n + parity(n) - 2) / 2) degrees
height = 180 / 2^((5n - parity(n)) / 2) degrees
```
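`GeoHash.widthDegrees` and `GeoHash.heightDegrees` return these values directly, so a quick check of the formulas (for n = 5, both exponents work out to 12):

```java
import com.github.davidmoten.geo.GeoHash;

public class CellSize {
    public static void main(String[] args) {
        // n = 5 (odd): width = height = 180 / 2^12 = 0.0439453125 degrees
        System.out.println(GeoHash.widthDegrees(5));
        System.out.println(GeoHash.heightDegrees(5));
    }
}
```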
The height and width in kilometres depend on which part of the earth the hash lies on and can be calculated using `Position.getDistanceToKm`. For example, at `(lat, lon)`:

```java
// km per degree of longitude at this latitude
double distancePerDegreeWidth =
        new Position(lat, lon).getDistanceToKm(new Position(lat, lon + 1));
```
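Putting the two together gives an approximate cell size in kilometres (a sketch; the import location of `Position` is an assumption, e.g. the author's grumpy-core artifact):

```java
import com.github.davidmoten.geo.GeoHash;
import com.github.davidmoten.grumpy.core.Position; // assumed location of Position

public class CellKm {
    public static void main(String[] args) {
        double lat = 42.0, lon = -73.0; // hypothetical location
        double kmPerDegreeLon =
                new Position(lat, lon).getDistanceToKm(new Position(lat, lon + 1));
        // approximate east-west extent in km of a length-5 geohash cell here
        System.out.println(GeoHash.widthDegrees(5) * kmPerDegreeLon);
    }
}
```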
I inserted 10,000,000 records into an embedded H2 filesystem database (which uses B-tree indexes). The records were geographically randomly distributed across a region, and a bounding box of 1/50th the area of the region was chosen. The query performed as follows (`found` is the number of records in the bounding box, `from` is the number of records matched by the hash conditions before filtering, and `time` is the time to run the query and iterate the results):

| hashLength | numHashes | found | from | time (s) |
| ---------: | --------: | ----: | ---: | -------: |
| 2 | 2 | 200K | 10M | 56.0 |
| 3 | 6 | 200K | 1.2M | 10.5 |
| 4 | 49 | 200K | 303K | 4.5 |
| 5 | 1128 | 200K | 217K | 3.6 |
| none | none | 200K | 200K | 31.1 (multiple range query) |
I was pleasantly surprised that H2 allowed me to put over 1000 conditions in the where clause. I also tried the next higher hash length, with over 22,000 hashes, but H2 understandably threw a StackOverflowError.
To run the benchmark:

```
mvn clean test -Dn=10000000
```
Running with n=1,000,000 is much quicker and yields the same primary result: the multiple range query is 10x slower than the geohash lookup if the hash length is chosen judiciously.