Original comment by helixblue
on 18 Oct 2010 at 11:19
Original comment by helixblue
on 18 Oct 2010 at 11:21
[deleted comment]
[deleted comment]
Until this can be implemented, I've found a workaround that works for me, using
the ability to search at several locations at once:
1. Create a GPX track for routing.
2. Use a converter to reduce the track points to, e.g., 50 (depending on the
length of the track and the number of caches you want to find) and save the
result as .csv.
3. Use a text editor and search & replace to get the coordinates onto one line
looking like "47.2702 10.1785 | 47.3115 10.2598 | 47.4103 10.2774 | 47.5147
10.2662 | ....." (see the sketch below).
4. Copy & paste it all as the geotoad location - run it several times if it
doesn't take all coordinates at once.
5. Reduce the maximum distance (1 mile or less) to get a reasonable number of
caches.
6. Start the search.
You won't get every cache along the way - but probably enough to keep you busy.
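A minimal sketch of step 3 without a text editor, assuming the .csv holds
latitude and longitude in its first two comma-separated columns (the file names
track.csv and location.txt are placeholders):
---8<--- snip
awk -F',' '{printf("%s %s | ", $1, $2)}' track.csv > location.txt
---8<--- snap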
Original comment by andi.man...@gmail.com
on 26 Jun 2011 at 2:17
Just to comment that I'm doing the same as andi described before, and it works
perfectly (e.g. getting the caches along the Camino de Santiago).
Original comment by sadurni@gmail.com
on 27 Jun 2011 at 6:15
[deleted comment]
A first shot using only gpsbabel (whose man page is as horrible as the program
is powerful), in a (Linux) shell context:
---8<--- snip
#!/bin/bash
input=/your/input/file.gpx
# values tested with a short trip of about 5km
error=0.100k
distance=0.500k
circle=0.500km
# simplify input with Douglas-Peucker,
# then interpolate for point distance,
# output as series of coordinates
searchargs=$(
  gpsbabel -i gpx -f "$input" -x simplify,crosstrack,error=$error -o gpx -F - |
  gpsbabel -i gpx -f - -x interpolate,distance=$distance -o csv -F - |
  tr ',' ' ' |
  awk '{printf("%.5f,%.5f|", $1, $2)}'
)
# search a circle of radius $circle around each coordinate
geotoad -u... -p... -y$circle -q coord "$searchargs"
---8<--- snap
The tricky part is to adjust the three variables: error (for the
Douglas-Peucker reduction), distance (for the interpolation along straight
segments), and circle (the radius of the search circle).
My gut tells me that distance/radius should be about 1 to 1.5, and error/radius
should not exceed 1/4. The larger the values, the fewer points you end up with,
of course.
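To make those rules of thumb concrete, here is a minimal sketch (my own
addition, not part of the script above) that picks the radius first and derives
the other two values from it; the factors 1.25 and 0.25 are just one choice
within the ranges given:
---8<--- snip
# pick the search radius in km, derive the other two values from it
radius=0.500
distance=$(printf '%.3fk' "$(echo "$radius * 1.25" | bc -l)")  # distance/radius ~ 1..1.5
error=$(printf '%.3fk' "$(echo "$radius * 0.25" | bc -l)")     # error/radius <= 1/4
circle=${radius}km
---8<--- snap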
Original comment by Steve8x8
on 28 Jun 2011 at 2:48
I have done some more testing of the script suggested above, with "real-world"
data, and submitted the final version to the Wiki (OtherSearches page).
"error" and "distance" should scale linearly with the "circle" radius for the
search, which in turn should be 3/2 times the final "search corridor width" not
to miss anything important.
The total number of overlapping circles is given primarily by the length of the
route/track divided by interpolation "distance", plus the number of bends.
Although overlapping search areas will result in overlapping cache lists, there
will only be little overhead since lists are merged before retrieving
individual cache descriptions.
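(A purely illustrative example of that estimate: for a search corridor 2/3 km
wide you would pick circle = 1 km; a 20 km track interpolated with distance =
1k then yields roughly 20 / 1 = 20 overlapping circles, plus a few extra for
the bends that survive the simplification step.)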
Is this sufficient to close the issue as fixed?
Original comment by Steve8x8
on 29 Jun 2011 at 10:34
Works very well for me, thanks!
As a noob to shell scripting I had to read up a bit to get the script running
on OS X, and the numbers are still a bit of trial and error for me.
Is there a "best practice" when working with long tracks? (I'm thinking of a
1000-mile bike ride.)
Is it advisable to split up the tracks to spread out/minimize the load on the
gc.com servers?
Original comment by andi.man...@gmail.com
on 1 Jul 2011 at 11:51
Well, it depends... if you're planning a 1000-mile trip, you'd probably be
willing to look for interesting caches more than 5 miles off-route? Or do you
only want to pick up the roadside boxes, and miss the multi that would
introduce you to gorgeous places just around the bend? - You decide.
(That's why it is hard to give a rule of thumb for getting the few "magic
numbers" right; it's all a matter of what you expect and what you'd be willing
to miss. A map with the circles drawn on it can help a lot.)
Speaking of "best practices": I have seen gc.com's servers getting angry when I
asked for more than 1000 caches in a single run; that's why geotoad has a
built-in throttling mechanism which triggers at a count of 350, and it caches
files up to an age of 6 days. If you split your workload into chunks, let them
be dropped into the file cache, and combine them in a final run, that shouldn't
do any harm, I suppose.
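A minimal, untested sketch of such chunking, building on the $searchargs and
the geotoad call from the script above (the chunk size of 20 is an arbitrary
assumption of mine):
---8<--- snip
# split the coordinate string into chunks of 20 locations and run geotoad once
# per chunk, so each run stays well below the throttling threshold; the
# per-chunk results land in the file cache
chunk=20
IFS='|' read -ra coords <<< "$searchargs"
for ((i = 0; i < ${#coords[@]}; i += chunk)); do
  part=$(IFS='|'; echo "${coords[*]:i:chunk}")
  geotoad -u... -p... -y$circle -q coord "$part"
done
# how to combine the chunks afterwards is left open here; as suggested above,
# a final run can then be served largely from the file cache
---8<--- snap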
With too many locations on a single command line, one may also run into OS
limits - I haven't tried very hard to hit one with Linux and bash (32k
characters?).
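If you'd rather check than guess, on Linux the kernel-side limit for the
command line plus environment can be queried with:
---8<--- snip
getconf ARG_MAX
---8<--- snap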
And there's another small problem: if you plan too far ahead, your dataset will
be outdated long before you reach half the distance. How will you get notified
of disabled caches, or even archived ones, and log your finds and trackables?
(BTW, this reminds me of a little bug I've still got to report - and perhaps
fix...)
Original comment by Steve8x8
on 1 Jul 2011 at 4:25
While enhancements (and adaptations to other OSes) are still welcome, I'm
closing this bug now.
Original comment by Steve8x8
on 16 Sep 2011 at 9:06
Original issue reported on code.google.com by
r.brink...@gmail.com
on 16 Apr 2010 at 11:52