Open inconnu26 opened 6 years ago
How much memory (RAM) do the machines have? It sounds like it's swapping to disk, which you should definitely avoid. The way you can avoid that for larger maps is spatial sharding of your map region. To support this, the library allows loading only a region of the map that is stored in the database. You must then assign regions to different servers. I can give you some more hints on how to proceed with that, but please check first whether the latency problem is caused by swapping.
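To illustrate the shard-assignment part of this idea: each matcher server could be responsible for one lat/lon bounding box, and an incoming trace gets routed to the server whose region contains all of its samples. This is a minimal sketch; the shard names, bounding boxes, and routing function are hypothetical and not part of the barefoot API.

```python
# Hypothetical spatial-sharding router: each matcher server is assigned
# a lat/lon bounding box, and a trace is dispatched to the server whose
# region contains every sample. Names and values are illustrative.

SHARDS = {
    "matcher-fr": (41.0, -5.5, 51.5, 10.0),   # (min_lat, min_lon, max_lat, max_lon)
    "matcher-de": (47.0, 5.5, 55.5, 15.5),
}

def shard_for(trace):
    """Return the shard whose bounding box contains every (lat, lon) sample."""
    for name, (min_lat, min_lon, max_lat, max_lon) in SHARDS.items():
        if all(min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
               for lat, lon in trace):
            return name
    return None  # trace crosses shard borders; needs special handling

trace = [(48.85, 2.35), (48.86, 2.36)]  # a short trip around Paris
print(shard_for(trace))  # → matcher-fr
```

A trace returning `None` here is the tricky case: a trip that crosses a shard border cannot be matched by a single server without some overlap between regions.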
(Sorry for the delay in answering your question; I was offline for about a month.)
Thank you for your answer. Yes, it is definitely a RAM issue. We're using 32GB RAM servers, and a matcher server for France alone fills 18GB. So we've started spatial sharding, and it seems like a good solution. Do you have any other hints for this kind of sharding? Thank you!
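One common refinement when sharding a map spatially (an assumption on my part, not something from the barefoot docs) is to load each shard with a buffer zone around its nominal region, so that trips straying slightly over a border can still be matched entirely within one shard. The function and margin below are illustrative:

```python
# Expand a shard's nominal bounding box by a fixed margin so that
# near-border trips still fall entirely inside one shard's map data.
# The 0.5-degree margin is an illustrative choice (~50 km at mid-latitudes).

BUFFER_DEG = 0.5

def with_buffer(bbox, margin=BUFFER_DEG):
    """Return bbox = (min_lat, min_lon, max_lat, max_lon) grown by margin."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return (min_lat - margin, min_lon - margin,
            max_lat + margin, max_lon + margin)

print(with_buffer((41.0, -5.5, 51.5, 10.0)))
# → (40.5, -6.0, 52.0, 10.5)
```

The trade-off is that overlapping regions duplicate some map data in memory across servers, so the margin should stay small relative to the shard size.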
The basic idea is as follows:
The technical difficulties with Spark are:
Hello,
We've been using the barefoot project for a few months now, and it has been an amazing discovery and a great help. But we're stuck with an issue in scaling our project.
We were using a bfmap of France (~1.4GB), and now that we want to use the system with multiple countries at a time, we've made a larger bfmap (~4.2GB). But, taking over 20GB of RAM and 100% of a 16-core processor, it takes over 10 minutes for an average trip (~10km) to be map matched. Whereas with the bfmap of France, and with a smaller server, the same trip used to take about 30 seconds to be map matched.
So we tried to work with your Spark jobs implementation, but it seems that it only allows map matching many trips at a time by dispatching the trips to multiple Spark slave machines. So it seems that it would not fix our latency issue, which needs to be reduced for EACH trip.
Would you have an idea or advice on making Barefoot work with a map of multiple countries (and ultimately the entire world)? Must we use a Docker container for each country? A different server?
Thank you very much for your help