adl1995 closed this 6 years ago
@vissarion, @awulkiew - I just modified the tests to calculate execution time for each algorithm. I'm getting these results:
HaverineTest: average execution time (seconds): 0.000266135
SphericalLawOfCosinesTest: average execution time (seconds): 7.1773e-05
EquirectangularApproximationTest: average execution time (seconds): 2.7248e-05
EllipsoidalApproximationTest: average execution time (seconds): 7.973e-05
TunnelDistanceTest: average execution time (seconds): 8.5208e-05
VincentysFormulaTest: average execution time (seconds): 0.000339849
BoostGeometryDefaultStrategyTest: average execution time (seconds): 9.7177e-05
BoostGeometryThomasStrategyTest: average execution time (seconds): 0.000280531
BoostGeometryVincentyStrategyTest: average execution time (seconds): 0.000397888
BoostGeometryAndoyerStrategyTest: average execution time (seconds): 0.000139169
Could you please review the other features I introduced in this PR? I will share my GSoC application shortly, but first I want to make sure this library follows best practices.
Thanks!
Thanks for improving your test. Looking forward to your application.
Side comment: I am surprised to see that Andoyer is twice as fast as the haversine formula.
@vissarion Yes, quite interesting indeed. In some cases the haversine formula's execution time is close to, or slightly better than, Andoyer's. The execution time listed above is actually the average over 150 runs of each function with different inputs.
This bar chart tries to visually compare the execution time performance metric:
150 runs seems too low. I'd rather use a number of runs large enough to gather results over roughly a second. E.g. on Windows, AFAIR, you cannot reliably measure times shorter than 0.001 s. Are you measuring the time of each call or of all 150 calls together? Even if you are measuring the time of all 150 calls, for times like 2e-5 s per call, 150 runs is too low.
Furthermore, when you're sharing the results, you should also state which compiler was used and which options were passed, in particular the optimization level.
@awulkiew Thanks for pointing this out.
I updated the GeodTest.dat file to include 15,000 entries. The time is measured for each individual function call and the result is pushed into a vector; the time given below is the average over all entries.
I am using the g++ compiler (version 7.2.0 on Ubuntu 14.04) with the -O1 optimization level, i.e. minimal optimization. The options passed to the compiler are:
g++-7 geodistance_test.cpp -o geodistance_test -O1 -std=c++11 -lboost_serialization -lboost_system -lboost_chrono
Listed below are the new test results:
HaverineTest: average execution time (seconds): 0.00348685
SphericalLawOfCosinesTest: average execution time (seconds): 0.00347074
EquirectangularApproximationTest: average execution time (seconds): 0.00126575
EllipsoidalApproximationTest: average execution time (seconds): 0.00421749
TunnelDistanceTest: average execution time (seconds): 0.00345171
VincentysFormulaTest: average execution time (seconds): 0.0173187
BoostGeometryDefaultStrategyTest: average execution time (seconds): 0.00315965
BoostGeometryThomasStrategyTest: average execution time (seconds): 0.0086729
BoostGeometryVincentyStrategyTest: average execution time (seconds): 0.0141453
BoostGeometryAndoyerStrategyTest: average execution time (seconds): 0.0048596
@vissarion @awulkiew - I just submitted my proposal draft. I would greatly appreciate it if you could provide feedback on any required changes, especially in the Project Proposal and Timeline sections.
Link: https://docs.google.com/document/d/1PlFoX3d_gzhTYTlc8Bk0WGYOrKdGsU6Bb4K1gVKf9b4/edit?usp=sharing
Thanks!
Thanks for sharing. In general your proposal is well structured and makes sense. I added some comments.
Just merged this branch into master. Please let me know if you have any further feedback regarding this repository or the proposal.
Thanks!
Following issue #1, this PR aims to add the following functionality / design changes: