angoca opened 4 months ago
To start, with OpenStreetMap-NG, the API and the website are much less coupled than the current Ruby implementation. Because of that, OSM-NG is able to provide website functionality, API 0.6, and API 0.7 simultaneously without any loss of functionality. I believe it's due to Ruby's high code coupling and monolithic design that we have seen such a high degree of stagnation.
If I understand correctly, what you suggest is that OSM-NG should be able to work on the old database schema.
It's definitely possible, but I don't see it as a worthy goal to pursue at the moment. Let me explain why:
Firstly, many of OSM-NG's performance improvements come from the new, more optimal database schema. If we made OSM-NG work on the old schema, many of those optimizations would be lost. As a result, it's quite possible that the cgimap project would be faster (and better tested) than OSM-NG; after all, pure C++ is faster than Python with C extensions (Cython). I am worried that if this were the case, there would be no incentive to switch to OSM-NG: it would be slower, less tested, and require more work from everybody involved. With the new database design, I am convinced that OSM-NG will outperform the cgimap implementation, making it a clearly better choice.
Secondly, as I have already hinted in the first point, it would require more work to develop both for the new schema and the old schema. Given that there's already plenty of work to do, I want to focus on innovation.
Thirdly, testing OSM-NG on the old schema would not be very useful: only a small portion of the code would overlap with the new-schema implementation, so any issue detected in one implementation would be unlikely to occur in the other, and vice versa. After all, the pure OSM API is mostly a wrapper around SQL calls (which depend heavily on the schema). Only the code responsible for formatting and parsing data would really be exercised, so we may as well write unit tests for it.
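To illustrate, here is a minimal sketch of the kind of unit test I mean: it exercises only the formatting/parsing layer, with no database or schema involved. The `format_node`/`parse_node` functions below are invented for this example; they are not actual OSM-NG code.

```python
# Hypothetical example: the formatting/parsing layer can be unit-tested
# in isolation, without any database. Function names are illustrative.
import xml.etree.ElementTree as ET

def format_node(node: dict) -> str:
    """Serialize a node dict into OSM-style XML (illustrative)."""
    el = ET.Element("node", {
        "id": str(node["id"]),
        "lat": f'{node["lat"]:.7f}',
        "lon": f'{node["lon"]:.7f}',
        "version": str(node["version"]),
    })
    for k, v in node.get("tags", {}).items():
        ET.SubElement(el, "tag", {"k": k, "v": v})
    return ET.tostring(el, encoding="unicode")

def parse_node(xml_text: str) -> dict:
    """Inverse of format_node (illustrative)."""
    el = ET.fromstring(xml_text)
    return {
        "id": int(el.get("id")),
        "lat": float(el.get("lat")),
        "lon": float(el.get("lon")),
        "version": int(el.get("version")),
        "tags": {t.get("k"): t.get("v") for t in el.findall("tag")},
    }

# Round-trip unit test: no schema, no SQL involved.
node = {"id": 1, "lat": 51.5, "lon": -0.1, "version": 3,
        "tags": {"amenity": "cafe"}}
assert parse_node(format_node(node)) == node
```

A test like this passes or fails identically regardless of which schema sits underneath, which is exactly why running the full API against the old schema adds little coverage on top of it.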
I believe the proper way to test OSM-NG would be to mirror the API requests to a shadow instance and compare the planet diffs produced. This way, we would know that the whole chain of operations works as expected, and this is what I am advocating for. Please let me know if you are satisfied with my answer or if you want to discuss further!
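The comparison step could be sketched roughly like this (purely illustrative; no such tool exists yet): reduce each instance's osmChange diff to a comparable fingerprint and check that both instances produced the same changes.

```python
# Sketch of the comparison step: after mirroring the same API writes to
# the production instance and an OSM-NG shadow instance, compare the
# osmChange (replication diff) documents each one produces.
import xml.etree.ElementTree as ET

def diff_fingerprint(osmchange_xml: str) -> set:
    """Reduce an osmChange document to a comparable set of
    (action, element_type, id, version) tuples."""
    root = ET.fromstring(osmchange_xml)
    fp = set()
    for action in root:                 # <create>, <modify>, <delete>
        for el in action:               # <node>, <way>, <relation>
            fp.add((action.tag, el.tag, el.get("id"), el.get("version")))
    return fp

def diffs_match(production_xml: str, shadow_xml: str) -> bool:
    return diff_fingerprint(production_xml) == diff_fingerprint(shadow_xml)

prod = '<osmChange><modify><node id="1" version="2"/></modify></osmChange>'
shadow = '<osmChange><modify><node id="1" version="2"/></modify></osmChange>'
assert diffs_match(prod, shadow)
```

A real comparison would of course also check element attributes and tags, but even this coarse fingerprint would catch most end-to-end divergences between the two implementations.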
Hi Zaczero, thank you for your detailed answer.
However, I still have some doubts about how this project will work once stable.
Is it going to be a separate and parallel project from current OSM (API, database)?
If this is the case, I suppose apps should point to this project's URL. But not everybody is ready to make modifications to use this new tool. Even small changes in the API drastically affect the ecosystem, and some apps take months or years to update; for example, the change from OAuth 1 to OAuth 2, and there are still people in the chats asking why JOSM is not working for them.
Because not everybody will switch to OSM-NG, how are you going to deal with the changesets in the current OSM? Do you plan to monitor them and replicate the modifications here? What are you going to do with change collisions? For example, if someone deletes a node in OSM-NG while the same node is moved in OSM, what will be the resulting value for that node here?
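For context, within a single instance the current API already detects concurrent edits through per-element versions: an upload that references a stale version is rejected with HTTP 409 Conflict. A minimal sketch of that check (names are illustrative, not real OSM code):

```python
# Illustration of optimistic concurrency as used by OSM API 0.6:
# every element carries a version, and an edit is rejected when the
# version the client sends no longer matches the server's.

class Conflict(Exception):
    """Stands in for an HTTP 409 Conflict response."""

def apply_edit(store: dict, element_id: int, client_version: int, new_data):
    """Apply an edit only if the client saw the latest version."""
    current = store[element_id]
    if current["version"] != client_version:
        # Someone else modified (or deleted) the element first.
        raise Conflict(f"expected v{current['version']}, got v{client_version}")
    store[element_id] = {"version": client_version + 1, "data": new_data}

store = {1: {"version": 2, "data": "node at new position"}}
try:
    apply_edit(store, 1, 1, "delete")   # stale version -> rejected
except Conflict:
    pass
assert store[1]["version"] == 2         # the store was left untouched
```

But this check only works inside one instance; across two independently evolving databases it no longer applies, which is exactly why I am asking.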
Recently, OSM has suffered some attacks. However, the platform is robust and stable, with many monitoring tools. How will monitoring work for this new platform?
I am asking all these questions because I am a database specialist, and such migrations are my daily work.
Is it going to be a separate and parallel project from current OSM (API, database)?
Nope, it's just an alternative implementation. There are no plans to make it its own separate thing (like OpenHistoricalMap is for OpenStreetMap). Our goal is to provide a better alternative for the core OSM software.
How will monitoring work for this new platform?
The same as currently; in addition, we plan to introduce new built-in monitoring and statistics tools on the website.
Because of the strict backwards-compatibility there will be no breaking changes for the existing tooling :+1:.
The API is the OSM core, and it progresses very slowly. The same 0.6 version has been in place for almost 10 years, with only a small extension (notes/bugs) added after the initial 0.6 definition; the rest has been unmodified.
The API implementation is used by OSM clients, and it is the only way to access the OSM data in the database. Therefore, it has to be extensively tested and very robust; it is the most carefully guarded piece of the OSM world.
On the other hand, the OSM website is like any other OSM client that uses the API to render its pages. The problem is that this code lives in the same repository as the API. I think this is one of the reasons the website is stuck with a 2000s look-and-feel, and why the website maintainers do not want to break things or take risks with new developments.
If we want a new OSM website, its code should use the current API implementation, to convince people of the benefits of the new design, language, evolution, time-to-market, etc. Changing the API implementation, however, is much more complex, and many people will criticize the initiative because a critical error could block the whole OSM ecosystem.
Also, API 0.7 is still being defined, and OSM-NG could have the opportunity to become its official implementation, but this is only possible if the website is separated from the API implementation. Some people may be interested in testing the OSM website and others the API, and having them separated is better.
We could envision an adoption roadmap like this:
By 0.6+, I mean features you could include that are not really part of the initial definition (retrieve all traces, all notes for a user, etc.) but that do not affect the current version.
Also, OpenHistoricalMap could eventually be migrated to the OSM-NG code, and separating the API from the website code would also benefit them.
I have already talked about this in the community forums, and the idea received a lot of likes: https://community.openstreetmap.org/t/the-next-generation-of-openstreetmap-in-python/105621/96