Radverkehrsatlas provides access to bicycle infrastructure data from OpenStreetMap (OSM) for administrative staff. The OSM data is processed and then visualized in multiple map views. The integrated verification process provides a way for administrations to check the given data and provide feedback – internally and to the community. Based on this data, administrations can plan new bike lanes and networks and maintain existing infrastructure.
Please contact FixMyCity GmbH to learn more.
This project is split into two major parts: the processing of OpenStreetMap data, and the frontend for visualization and Web GIS.
If you find any bugs, feel free to open an issue in this repository.
The frontend visualizes our processed data; it also provides options to annotate and export the data.
For VS Code we recommend some extensions.
To test the login, you need to set up your own OSM OAuth 2 application (see osm-auth) and update the credentials.
In the `app/` directory, do the following:

```sh
npm run dev
```

This will make sure all packages are patched.

Create a `.env.production.local` with settings like:

```sh
NEXT_PUBLIC_APP_ORIGIN=http://127.0.0.1:3000
NEXT_PUBLIC_APP_ENV='staging' # 'staging', 'production'
```
Run `npm run build` and `npm run start` to test the production bundle. There is also a dockerized version of the frontend, which you can run with `docker compose --profile frontend up`.
`icon.svg`: see https://nextjs.org/docs/app/api-reference/file-conventions/metadata/app-icons

Generator for `favicon.ico`: https://realfavicongenerator.net/

All helper scripts run with bun.
This project is licensed under the AGPL-3.0 License – see the LICENSE.md file for more information. It contains dependencies which have different licenses; see `package.json`.
The processing downloads the OpenStreetMap (OSM) data, filters and processes it into a PostgreSQL/PostGIS database, which is then made available as vector tiles with martin.
The data gets selected and optimized to make planning of bicycle infrastructure easier.
See https://github.com/FixMyBerlin/atlas-app/blob/develop/processing/run-5-process.sh#L45-L50
We use the public Germany export from Geofabrik, which includes OSM data up until ~20:00 h of the previous day. All processing is done on this dataset.
The processing runs on the "main" branch; see `run.sh` for details.

Skip CI Actions: At the moment, the CI runs on every commit. To skip CI for a commit, add `[skip actions]` to the commit message. This is default behaviour of GitHub Actions.
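For example, a commit whose message contains the marker will not trigger the workflows. The throwaway repository, commit message, and author details below are purely illustrative:

```shell
# Illustrative only: create a throwaway repository and make a commit
# whose message contains [skip actions] so GitHub Actions skips CI for it.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=dev@example.com -c user.name=Dev \
  commit --allow-empty -q -m "docs: fix typo [skip actions]"
git log -1 --format=%s   # this is the message GitHub Actions inspects
```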
Set the environment variables in the `.env` file. You can use the `.env.example` file as a template.

The workflow is:
1. Edit the files locally
2. Rebuild and restart everything: `docker compose build && docker compose up`
3. Inspect the new results (see "Inspect changes")
Note: Learn more about the file/folder structure and coding patterns in `processing/topics/README.md`.
Whenever `SKIP_DOWNLOAD=1` is active, we store a hash of all `.lua` and `.sql` files per folder. During `run-5-process.sh` we only run code if the hash has changed.
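The mechanism can be sketched roughly like this. This is a simplified illustration, not the actual code in `run-5-process.sh`; the function names and the hash-file location are made up:

```shell
# Illustration only: hash all .lua and .sql files in a topic folder and
# rerun the folder only when that hash changed since the last run.

topic_hash() {  # usage: topic_hash <topic_dir>
  find "$1" -type f \( -name '*.lua' -o -name '*.sql' \) -print0 \
    | sort -z | xargs -0 cat | sha1sum | awk '{print $1}'
}

needs_rerun() {  # usage: needs_rerun <topic_dir> <hash_file>
  local new_hash
  new_hash=$(topic_hash "$1")
  if [ -f "$2" ] && [ "$(cat "$2")" = "$new_hash" ]; then
    return 1   # hash unchanged → skip this folder
  fi
  echo "$new_hash" > "$2"
  return 0     # hash changed (or first run) → process this folder
}
```

With something like this, the processing could call `needs_rerun "$dir" "$dir/.hash"` before running a folder's code.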
If any helper in [topics/helper](processing/topics/helper) changed, we rerun everything.
Whenever we talk about "hashes" in this code, this feature is referenced.
Whenever you need to force a rerun, open any Lua helper, add a temporary code comment, save, and restart the processing. Use the helper `run-full.sh` to do this automatically.
Whenever `SKIP_DOWNLOAD=1` and `COMPUTE_DIFFS=1` are set, the system will create `<tablename>_diff` tables that contain only changed entries. It compares the `tags` column to the previous run. Whenever we talk about "diffs" in this code, this feature is referenced.
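Conceptually, the diff step keeps only rows whose `tags` value differs from the previous run. Stripped down to flat files instead of PostgreSQL tables (the TSV format and file names here are purely illustrative, not the real implementation), it behaves like:

```shell
# Illustration only – the real diff is computed inside PostgreSQL.
# previous.tsv / current.tsv hold one "<osm_id><TAB><tags>" line per object.
compute_diff() {  # usage: compute_diff previous.tsv current.tsv
  local prev_sorted
  prev_sorted=$(mktemp)
  sort "$1" > "$prev_sorted"
  # comm -13 prints lines that appear only in the current run,
  # i.e. objects that are new or whose tags changed
  sort "$2" | comm -13 "$prev_sorted" -
  rm -f "$prev_sorted"
}
```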
- `FREEZE_DATA=0` – you see the changes to the last run on every run.
- `FREEZE_DATA=1` – you see the changes to the last reference run, allowing you to compare your changes to a certain version of your data. The reference will be the last time you ran with `FREEZE_DATA=0`. In this case the system will not update the `<tablename>_diff` tables.

This flag is ignored if `COMPUTE_DIFFS=0`. Use `run-full.sh` to toggle `FREEZE_DATA` and force a full rerun for a fresh reference.
`age` diffs: If `age` diffs show up, you need to create a fresh reference run of all the data. You may use `run-full.sh` to set `FREEZE_DATA=0` and modify the helper folder to trigger a full rerun.
For the development process it's often useful to run the processing on a single object. For that, you can specify an id (list) as `ID_FILTER` in `processing/run-3-filter.sh`.
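As an illustration, a comma-separated id list could be turned into an `osmium getid` call like this. The `w…`/`n…` id notation and the comma separator are assumptions for this sketch, not the documented format of `ID_FILTER`, and `run-3-filter.sh`'s actual wiring may differ:

```shell
# Hypothetical sketch: build an osmium command line from an ID_FILTER value
# like "w123,n456" (comma-separated osmium-style ids – an assumption).
build_filter_cmd() {  # usage: build_filter_cmd <input.pbf> <output.pbf>
  local ids
  ids=$(printf '%s' "$ID_FILTER" | tr ',' ' ')
  # -r also keeps objects referenced by the requested ones (e.g. way nodes)
  echo "osmium getid -r $1 $ids -o $2"
}
```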
See the osmium-docs for more information.
Processing:
The first iteration of the processing pipeline was inspired by gislars/osm-parking-processing.
Frontend: