For source code contributions please see our developer guide.
The COL backend is a Dropwizard application that drives the COL ChecklistBank API.
`webservice` is the maven module that builds the application. Run

```
mvn clean install
```

to build it, then initialize the database with

```
java -jar target/webservice-1.0-SNAPSHOT.jar init --num 4 config.yml
```

where `--num 4` configures the number of partitions to use for external datasets. Finally start the server with

```
java -jar target/webservice-1.0-SNAPSHOT.jar server config.yml
```

and check that the application is running at http://localhost:8080.
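The commands above expect a `config.yml`. The fragment below is a rough sketch only: the `server` section follows standard Dropwizard conventions for the ports used in this guide, while the `db` section is purely hypothetical — check the actual configuration classes for the real key names.

```yaml
# Standard Dropwizard connector settings (ports match this guide)
server:
  applicationConnectors:
    - type: http
      port: 8080
  adminConnectors:
    - type: http
      port: 8081

# Hypothetical database section -- key names are assumptions
db:
  host: localhost
  database: col
  user: col
  password: secret
```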
For development tests you can also run the application straight from your IDE by executing the main `WsServer.java` class and passing it the right arguments:

```
server /path/to/config.yml
```
In order to avoid real authentication against the GBIF registry, you can change the AuthBundle to use a LocalAuthFilter instead of the real AuthFilter. This authenticates every request with a test account that has full admin privileges.
To see your application's health, open http://localhost:8081/healthcheck.
The code is structured into several maven modules:

- The main API with model classes and shared common utility classes.
- The postgres persistence layer.
- Code dealing with (dataset) DOI registration and management in DataCite.
- Various parsers/interpreters used mostly for importing. Contains a GBIF name parser wrapper.
- The Dropwizard based JSON webservices, importer and assembly code.
The admin server should be used to import known datasets from their registered data access URL. Imports are scheduled in an internal, non-persistent queue. Scheduling a dataset for import is done by POSTing an import request object to the importer resource like this:
```
curl -X POST -d '{"datasetKey": 1000, "priority": false}' "http://localhost:8080/importer/queue"
```
The priority parameter places the request at the head of the queue.
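The scheduling behaviour described above can be sketched with a simple double-ended queue. The `ImportRequest` record below mirrors the two fields of the JSON payload; the class and method names are invented for illustration and are not the actual importer code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a non-persistent import queue: priority requests jump to
// the head, normal requests are appended to the tail.
public class ImportQueueSketch {
  // Hypothetical request record mirroring the JSON payload fields.
  record ImportRequest(int datasetKey, boolean priority) {}

  private final Deque<ImportRequest> queue = new ArrayDeque<>();

  public void schedule(ImportRequest req) {
    if (req.priority()) {
      queue.addFirst(req);  // jump the queue
    } else {
      queue.addLast(req);
    }
  }

  public ImportRequest next() {
    return queue.pollFirst();
  }

  public static void main(String[] args) {
    ImportQueueSketch q = new ImportQueueSketch();
    q.schedule(new ImportRequest(1000, false));
    q.schedule(new ImportRequest(1001, false));
    q.schedule(new ImportRequest(2000, true));  // priority: goes first
    System.out.println(q.next().datasetKey()); // 2000
    System.out.println(q.next().datasetKey()); // 1000
  }
}
```

Being non-persistent, any pending requests in such a queue are lost on restart, which matches the behaviour described above.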
All data is normalized prior to inserting it into the database. This includes transforming a flat classification into a parent-child hierarchy with just a single record for each unique higher taxon.
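The flattening step can be illustrated as follows, assuming a toy row layout of (kingdom, family, species) columns. All class and method names here are invented for illustration; the real normalizer must also consider the parent when deduplicating, to keep homonyms apart.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: turn a flat classification (one row per name, higher taxa
// repeated on every row) into a parent-child hierarchy with a single
// record per unique higher taxon.
public class NormalizerSketch {
  record Taxon(int id, String name, Integer parentId) {}

  public static List<Taxon> normalize(List<String[]> rows) {
    // Dedupe by name only, for brevity; real code must also key on the
    // parent so that homonyms remain distinct records.
    Map<String, Taxon> taxa = new LinkedHashMap<>();
    int[] nextId = {1};
    for (String[] row : rows) {
      Integer parentId = null;
      for (String name : row) {          // highest rank first
        final Integer pid = parentId;
        Taxon t = taxa.computeIfAbsent(name,
            n -> new Taxon(nextId[0]++, n, pid));
        parentId = t.id();               // next column is a child of this one
      }
    }
    return new ArrayList<>(taxa.values());
  }

  public static void main(String[] args) {
    List<Taxon> result = normalize(List.of(
        new String[]{"Animalia", "Felidae", "Felis catus"},
        new String[]{"Animalia", "Felidae", "Panthera leo"}));
    // "Animalia" and "Felidae" each become a single record
    System.out.println(result.size()); // 4
  }
}
```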
We have built the importer to fail early when it encounters issues, so that existing good data is not overwritten. Examples of data errors that cause the importer to abort are:
The importer gracefully handles empty lines and skips lines with fewer columns than expected (these show up as warning logs, as bad delimiter escaping is often the root cause).
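The skip-and-warn behaviour can be sketched like this, using a plain tab-split for brevity; the class name and method are hypothetical, and the real importer uses a proper CSV/TSV reader rather than `String.split`.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of tolerant row handling: empty lines are ignored silently,
// rows with fewer columns than expected are skipped with a warning
// (often a sign of bad delimiter escaping upstream).
public class RowReaderSketch {
  public static List<String[]> readRows(List<String> lines, int expectedColumns) {
    List<String[]> rows = new ArrayList<>();
    for (String line : lines) {
      if (line.isBlank()) continue;           // gracefully skip empty lines
      String[] cols = line.split("\t", -1);   // -1 keeps trailing empty columns
      if (cols.length < expectedColumns) {
        System.err.println("WARN: skipping short row: " + line);
        continue;
      }
      rows.add(cols);
    }
    return rows;
  }

  public static void main(String[] args) {
    List<String[]> rows = readRows(List.of(
        "1\tFelis catus\tspecies",
        "",                    // empty line: skipped silently
        "2\tbroken row"),      // only 2 columns: skipped with a warning
        3);
    System.out.println(rows.size()); // 1
  }
}
```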
The dataset import flags records that have problems. For each dataset import, aggregate metrics are stored and can be retrieved even for historic imports, allowing comparison and change analytics.
All potential issues that are handled can be found here: https://github.com/Sp2000/colplus-backend/blob/master/colplus-api/src/main/java/org/col/api/vocab/Issue.java#L6
For example: