Problem
Streaming pipeline stops syncing patients when number of patients = jdbcMaxPoolSize
Pre-requisites to reproduce the issue
1. Change the openmr_convert method of the Patient class in resources.py to the code in [1]. This removes the check that prevents a patient from being uploaded more than once; having duplicates does not affect the reproduction.
2. Change the database host value in utils/dbz_event_to_fhir_config.json to localhost, as opposed to openmrs-fhir-mysql.
3. Check that you can run jps and jstack. Both should be part of your JDK installation.
4. Make sure bunsen and the pipelines are compiled. If not, run:
5. Make sure OpenMRS and the HAPI FHIR server are up and running. If they are not, run:
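For context, both values mentioned above live in utils/dbz_event_to_fhir_config.json; a rough sketch of the relevant fields is below (the host field name and the pool size value are illustrative assumptions, not the file's exact schema — jdbcMaxPoolSize is the setting the issue title refers to):

```json
{
  "databaseHostName": "localhost",
  "jdbcMaxPoolSize": 50
}
```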
Repro Instructions
You need three terminal windows: one to run the pipeline, one to upload data, and one for follow-up commands such as jps.
First, run the pipeline using the command:
In the second terminal, run:
This will upload 79 patients to OpenMRS.
Next, open the log file and search for the string:

Fetching FHIR resource at /Patient

You will see that the number of these lines stops at jdbcMaxPoolSize, even though more patients get uploaded.
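A quick way to count those log lines is grep. The demo below runs on a generated sample log, since the pipeline's log file name and location depend on your setup; point grep at the real log file instead.

```shell
# Demo on a generated sample log; substitute the pipeline's actual log file.
printf '%s\n' \
  "Fetching FHIR resource at /Patient" \
  "Fetching FHIR resource at /Patient" \
  "Fetching FHIR resource at /Encounter" > sample.log

# Count how many Patient fetches were logged.
grep -c "Fetching FHIR resource at /Patient" sample.log   # prints 2
```

If the count equals jdbcMaxPoolSize even though more patients were uploaded, you have reproduced the issue.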
Cleanup

To stop the pipeline, first run the command jps to find the process ID of the Java process running the pipeline. Once you have that, run kill -9 PID_NUMBER to kill the process.

After each time you stop the pipeline, make sure to remove the data and streaming folders by running:
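A minimal cleanup sketch, assuming the data and streaming folders sit in the directory the pipeline was started from:

```shell
# Remove the pipeline's leftover state so the next run starts clean.
# Paths are an assumption; adjust if your pipeline writes its state elsewhere.
rm -rf data streaming
```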
[1]