Closed: stefpiatek closed this issue 4 months ago
Create local test case with `generate_dicom_dataset()`, so we can debug locally.
Run pipeline with ~1000 messages.
Alternative: `orthanc_raw` might be the culprit blocking everything else. We're querying it with 10 messages at once, but by default it can only handle 2 concurrent processes.
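If `orthanc_raw` really is the bottleneck, one direction is to cap our own concurrency to match it. A minimal sketch with `asyncio.Semaphore`, assuming a batch of 10 messages and Orthanc's default of 2 workers (`query_orthanc` is a hypothetical stand-in for the real HTTP call):

```python
import asyncio

# Illustrative cap matching orthanc_raw's default of 2 concurrent processes.
ORTHANC_CONCURRENCY = 2

async def query_orthanc(message_id: int, semaphore: asyncio.Semaphore) -> str:
    """Query orthanc_raw for one message, waiting for a free slot first."""
    async with semaphore:
        await asyncio.sleep(0.01)  # stand-in for the real HTTP query
        return f"queried-{message_id}"

async def process_batch(message_ids: list[int]) -> list[str]:
    # All 10 tasks are created up front, but at most 2 run inside the
    # semaphore at any moment, so orthanc_raw is never overloaded.
    semaphore = asyncio.Semaphore(ORTHANC_CONCURRENCY)
    tasks = [query_orthanc(m, semaphore) for m in message_ids]
    return await asyncio.gather(*tasks)

results = asyncio.run(process_batch(list(range(10))))
print(results)
```

This keeps the batch size of 10 at the queue level while serialising the actual Orthanc traffic down to what it can handle.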
Couple of questions:
* What is the error we're getting, and where can I find it? I.e., how do I know I've reproduced the issue locally?
RabbitMQ closes the connection because of a missed heartbeat during consumption of messages (this doesn't take down the service, but it will stop consuming messages). Also, `imaging-api` REST API requests time out.
* For the test case, I've created a `csv` with 1000 messages, I assume these need to be added to the `system-test-fake-star-db` database, like we're doing in `test/scripts/insert_data.sh`? Is there a more efficient way to do this?
It doesn't need to be the fake EMAP star database; just process the imaging queue.
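Rather than hand-writing 1000 rows, the test CSV can be generated. A minimal sketch using only the standard library; the column names here are illustrative, not the project's actual message schema:

```python
import csv
import io

def generate_messages_csv(n_messages: int) -> str:
    """Build a CSV of n_messages fake imaging messages (plus a header row)."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["mrn", "accession_number", "study_date"])  # hypothetical columns
    for i in range(n_messages):
        writer.writerow([f"mrn-{i:04d}", f"acc-{i:04d}", "2023-01-01"])
    return buffer.getvalue()

csv_text = generate_messages_csv(1000)
print(len(csv_text.splitlines()))  # 1001 lines: header + 1000 messages
```

The same loop could instead feed messages straight onto the imaging queue, skipping the CSV and database insert entirely.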
* Do we need a DICOM dataset for each message as well?
Yes, definitely; we need to have DICOM data in the fake VNA for each message.
Definition of Done / Acceptance Criteria
Testing
The solution should be tested on a sample of data on the GAE, where data gets to the point of the anonymisation API.
Documentation
Document what is required for this as a design decision
Dependencies
No response
Details and Comments
We've temporarily disabled async message consumption for the imaging and EHR APIs, because the startup task of consuming messages would block the main thread. This caused the RabbitMQ heartbeat to be missed, so RabbitMQ disconnects, and the REST API calls time out. This happens after ~10 minutes even at the slow rate of 0.2 messages per second.
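The blocking-startup problem above can be sketched abstractly: keep the consume loop off the main thread so heartbeats and REST requests stay responsive. This is a minimal stdlib illustration of the direction, not the project's actual consumer code (names and the in-memory queue are hypothetical stand-ins for the RabbitMQ client):

```python
import queue
import threading
import time

# Stand-in for the RabbitMQ queue and the processed-message sink.
messages: "queue.Queue[str]" = queue.Queue()
processed: list[str] = []

def consume_forever(stop: threading.Event) -> None:
    """Blocking consume loop, deliberately kept off the main thread."""
    while not stop.is_set():
        try:
            msg = messages.get(timeout=0.05)
        except queue.Empty:
            continue
        processed.append(msg)

stop = threading.Event()
worker = threading.Thread(target=consume_forever, args=(stop,), daemon=True)
worker.start()

for i in range(5):
    messages.put(f"message-{i}")

# The main thread is free here to service heartbeats / REST requests.
time.sleep(0.2)
stop.set()
worker.join()
print(processed)
```

With a real RabbitMQ client the same shape applies: the connection whose heartbeat must be serviced should never share a thread (or event loop) with a long-blocking consume call.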
Suggestions: