nextgenhealthcare / connect

The swiss army knife of healthcare integration.

[IDEA] Add AWS SQS as File Reader/Writer Method #5749

Open joshmc82 opened 1 year ago

joshmc82 commented 1 year ago

Is your feature request related to a problem? Please describe. Not a problem, per se. It would be nice to see SQS implemented alongside S3 in the File Reader/Writer Connectors.

Describe your use case AWS SQS is commonly used, like S3, to read smaller data streams. It would be nice to have native SQS support since you're already supporting AWS S3 and already distribute most of the AWS Core SDK jars.

Describe the solution you'd like Add "AWS SQS" as an entry in the "Method" box for the File Reader/Writer Connectors, or as a new connector type. This should use the existing File Reader/Writer Connector and exist alongside S3. The same Advanced Settings fields used for S3 would need to be exposed to this SQS Method, as well as AWS Access Key ID and AWS Secret Access Key. The user should also be prompted for a Queue Name (and have Mirth build the URL) or be able to provide the Queue URL directly.
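The queue-URL construction described above could look roughly like this (a minimal sketch; `buildQueueUrl` and `resolveQueueUrl` are hypothetical helpers, not existing Mirth code — SQS queue URLs follow the pattern `https://sqs.<region>.amazonaws.com/<account-id>/<queue-name>`):

```javascript
// Hypothetical helper: given the fields the UI would collect, build the
// queue URL the way the connector could when only a queue name is supplied.
function buildQueueUrl(region, accountId, queueName) {
  if (!region || !accountId || !queueName) {
    throw new Error("region, accountId and queueName are all required");
  }
  return "https://sqs." + region + ".amazonaws.com/" + accountId + "/" + queueName;
}

// If the user pastes a full queue URL instead, use it as-is.
function resolveQueueUrl(settings) {
  return settings.queueUrl ||
      buildQueueUrl(settings.region, settings.accountId, settings.queueName);
}
```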

Describe alternatives you've considered Currently, we load the SQS jar as a custom resource and use a Code Template Function, executed from a JS Reader, to poll an SQS queue. It works, but it would be ideal if we didn't have to find a compatible version of the jar and manually upload it whenever we want to use it.

Additional context Code reference from AWS: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-sqs.html I've had success simply adding the SQS jar that matches the AWS Core jar version already bundled with the specific Mirth Connect release. I've also tried adding the older AWS SDK v1 jars, and they also work with no conflicts noted.
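The workaround described above — polling SQS from a JS Reader via a Code Template Function — might look roughly like this (a hedged sketch, not the poster's actual code; it assumes the AWS SDK v1 SQS jar is attached as a channel resource, uses Mirth's Rhino `Packages` syntax, and hard-codes an example region and queue URL):

```javascript
// Sketch of a Mirth JavaScript Reader body polling SQS with AWS SDK v1.
// Assumes aws-java-sdk-sqs (matching the bundled aws-core version) is
// loaded as a custom resource on the channel.
var AmazonSQSClientBuilder = Packages.com.amazonaws.services.sqs.AmazonSQSClientBuilder;

var sqs = AmazonSQSClientBuilder.standard()
    .withRegion('us-east-1')   // example region
    .build();                  // credentials from the default provider chain

var queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'; // example URL
var messages = sqs.receiveMessage(queueUrl).getMessages();

var results = [];
for (var i = 0; i < messages.size(); i++) {
    var msg = messages.get(i);
    results.push(new RawMessage(msg.getBody()));
    // Delete after reading so the message is not redelivered.
    sqs.deleteMessage(queueUrl, msg.getReceiptHandle());
}
return results;
```

In a production channel you would likely defer the delete until the message has been durably accepted, since deleting up front loses the message if the channel errors.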

jonbartels commented 1 year ago

Can you refine the scope a little more?

Would this be a separate connector type or an addition to the file reader (local, FTP, SFTP, S3)? I think it would be an implementation of https://github.com/nextgenhealthcare/connect/blob/b3bd6308b789d16e4b562bd5686cef883fa1faf1/server/src/com/mirth/connect/connectors/file/filesystems/FileSystemConnection.java#L20 . I had a scrounge and these file system connection types are not independently pluggable; they're part of the File connectors https://github.com/nextgenhealthcare/connect/blob/b3bd6308b789d16e4b562bd5686cef883fa1faf1/server/src/com/mirth/connect/connectors/file/filesystems/FileSystemConnectionFactory.java#L61 .

When you loaded the custom JAR to a resource directory, did you encounter any version conflicts with core MC libraries from the AWS libraries it imports?

What UI elements would be necessary for an SQS implementation? AWS credentials as with S3? A "path"? What else does SQS need to read?

SQS is "Simple Queue Service" - should it be implemented as a file reader (eg polling), or would it be better implemented like JMS and RabbitMQ? A true queue implementation would let messages be pushed from the queue service to the listeners. I also think a true queue implementation would speed development because it can be implemented as an independent, pluggable connector type. The sample open-source plugin and its associated guide show how to do this. A risk to manage with the plugin is to ensure that its libraries and versions do not conflict with core MC. In Maven terms, many of the common libraries should be marked as provided scope.
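The provided-scope point could be expressed in the plugin's pom.xml roughly like this (illustrative coordinates only — the exact artifacts and versions depend on which core MC libraries the plugin compiles against):

```xml
<!-- Illustrative: libraries Mirth Connect already ships should be compiled
     against but not bundled with the plugin. -->
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-core</artifactId>
  <version>${mirth.bundled.aws.version}</version> <!-- match the target MC release -->
  <scope>provided</scope>
</dependency>
```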

joshmc82 commented 1 year ago

@jonbartels I added some additional context to the scope. I don't have any experience with other queue services, so I can't comment on how SQS differs from those implementations. I have used the JS Reader and a function to poll SQS. From some basic research, SQS is not quite a traditional queue service. For example, SQS doesn't support "push"; you have to actively poll for your messages. Behind the scenes, SQS (and S3) are basically just an HTTP REST API. Based on this, it seems the File Reader is more closely aligned.

alpha-arpitjain commented 1 year ago

I have been using SQS to send messages out from Mirth as well, and it would be great to have an AWS SQS destination. I am using a JavaScript Writer, but for some reason it doesn't handle even a small burst of messages from the source very well. The problem seems to be how authentication works with AWS: every time a new message comes in, my script has to fetch the auth token from AWS (I believe the AWS library does this internally). If this could be implemented natively or as a Mirth plugin, the connection and authentication could be handled separately in another process, and sending the message to SQS would be very smooth.
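Until there is native support, the per-message authentication cost described above can usually be avoided by constructing the SQS client once and caching it, for example in `globalMap`, so a burst of messages reuses one authenticated client. A hedged sketch of such a JavaScript Writer, assuming the AWS SDK v1 jars are loaded as a channel resource and using example region/URL values:

```javascript
// Sketch: reuse one AmazonSQS client across messages instead of
// constructing (and re-authenticating) one per message.
var sqs = globalMap.get('sqsClient');
if (!sqs) {
    sqs = Packages.com.amazonaws.services.sqs.AmazonSQSClientBuilder.standard()
        .withRegion('us-east-1')   // example region
        .build();
    globalMap.put('sqsClient', sqs);
}

var queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'; // example URL
sqs.sendMessage(queueUrl, connectorMessage.getEncodedData());
```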