nextgenhealthcare / connect

The swiss army knife of healthcare integration.

[IDEA] Response Processor and Global Response Processor Scripts #6265

Open thorst opened 1 month ago

thorst commented 1 month ago

Is your feature request related to a problem? Please describe. We have most of our TCP/IP HL7 destinations set to not wait for the previous destination and also to queue always. Because of this, in the postprocessor or global postprocessor we do not have access to the sent date/time or the ACK in the response body. For situations like this it would be nice if there were another processor for responses, at both the channel and global levels. It would have access to all the same data as the normal postprocessor, plus the details from when a queued message was sent.

OR

If you cannot do that, because you would have to track when the message was sent, errored, or filtered on every destination, then you could just trigger it with information about the channel, connector, and message details. I'm not sure how things work internally. A global response processor would let you easily get the ACK and sent times, among other data points, and save them off or perform some other action.

This seems like a natural extension of the current features, giving users another injection point. Most importantly, it would be triggered even after a queued message was sent, which currently seems to be accessible only from the response transformer.

Describe your use case I need this feature so I can save the ACKs and sent times off to our message repository in an external database.

Describe the solution you'd like Add a "Response Processor" that is basically the same as the "Postprocessor", at both the channel and global levels.
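Purely as an illustration of what I'm imagining (none of this exists today; the variable names just mirror what the postprocessor and response transformer already expose):

```javascript
// Hypothetical channel-level "Response Processor" script - NOT an existing
// Mirth Connect feature. The idea is that it would run after a queued send
// completes, with the postprocessor's scope plus the send/ACK details.
var sendDate = connectorMessage.getSendDate();   // when the queued message was actually sent
var resp = connectorMessage.getResponse();       // the ACK returned by the receiving system
var ackBody = resp != null ? resp.getContent() : null;
var status = connectorMessage.getStatus();       // SENT, ERROR, FILTERED, ...

// ...save these off to an external repository or perform other actions here.
```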

Describe alternatives you've considered Until this feature is implemented, I'll have a code template that users call from the response transformer. This fulfills my needs but requires me to "touch" each interface instead of applying it at a global level.
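Roughly what that code template will look like (the function name, table, and connection details are placeholders, and the exact userutil methods, especially getSendDate() and getResponse(), should be double-checked against your Mirth version):

```javascript
// Code template function, called from each destination's response transformer.
// saveAckToRepo, MSG_REPO, and the JDBC settings are placeholders.
function saveAckToRepo(connectorMessage) {
    var resp = connectorMessage.getResponse();       // ACK stored on the connector message
    var ackBody = resp != null ? resp.getContent() : null;
    var sendDate = connectorMessage.getSendDate();   // when the queued message actually went out

    var dbConn;
    try {
        dbConn = DatabaseConnectionFactory.createDatabaseConnection(
            'com.mysql.cj.jdbc.Driver',              // placeholder driver/URL/credentials
            'jdbc:mysql://repo-host/message_repo',
            'mirth',
            'secret');

        var params = new java.util.ArrayList();
        params.add(connectorMessage.getChannelId());
        params.add(connectorMessage.getMessageId());
        params.add(sendDate != null ? sendDate.getTime() : null);  // Calendar -> java.util.Date
        params.add(ackBody);

        dbConn.executeUpdate(
            'INSERT INTO MSG_REPO (channel_id, message_id, send_date, ack) VALUES (?, ?, ?, ?)',
            params);
    } finally {
        if (dbConn) {
            dbConn.close();
        }
    }
}
```

Each destination's response transformer then just needs a single step that calls saveAckToRepo(connectorMessage), which is exactly the "touching" I'd like to avoid.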

jonbartels commented 1 month ago

For situations like this it would be nice if there were another processor for responses, at both the channel and global levels. It would have access to all the same data as the normal postprocessor, plus the details from when a queued message was sent.

https://docs.nextgen.com/bundle/Mirth_User_Guide_4_5_0/page/connect/connect/topics/c_Response_TransformersResponse_Transformers_connect_ug.html

thorst commented 1 month ago

Right, @jonbartels, that's where I'll be calling the code template from, but that requires me to touch every destination on every channel.

tonygermano commented 1 month ago

I think this issue might be relevant: https://github.com/nextgenhealthcare/connect/issues/4941. In effect, if that ticket were implemented, you could force your code template to run in the response transformer without needing to touch every destination.

thorst commented 1 month ago

Interesting. I don't use compiled code blocks, but on the surface it does seem like our two requests are different ways to accomplish the same thing. I will look into it to see where else I can use that feature.

Is that JS code like a code template, or is it Java that you include?

thorst commented 1 month ago

Ok, I looked at that, and I think that's a pretty obtuse way to implement what I'm asking, unless I'm misunderstanding it. It's not obvious to the user where that code is running from. As a user, I'd first check the destination response transformer, then my proposed response processor, and then my proposed global response processor.

The last place I'd think to look would be a compiled code block. It would also most likely need to be in its own library to ensure it runs on each channel, since which channels it runs on is set at the library level. That said, I don't think I would CHOOSE to use compiled code blocks for anything; I would most likely pick a different solution if possible. Not hating, I'm sure your use case makes sense.

tonygermano commented 1 month ago

It is a code template, but you don't need to call a function for it to execute.

https://docs.nextgen.com/bundle/Mirth_User_Guide_4_5_0/page/connect/connect/topics/c_Edit_Code_Template_Panel_connect_ug.html

Compiled Code Block: The template will be compiled in with scripts, but drag-and-drop will not be available at all. Use this to declare initial variables or anything else you want to have executed at the beginning of your scripts.

This use case is what they are designed for, and you can be very granular about where code templates run. You can specify which channels at the library level (so it can be global with exceptions) and which JavaScript context at the code template level (so you can tell it to only run in the response transformer). The only problem currently is that you still need to add a dummy step to every response transformer in order to get the script to run in the first place, and then you might as well just call a function.
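To be concrete, today each destination's response transformer still needs at least a one-line JavaScript step before anything from the library executes there, at which point calling a regular function template directly is just as easy (the function name here is only an example of whatever your template exposes):

```javascript
// One-line JavaScript step added to each destination's response transformer.
// Without a step like this, nothing scoped to this context runs at all;
// removing the need for it is what the ticket above would address.
saveAckToRepo(connectorMessage);
```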

I think if you named your library "Auto-send ACKs to repo," which you would see included in the channel resources, it wouldn't be too mysterious where the code was coming from. Probably less so than another global script, which gives you less control, especially considering code templates are an existing tool.

thorst commented 1 month ago

you would see included in the channel resources

Do you mean "Channel Dependencies"? I assume so. It's something we could get used to, but I don't really like that it could lead to a multitude of libraries with only one or a handful of templates underneath them. I know you don't want one library to house all your code templates, but there has to be a balance.

tonygermano commented 1 month ago

Do you mean "Channel Dependencies"? I assume.

Yes, that is what I meant.

I'm actually moving towards more libraries with only a handful of (or even one) templates underneath them, because it makes it possible to include in a channel only what I need, and I have a better idea of which templates are being used and where.

So far I have not created so many that they are difficult to manage.