Closed patstave closed 3 years ago
As you can see, that "TODO" is likely what is catching you out.
The thing I REALLY want to do with that node is what I did for the OMRON node: utilise the SID (Sequence ID) so that multiple in-flight transmissions are permitted without having to wait for the response from the PLC. However, only the 4E protocol supports this, and it is a lot of work.
Until I can find the time to improve this node, my only suggestion is to break that array up and fire the writes separately.
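For context, the sequence-ID bookkeeping could be sketched roughly like this (illustrative only, with hypothetical names; this is not the actual node-red-contrib-mcprotocol internals):

```javascript
// Illustrative sketch of 4E-frame Sequence-ID bookkeeping (hypothetical
// names, not the real node-red-contrib-mcprotocol implementation).
class InFlightTracker {
  constructor() {
    this.nextSid = 0;
    this.pending = new Map(); // sid -> response callback
  }
  // Register a request and return the SID to embed in the 4E frame.
  send(callback) {
    const sid = this.nextSid;
    this.nextSid = (this.nextSid + 1) & 0xffff; // wrap the 16-bit SID field
    this.pending.set(sid, callback);
    return sid;
  }
  // Match a PLC response back to its originating request by SID.
  receive(sid, payload) {
    const cb = this.pending.get(sid);
    if (!cb) return false; // unknown or duplicate/late response
    this.pending.delete(sid);
    cb(payload);
    return true;
  }
}
```

With something like this, several writes can be outstanding at once, and a late or duplicated UDP response is simply ignored instead of being mistaken for the reply to a newer request.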
Question... What are you writing - 4000 bools? 4000 ints? 4000 strings?
I'm doing a batch write of several arrays. These are the individual write nodes, that are all fired simultaneously:
So in total 3800 words in one go. I guess breaking each step up and running them in succession would probably help. The largest array, of 2000 elements, is of course the one that needs the most time, so probably close to 1 second for this one alone.
For now, I think that is one of two avenues.
One more question, if you don't mind: how often are you writing these values? (Is it a once-in-a-while recipe download, or highly frequent updates?)
If frequent updates, there may be a more intelligent way of sending only the changes (smaller amounts of data).
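The "send only changes" idea could look something like the sketch below (a hypothetical helper, not part of node-red-contrib-mcprotocol): diff the new array against the last-sent copy and group adjacent changed words into contiguous runs, so each run becomes one small write instead of re-sending the whole array.

```javascript
// Sketch: diff the new array against the previously sent copy and return
// contiguous runs of changed words. Each run becomes one small write.
// (Hypothetical helper, not part of node-red-contrib-mcprotocol.)
function changedRuns(prev, next) {
  const runs = [];
  let start = -1;
  for (let i = 0; i <= next.length; i++) {
    const changed = i < next.length && next[i] !== prev[i];
    if (changed && start === -1) start = i;   // a run of changes begins
    if (!changed && start !== -1) {           // the run just ended
      runs.push({ offset: start, values: next.slice(start, i) });
      start = -1;
    }
  }
  return runs;
}
```

For example, `changedRuns([1, 2, 3, 4], [1, 9, 9, 4])` yields a single run `{ offset: 1, values: [9, 9] }`, so only two words would be written instead of four.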
Thank you Steve. We are building this solution for Docker, which means all npm dependencies are installed as part of the build. For option 2, I think we might try forking the repo and modifying only this line (the timeout value). Isn't 1000 ms a bit short anyway, as a default value? Would you consider upping the default a little, to say 4-5 seconds?
The values are written recipe-style, so only on a "recipe changed" trigger (not frequently). By the way, does 1E/4E transmission impact performance in any way? For now we use 1E and UDP.
Hi again,
the impact of 4E over 3E or 1E is negligible - BUT worth it. 4E adds a Sequence ID, thus ensuring responses are correctly tied to requests. This is especially useful when using UDP.
As for 1000 ms being too short: typically, responses on a LAN are around 17 ms (for small UDP requests on a Q series), but anything that spans the MAX packet size (which depends on the PLC model) gets chunked into multiple requests.
As you are using Docker, I suppose I could add a quick change and somehow expose the timeout - but as this is a recipe-style (infrequent) download, I would definitely look at creating a slower, sequenced transfer in sizes of around 200 WDs per transaction.
Assuming a good connection with an above-average round trip of 100 ms, 3800 WDs at 200 per transmission should take less than 2 seconds: 3800 / 200 = 19 requests, and 19 * 100 ms = 1900 ms = 1.9 s.
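The sequenced transfer could be prepared in a Node-RED function node along these lines (a sketch under assumptions: D registers starting at `D0` and a 200-word chunk size are placeholders; the payload shape your write node expects may differ). Each chunk message would then be sent to the write node one at a time, waiting for each reply before sending the next.

```javascript
// Sketch: split a large word array into fixed-size chunks, each tagged
// with the device address it should be written to (assuming D registers
// starting at baseAddr - adjust for your actual device area).
function chunkWrites(values, baseAddr, chunkSize) {
  const chunks = [];
  for (let i = 0; i < values.length; i += chunkSize) {
    chunks.push({
      address: "D" + (baseAddr + i),       // e.g. "D0", "D200", ...
      values: values.slice(i, i + chunkSize),
    });
  }
  return chunks;
}
// 3800 words at 200 per transaction gives 19 chunks; at ~100 ms round
// trip that is roughly 1.9 s for the whole transfer.
```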
Thanks for the excellent help and input, Steve. Instead of forking the repository, we did an inline file edit in our Dockerfile:
COPY --from=build /usr/src/node-red/prod_node_modules ./node_modules
# Modify the hardcoded mcprotocol timeout value
RUN sed -i "s|node.busyTimeMax = 1000;|node.busyTimeMax = 10000;|g" node_modules/node-red-contrib-mcprotocol/nodes/read.js && \
sed -i "s|node.busyTimeMax = 1000;|node.busyTimeMax = 10000;|g" node_modules/node-red-contrib-mcprotocol/nodes/write.js && \
# Chown, install devtools & Clean up
chown -R node-red:root /usr/src/node-red && \
apt-get update && apt-get install -y build-essential python-dev python3 && \
rm -r /tmp/*
We will also look into breaking up the writes, but for now I think we have a workaround. Thanks for all your help.
Added timeout option in 1a1aed2 (will push to NPM soon)
We are writing some quite big arrays (more than 4000 elements). During high load I find that the write node times out. It seems the default write timeout is 1000 ms: https://github.com/Steve-Mcl/node-red-contrib-mcprotocol/blob/54571b28da9392f8a5adc4f72edd99c28adb0dc2/nodes/write.js#L48
Is there any way I can extend or specify the timeout from Node-RED? Or would this require modifications to the source code of this library?