flacohenao closed this issue 4 years ago
This doesn't look like it's caused by the DB not being available, but by an unexpected message format. Reading a non-existent memory area should return an error code from the PLC, which bubbles up as an "Object does not exist" message. In any case, such errors should never be able to crash the process.
Could you please reproduce the error again, but with the environment variable NODE_DEBUG set to nodes7? That will print a lot of debug messages with the received buffers, so we can investigate it better.
Also, what is the part number of the PLC you're connecting to?
I'm connecting to a 6ES7 315-2EH14-0AB0.
I'm sorry to ask... how do I set the env variable to run nodes7 in debug mode? I actually don't know how to do that.
I see you're running it on Windows. If you're running on the command prompt, you can do
set NODE_DEBUG=nodes7
and then just call node on the same prompt. If you are on WSL, you can just put it at the front of the command:
NODE_DEBUG=nodes7 node ...
Ok, there you go:
NODES7 22580: S7Connection connect
NODES7 22580: S7Serializer _transform
NODES7 22580: S7Serializer serialize
NODES7 22580: S7Serializer _serializeRequest
NODES7 22580: S7Serializer serialize <Buffer 32 01 00 00 00 00 00 08 00 00 f0 00 00 08 00 08 03 c0>
NODES7 22580: S7Parser _transform <Buffer 32 03 00 00 00 00 00 08 00 00 00 00 f0 00 00 08 00 08 03 c0>
NODES7 22580: S7Parser _parseResponseParameter 12 8
NODES7 22580: S7Connection _onIncomingData {
header: { type: 3, rid: 0, pduReference: 0, errorClass: 0, errorCode: 0 },
param: {
function: 240,
maxJobsCalling: 8,
maxJobsCalled: 8,
pduLength: 960
}
}
NODES7 22580: S7Connection _processQueue
NODES7 22580: S7Endpoint _onConnectionConnected
NODES7 22580: S7Endpoint getModuleIdentification
NODES7 22580: S7Endpoint getSSL 17 0
NODES7 22580: S7Connection sendUserData 4 1 9 17
NODES7 22580: S7Connection sendUserData loop 0
NODES7 22580: S7Connection sendRaw {
header: { type: 7 },
param: {
method: 17,
type: 4,
function: 4,
subfunction: 1,
sequenceNumber: 0
},
data: { returnCode: 255, transportSize: 9, payload: <Buffer 00 11 00 00> }
}
NODES7 22580: S7Connection _processQueue
NODES7 22580: S7Serializer _transform
NODES7 22580: S7Serializer serialize
NODES7 22580: S7Serializer _serializeUserData
NODES7 22580: S7Serializer serialize <Buffer 32 07 00 00 00 01 00 08 00 08 00 01 12 04 11 44 01 00 ff 09 00 04 00 11 00 00>
NODES7 22580: S7Parser _transform <Buffer 32 07 00 00 00 01 00 0c 00 7c 00 01 12 08 12 84 01 00 00 00 00 00 ff 09 00 78 00 11 00 00 00 1c 00 04 00 01 36 45 53 37 20 33 31 35 2d 32 45 48 31 34 ... 96 more bytes>
NODES7 22580: S7Parser _parseUserDataParameter 10 12
NODES7 22580: S7Parser _parseUserDataData 22 124 {
method: 18,
type: 8,
function: 4,
subfunction: 1,
sequenceNumber: 0,
dataUnitReference: 0,
hasMoreData: false,
errorCode: 0
}
NODES7 22580: S7Connection _onIncomingData {
header: { type: 7, rid: 0, pduReference: 1 },
param: {
method: 18,
type: 8,
function: 4,
subfunction: 1,
sequenceNumber: 0,
dataUnitReference: 0,
hasMoreData: false,
errorCode: 0
},
data: {
returnCode: 255,
transportSize: 9,
payload: <Buffer 00 11 00 00 00 1c 00 04 00 01 36 45 53 37 20 33 31 35 2d 32 45 48 31 34 2d 30 41 42 30 20 00 c0 00 04 00 01 00 06 36 45 53 37 20 33 31 35 2d 32 45 48 ... 70 more bytes>
}
}
NODES7 22580: S7Connection _processQueue
NODES7 22580: S7Connection sendUserData res <Buffer 00 11 00 00 00 1c 00 04 00 01 36 45 53 37 20 33 31 35 2d 32 45 48 31 34 2d 30 41 42 30 20 00 c0 00 04 00 01 00 06 36 45 53 37 20 33 31 35 2d 32 45 48 ... 70 more bytes>
NODES7 22580: S7Endpoint getComponentIdentification
NODES7 22580: S7Endpoint getSSL 28 0
NODES7 22580: S7Connection sendUserData 4 1 9 17
NODES7 22580: S7Connection sendUserData loop 0
NODES7 22580: S7Connection sendRaw {
header: { type: 7 },
param: {
method: 17,
type: 4,
function: 4,
subfunction: 1,
sequenceNumber: 0
},
data: { returnCode: 255, transportSize: 9, payload: <Buffer 00 1c 00 00> }
}
NODES7 22580: S7Connection _processQueue
NODES7 22580: S7Serializer _transform
NODES7 22580: S7Serializer serialize
NODES7 22580: S7Serializer _serializeUserData
NODES7 22580: S7Serializer serialize <Buffer 32 07 00 00 00 02 00 08 00 08 00 01 12 04 11 44 01 00 ff 09 00 04 00 1c 00 00>
NODES7 22580: S7Parser _transform <Buffer 32 07 00 00 00 02 00 0c 01 60 00 01 12 08 12 84 01 00 00 00 00 00 ff 09 01 5c 00 1c 00 00 00 22 00 0a 00 01 53 4e 41 50 37 2d 53 45 52 56 45 52 00 00 ... 324 more bytes>
NODES7 22580: S7Parser _parseUserDataParameter 10 12
NODES7 22580: S7Parser _parseUserDataData 22 352 {
method: 18,
type: 8,
function: 4,
subfunction: 1,
sequenceNumber: 0,
dataUnitReference: 0,
hasMoreData: false,
errorCode: 0
}
NODES7 22580: S7Connection _onIncomingData {
header: { type: 7, rid: 0, pduReference: 2 },
param: {
method: 18,
type: 8,
function: 4,
subfunction: 1,
sequenceNumber: 0,
dataUnitReference: 0,
hasMoreData: false,
errorCode: 0
},
data: {
returnCode: 255,
transportSize: 9,
payload: <Buffer 00 1c 00 00 00 22 00 0a 00 01 53 4e 41 50 37 2d 53 45 52 56 45 52 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 02 43 50 55 20 33 31 ... 298 more bytes>
}
}
NODES7 22580: S7Connection _processQueue
NODES7 22580: S7Connection sendUserData res <Buffer 00 1c 00 00 00 22 00 0a 00 01 53 4e 41 50 37 2d 53 45 52 56 45 52 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 02 43 50 55 20 33 31 ... 298 more bytes>
NODES7 22580: new S7ItemGroup
NODES7 22580: S7ItemGroup _initParams
NODES7 22580: S7ItemGroup addItems [ 'DB1,X0.0', 'DB2,X0.1' ]
NODES7 22580: S7ItemGroup addItems item DB1,X0.0
NODES7 22580: new S7Item DB1,X0.0 DB1,X0.0 undefined
NODES7 22580: S7Item parseAddress_NodeS7 DB1,X0.0
NODES7 22580: S7Item parseAddress_NodeS7 match [
'DB1,X0.0',
'1',
'X',
'0',
'0',
undefined,
index: 0,
input: 'DB1,X0.0',
groups: undefined
]
NODES7 22580: S7Item parseAddress_NodeS7 result {
addrtype: 'DB',
datatype: 'X',
dtypelen: 1,
offset: 0,
bitOffset: 0,
arrayLength: 1,
dbNumber: 1,
readTransportCode: 2,
writeTransportCode: 3,
areaCode: 132,
byteLength: 1,
byteLengthWrite: 1,
byteLengthWithFill: 2
}
NODES7 22580: S7ItemGroup addItems item DB2,X0.1
NODES7 22580: new S7Item DB2,X0.1 DB2,X0.1 undefined
NODES7 22580: S7Item parseAddress_NodeS7 DB2,X0.1
NODES7 22580: S7Item parseAddress_NodeS7 match [
'DB2,X0.1',
'2',
'X',
'0',
'1',
undefined,
index: 0,
input: 'DB2,X0.1',
groups: undefined
]
NODES7 22580: S7Item parseAddress_NodeS7 result {
addrtype: 'DB',
datatype: 'X',
dtypelen: 1,
offset: 0,
bitOffset: 1,
arrayLength: 1,
dbNumber: 2,
readTransportCode: 2,
writeTransportCode: 3,
areaCode: 132,
byteLength: 1,
byteLengthWrite: 1,
byteLengthWithFill: 2
}
NODES7 22580: S7ItemGroup _invalidateReadPackets
NODES7 22580: S7ItemGroup readAllItems
NODES7 22580: S7ItemGroup _prepareReadPackets
NODES7 22580: S7ItemGroup _prepareReadPackets maxPayloadSize 942
NODES7 22580: S7ItemGroup _prepareReadPackets item-new-packet S7Item DB1,X0.0:[DB1,X0.0]
NODES7 22580: S7ItemGroup _prepareReadPackets item-new-packet complete
NODES7 22580: S7ItemGroup _isOptimizable false
NODES7 22580: S7ItemGroup _prepareReadPackets item-new-part S7Item DB2,X0.1:[DB2,X0.1]
NODES7 22580: S7ItemGroup _prepareReadPackets item-new-part complete
NODES7 22580: S7ItemGroup _prepareReadPackets pkt # 0
NODES7 22580: S7ItemGroup _prepareReadPackets part # 0 0 132 1 0 1
NODES7 22580: S7ItemGroup _prepareReadPackets item # 0 0 0 S7Item DB1,X0.0:[DB1,X0.0]
NODES7 22580: S7Item _getCopyBufferOffsets 0 1
NODES7 22580: S7Item _getCopyBufferOffsets positions 0 1 0 1
NODES7 22580: S7Item _getCopyBufferOffsets result 0 0 1
NODES7 22580: S7ItemGroup _prepareReadPackets part # 0 1 132 2 0 1
NODES7 22580: S7ItemGroup _prepareReadPackets item # 0 1 0 S7Item DB2,X0.1:[DB2,X0.1]
NODES7 22580: S7Item _getCopyBufferOffsets 0 1
NODES7 22580: S7Item _getCopyBufferOffsets positions 0 1 0 1
NODES7 22580: S7Item _getCopyBufferOffsets result 0 0 1
NODES7 22580: S7ItemGroup _prepareReadPackets pkt # 0 36 24
NODES7 22580: S7ItemGroup readAllItems requests [
[
{
items: [Array],
offsets: [Array],
area: 132,
db: 1,
transport: 2,
address: 0,
length: 1
},
{
items: [Array],
offsets: [Array],
area: 132,
db: 2,
transport: 2,
address: 0,
length: 1
}
]
]
NODES7 22580: S7Endpoint readVars [
{
items: [ [S7Item] ],
offsets: [ [Object] ],
area: 132,
db: 1,
transport: 2,
address: 0,
length: 1
},
{
items: [ [S7Item] ],
offsets: [ [Object] ],
area: 132,
db: 2,
transport: 2,
address: 0,
length: 1
}
]
NODES7 22580: S7Connection requestReadVars 2
NODES7 22580: S7Connection sendRaw {
header: { type: 1 },
param: { function: 4, items: [ [Object], [Object] ] }
}
NODES7 22580: S7Connection _processQueue
NODES7 22580: S7Serializer _transform
NODES7 22580: S7Serializer serialize
NODES7 22580: S7Serializer _serializeRequest
NODES7 22580: S7Serializer serialize <Buffer 32 01 00 00 00 03 00 1a 00 00 04 02 12 0a 10 02 00 01 00 01 84 00 00 00 12 0a 10 02 00 01 00 02 84 00 00 00>
NODES7 22580: S7Parser _transform <Buffer 32 03 00 00 00 03 00 02 00 09 00 00 04 02 0a 00 00 04 ff 04 00 08 02>
NODES7 22580: S7Parser _parseResponseParameter 12 2
NODES7 22580: S7Parser _parseResponseData 14 9 { function: 4, itemCount: 2 }
internal/buffer.js:76
throw new ERR_OUT_OF_RANGE(type || 'offset',
^
RangeError [ERR_OUT_OF_RANGE]: The value of "offset" is out of range. It must be >= 0 and <= 22. Received 23
at boundsError (internal/buffer.js:76:9)
at Buffer.readUInt8 (internal/buffer.js:242:5)
at S7Parser._parseResponseData (C:\Users\ACER\Documents\NODE PROJECTS\tot-bend\node_modules\nodes7\src\s7protocol\s7parser.js:308:47)
at S7Parser._transform (C:\Users\ACER\Documents\NODE PROJECTS\tot-bend\node_modules\nodes7\src\s7protocol\s7parser.js:465:37)
at S7Parser.Transform._read (_stream_transform.js:191:10)
at S7Parser.Transform._write (_stream_transform.js:179:12)
at doWrite (_stream_writable.js:441:12)
at writeOrBuffer (_stream_writable.js:425:5)
at S7Parser.Writable.write (_stream_writable.js:316:11)
at ISOOnTCPClient.ondata (_stream_readable.js:714:22)
at ISOOnTCPClient.emit (events.js:311:20)
at ISOOnTCPClient.EventEmitter.emit (domain.js:482:12)
at addChunk (_stream_readable.js:294:12)
at readableAddChunk (_stream_readable.js:275:11)
at ISOOnTCPClient.Readable.push (_stream_readable.js:209:10)
at ISOOnTCPClient._incomingData (C:\Users\ACER\Documents\NODE PROJECTS\tot-bend\node_modules\iso-on-tcp\src\client.js:199:26)
at ISOOnTCPParser.<anonymous> (C:\Users\ACER\Documents\NODE PROJECTS\tot-bend\node_modules\iso-on-tcp\src\client.js:81:43)
at ISOOnTCPParser.emit (events.js:311:20)
at ISOOnTCPParser.EventEmitter.emit (domain.js:482:12)
at addChunk (_stream_readable.js:294:12)
at readableAddChunk (_stream_readable.js:275:11)
at ISOOnTCPParser.Readable.push (_stream_readable.js:209:10) {
code: 'ERR_OUT_OF_RANGE'
}
Very weird, but it does indeed look like we're receiving a malformed packet from the PLC. It is returning the length of a nonexistent field!
This is the response sent by two different PLCs I have here (different addresses, but the same response is expected):
0a 00 0000
ff 04 0008 00
and this is the response from your logs
0a 00 0004 <-(!!!)
ff 04 0008 02
Note that on the first line, the 4-digit value represents the length of the data that follows. In your case it says 4 bits follow, but none are actually there! Maybe the PLC has an old firmware?
Taking a look at the master branch, the code would also bail out on this as a malformed packet... but it doesn't externalize the cause anyway.
For sure I'll add a check that we have enough bytes before parsing each element; that's the first fix. But I'm also going to check if there's any workaround for this weird behavior of your PLC... after all, if it's happening to you, it may happen to others as well.
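Roughly, what I have in mind for that first fix is something along these lines (just a sketch, not the actual parser code; the field layout and the bit/byte handling are simplified):

// Sketch only: parse one "read var" response item, bailing out cleanly if the
// buffer is shorter than what the header announces, instead of letting
// readUInt8() throw ERR_OUT_OF_RANGE.
// Assumed layout: returnCode (1 byte), transportSize (1 byte),
// dataLength (2 bytes, big-endian), then the data itself.
function parseItemSafely(buf, offset) {
    if (buf.length - offset < 4) {
        return { error: new Error('malformed packet: item header truncated') };
    }
    const returnCode = buf.readUInt8(offset);
    const transportSize = buf.readUInt8(offset + 1);
    let dataLength = buf.readUInt16BE(offset + 2);
    // for the bit/byte transport sizes the length field is usually given in bits
    if (transportSize === 0x03 || transportSize === 0x04 || transportSize === 0x05) {
        dataLength = Math.ceil(dataLength / 8);
    }
    if (buf.length - offset - 4 < dataLength) {
        return { error: new Error('malformed packet: not enough bytes for the item data') };
    }
    return {
        returnCode,
        transportSize,
        data: buf.slice(offset + 4, offset + 4 + dataLength)
    };
}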
For sure! Maybe I can update its firmware and try again!
Let me know when the first fix is done!
Thank you!!
Before updating the firmware, let me try fixing it without breaking anything else... there may be PLCs in the wild whose firmware we won't be able to update, so being able to handle this properly would be very good.
So, I've come up with a workaround for this: if there's an error code and no transport code is set, we shouldn't be expecting data anyway, so we ignore the length field (which should be 0 anyway, but apparently sometimes isn't).
The existing tests pass, and I've added new ones covering this case... testing with real PLCs seems to be OK too, so it should be fine.
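In essence, the workaround boils down to something like this (simplified sketch, not the literal patch):

// Simplified sketch of the workaround: 0xff is the "success" return code; for
// any other return code combined with transport size 0 there can be no data,
// so the (bogus) length field is ignored and only the 4-byte item header is
// consumed.
function itemByteSize(returnCode, transportSize, dataLengthInBytes) {
    if (returnCode !== 0xff && transportSize === 0x00) {
        return 4;                        // header only, ignore the length field
    }
    return 4 + dataLengthInBytes;        // header plus the advertised payload
}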
I see it's no longer killing the whole process!! I also see that the connection is no longer being dropped...
Now, I did the same test:
itemGroup.addItems(["DB1,X0.0", "DB2,X0.1"]);
itemGroup.readAllItems()
    .then(result => {
        console.log('result: ', result);
    })
    .catch(err => {
        console.log('ERRORS: ', err);
    });
And this is the error I'm getting:
Error: Error returned from request of Area [undefined] DB [undefined] Addr [undefined] Len [undefined]: "Object does not exist"
at S7ItemGroup.readAllItems (C:\Users\ACER\Documents\NODE PROJECTS\tot-bend\node_modules\nodes7\src\s7itemGroup.js:573:27)
Apart from the message (with proper values), is there any way to define a code property on the error with the name of the failing variable?
The thing is... now the connection is not being dropped, but on these kinds of errors we need to clear the interval (clearInterval()) on which we are scanning... if not, the errors will keep appearing every X milliseconds, as defined by our scanning interval.
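Just to illustrate what I mean, my scan loop looks roughly like this (simplified, using the same itemGroup as in the snippet above):

// Simplified version of my scan: on this kind of error the only thing I can do
// today is stop the whole scan, because every following readAllItems() call
// rejects with the same error.
const scanTimer = setInterval(() => {
    itemGroup.readAllItems()
        .then(result => console.log('result: ', result))
        .catch(err => {
            console.log('ERRORS: ', err);
            clearInterval(scanTimer);   // otherwise the same error repeats every cycle
        });
}, 1000);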
So, in order to let the user update the area of the variable, we need to tell them what's happening, because none of the other items are going to be read...
I don't know what's better... closing the connection or what...
Is there really no chance to ignore that specific variable and continue with the reading? I mean, in #105 we discussed the reason for not putting a null value on the incorrect properties, and that was fine, because we were talking about addItems(), which can be handled differently... but now, for the reading, if the user puts in a variable that is syntactically right but whose area doesn't exist, none of the items are going to be read.
So it's either stop reading and reset all the items (or remove that specific one, which right now we can't even identify because of the error), or just drop the connection to indicate to the user that something is wrong, instead of ignoring that particular variable...
I'd say... maybe setting the value of that one to null would be good... because otherwise handling that error is going to be madness... OR am I skipping something here? Is there something that maybe I'm not seeing?
Let me know...
I think I may need to update the firmware of my PLC... that error is very weird....
So, let's organize this discussion into the following topics, so we can make actionable decisions:
That "Object does not exist" is exactly what we're expecting to see here when we try to read from items that do not exist on the PLC. I agree that we should expose the error code too, so one can take decisions from the code based on the exact error that happened. Somewhat easy to do on errors from the PLC, but ideally we need to do this on every throw
on the code. I'll open a new issue for this so we can track and discuss separately.
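Just to sketch the direction (a hypothetical shape, the details belong in that new issue):

// Hypothetical shape, to be discussed in the separate issue: an Error subclass
// that carries a machine-readable code alongside the human-readable message.
class NodeS7Error extends Error {
    constructor(code, message, info) {
        super(message);
        this.name = 'NodeS7Error';
        this.code = code;   // e.g. the S7 return code 0x0a for "Object does not exist"
        this.info = info;   // e.g. the request (area/DB/address/length) that failed
    }
}

// throw new NodeS7Error(0x0a, 'Object does not exist', { area: 0x84, db: 1, address: 0, length: 1 });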
There's another issue there: those "undefined"s in the message should be populated with the request item that failed; we need to fix that too. We don't know the exact item that has the problem (more on that later), but we should print the information about the request we've made.
- Ignoring variables that don't exist
The issue with ignoring offending variables is that we don't have an easy mapping of request -> variable, because of all the optimization going on. Some variables may span multiple requests, and some requests may embed multiple variables. Even if we figure this out, we may ignore (e.g. set to null) variables that would otherwise have a value, just because the whole request has an error. Let's take an example.
Suppose the PLC has a DB 10 that is 4 bytes long. That is, reading them as bits, you could read DB10,X0.0 to DB10,X3.7.
1. Suppose we only request DB10,BYTE3. We'll prepare one request reading one byte from address 3, and in the response we get the requested byte. So far, so good.
2. Now let's add DB10,WORD0 to the list, so that we have two items. Here the optimizer kicks in, and instead of two requests we make one request reading four bytes from address 0, and upon receiving a response with four bytes, we extract both items from the answer.
3. Now, if we add for example DB10,BYTE5, the optimizer will do its job and still create a single request reading five bytes from address zero (the optimizer knows nothing about what exists in the PLC, after all). The response from the PLC will have no data and an error code indicating that the required area is not reachable. We'd have no option but to set everything to null anyway, even knowing that the first two items exist in the PLC and have a valid value.
You could disable the optimization altogether, but then there's the case of items that span multiple requests, which needs special care too. And anyway, if that's the case, one could request the areas manually by reading each S7Item individually.
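To make the merging behavior above concrete, here's a rough illustration (not the real optimizer code):

// Rough illustration of the merging described above: items on the same DB are
// combined into one read covering the whole span, so a single out-of-range item
// makes the PLC reject the whole span.
function mergeItems(items) {            // items: [{ offset, byteLength }]
    const start = Math.min(...items.map(i => i.offset));
    const end = Math.max(...items.map(i => i.offset + i.byteLength));
    return { address: start, length: end - start };
}

// DB10,WORD0 -> { offset: 0, byteLength: 2 }; DB10,BYTE3 -> { offset: 3, byteLength: 1 }
// mergeItems([...]) => { address: 0, length: 4 }   // one request for bytes 0..3
// An item past the end of the DB stretches that single request beyond the DB,
// and the resulting error then applies to every item inside it.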
We could think about something configurable that would set items to null when errors happen, but keeping in mind that it would affect other variables as well. It would take a bit more effort to model and implement too, as a bit more state about valid memory areas needs to be kept in memory, we'd need to think about other error cases too, and so on.
- Handling errors of readAllItems()
Keep in mind that all errors from this function may be transient. Imagine you have everything set up and working, and somebody needs to change something on the PLC and deletes a DB. Or maybe you want to set up a variable that somebody is working on right now and will make available in a few minutes. It would start with errors, but as soon as the variable is made available, it would start reading normally. The same goes for transient connection problems. There may be a failure on one read, and by the next request everything is ok.
The point is, you don't need to drop the connection or do anything else on such errors (unless you want to), because they may "fix themselves". By handling the exception you can still inform the user somehow and keep the connection open, or drop everything and request an action... in the end it's up to you how you want to design your system :)
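For example, something as simple as this keeps the scan and the connection alive across transient errors (updateUi() and notifyUser() are just placeholders for whatever fits your system):

// Keep scanning on errors: surface them, but don't drop the connection, so the
// read recovers by itself once the variable/DB exists again.
setInterval(() => {
    itemGroup.readAllItems()
        .then(values => updateUi(values))          // placeholder
        .catch(err => notifyUser(err.message));    // placeholder -- no disconnect, no rethrow
}, 1000);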
I think I may need to update the firmware of my PLC... that error is very weird....
Just to be clear, from what you're describing I see no weird behavior there... so there's no need to update the firmware because of this... though it's always good to keep your equipment updated :)
Ok, I'll throw in some comments first and then a little summary!!
- Handling errors of readAllItems() Keep in mind that all errors from this function may be transient. Imagine you have everything set up and working, and somebody needs to change something on the PLC and deletes a DB. Or maybe you want to set up a variable that somebody is working on right now and will make available in a few minutes. It would start with errors, but as soon as the variable is made available, it would start reading normally. The same goes for transient connection problems. There may be a failure on one read, and by the next request everything is ok.
I see what you're saying... basically, that answers the reason why:
- Ignoring variables that don't exist
may not be a good idea. Although, if you think about it, disabling the optimization wouldn't be a great option either, since you'd lose the advantage of what nodeS7 offers. Also, reading every S7Item one by one, even if it's an option, still doesn't fit the great philosophy that the optimizer brings...
So... in my opinion, this all reduces to:
- Error message and error code
Basically, the caller can do whatever they want when handling the errors from readAllItems(), keeping in mind that those errors may be transient.
Then, in the end, it all comes down to just adding the error code to the error.
But let's consider, especially for the "maybe transient" errors on readAllItems(), adding an easy way to get the failing property, so that in the case of e.g. an undefined area, where none of the items attached to the group can be read, we can easily tell the user what's happening and the reason the other items aren't being read (when reading them as a group).
Consider, for example, the many possible transient errors that may be thrown. If we add the code and a proper reason (a technical reason, of course), it would be way easier not only to understand what's happening as developers, but also, for example, to deliver proper messages to the end user (non-technical users) and to do i18n.
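Something like this is what I have in mind (the codes and texts here are made up, just to show the idea):

// With a stable code on the error, mapping it to a non-technical, translatable
// message becomes trivial. Codes and messages below are made up for illustration.
const userMessages = {
    OBJECT_DOES_NOT_EXIST: { en: 'The data block does not exist on the PLC', es: 'El bloque de datos no existe en el PLC' },
    CONNECTION_LOST:       { en: 'The connection to the PLC was lost',       es: 'Se perdió la conexión con el PLC' }
};

function translateError(err, lang = 'en') {
    const entry = userMessages[err.code];
    return entry ? entry[lang] : err.message;   // fall back to the technical message
}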
So... let's not model something complicated like what you explained here:
We could think about something configurable that would set items to null when errors happen, but keeping in mind that it would affect other variables as well. It would take a bit more effort to model and implement too, as a bit more state about valid memory areas needs to be kept in memory, we'd need to think about other error cases too, and so on.
because actually, the way it's designed is very good...
So, do you think it would be OK to close this issue and move the discussion to #110?
I have to say I'd really like to be able to just ignore the invalid items, even for the Node-RED node use case. But I haven't been able to come up with a good solution for this, except reading all items individually before adding them to the reading list (and I think that's what I'll do there in the future).
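Sketched out, that idea looks more or less like this (assuming a small helper that creates a fresh one-item S7ItemGroup on the same, already-connected endpoint):

// Probe each address with its own one-item group before adding it to the real
// group, so a single bad address cannot poison the optimized batch read.
async function addOnlyReadableItems(mainGroup, makeProbeGroup, addresses) {
    const readable = [];
    for (const addr of addresses) {
        const probe = makeProbeGroup();
        probe.addItems([addr]);
        try {
            await probe.readAllItems();             // rejects e.g. with "Object does not exist"
            readable.push(addr);
        } catch (err) {
            console.warn(`Skipping ${addr}: ${err.message}`);
        }
    }
    mainGroup.addItems(readable);
    return readable;
}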
The main issue of this topic is closed, so we can focus on the above mentioned one for the error codes. Thanks for the great discussion!
So... this is the case:
This time, I have DB1,X0.0 (NOT a defined area) and DB2,X0.1 (a well defined and readable area).
The error that comes up here is more of a "lower level" error. Here is the trace:
It has to do with the iso-on-tcp module... I think it's basically the same... we can manage that as mentioned in #105.
I think it's mandatory that this error be fixed, or handled by us (the users of nodeS7), because in this case the error just kills the Node.js process.
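From what I can see in the trace, the exception is thrown deep inside the stream/event pipeline with nothing around it to catch it, so it becomes an uncaught exception; a generic illustration of that mechanism (nothing nodes7-specific):

// Generic Node.js illustration: an exception thrown inside an event handler,
// with no try/catch around it, becomes an uncaught exception and terminates
// the process -- which is what the parser error does in the stream pipeline.
const { EventEmitter } = require('events');

const socketLike = new EventEmitter();
socketLike.on('data', () => {
    throw new RangeError('offset out of range');   // stands in for the parser error
});
socketLike.emit('data', Buffer.from([0x32]));      // process exits with an uncaught exception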