apache / plc4x

PLC4X The Industrial IoT adapter
https://plc4x.apache.org/
Apache License 2.0

[Bug]: Reading Datablock got a Error Class: 133, Error Code 0. on Siemens 1200 using S7 #1734

Closed lzago closed 2 months ago

lzago commented 2 months ago

What happened?

PLC Model: Siemens 1200. I can read other datablocks, but I get an error when I try to read the block %DB52:0:BYTE[3000]

I am trying to read it, building a simple read request with such tag:

readRequestBuilder.addTagAddress("DB52_bytes", "%DB52:0:BYTE[3000]");
PlcReadRequest readRequest = readRequestBuilder.build();
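
For context, the tag address used here follows the S7 array syntax %DB&lt;block&gt;:&lt;byte-offset&gt;:&lt;TYPE&gt;[&lt;count&gt;]. A minimal sketch of that pattern (the S7TagParser helper below is hypothetical, purely for illustration; PLC4X does its own tag parsing internally):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper, only to illustrate the address syntax used above;
// not part of the PLC4X API.
class S7TagParser {
    private static final Pattern TAG =
            Pattern.compile("%DB(\\d+):(\\d+):([A-Z]+)\\[(\\d+)\\]");

    // Returns {dbNumber, byteOffset, elementCount} for an array tag address.
    static int[] parse(String address) {
        Matcher m = TAG.matcher(address);
        if (!m.matches()) {
            throw new IllegalArgumentException("Not an S7 array tag: " + address);
        }
        return new int[]{
                Integer.parseInt(m.group(1)),  // data block number
                Integer.parseInt(m.group(2)),  // start byte offset
                Integer.parseInt(m.group(4))   // element count
        };
    }
}
```

So %DB52:0:BYTE[3000] asks for 3000 bytes starting at offset 0 of data block 52.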

The block is perfectly readable with the Snap7 C# version and the Moka7 Java version. I attach the Wireshark dump.

Version

v0.12.0


chrisdutz commented 2 months ago

No S7 device has a PDU size allowing it to transport this much data ... this has been brought up several times before: the current optimizer is able to auto-split a request with many fields into multiple individual requests, but it's not able to rewrite a single field that exceeds the max PDU size of the PLC.

This would require a significant effort to rewrite and extend the existing Optimizer ... work nobody else here has volunteered to do. I would be willing to do it, but not without a significant donation ($400+).

glcj commented 2 months ago

Hello,

Thanks for your information,

As was pointed out, this amount of data is split up by the optimization routine because of the limitations of the PDU size. For an S7-1200, that would be between 11 and 12 requests to move that amount of data.

If you can capture the traces with SNAP7, it would be very useful and help us with this improvement.

On a separate note, you should filter the captures in Wireshark, since they contain traces that do not apply.

Best regards,

chrisdutz commented 2 months ago

Yeah ... I would also like to see a recording of that ... because Mokka7 doesn't respect the PDU size at all ... they do hard-coded checks like "num of items <= 20", which is too big for an S7-1200 and way too small for an S7-1500. And having a look at the C# implementation I see the same hard-coded limitations. Possibly the driver auto-shrinks the request to the max possible (or a hard-coded limit) and doesn't return the full 3000 bytes. But I'll be the first to admit I was wrong if a recording shows otherwise.

I think it really depends on the type of data you are reading ... when reading bytes or bits, it's I think 12 for an S7-1200 ... however, if there's a string in it, that already might be too much. The current driver keeps adding items until adding the next one would exceed the size limit in the request or in the expected response; as soon as it would exceed it, the driver creates a new message and keeps adding items until that one is also full.

Unfortunately adding one item that already exceeds is something the driver currently can't handle. In the old version of the driver (prior to the mspec versions) I had some hard-coded code in there to rewrite items and join them later. I never ported that back as I was expecting a more general purpose optimizer, but I think that's not going to come. So if anyone wants to get his hands dirty, that old code could help.
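
The packing strategy described above can be sketched as follows (a simplified illustration, not the actual PLC4X optimizer code; item sizes and the PDU budget are abstract byte counts):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the greedy packing described above, not the real
// PLC4X optimizer: items are added to the current request until the next
// one would exceed the PDU budget, then a new request is started.
class RequestPacker {
    static List<List<Integer>> packItems(List<Integer> itemSizes, int pduBudget) {
        List<List<Integer>> requests = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        int used = 0;
        for (int size : itemSizes) {
            if (!current.isEmpty() && used + size > pduBudget) {
                requests.add(current);   // current request is full
                current = new ArrayList<>();
                used = 0;
            }
            // Note: a single item larger than pduBudget still lands in a
            // request of its own here -- that is exactly the case the
            // driver could not handle before this issue was addressed.
            current.add(size);
            used += size;
        }
        if (!current.isEmpty()) {
            requests.add(current);
        }
        return requests;
    }
}
```

The sketch makes the limitation visible: packing decides which whole items go into which request, but it never splits one oversized item into smaller reads.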

lzago commented 2 months ago

Hi @chrisdutz, they are bytes. I read the area of bytes, and then on the client I map chunks of bytes from the buffer to the values, real or int. I need to read the PLC once per second, so it would be too slow to read more than 1000 tags individually in a multi-var approach. I tried. I have captured the reading with Mokka7. I attach the dump, in case it helps. Here I am reading 3838 bytes:

20082024_s1200_bigchuck_bug4.pcapng.pcapng.gz

chrisdutz commented 2 months ago

Interesting ... so it seems to automatically split things up ... but well ... what should I say: I said that we haven't implemented that feature yet ... you could manually do what Mokka7 does and read chunks of 222 bytes ... that would be 5 requests every second, which should be doable. Or, if you'd like to see the feature in PLC4X: invest the time yourself or find someone to do it for you (I did offer to do it, in return for a donation) ... I am currently constantly fighting to fix issues in the 4h per week which I am given, with no chance to tackle something bigger in that time (unless someone else starts working on open issues).
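
The manual workaround suggested here (reading the block in fixed-size chunks) can be sketched like this, assuming the 222 usable payload bytes per request mentioned above; the splitTag helper itself is hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper sketching the manual chunking workaround: split one
// oversized BYTE[] tag into several tag addresses that each fit in a PDU.
class TagChunker {
    static List<String> splitTag(int dbNumber, int startOffset, int byteCount, int maxPayload) {
        List<String> chunks = new ArrayList<>();
        for (int offset = startOffset; offset < startOffset + byteCount; offset += maxPayload) {
            // Last chunk may be shorter than maxPayload.
            int len = Math.min(maxPayload, startOffset + byteCount - offset);
            chunks.add(String.format("%%DB%d:%d:BYTE[%d]", dbNumber, offset, len));
        }
        return chunks;
    }
}
```

Each resulting address can then be added to its own read request via addTagAddress, and the answers reassembled on the client side in offset order.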

glcj commented 2 months ago

Thanks for the pcaps, I am working on a similar functionality and require more or less the same amount of data read consecutively to process it. This capture gives us a good indication of the response time (200 ms) for almost 4 KB of data.

I will look for a time window to work on it, but if you can make progress, I will gladly support you with the tests.

Kind regards,

chrisdutz commented 2 months ago

Ok ... so in the C# code I think I found where they split up the array:

(screenshot of the C# splitting code)

However, it still seems that when requesting multiple items they have a hard-coded limit of 20 items.

chrisdutz commented 2 months ago

Ok ... I pushed some changes that should enable your large array to be read ... please test.

And if you liked it, you can consider leaving a tip here: https://buymeacoffee.com/christoferu ;-)

lzago commented 2 months ago

@chrisdutz thanks a lot, I owe you more than a coffee. I noticed that the data structure returned in the ReadResponse is not a PlcList. I can see in the debugger that the data is there, but readResponse.getPlcValue(tag) instanceof PlcList returns false, so I think even the standard ManualTest should fail.

glcj commented 2 months ago

Hello,

I tested the routine with different devices to validate its operation and with different lengths.

For requests that fit within a single PDU the routine effectively returns a PlcList; for long sections it returns a PlcStruct.

I show you the processing step by step:

PlcStruct plcStruct = (PlcStruct) response.getAsPlcValue();
PlcValue plcValue = plcStruct.getValue("TEST");
List<? extends PlcValue> plcValues = plcValue.getList();

ByteBuf buffer = Unpooled.buffer(plcValues.size());

plcValues.stream().forEach(b -> buffer.writeByte(b.getByte()));

logger.info("It's a list... \r\n" + ByteBufUtil.prettyHexDump(buffer));

It works for me in all cases.

chrisdutz commented 2 months ago

Aaaah... I think we changed something a while ago about returning arrays of bytes... I tested it with uint, as I wanted more elements than fit into a byte when counting them. Will have another look on Friday.

chrisdutz commented 2 months ago

Sooooo ... even when reading an array of 4000 bytes I'm getting a list of 4000 bytes. And even if I use a smaller array it never returns our specialized PlcRawByteArray. Even if this might be an optimization, right now the S7 driver doesn't use that and I am not seeing the issues you are having.

(screenshot of the read response)

My code I'm using for testing is this:

package org.apache.plc4x.java.s7.readwrite;

import org.apache.plc4x.java.DefaultPlcDriverManager;
import org.apache.plc4x.java.api.PlcConnection;
import org.apache.plc4x.java.api.messages.PlcReadRequest;
import org.apache.plc4x.java.api.messages.PlcReadResponse;

public class DatatypesTest {
    public static void main(String[] args) throws Exception {
        try (PlcConnection connection = new DefaultPlcDriverManager().getConnection("s7://192.168.24.83")) {
            final PlcReadRequest.Builder builder = connection.readRequestBuilder();
            builder.addTagAddress("huge-array", "%DB2:0:BYTE[4000]");
            final PlcReadRequest readRequest = builder.build();
            final PlcReadResponse readResponse = readRequest.execute().get();
            System.out.println(readResponse);
        }
    }
}
lzago commented 2 months ago

(screenshot) The code is very similar, but I am doing a readResponse.getPlcValue(tag) instanceof PlcList check.

It's similar to the test code in org.apache.plc4x.test.manual.ManualTest.

chrisdutz commented 2 months ago

Please try the updated version which is in the repo right now (I hope the CI/CD will pass and we'll have SNAPSHOTS soon); otherwise pull the latest changes and build locally ... you should now get a PlcRawByteArray where you can directly call "getRaw()" and access the real byte[].

lzago commented 2 months ago

@chrisdutz it works like a charm. With the very latest version it can read the byte[], and all the data blocks now return PlcRawByteArray consistently, so no if-cases are needed for large or short arrays. For me the issue can be closed.

chrisdutz commented 2 months ago

Perfect ;)

And thanks for the donation ... highly appreciated ;-)