jcurl / RJCP.DLL.SerialPortStream

SerialPortStream is an independent implementation of System.IO.Ports.SerialPort and SerialStream, for better reliability and maintainability. The default branch is 2.x, which now supports Mono with the help of a C library.
Microsoft Public License

ReadAsync and ReadTimeout #83

Closed nicolasr75 closed 5 years ago

nicolasr75 commented 5 years ago

I know the ReadAsync method is not provided by your code, and my question is probably not an issue with your code either, but I ask nevertheless since this library is so useful for running software cross-platform, e.g. on Raspberry Pis, and many people may encounter the same issue.

My software runs on a Raspberry Pi with Linux and reads data from a device. For every request the number of bytes expected is known in advance, and I pass this to ReadAsync.

The device is connected via a virtual serial port over USB. Now and then ReadAsync returns fewer bytes than requested. For example, I request 127 bytes and the USB driver (I guess it's the USB driver?) splits the response into two packets of 124 and 3 bytes respectively.

My original understanding was that the ReadTimeout property lets me control how long to wait before ReadAsync returns (thereby concatenating the packets), but that does not seem to be the case. Microsoft is not very explicit about this, but the docs for ReadTimeout say:

This property does not affect the BeginRead method of the stream returned by the BaseStream property

And as I understand it, ReadAsync uses BeginRead of SerialPortStream.

One could conclude that the ReadTimeout property has no meaning for asynchronous reading, but when I set it to 1 ms it completely breaks reading in my case: almost every packet is invalid. Setting ReadTimeout to 100 ms fixes most errors, but not the ones caused by split packets.

I now implemented a loop over ReadAsync myself to concatenate the different packets (if there is more than one) like this:

public async Task<uint> Read(byte[] buffer, uint bytesToRead)
{
    // Create/resize the intermediate buffer incrementally
    if (internalReadBuffer == null || internalReadBuffer.Length < bytesToRead)
        internalReadBuffer = new byte[bytesToRead];

    uint count = bytesToRead;

    while (count > 0)
    {
        // ReadAsync may return fewer bytes than requested, so keep
        // reading until the full amount has arrived.
        uint bytesRead = (uint)await portStream.ReadAsync(internalReadBuffer, 0, (int)count);
        if (bytesRead == 0)
            break;  // end of stream / device unavailable

        // Append this chunk to the caller's buffer.
        Array.Copy(internalReadBuffer, 0, buffer, bytesToRead - count, bytesRead);
        count -= bytesRead;
    }

    return (bytesToRead - count);
}

But now the question remains: is there any timeout at all that would quit this loop if the device becomes unavailable during the operation? I appreciate any ideas or pointers to other resources. For example, do you know where I can find the Microsoft code you refer to in https://github.com/jcurl/SerialPortStream/issues/35 ? All my attempts to find the code for ReadAsync on the .NET Core GitHub resulted in too many search results.
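One way to bound such a loop is to pass a CancellationToken created with a timeout. This is a sketch, not part of SerialPortStream's API: the helper name `ReadFullyAsync` is hypothetical, and how promptly a given Stream implementation cancels a read already in flight varies, so treat the bound as best-effort.

```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

static class StreamReadHelper
{
    // Hypothetical helper: loops over ReadAsync until 'count' bytes have
    // arrived, returning early on end-of-stream. The token bounds the
    // whole multi-read operation rather than each individual read.
    public static async Task<int> ReadFullyAsync(
        Stream stream, byte[] buffer, int offset, int count,
        CancellationToken token)
    {
        int total = 0;
        while (total < count)
        {
            int read = await stream.ReadAsync(
                buffer, offset + total, count - total, token);
            if (read == 0) break;   // device gone / end of stream
            total += read;
        }
        return total;
    }
}
```

It could then be called with, say, `new CancellationTokenSource(TimeSpan.FromMilliseconds(500))` so the entire 127-byte read is abandoned if it does not complete within 500 ms.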

Thanks in advance!


jcurl commented 5 years ago

Hello! The Read and BeginRead methods are identical, in that they only guarantee to return when there is at least ONE byte available. The timeout parameters at these interfaces only indicate how long to wait for the first byte.

That doesn't mean you'd always read only a single byte; it depends on how many bytes the underlying OS has returned, as well as timing from the OS scheduler.

The thread that reads from the OS is completely decoupled from the stream methods using a producer/consumer buffer pattern, so the timeout parameters technically only apply to reads from that buffer. The buffer abstracts away the OS, giving more flexibility in not having to know the buffer sizes of the actual kernel drivers themselves (e.g. Prolific has a limitation of a few kilobytes, FTDI appears to have no limitation).
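The decoupling can be pictured with a generic producer/consumer queue. This is illustrative only (SerialPortStream's internal buffer is its own byte buffer, not a BlockingCollection): a driver-side thread produces chunks as the OS delivers them, and the stream's Read, with its timeout, only consumes whatever is already queued.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Illustrative model of the producer/consumer split: the timeout applies
// to taking data out of this in-memory queue, not to the OS read itself.
var queue = new BlockingCollection<byte[]>();

// Producer (stands in for the thread reading from the kernel driver).
Task.Run(() => queue.Add(new byte[] { 0x10, 0x20, 0x30 }));

// Consumer (stands in for a Read call with a 100 ms timeout).
if (queue.TryTake(out var chunk, millisecondsTimeout: 100))
    Console.WriteLine($"consumed {chunk.Length} bytes from the buffer");
else
    Console.WriteLine("timeout: nothing buffered within 100 ms");
```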

There are some facilities offered by the OS for the behaviour you describe. However, I have seen them be unreliable (the code has a path called a P2303 workaround because of this), so this will not be exposed at this time; it would require a bit of redesign.

I would also argue, however, that it is difficult to rely on timing behaviour for packets. The only reliable assumption for a good protocol is to assume a stream of bytes only, and then to packetize by adding another layer on top, as you've described in your "workaround". This has real benefits: for example, if you later use USB serial (different timing may cause new issues), or if you decide to tunnel over TCP, which in .NET has the same interfaces.
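A sketch of such a packetizing layer might look like the following. The framing scheme here, a one-byte length prefix followed by the payload, is an assumption for illustration, not a SerialPortStream feature; because it depends only on Stream, the same code runs over SerialPortStream, NetworkStream, or any other stream.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

static class Framing
{
    // Hypothetical frame format: 1-byte payload length, then the payload.
    public static async Task WriteFrameAsync(Stream s, byte[] payload)
    {
        if (payload.Length > 255)
            throw new ArgumentException("payload too long for 1-byte length");
        s.WriteByte((byte)payload.Length);
        await s.WriteAsync(payload, 0, payload.Length);
    }

    public static async Task<byte[]> ReadFrameAsync(Stream s)
    {
        int len = s.ReadByte();          // blocks per the stream's semantics
        if (len < 0) return null;        // end of stream
        var payload = new byte[len];
        int total = 0;
        while (total < len)              // reassemble reads split by the driver
        {
            int read = await s.ReadAsync(payload, total, len - total);
            if (read == 0) throw new EndOfStreamException();
            total += read;
        }
        return payload;
    }
}
```

The reader makes no assumption about how the bytes were chunked in transit, which is exactly the property that makes it transport-independent.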

jcurl commented 5 years ago

The code for the ticket you reference is in this GitHub repository. Just check it out and do a grep (sorry, writing from my phone, I can't do that right now).

nicolasr75 commented 5 years ago

Thanks, that is very valuable information!

The workaround, together with a timeout of 500 ms, fixed most issues.