rscada / libmbus

Meter-bus library and utility programs
http://www.rscada.se/libmbus
BSD 3-Clause "New" or "Revised" License

Cannot find or read devices because of timeouts #186

Open · tsr8 opened this issue 3 years ago

tsr8 commented 3 years ago

Hello. In my setup (Unipi G100 + M-Bus to RS485 converter + 9 x Qundis heat meters), libmbus cannot find or read any device. Increasing the serial timeout from the default 300 ms to at least 1100 ms (at 2400 baud) solves the issue. I suggest merging the "adjustable-timeout" branch into master; it would make such tweaks easier.
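
For reference, here is roughly what such a tweak looks like at the termios level. This is only an illustrative sketch, not the upstream code: as far as I can tell, mbus-serial.c drives the read timeout through `c_cc[VTIME]`, which is expressed in tenths of a second, so 3 corresponds to 300 ms and 11 to 1100 ms. The helper name below is made up for the example.

```c
/* Illustrative sketch only: raising the serial read timeout via termios.
 * c_cc[VTIME] is in tenths of a second, so 3 = 300 ms and 11 = 1100 ms. */
#include <termios.h>
#include <unistd.h>

static int set_read_timeout(int fd, int tenths)
{
    struct termios t;

    if (tcgetattr(fd, &t) != 0)
        return -1;

    t.c_cc[VMIN]  = 0;             /* return after the timeout even if no data arrived */
    t.c_cc[VTIME] = (cc_t) tenths; /* read/inter-byte timeout in 1/10 s                 */

    return tcsetattr(fd, TCSANOW, &t);
}

/* e.g. set_read_timeout(fd, 11) for ~1100 ms at 2400 baud */
```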

Apollon77 commented 3 years ago

If I see it correctly, that is already in master: https://github.com/rscada/libmbus/blob/master/mbus/mbus-serial.c#L119

tsr8 commented 3 years ago

Oh sorry, I meant the "adjustable-timeout" branch: https://github.com/rscada/libmbus/blob/adjustable-timeout/mbus/mbus-serial.c

Apollon77 commented 3 years ago

Honestly, I would question the approach of that branch ... the offset scheme is something no one will really understand (what is the final timeout if I set the offset to X?). I think it is better to simply allow overriding the timeout (unless the given value is smaller than the default) ... but there is also no PR for that branch, so :-(

@lategoodbye do you remember the purpose of that branch from your perspective?

lategoodbye commented 3 years ago

@Apollon77 Unfortunately I don't remember exactly why I created the 'adjustable-timeout' branch, but it might be related to #121. M-Bus is a pretty old protocol (byte-oriented, timing-critical) that doesn't work well with current hardware (USB) and software (non-RTOS). The original idea was to let libmbus calculate the serial timeout by itself. But nowadays pure serial interfaces on PCs have become rare, and with all these adapters and extra layers the timeout is hard to predict.

So I wanted to give the user the chance to adjust the timeout. Since the timeout still depends on the baud rate, I thought it would be best to add an offset that represents the expected extra delay, so the user doesn't have to calculate the full timeout on their own. Admittedly the branch is poorly documented, and the unit of 1/10 seconds is a little atypical. Please keep in mind that setting the timeout too high makes a serial scan very lengthy. In the end I'm open to better ideas.
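
A minimal sketch of that offset idea (function names are illustrative, not the branch's actual API): the library keeps its baud-rate-derived base timeout, and the user only supplies an additional offset in tenths of a second on top of it.

```c
/* Illustrative only: baud-rate-derived base timeout plus a user-supplied
 * offset, both in tenths of a second (the termios VTIME unit). */
#include <termios.h>

static int base_timeout_tenths(long baudrate)
{
    /* slower links need longer timeouts; values here are examples only */
    switch (baudrate)
    {
        case 300:  return 12;  /* 1200 ms */
        case 2400: return 3;   /*  300 ms */
        case 9600: return 2;   /*  200 ms */
        default:   return 3;
    }
}

/* hypothetical helper: 'offset_tenths' would come from the application */
static cc_t effective_timeout_tenths(long baudrate, int offset_tenths)
{
    int t = base_timeout_tenths(baudrate) + offset_tenths;

    if (t > 255)
        t = 255;               /* VTIME is a single byte */
    return (cc_t) t;
}
```

With this scheme a user on a slow USB/RS485 adapter would pass, say, an offset of 8 (800 ms) and still get the correct per-baud-rate base added automatically.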

Apollon77 commented 3 years ago

I understand the idea better now. In fact, it is a valid approach.

Both ways have their quirks ... yours doesn't let the user really "know" what the final timeout is, while my idea of "we calculate a minimum timeout, but it can be increased" silently discards user-provided values that are too short. That was my idea: allow the user to specify a value, still calculate the "default" one, and in the end use the higher of the two :-)
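
A sketch of that alternative (again with made-up names, not an existing libmbus function): the user may pass an absolute timeout, but the library still computes its baud-rate default and uses whichever value is larger.

```c
/* Illustrative only: take the maximum of the calculated default and a
 * user-provided timeout, so values that are too short are never applied. */
static int effective_timeout_ms(int calculated_default_ms, int user_timeout_ms)
{
    if (user_timeout_ms > calculated_default_ms)
        return user_timeout_ms;
    return calculated_default_ms;   /* too-short user values fall back to the default */
}

/* e.g. effective_timeout_ms(300, 1100) -> 1100
 *      effective_timeout_ms(300,   50) ->  300 */
```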

Both ways would work. Now that there is a real user use case needing higher timeouts, it would be good to implement one of them :-)