christiansandberg / canopen

CANopen for Python
http://canopen.readthedocs.io/
MIT License

Multiple Client applications using one can / canopen interface #143

Open arne123 opened 5 years ago

arne123 commented 5 years ago

In order to use different applications with different purposes over one CANopen instance or hardware interface, I am thinking about the best way to implement this. Those applications could be e.g. a machine control tool which should work in parallel to a curve recorder.

So first of all, is anyone aware of an existing solution or attempt in this direction?

If not, here are just a few thoughts:

For that purpose I think that a server-like instance has to manage access to the CAN hardware and establish interprocess communication to/from the different clients.

Might it be a reasonable idea to register a new interface with the python-can library which communicates, e.g. via a socket interface, with a wrapper process that holds a python-can instance connected to the real CAN hardware interface? I think the main issue here is that this in-between server has to be aware of the actual ongoing communication format and needs to block the interface until responses to requests have arrived, or for the duration of multi-frame commands. To do that, it might be easier to add resource requesting/locking directly in the Python canopen library.
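
For what it is worth, a python-can backend along those lines could look roughly like the sketch below: a BusABC subclass that forwards raw frames over UDP to a broker process owning the real hardware interface. The broker itself is not shown, and the class name, wire format and port are assumptions for illustration, not an existing API.

```python
import socket
import struct

import can


class UdpForwardingBus(can.BusABC):
    """Illustrative only: forward raw CAN frames over UDP to a broker process."""

    def __init__(self, channel="127.0.0.1:15731", **kwargs):
        host, port = channel.split(":")
        self._broker = (host, int(port))
        self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self._sock.bind(("0.0.0.0", 0))
        super().__init__(channel=channel, **kwargs)

    def send(self, msg, timeout=None):
        # Simple made-up wire format: arbitration ID (u32), DLC (u8), data bytes.
        payload = struct.pack("<IB", msg.arbitration_id, msg.dlc) + bytes(msg.data)
        self._sock.sendto(payload, self._broker)

    def _recv_internal(self, timeout):
        self._sock.settimeout(timeout)
        try:
            payload, _ = self._sock.recvfrom(16)
        except (socket.timeout, BlockingIOError):
            return None, False
        arb_id, dlc = struct.unpack_from("<IB", payload)
        msg = can.Message(arbitration_id=arb_id, dlc=dlc, data=payload[5:5 + dlc])
        return msg, False
```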

Any comments on this?

christiansandberg commented 5 years ago

Could be related to issue #125.

I've been experimenting a little with asynchronous SDO, i.e. instead of having the main thread (or whatever thread requests an SDO transfer) handle the communication and therefore block until it is complete, the communication is handled completely by the background thread receiving the messages. The main thread just initiates the transfer, or queues it if a transfer is already in progress. It can then choose to wait for the response, attach a callback, read the response at a later time, or ignore the response completely.

There are some issues I have not figured out yet so I'm not sure it will work. Alternatively we could introduce a lock somewhere as discussed in #125.
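
A rough sketch of that non-blocking pattern, using a concurrent.futures.Future so the caller can wait, attach a callback, or ignore the result. This is not the actual canopen implementation, just an illustration of the idea with made-up names:

```python
import queue
import threading
from concurrent.futures import Future


class AsyncSdoClient:
    """Illustrative only: queue SDO requests and let the receive thread complete them."""

    def __init__(self, send_frame):
        self._send_frame = send_frame   # callable that puts one CAN frame on the bus
        self._pending = queue.Queue()   # queued (frame, Future) pairs
        self._current = None            # Future of the transfer in progress
        self._lock = threading.Lock()

    def request(self, frame):
        """Initiate a transfer (or queue it) and return a Future for the response."""
        fut = Future()
        with self._lock:
            if self._current is None:
                self._current = fut
                self._send_frame(frame)
            else:
                self._pending.put((frame, fut))
        return fut

    def on_response(self, frame):
        """Called from the background thread that receives CAN messages."""
        with self._lock:
            fut, self._current = self._current, None
            if not self._pending.empty():
                next_frame, next_fut = self._pending.get()
                self._current = next_fut
                self._send_frame(next_frame)
        if fut is not None:
            fut.set_result(frame)
```

A caller could then keep the old blocking behaviour with request(frame).result(timeout=1.0), or stay fully asynchronous with request(frame).add_done_callback(...).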

arne123 commented 5 years ago

The points in #125 might work if I can ensure every application is built in Python and I am able to run them all from a single instance in different threads.

I started a little bit of coding using the custom backend, implementing a UDP server which instantiates a CAN bus interface.

As I said, since I am on the CAN message level here, I need to treat the messages individually and also need to know their meaning in order to set or release locks. I think the cleaner cut would be at a higher level; then e.g. block transfers could be handled with one UDP request (more payload data) and the server would take care of the CANopen handling.

But as far as I have seen (please correct me if I am wrong here), this would require a more or less complex restructuring of the canopen lib.

If it is of interest, I can share my thoughts in code so far, but it's at the very beginning and not even half working yet.
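
For illustration, a broker along the lines of that higher-level cut could own the single canopen instance and serve a whole object read per UDP request/response, so clients never see individual CAN frames. A very rough sketch; the JSON wire format, port, node-id and EDS file are made-up assumptions:

```python
import json
import socketserver

import canopen

# Assumed setup: this broker process owns the only canopen.Network instance.
network = canopen.Network()
network.connect(bustype="socketcan", channel="can0")
node = network.add_node(0x20, "device.eds")   # hypothetical node-id and EDS file


class SdoRequestHandler(socketserver.BaseRequestHandler):
    """Serve one SDO upload per UDP datagram: {"index": ..., "subindex": ...}."""

    def handle(self):
        data, sock = self.request
        req = json.loads(data.decode())
        # The broker performs the (possibly segmented or block) transfer itself,
        # so the client needs exactly one request/response round trip.
        value = node.sdo.upload(req["index"], req["subindex"])
        sock.sendto(value, self.client_address)


if __name__ == "__main__":
    with socketserver.UDPServer(("127.0.0.1", 15730), SdoRequestHandler) as server:
        server.serve_forever()
```

A client would send one datagram such as {"index": 4104, "subindex": 0} (0x1008) and receive the raw bytes back, while the broker handles the CANopen side internally.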

liamw9534 commented 4 years ago

Different applications should really run in different processes and hence use different canopen instances. Each should have its own node-id and EDS. It is much cleaner in my view, as long as your bus doesn't have too many nodes. There is no restriction on how many node-ids you use from a single physical hardware interface, aside from the usual restrictions imposed by the standard. And I believe socketcan already handles arbitration across different processes, i.e. it is part of the kernel driver.
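
A hedged sketch of that setup: each application process creates its own canopen.Network on the same socketcan channel and registers its own node-id. The node-ids, EDS files and channel name below are made-up examples.

```python
import canopen

# One application process; another process would do the same with its own node-id.
network = canopen.Network()
network.connect(bustype="socketcan", channel="can0")

# This process acts as its own CANopen node on the shared bus.
local_node = network.create_node(0x10, "app_a.eds")

# It can still talk to other nodes on the bus as an SDO client.
remote = network.add_node(0x20, "device.eds")
vendor_id = remote.sdo.upload(0x1018, 1)   # identity object, vendor ID
```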

If you really want multiple applications (threads) using the same canopen instance (i.e. the same node-id), then one way to approach this is to use the thread-safe bus variant in python-can. It ensures locking and is functionally identical to the default bus. It should be a minor change to network.py to switch to the thread-safe python-can bus.
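
A hedged sketch of that idea: build python-can's ThreadSafeBus and wire it into a canopen Network by hand, mirroring what Network.connect() does internally. The attribute names (bus, notifier, listeners) reflect the canopen internals at the time and may differ between versions; channel, bitrate, node-id and EDS file are example values.

```python
import can
import canopen

# Thread-safe wrapper around the normal python-can bus (locks around send/recv).
bus = can.ThreadSafeBus(bustype="socketcan", channel="can0", bitrate=500000)

network = canopen.Network()
network.bus = bus                 # assumption: attach the pre-built bus directly
network.notifier = can.Notifier(bus, network.listeners, timeout=1.0)

node = network.add_node(6, "example.eds")   # hypothetical node-id and EDS file
device_name = node.sdo.upload(0x1008, 0)    # 0x1008: manufacturer device name
```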

arne123 commented 4 years ago

I think this approach would only work with PDOs; initially I had the same thought.

My understanding so far is that an SDO transfer can only be initiated by one master in the system. A slave node receiving an SDO request while still answering a previous SDO request will get confused, since it cannot differentiate between different masters: the requests arrive via the same identifier. Moreover, the CAN bus itself does not allow different masters to transmit on the same CAN identifier.

The only way seems to be to use additional, different identifiers to allow SDO transfers between e.g. two slave nodes.

liamw9534 commented 4 years ago

Thanks for the heads up on this. I hadn't appreciated the intricacies with multiple masters.

acolomb commented 4 years ago

The CANopen specification handles this by defining SDOs as objects rather than explicit "server" and "client", which are just roles. As I understand it, an SDO has a global state: the server is either handling a request or it is not. It cannot be distinguished who initiated the request, only that a client is requesting something. So the combination of client and server, the SDO, exists as a singleton object, logically tied to a specific node. Which node that is can usually be recognized from the CAN ID, which includes the server's node ID, although that can also be configured differently.
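
For reference, with the pre-defined connection set from CiA 301 the default SDO identifiers embed the server's node-id like this (a small illustration, not library code):

```python
def default_sdo_cob_ids(node_id):
    """Default (pre-defined connection set) SDO identifiers for a server node."""
    assert 1 <= node_id <= 127
    rx_cob_id = 0x600 + node_id   # client -> server (SDO request)
    tx_cob_id = 0x580 + node_id   # server -> client (SDO response)
    return rx_cob_id, tx_cob_id


# Example: node 6 listens for SDO requests on 0x606 and replies on 0x586.
print([hex(cob_id) for cob_id in default_sdo_cob_ids(6)])
```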

Since we're on a bus, every node can theoretically listen to the request and response SDO objects, thus everyone knows what state the SDO client and server roles are in. That information allows every node to avoid thrashing each other's SDO communications. Especially when multiple "masters" (actually clients, SDO has no master / slave relationship) are involved, they should all listen to and respect SDO messages that they have not put on the bus themselves.

Note that this even applies to reading and writing object dictionary entries. Multiple clients reading from the same server node via SDO cannot expect to be served if they send requests without waiting for the prior responses. In other words, an SDO server is not required to manage a queue of requests, although of course it may do just that. At least that's my understanding of CiA 301.

acolomb commented 4 years ago

Sorry for the double post, GitHub just had a 504 Gateway Timeout for me, so I tried via e-mail as well.

arne123 commented 4 years ago

Theoretically yes, but this is not clean: there would be a risk that two clients initiate an SDO request at the same point in time, meaning two nodes start transmitting on the same CAN ID, which is not allowed. Well, this does not seem to be stated in the original CAN spec, but I found some reference here: http://www.esd-electronics-usa.com/CAN-Remote-Frames.html. I am also not aware that typical CANopen stacks support the permanent monitoring needed to do so.

acolomb commented 4 years ago

If two nodes start transmitting the same CAN ID, there is no problem in the arbitration phase. In the data phase, there can be two cases. Either both are sending the same SDO request, so it is still a singleton although two nodes started with the same intention. The server will reply and everything is fine, nobody will ever notice this non-conflict.

The other case happens when they are sending different SDO requests. Then at some bit they will differ and the dominant bit will win. The node trying to send a recessive bit will notice and either back off or send an error frame, destroying the other ongoing message. Its CAN controller will signal a sending failure to the MCU which should be interpreted as "my request failed and the SDO server is busy with some other exchange".

Remote frames are not meaningful for the SDO protocol.

I am also not aware that typical CANopen stacks support the permanent monitoring needed to do so.

They should at least check whether sending was successful, which is common practice. Most CANopen networks don't have this problem, because only the NMT master controls SDO exchanges to all other nodes. So it's rather theoretical and I absolutely expect that most implementations ignore this corner case.

Worst case for them would be to get a mismatching SDO response or an abort transfer message. The application would treat that as an error and possibly retry. My suggested behavior would avoid such a (highly unlikely) situation in advance though.

arne123 commented 4 years ago

The official CANopen way seems to be to set up additional SDO servers on a node. Objects 0x1200 to 0x127F describe these SDO server parameters.
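
A hedged sketch of how that might look with this library: write an extra channel's COB-IDs into the second SDO server parameter record (0x1201) and open an additional SDO client on them. The COB-ID values are arbitrary examples that must not clash with anything else on the bus, and whether the device accepts writes to 0x1201, as well as the availability of RemoteNode.add_sdo(), depends on the device and the canopen version.

```python
import canopen

network = canopen.Network()
network.connect(bustype="socketcan", channel="can0")
node = network.add_node(6, "example.eds")   # hypothetical node-id and EDS file

request_id = 0x640 + 6    # client -> server COB-ID for the extra channel (example)
response_id = 0x5C0 + 6   # server -> client COB-ID for the extra channel (example)

# Write the extra channel's COB-IDs into the SDO server parameter record 0x1201.
node.sdo.download(0x1201, 1, request_id.to_bytes(4, "little"))
node.sdo.download(0x1201, 2, response_id.to_bytes(4, "little"))

# Use the new channel from a second client, e.g. another application.
extra_sdo = node.add_sdo(request_id, response_id)
device_name = extra_sdo.upload(0x1008, 0)   # 0x1008: manufacturer device name
```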