lorenzodonini / ocpp-go

Open Charge Point Protocol implementation in Go
MIT License

Rel. 0.16 setTimeout #134

Closed andig closed 2 years ago

andig commented 2 years ago

Added a timeout option for outgoing requests: if a response is not received within the configured timeframe, the request is discarded and an error is returned to the sender

I've looked at the code and it seems that this is only available for chargepoints, correct? It might make sense to make the readme/release notes clearer?

lorenzodonini commented 2 years ago

There is a SetTimeout method available both in the ClientDispatcher and in the ServerDispatcher.

Release 0.16 actually only contains the one for the server, while the one for the client was in the codebase beforehand already (see 7c9b55bc). I've updated the release notes, but the feature is indeed supported by both endpoints.
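For reference, a rough sketch of wiring the timeout on both sides; the constructor and queue helper names from the ocppj package are assumptions, only SetTimeout itself is confirmed above, and the 30s value is just an example:

// assumed constructors; only SetTimeout is confirmed in this thread
clientDispatcher := ocppj.NewDefaultClientDispatcher(ocppj.NewFIFOClientQueue(10))
clientDispatcher.SetTimeout(30 * time.Second) // pending requests fail with an error after 30s

serverDispatcher := ocppj.NewDefaultServerDispatcher(ocppj.NewFIFOQueueMap(10))
serverDispatcher.SetTimeout(30 * time.Second)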

andig commented 2 years ago

Great, thank you!

andig commented 2 years ago

@lorenzodonini I have to come back once more. What I'm looking for is a round-trip timeout from CS to CP and back, not a physical websocket timeout. I'm currently doing something like

rc := make(chan error, 1)
err := cs.ChangeConfiguration(id, func(resp *core.ChangeConfigurationConfirmation, err error) {
    // check the OCPP error first: resp may be nil when err is set
    if err == nil && resp.Status == core.ConfigurationStatusRejected {
        err = fmt.Errorf("configuration change rejected")
    }

    // send exactly once so the buffered channel never blocks the callback
    rc <- err
}, ocpp.KeyMeterValuesSampledData, ocpp.ValuePreferedMeterValuesSampleData)

if err := c.wait(err, rc); err != nil {
    return nil, err
}

which seems very cumbersome.

Is there any timeout that controls the roundtrip and could be configured?

Side note: it would imho also be nice if rejections were returned as errors ;)

lorenzodonini commented 2 years ago

Hey,

the timeout functionality referred to in the issue actually doesn't depend on a websocket timeout, but simply throws an error if the other endpoint doesn't respond to a Request within the specified timeout. This is not necessarily caused by network failure.

If you are looking for an RTT measurement, that's currently not supported (neither by the OCPP specs nor by the websocket library). The closest possible solution for a server-initiated RTT would be your current workaround (it would still require a Request + Response to occur), maybe stored deeper in the library stack. This approach will not be accurate though, since the full processing overhead of the other endpoint will be included in the value.
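To illustrate, a rough sketch of that workaround, reusing the cs/id variables and the ChangeConfiguration call from the snippet above ("someKey"/"someValue" are placeholders, time and log come from the standard library):

start := time.Now()
err := cs.ChangeConfiguration(id, func(resp *core.ChangeConfigurationConfirmation, err error) {
    // the measured value includes the charge point's processing time, not just network latency
    log.Printf("round trip took %v (err=%v)", time.Since(start), err)
}, "someKey", "someValue")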

Side note: it would imho also be nice if rejections were returned as errors

Well, technically a rejection is a perfectly valid response on an OCPP-level, so it cannot be considered an error 😅

andig commented 2 years ago

The closest possible solution for a server-initiated RTT would be your current workaround (it would still require a Request + Response to occur), maybe stored deeper in the library stack. This approach will not be accurate though, since the full processing overhead of the other endpoint will be included in the value.

That would be ok; I'm really only interested in "do I get an answer within a reasonable time", no matter what it takes. The way the API is built, can I assume that the answer (i.e. callback) will always be the one matching the request (matching IDs or similar)? If yes, it could be feasible to have such a round trip in the lib?

andig commented 2 years ago

the timeout functionality referred to in the issue actually doesn't depend on a websocket timeout, but simply throws an error if the other endpoint doesn't respond to a Request within the specified timeout. This is not necessarily caused by network failure.

I just re-read your message. That's exactly what I'm looking for! Seems I need to create a ServerDispatcher by hand.

andig commented 2 years ago

Mhhm, not quite:

err = ocpp.Instance().GetConfiguration(id, func(resp *core.GetConfigurationConfirmation, err error) {
    // handle message
    // but cannot return inner error here
}, []string{})

I think to make this pattern useful, it should be possible to handle/update the error passed to the callback. I'm wondering why this pattern with err as parameter is used?

lorenzodonini commented 2 years ago

It's a design choice. The function is asynchronous by nature, so a callback seemed like the easiest solution. Also, since every message can return either a response or an error (protocol specs), it made sense to provide both within the callback.

You can always propagate the error in other ways (e.g. channels).
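A minimal sketch of that channel-based approach, reusing cs/id and the GetConfiguration signature from the earlier snippets (the 30-second guard is purely illustrative):

respC := make(chan *core.GetConfigurationConfirmation, 1)
errC := make(chan error, 1)

// the returned error only covers local failures (endpoint not started, marshaling, unknown id, full queue)
err := cs.GetConfiguration(id, func(resp *core.GetConfigurationConfirmation, err error) {
    if err != nil {
        errC <- err // OCPP-level error reported by the charge point
        return
    }
    respC <- resp
}, []string{})
if err != nil {
    return nil, err
}

select {
case resp := <-respC:
    return resp, nil
case err := <-errC:
    return nil, err
case <-time.After(30 * time.Second): // illustrative round-trip guard
    return nil, fmt.Errorf("timeout waiting for GetConfiguration response")
}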

andig commented 2 years ago

You can always propagate the error in other ways (e.g. channels).

I realise this would be a large and breaking change; would you theoretically consider a structure like:

err := cs.GetConfiguration(id, func(resp *core.GetConfigurationConfirmation) error {
    // handle message and return any error produced here
    return nil
}, []string{})

The error that currently goes into the callback would then never reach the inner function.

lorenzodonini commented 2 years ago

With your suggested change, the callback would not contain a potential OCPP error, so handling it would need to be done outside of the callback, which is imho annoying for the following reason:

err := cs.GetConfiguration(id, func(confirmation *core.GetConfigurationConfirmation, err error) {
    // err here is an OCPP error, more specifically *ocpp.Error, which just contains the info of an ocppj.CallError struct, received from the other endpoint
}, []string{})
// err here is NOT an OCPP error (just a regular Go error), since it is caused by:
// - local endpoint not being started
// - invalid message (json marshaling failed)
// - attempting to send the message to an unknown id
// - full queue

The difference is really just where each error comes from: the error inside the callback is an OCPP error reported by the other endpoint, while the error returned by the call itself is a local one.

Mixing these two together and handling all errors in the same place would lead to some more if/switch cases to analyze the error.
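For illustration, the combined handling would roughly look like this (a sketch only; *ocpp.Error is the type mentioned above, errors.As comes from the standard library):

if err != nil {
    var ocppErr *ocpp.Error
    if errors.As(err, &ocppErr) {
        // error reported by the other endpoint (carries the ocppj.CallError information)
    } else {
        // local error: endpoint not started, invalid message, unknown id, full queue
    }
}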

Feel free to open another issue with this suggestion, but as you correctly mentioned it would be a big change, which I'm not planning in the near future.