I have a program where I need to connect to multiple servers and want to handle requests asynchronously.
I've created a NetworkRequest class (see below) which is essentially a state machine with its own EthernetClient, responsible for handling a request and its response. I have one NetworkRequest instance for each server I need to connect to (two, in this case).
I call the Update() method of each NetworkRequest instance every frame.
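Since the class itself isn't shown in this excerpt, here is a rough, hypothetical sketch of the shape such a state machine might take. The stubbed `Client` type, the state names, and the method names are all illustrative assumptions, not the actual code (real Arduino code would use `EthernetClient` and `client.connect(ip, port)` / `client.read()`):

```cpp
#include <string>

// Stand-in for EthernetClient so the sketch is self-contained.
struct Client {
    bool connected = false;
    std::string rxBuffer;  // data the server has sent back
    bool connect() { connected = true; return true; }  // stub: always succeeds
    int available() const { return static_cast<int>(rxBuffer.size()); }
    void stop() { connected = false; rxBuffer.clear(); }
};

class NetworkRequest {
public:
    enum class State { Idle, SendingRequest, AwaitingResponse, Done };

    explicit NetworkRequest(Client& c) : client(c) {}

    void submit(const std::string& req) {
        pending = req;
        state = State::SendingRequest;
    }

    // Called once per frame; advances the state machine one step.
    void update() {
        switch (state) {
        case State::SendingRequest:
            if (client.connect()) {            // real code: client.connect(ip, port)
                // real code would write `pending` to the socket here
                state = State::AwaitingResponse;
            }
            break;
        case State::AwaitingResponse:
            if (client.available() > 0) {
                response = client.rxBuffer;    // real code: client.read(...)
                client.stop();
                state = State::Done;
            }
            break;
        default:
            break;
        }
    }

    State state = State::Idle;
    std::string response;

private:
    Client& client;
    std::string pending;  // request body, written on connect in real code
};
```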
Right now the first of the servers doesn't exist, so the connection attempt to that IP is expected to fail. My issue is that the failed connection attempt of the first NetworkRequest instance causes the second EthernetClient's receive buffer to be cleared before it gets read.
In essence (assume both NetworkRequest instances have outstanding requests and are therefore in State::SendingRequest):
Update 1:
- Network request 1 - connect times out (socket index = max)
- Network request 2 - connect succeeds (socket index = 0) and its state progresses to State::AwaitingResponse
- Internally: the socket 0 request has now completed, the response data sits in its receive buffer, and the server has closed the connection

Update 2:
- Network request 1 - connect times out again (socket index = max). During this attempt, Ethernet.socketBegin() tries to reuse the first closed socket (socket index 0) and appears to clear its receive buffer
- Network request 2 - tries to read its data, but the buffer is empty :(
My workaround is to run two separate Update() passes: the first updates any NetworkRequest instances in the State::AwaitingResponse state, and the second then updates any instances in the State::SendingRequest state.
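The two-pass workaround can be sketched like this. It's a minimal, self-contained model with the networking stubbed out; `Request` and `updateAll` are illustrative names, not the actual code:

```cpp
#include <vector>

// Minimal stand-in for the real class: just enough state to show ordering.
struct Request {
    enum class State { SendingRequest, AwaitingResponse, Done };
    State state;
    int updates = 0;  // how many times update() has run (for illustration)

    void update() {
        ++updates;
        if (state == State::AwaitingResponse)
            state = State::Done;  // read the buffered response, close the socket
        // SendingRequest handling (the connect attempt) is elided here
    }
};

void updateAll(std::vector<Request>& reqs) {
    // Pass 1: let anything AwaitingResponse drain its receive buffer first,
    // so a timed-out connect below can't recycle that socket and clear it.
    for (auto& r : reqs)
        if (r.state == Request::State::AwaitingResponse) r.update();

    // Pass 2: only now attempt new connections.
    for (auto& r : reqs)
        if (r.state == Request::State::SendingRequest) r.update();
}
```

The ordering guarantee is the whole point: any socket holding unread response data is serviced before any call that might trigger Ethernet.socketBegin().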
Am I missing something here? It would be nice if Ethernet.socketBegin() rotated through the unused sockets, or even if we could reserve a specific socket index for an EthernetClient instance.