Jollyfant closed this issue 5 years ago.
Hi @Jollyfant, a good point, which I had already taken into consideration. I will look into a proper solution. Cheers.
Implemented with 2b623b7.
Note: This feature does not work when using the development server with an enabled debugger.
Closed.
Note: This feature does not work when using the development server with an enabled debugger.
Does this implementation work for station requests too? When I try cancelling a dataselect request, the Federator stops according to my FDSNWS log files, but the metadata requests keep coming in after the client request is killed.
@Jollyfant,
fdsnws-station metadata requests are killed for format=text, right?
For format=xml, eida-federator operates on the network level (in order to return valid StationXML). Currently, the close event is not propagated to the underlying StationXMLNetworkCombinerTask. As a consequence, once StationXML metadata is being fetched for a single network, the task runs to completion for that network even if the client closes the request.
When using https://docs.python.org/3.7/library/multiprocessing.html#multiprocessing.pool.Pool (or rather the corresponding thread pool), communicating with tasks that were applied asynchronously is not trivial.
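A minimal sketch of one way to communicate with asynchronously applied tasks: pass a shared threading.Event into every task submitted via apply_async and check it at safe points. The task name combine_network is hypothetical, a stand-in for the real combiner tasks, not eida-federator code.

```python
# Hedged sketch: cooperative cancellation of thread-pool tasks via a
# shared threading.Event (task names here are invented for illustration).
import threading
import time
from multiprocessing.pool import ThreadPool

def combine_network(net, cancelled):
    """Stand-in for a per-network combiner task."""
    for _ in range(10):              # pretend work, done in small slices
        if cancelled.is_set():       # cooperative cancellation point
            return (net, "aborted")
        time.sleep(0.01)
    return (net, "done")

cancelled = threading.Event()
with ThreadPool(2) as pool:
    results = [pool.apply_async(combine_network, (net, cancelled))
               for net in ("CH", "GR", "NL")]
    time.sleep(0.03)                 # ...meanwhile the client hangs up...
    cancelled.set()                  # propagate the close event to all tasks
    outcome = dict(r.get() for r in results)
print(outcome)
```

The limitation discussed below still applies: someone has to detect the closed connection before the event can be set.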
For format=text they are killed, yes! I'm a little worried about the situation with level=response requests when users repeat their request multiple times because they are impatient, like when I accidentally ended up overloading our FDSNWS service.
Hmm.. not possible to call pool_terminate, or to use a state variable that is set when the request is closed and checked during every request in StationXMLNetworkCombinerTask?
That might be critical.
Hmm.. not possible to call pool_terminate, or to use a state variable that is set when the request is closed and checked during every request in StationXMLNetworkCombinerTask?
One possibility for propagating such an event, IMO, is a shared variable used for interruption, that's right.
@Jollyfant,
One possibility for propagating such an event, IMO, is a shared variable used for interruption, that's right.
However, the event can only be propagated once a closed connection is detected (i.e. when writing to the socket fails). Due to the distributed architecture of EIDA, network elements are combined first and streamed as soon as a StationXML network element is ready. Hence, endpoint requests may still be executed by eida-federator even though the client closed its connection earlier. As a consequence, for a request such as e.g.
curl -o - "http://localhost:5000/fdsnws/station/1/query?net=CH&level=channel"
event propagation brings no improvement at all. The same holds for requests spanning several network elements where federation takes approximately the same time for each.
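The point that a hang-up only surfaces on a failed write can be illustrated with a plain socket pair, independent of any federator code: as long as the server is still computing and nothing is written, the closed connection goes unnoticed.

```python
# Sketch: a broken connection only surfaces when writing to the socket
# fails, never while the server is still busy producing the response.
import socket

server, client = socket.socketpair()
client.close()                        # the client hangs up immediately

disconnect_seen = False
try:
    # Stand-in for streaming completed <Network> chunks; the error is
    # raised by a write, possibly only after several chunks are queued.
    for _ in range(100):
        server.sendall(b"<Network code='CH'>...</Network>" * 32)
except OSError:                       # BrokenPipeError / ConnectionResetError
    disconnect_seen = True
server.close()
print(disconnect_seen)
```

This is why, for a single large network, all the per-endpoint work still happens before the first write can fail.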
Ah, so the disconnect can only be detected when data is written over the socket? And this happens only when a full network is completed? Is there no disconnection event that is fired when the client hangs up? Or maybe send out some ping messages once in a while.. or some extra spaces?
Ah so the disconnect can only be detected when data is written over the socket?
Right.
And this happens only when a full network is completed?
Yes, in order to serve valid StationXML.
Or maybe send out some ping messages once in a while.. or some extra spaces?
To be verified ...
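As for the "extra spaces" idea above: whitespace between XML elements is insignificant, so padding bytes could in principle serve as keepalives without breaking StationXML consumers. A hedged sketch under that assumption (the element names are simplified, not real StationXML):

```python
# Sketch: whitespace "pings" emitted between elements keep the socket
# active without corrupting the XML document. Simplified element names.
import xml.etree.ElementTree as ET

def stream_networks(networks, heartbeat="  "):
    yield "<FDSNStationXML>"
    for net in networks:
        yield heartbeat                  # keepalive; insignificant to parsers
        yield f'<Network code="{net}"/>'
    yield "</FDSNStationXML>"

doc = "".join(stream_networks(["CH", "GR"]))
root = ET.fromstring(doc)                # still well-formed XML
codes = [n.get("code") for n in root]
print(codes)
```

Whether every downstream client tolerates such padding would still need to be verified, as noted above.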
It's a pretty interesting problem.. are you still developing the bulk request implementation for StationXML? It would reduce the number of queued requests by a significant amount.
... or some extra spaces?
As soon as data is sent to the client the headers are gone. However, the real result might be HTTP 204 (even with explicit routing).
See also this discussion.
are you still developing the bulk request implementation for StationXML
It is basically implemented: https://github.com/EIDA/mediatorws/tree/feature/fdsn-station-bulk (for a previous version). @kaestli and I decided to give it a chance and get it running together with the most recent version. Coming soon ...
We actually have two solutions for the "number of requests" issue:
use granular GET requests with a loopback reverse proxy at the Apache level (resulting in granular requests, but with each granular request going to an endpoint only once within a defined time window, across all users and federator threads & instances)
You mean just a cache, right?
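The "only once within a defined time" behaviour can be sketched as a small TTL de-duplication cache keyed by the granular request URL. The class and method names below are invented for illustration; in the setup described above this logic would live in the reverse-proxy/cache layer, not in Python.

```python
# Hedged sketch: TTL-based request de-duplication (hypothetical names).
import time

class TTLDedup:
    def __init__(self, ttl):
        self.ttl = ttl
        self._seen = {}               # request key -> time of last forward

    def should_forward(self, key, now=None):
        now = time.monotonic() if now is None else now
        last = self._seen.get(key)
        if last is not None and now - last < self.ttl:
            return False              # forwarded recently; serve cached result
        self._seen[key] = now
        return True

dedup = TTLDedup(ttl=60.0)
url = "http://endpoint/fdsnws/station/1/query?net=CH&level=channel"
print(dedup.should_forward(url, now=0.0))    # True: first request
print(dedup.should_forward(url, now=10.0))   # False: within the 60 s window
print(dedup.should_forward(url, now=100.0))  # True: window expired
```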
Implemented with 2801d8e37a1b97c10c79b67530cb31391e9213a1.
The Federator should kill any pending requests to the endpoints when the client disconnects from the Federator. Right now it seems that the Federator attempts to finish all requests regardless of whether the client is still connected.