travelping / ergw

erGW - Erlang implementations of GGSN or P-GW

Draining contexts by criteria #253

Open vkatsuba opened 4 years ago

vkatsuba commented 4 years ago

Feature Request

| Name | Type | Description |
| --- | --- | --- |
| apn | ... | ... |
| imsi | ... | ... |
| mccmnc | ... | ... |
| gtp_version | ... | ... |
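
For illustration only, a hedged sketch of how such filter criteria might be expressed on the Erlang side; the map shape and the value encodings are assumptions, not an existing ergw type:

```erlang
%% Illustrative criteria term; the keys mirror the table above, the
%% value encodings (binaries, tuple, atom) are assumptions only.
Criteria = #{apn         => <<"internet.example">>,
             imsi        => <<"262010123456789">>,
             mccmnc      => {<<"262">>, <<"01">>},
             gtp_version => v1}.
```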
fholzhauser commented 4 years ago

I think we should include filters beyond the context record. A very useful one would be the UPF where the context is active (e.g. when upgrading a UPF).

RoadRunnr commented 4 years ago

Most of the context information is totally unsuitable for a draining selection. Only the version, APN and maybe the MCC/MNC of the IMSI would make sense.

There are also things missing that are not part of the context, e.g. UPF instance, IP pool, SGW and/or PGW, things from the ULI (RAI, TAU, ...)

Collecting this as an idea might make sense. An implementation should IMHO be delayed until the stateless work has made some progress. The move to an external storage for the session state will have a major impact on how sessions can be selected.

mgumz commented 4 years ago

apn, imsi(range), gtp-version, mccmnc √

use-case: to phase out one of n attached upfs one must be able to pick the sessions associated with it. yes, that information is not directly in the context datastructure. but it is available in the pfcp-ctx associated with the session.

@vkatsuba please clean the attributes that are not really relevant right now out of the table.

@RoadRunnr why not have a first implementation which traverses the sessions, finds a match in the context data for the attributes and kills it? it does not have to be the optimal algorithm in the first place … and yes, if the session state is stored externally there won't be a need to traverse all session contexts. but i see this as an optimisation step.
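
For illustration, a minimal sketch of the naive traverse-and-terminate approach described above, assuming a criteria map like the one sketched earlier; `list_contexts/0` and `delete_context/1` are hypothetical stand-ins, not ergw's actual registry or termination API:

```erlang
-module(drain_sketch).
-export([drain/1]).

%% Drain every context whose context data matches all given criteria.
%% Returns the number of contexts that were selected and terminated.
-spec drain(map()) -> non_neg_integer().
drain(Criteria) ->
    Matching = [Ctx || Ctx <- list_contexts(), matches(Criteria, Ctx)],
    lists:foreach(fun delete_context/1, Matching),
    length(Matching).

%% A context matches when every criterion key is present in the
%% context data with exactly the requested value.
matches(Criteria, Ctx) ->
    maps:fold(fun(K, V, Acc) ->
                      Acc andalso maps:get(K, Ctx, undefined) =:= V
              end, true, Criteria).

%% Hypothetical placeholders for ergw's real registry lookup and
%% context termination calls.
list_contexts() -> [].
delete_context(_Ctx) -> ok.
```

As noted above, this walks every session on each call, so it is only a starting point, not the optimal algorithm.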

vkatsuba commented 4 years ago

@mgumz, @RoadRunnr the table was updated.

RoadRunnr commented 4 years ago

> @RoadRunnr why not have a first implementation which traverses the sessions, finds a match in the context data for the attributes and kills it? it does not have to be the optimal algorithm in the first place … and yes, if the session state is stored externally there won't be a need to traverse all session contexts. but i see this as an optimisation step.

Because the data will be in the UDSF, the only way to traverse that data will be to load all of it into the process. I'm certain that will work well with > 100k sessions.

vkatsuba commented 4 years ago

> Because the data will be in the UDSF, the only way to traverse that data will be to load all of it into the process. I'm certain that will work well with > 100k sessions.

Not sure, but it looks like we do the same when we use /api/v1/contexts/count. Maybe we can build part of the logic around criteria based on the context record? E.g. apn and imsi are required fields, and based on that result we can try a second filtering pass if for some reason we do not have enough data to filter by the context Pids that were provided in the context record.
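
A rough sketch of that two-stage idea, under the assumption that the context record holds apn/imsi while other attributes (e.g. the serving UPF) have to be looked up per session; all helper names here are hypothetical:

```erlang
-module(drain_two_stage_sketch).
-export([drain/2]).

%% Stage 1: cheap pre-filter on fields the context record already holds
%% (e.g. apn, imsi). Stage 2: per-candidate lookup of data that lives
%% elsewhere (e.g. the UPF from the PFCP context) before terminating.
drain(CtxCriteria, SessionCriteria) ->
    Candidates = [C || C <- list_contexts(), matches(CtxCriteria, C)],
    [delete_context(C) || C <- Candidates,
                          matches(SessionCriteria, session_info(C))].

matches(Criteria, Data) ->
    maps:fold(fun(K, V, Acc) ->
                      Acc andalso maps:get(K, Data, undefined) =:= V
              end, true, Criteria).

%% Hypothetical placeholders for ergw's real lookup/termination calls.
list_contexts() -> [].
session_info(_Ctx) -> #{}.
delete_context(_Ctx) -> ok.
```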

RoadRunnr commented 4 years ago

> Because the data will be in the UDSF, the only way to traverse that data will be to load all of it into the process. I'm certain that will work well with > 100k sessions.

> Not sure, but it looks like we do the same when we use /api/v1/contexts/count. Maybe we can build part of the logic around criteria based on the context record? E.g. apn and imsi are required fields, and based on that result we can try a second filtering pass if for some reason we do not have enough data to filter by the context Pids that were provided in the context record.

With the move to UDSF storage, that function has to be rewritten, and I'm not sure that we can retain that functionality at all.

vkatsuba commented 4 years ago

@RoadRunnr, @mgumz, @fholzhauser how should we proceed with this ticket, and what should the main plan be?

fholzhauser commented 4 years ago

Indeed, doing this atomically is really difficult (local or UDSF). It's also not practical, as draining should normally be controlled/slow(ish) to avoid excessive signalling load. To iterate through the contexts "slowly" we'd need to stop the node from accepting new requests matching the draining conditions before we start the iteration (e.g. mark a UPF or PGW to be skipped in node selection if that's a condition). In that case I think the mechanism could also be implemented with UDSF. I also believe we might actually need this with UDSF as long as the connectivity (SGW/PGW/UPF/AAA) is bound to particular ERGW nodes.
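
A hedged sketch of such controlled/slow draining: terminate the selected contexts in small batches with a pause in between so the signalling load stays bounded. The batch size, interval and `delete_context/1` helper are assumptions, not existing ergw knobs:

```erlang
-module(drain_paced_sketch).
-export([drain_paced/3]).

%% Terminate the already-selected contexts in small batches, sleeping
%% between batches so the resulting signalling is spread out over time
%% instead of happening in one burst.
drain_paced([], _BatchSize, _IntervalMs) ->
    ok;
drain_paced(Contexts, BatchSize, IntervalMs) ->
    {Batch, Rest} = lists:split(min(BatchSize, length(Contexts)), Contexts),
    lists:foreach(fun delete_context/1, Batch),
    timer:sleep(IntervalMs),
    drain_paced(Rest, BatchSize, IntervalMs).

%% Hypothetical placeholder for ergw's real context-termination call.
delete_context(_Ctx) -> ok.
```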

vkatsuba commented 4 years ago

Based on the current discussion, it looks like implementing this task requires an implementation of UDSF storage first; only then can we pick up draining contexts by criteria.

fholzhauser commented 4 years ago

Actually, in my opinion the implementation could be started already as outlined above, and adapted to the UDSF solution later.

RoadRunnr commented 4 years ago

> Actually, in my opinion the implementation could be started already as outlined above, and adapted to the UDSF solution later.

I don't want this API stuff to get in the way of the stateless changes. So while you could start with it, it cannot be merged until the Nudsf and cluster solution is in place. It is highly likely that adapting the API code to those changes will be as much work as implementing it in the first place.

fholzhauser commented 4 years ago

That is a valid argument indeed.

vkatsuba commented 4 years ago

A ticket for UDSF storage has been created: https://github.com/travelping/ergw/issues/260. It looks like we need to come back to this discussion after the UDSF storage implementation.