kytos-ng / telemetry_int

Kytos Telemetry Napp
MIT License

stress test task: link flaps + enable/disable INTs on hundreds of EVCs #101

Closed viniarck closed 2 months ago

viniarck commented 4 months ago

This is a stress test task placeholder.

I'll use this issue to document the upcoming stress test results with the events being implemented on https://github.com/kytos-ng/telemetry_int/issues/90

viniarck commented 2 months ago

I've stress tested with the following cases:

I've also collected convergence metrics during a link_down for these cases:
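The issue doesn't state which tool produced these numbers, but metrics like "Avg Gbps / 30 secs" and "TCP retransmissions" are the kind of values an iperf3 JSON report exposes (`iperf3 -c <host> -t 30 -J`). A minimal, purely illustrative sketch of extracting them, using hypothetical sample data rather than the actual test output:

```python
import json

# Hypothetical iperf3-style JSON report (NOT the real test output);
# iperf3's -J flag emits this structure with sender-side retransmit counts.
sample = json.dumps({
    "end": {
        "sum_sent": {"bits_per_second": 3.72e9, "retransmits": 1672},
        "sum_received": {"bits_per_second": 3.70e9},
    }
})

report = json.loads(sample)
# Sender-side average throughput over the run, converted to Gbps.
avg_gbps = report["end"]["sum_sent"]["bits_per_second"] / 1e9
# Total TCP retransmissions seen by the sender.
retransmits = report["end"]["sum_sent"]["retransmits"]
print(f"avg: {avg_gbps:.2f} Gbps, retransmissions: {retransmits}")
```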

In summary, telemetry_int is performing comparably to mef_eline. It will always be slightly worse than mef_eline since it follows it, but it's essentially on par, especially for EVCs with failover. For 100+ static EVCs, convergence takes longer for both mef_eline and telemetry_int:

**case 1 -> 1 EVC with failover**

| | Initial Gbps | Avg Gbps / 30 secs | TCP retransmissions | Data plane reaction time (back at initial rate) |
| --- | --- | --- | --- | --- |
| telemetry_int | 4.78 | 4.66 | 2588 | < 1s |
| mef_eline | 4.78 | 4.66 | 2411 | < 1s |
| ratio (telemetry_int/mef_eline) | 1 | 1 | 1.073413521 | |
**case 2 -> 402 EVCs with failover**

| | Initial Gbps | Avg Gbps / 30 secs | TCP retransmissions | Data plane reaction time (back at initial rate) |
| --- | --- | --- | --- | --- |
| telemetry_int | 4.78 | 3.72 | 1672 | 5s |
| mef_eline | 4.78 | 3.74 | 1704 | 5s |
| ratio (telemetry_int/mef_eline) | 1 | 0.9946524064 | 0.9812206573 | |
**case 3 -> 1 static EVC**

| | Initial Gbps | Avg Gbps / 30 secs | TCP retransmissions | Data plane reaction time (back at initial rate) |
| --- | --- | --- | --- | --- |
| telemetry_int | 4.78 | 4.53 | 1823 | < 1s |
| mef_eline | 4.78 | 4.67 | 1941 | < 1s |
| ratio (telemetry_int/mef_eline) | 1 | 0.9700214133 | 0.9392065945 | |
**case 4 -> 102 static EVCs**

| | Initial Gbps | Avg Gbps / 30 secs | TCP retransmissions | Data plane reaction time (back at initial rate) |
| --- | --- | --- | --- | --- |
| telemetry_int | 4.78 | 2.65 | 7059 | 14 s |
| mef_eline | 4.78 | 2.66 | 5805 | 14 s |
| ratio (telemetry_int/mef_eline) | 1 | 0.9962406015 | 1.216020672 | |
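The ratio rows above follow directly from dividing the telemetry_int values by the mef_eline values. A quick sanity-check script (illustrative only, using the Avg Gbps and TCP retransmission numbers from the tables):

```python
# Per case: (telemetry_int, mef_eline) pairs of (Avg Gbps / 30 secs, TCP retransmissions),
# copied from the tables above.
cases = {
    "1 EVC with failover": ((4.66, 2588), (4.66, 2411)),
    "402 EVCs with failover": ((3.72, 1672), (3.74, 1704)),
    "1 static EVC": ((4.53, 1823), (4.67, 1941)),
    "102 static EVCs": ((2.65, 7059), (2.66, 5805)),
}

# Elementwise telemetry_int/mef_eline ratios, as in the "ratio" rows.
ratios = {
    name: tuple(t / m for t, m in zip(tel, mef))
    for name, (tel, mef) in cases.items()
}

for name, (gbps_ratio, retrans_ratio) in ratios.items():
    print(f"{name}: avg Gbps ratio {gbps_ratio:.4f}, "
          f"retransmissions ratio {retrans_ratio:.4f}")
```

Reproducing, for instance, the case 4 retransmission ratio of 7059/5805 ≈ 1.216.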