Hello Core Group,
I have created an offline schedule for IEEE 802.1Qbv TSN and I want to use your implementation to test my required topologies and evaluate the delay characteristics. I have a couple of questions:
In the TSN small network example, why do the hosts also have a gate control list? I believe only the switches are supposed to have a GCL.
The parameter 'sendInterval' is the periodicity of the application/flow from each node. However, can I randomize the starting time of the first frame of the flow? I want to introduce a random offset within a limit for the first frame of each flow. After that, the following packets will follow the periodicity of the flow.
Thanks and Regards, Ananya
Hello Ananya, thanks for your interest in our simulation model. I hope I can answer your questions below.
TSN mainly specifies switches; nevertheless, it can be useful to experiment with hosts that support GCLs in simulation. You can leave the GCL empty so it has no effect on the network, or use a standard Q host that does not have 802.1Qbv gate control.
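For example, something along the following lines in the ini file should do; the module path is only a sketch based on the configuration pattern of the gate control list, and it assumes an empty controlList is accepted, so adapt it to your network:
# leave the 802.1Qbv gate control list of the hosts empty so it has no effect
**.nodes[*].phy[*].shaper.gateControlList.controlList = ""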
All Traffic Source apps have a parameter to set the starting time, which is inherited from TrafficSourceAppBase:
double startTime @unit(s) = default(0s);
If you would like to randomize this value for all apps you can use OMNeT++ random numbers, such as uniform(lowerBound, upperBound)
(more info in the manual: https://doc.omnetpp.org/omnetpp/manual/#sec:sim-lib:random-variate-generation).
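As a sketch (the module path here is just a placeholder for your own network), you could write something like this in your omnetpp.ini:
# draw a random start offset in [0ms, 10ms] once per source app at initialization
**.nodes[*].app[*].startTime = uniform(0ms, 10ms)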
Best, Timo
Hello Timo,
Thank you for the clarification. I have another question. There is a parameter 'payload' in TrafficSourceAppBase. So I can provide the payload for a frame for each periodic flow via this parameter. However, can you let me know the header size of a frame used in the model? I am using the total size of a frame for my calculations and formulations. So I need to subtract the header size to provide the payload size for each frame to CoRE4INET.
Thanks and Regards, Ananya
Hello Ananya,
Yes, that is correct: you can set the size of the payload that is transmitted periodically via the payload parameter of each source app.
//Size of the payload of the message (size of encapsulated cPacket) that is being generated
volatile int payload @unit(Byte) = default(46Byte);
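As an illustration, a periodic source app could be configured like this in the ini file (module path and values are placeholders):
# 100 Byte payload sent every 1 ms by the first app on node 0
**.nodes[0].app[0].payload = 100Byte
**.nodes[0].app[0].sendInterval = 1ms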
Our applications directly transmit their data via Layer 2 Ethernet frames.
BGTrafficSourceApps send inet::EthernetIIFrame with the header size of a standard Ethernet frame.
All AVB and 802.1Q applications use the IEEE802.1Q VLAN header which adds 4 Byte to the standard Ethernet Header for PCP, DEI, VID, and an additional EtherType.
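To spell out the sizes: an Ethernet II frame adds 6 Byte destination MAC + 6 Byte source MAC + 2 Byte EtherType + 4 Byte FCS = 18 Byte around the payload, and the 802.1Q tag (2 Byte TPID + 2 Byte PCP/DEI/VID) adds another 4 Byte, so a Q frame carries payload + 22 Byte on Layer 2.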
If you would like to simulate higher-layer traffic, you can use the applications of the INET framework and configure their traffic class with our adapted network layer modules in src/core4inet/networklayer/inet
(check out the examples). In this case, you will have to include the upper-layer headers in your calculations as well.
Best, Timo
Hello,
I wish you all a happy new year. Is there some other delay that you take into account apart from the transmission delay and the propagation delay? Also, what propagation speed do you assume?
For my scenarios, I was getting delays greater than the expected values. Therefore, I created a simple scenario: only 2 nodes connected directly to each other, transmitting packets of a certain size periodically. The delay came out to be 0.032 us more than the sum of transmission delay and propagation delay, assuming a propagation speed of 2*10^8 m/s.
Thanks and regards, Ananya
Hello,
a happy new year to you as well. The packet delivery time depends on the channel model you use. In most of our examples, we use the standard EtherLink from the INET framework (Eth100M, Eth1G, etc.), which is based on DatarateChannel from OMNeT++. The propagation delay is set to delay = replaceUnit(length / 2e8, "s") and therefore depends on the length of the link that you set in your NED files. If you would like to know more about the channel models, please refer to the OMNeT++/INET documentation.
Apart from that, our nodes allow setting additional processing delays, which all default to 0, e.g. double hardware_delay @unit(s) = default(0us);
In a small test setup recreated from your description, I could verify that our modules do not introduce any additional delay; the delay depends solely on the channel. If your findings differ, please send me the network so I can check where the delay is introduced.
There might be an error in your calculation, though, as 0.032 us is exactly the time 4 additional bytes take on a gigabit link. Maybe you did not account for the additional 4 Byte header of Q-frames.
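For reference, on a gigabit link 4 additional Bytes take 4*8/1000 = 0.032 us, which is exactly the difference you measured.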
Best, Timo
Hello Timo,
I am sending 100 byte payload packets at a 1 ms interval over an Eth1G link of 20 metres. Considering the standard Ethernet header size of 18 bytes and the additional 4 byte header of Q-frames, the total frame size should be 100+18+4 = 122 bytes, right?
So the sum of transmission delay and propagation delay in microseconds = 122*8/1000 + 20/200 = 0.976 + 0.1 us = 1.076 us. However, the rxlatency statistic is 1.108 us (which is 0.032 us higher).
Thanks and Regards, Ananya
Hello Ananya,
I've built a network using your specifications and I think I found the mistake in the delay calculation you provided. In my simulations, I measured a transmission delay of 1.14 us using an IEEE8021QTrafficSourceApp and 1.108 us using a BGTrafficSourceApp. Thus, I assume you are using a BGTrafficSourceApp as well.
For the calculation, it is important to note that the transmission is handled on the Ethernet physical layer (Layer 1), which adds a 7 Byte preamble and a 1 Byte SFD in front of each frame. This needs to be taken into account when calculating the transmitted bytes: payload + Ethernet Layer 2 header + Ethernet Layer 1 overhead = 100 + 18 + 8 = 126. So 126 Bytes are transmitted using BE and 130 Bytes using Q. The rest of your calculation is correct, and I think this explains the missing bytes.
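In numbers: BE: (100 + 18 + 8) * 8 / 1000 = 1.008 us transmission delay plus 0.1 us propagation delay = 1.108 us; Q: (100 + 18 + 4 + 8) * 8 / 1000 = 1.04 us plus 0.1 us = 1.14 us, matching the two measured values above.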
Regards, Timo
Hello Timo, Thank you for the explanation. However, I am facing issues with the delay results. I would really appreciate it if you could answer the following queries.
Consider the scenario of transmitting a single flow using IEEE8021QTrafficSourceApp from node0 to node1 via switch0. node0 and node1 are connected to switch0 via 20 m Eth1G links. The flow has priority 7 and a 100 byte payload (so effectively 130 bytes), and the gate for queue7 of switch0 is opened for exactly the transmission delay of one frame (130*8/1000 = 1.04 us) at the beginning of each cycle (cycle duration 97.72 us). So, with the default value of startTime = 0s, if I send only 1 frame (I did so by setting nodes[0].app[0].sendInterval in nodes.ini and sim-time-limit in omnetpp.ini both to 1s), the rxlatency at node1 for that frame is 98.86 us = 97.72 (cycle length) + 1.04 (transmission delay) + 0.1 (propagation delay). This can be understood by considering that the packet missed the gate opening of the 1st cycle and hence was transmitted at the beginning of the next cycle.
1st question: since both node0 and node1 are connected to the switch by 20 m links, why is the propagation delay not counted twice in the rxlatency calculation? Also, shouldn't the transmission delay be counted twice (once when transmitted from node0 and once from the switch)?
2nd question: In no case should the rxlatency exceed 98.86 us (assuming the above delay calculation is correct). However, if I run for a longer simulation time with more frames of the flow (for example, sendInterval 20 ms), it goes up to as high as 110 us according to the statistics (histogram for rxlatency of queue7). What could be the reason? I have changed the scheduler tick length to 1 ns and the precission to 0.1 ns in the omnetpp.ini file. Keeping the scheduler tick at 80 ns (as in the TSN example), the delay came out even higher than 98.86 us in the scenario of sending 1 frame. Also, could the oscillator parameters max_drift and drift_change be causing this?
3rd question: Shouldn't the gate open duration per cycle be equal to the transmission delay of the flow? If I try a gate open duration much less than 1.04 us, the frame still goes through.
I am pasting parts of the configuration here for your reference:
Configuration in nodes.ini file:
**.nodes[*].phy[*].taggedVIDs = "1"
**.nodes[0].numApps = 1
**.nodes[1].numApps = 1
**.nodes[0].app[0].typename = "IEEE8021QTrafficSourceApp"
**.nodes[1].app[*].typename = "IEEE8021QTrafficSinkApp"
**.nodes[0].app[0].vid = 1
**.nodes[0].phy[*].mac.address = "0A-00-00-00-00-01"
**.nodes[1].phy[*].mac.address = "0A-00-00-00-00-02"
**.nodes[0].app[0].destAddress = "0A-00-00-00-00-02"
**.nodes[0].app[0].payload = 100Byte
**.nodes[0].app[0].sendInterval = 20ms
**.nodes[0].app[0].priority = 7
**.nodes[1].app[0].srcAddress = "0A-00-00-00-00-01"
**.nodes[1].bgIn.destination_gates = "app[0].in"
Configuration in switches.ini file:
*.switches[0].phy[*].taggedVIDs = "1"
*.switches[0].phy[*].shaper.gateControlList.controlList = "C,C,C,C,C,C,C,o:0;C,C,C,C,C,C,C,C:0.00000104"
connections in the network.ned file:
connections:
    for i=0..1 {
        nodes[i].ethg <--> Eth1G { length = 20m; } <--> switches[0].ethg++;
    }
Configuration in omnetpp.ini:
**.scheduler.tick = 1ns
**.scheduler.numPeriods = 1
**.scheduler.period[0].cycle_ticks = sec_to_tick(97.72us)
**.scheduler.oscillator.max_drift = 200ppm
**.scheduler.oscillator.drift_change = uniform(-50ppm,50ppm)
**.precission = 0.1ns
**.gateControlList.period = "period[0]"
Thanks and Regards, Ananya
Hello Timo, I would like to report some findings related to my previous questions. The sendInterval of the flow has been taken as 20 ms.
If I simulate <= 48 packets (sim-time-limit <= 960 ms), there are no packets exceeding 100 us rxlatency. However, as the number of packets increases, the number of outliers keeps increasing. With 100, 500, 1000, 1500, and 2000 packets, the number of outliers is 19, 87, 87, 166, and 174 respectively. Considered as percentages, these outlier counts are significant.
I changed the oscillator parameters max_drift and drift_change to very low values, but this has no impact on the number of outliers.
Another question: how long should a gate be open to transmit a frame? Just the transmission delay, or less, or more? If I increase the gate open duration beyond 1.04 us, the number of outliers decreases. But that should not be my approach, since I want to keep the gate open times to a minimum. Also, since I am keeping the cycle duration much smaller than the flow period of 20 ms, any frame should be transmitted by at most the next cycle, therefore within 97.72 + 1.04 + 0.1 = 98.86 us, or, if you count transmission delay and propagation delay twice, within 97.72 + 2*1.04 + 2*0.1 = 100 us.
Thanks and regards, Ananya
Hello Ananya,
Transmission delay via multiple hops
To calculate the latency of a packet you need to take into account the processing delay, the transmission delay, and the propagation delay. The processing delay of the switches' forwarding defaults to 8 us. You can set this delay in the ini file like this: **.switch1.hardware_delay = 0s
The expected latency from node0 via switch0 to node1 is 10.28 us with the processing delay enabled and 2.28 us without it. Sorry if I confused you by saying our hosts add no additional delay by default; our switches do add an 8 us delay by default.
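Broken down for your 130 Byte Q frame (store-and-forward, so transmission and propagation count once per hop): 1.04 us transmission at node0 + 0.1 us propagation to switch0 + 8 us processing delay in switch0 + 1.04 us transmission at switch0 + 0.1 us propagation to node1 = 10.28 us, or 2.28 us with the processing delay set to 0.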
Max latency in your setup
The first question also impacts the second one, as your max latency increases to 107.28 us, which should never be exceeded.
Frame preemption
If a frame is allowed to be transmitted because the gate is open, the frame transmission is completed even if the gate closes during transmission, as we currently have no frame preemption (802.1Qbu) implemented.
Probably the problem with your calculations is the processing delay of the switches. If you disable it the results should match your expectations.
Regards, Timo
Hello Ananya,
to add to my last comment: I forgot to address the clock jitter. The clock jitter should have no impact on the maximum transmission delay. On the other hand, it will influence how precisely the application can hit the time slot in the gate controls. We do not yet have an 802.1Q source app that is time-synchronized to the period. If you would like to have a time-synchronized Q source, consider implementing it yourself. You could use the AVBTrafficSourceApp or TTTrafficSourceApp as examples of time-synchronized source apps. If you decide to implement it, feel free to create a pull request.
Best regard, Timo
Hello Timo, Thank you for the clarification. I will reiterate your last comment to ensure that I understood you correctly. Do you mean that the Q-source apps are not synchronized to the cycle/period of the gate control list? In my scenarios, the hosts are not TSN compliant so that is not a problem.
I have another question. As I understand it, in your implementation the priority of a flow determines which queue of any switch the flow goes through. However, for the creation of my offline schedule, taking multiple switches into account, the queue assignment for a flow is not fixed by the flow's priority. Hence a flow can be assigned to queue7 of switch 1 and then queue6 of switch 2, and so on. Is this possible with your current implementation? Otherwise, could you point me to the portion of the code where I can change this?
This is in regard to your clarification on point 3. Suppose queue7 is opened from time 0 to time 1.04 and queue6 is opened from time 1.04 to 1.76. If the flow assigned to queue7 arrives at, for example, time 1 and its transmission delay is 1.04, then, as you say, it will still be transmitted even though the gate closes at 1.04. In that case, it will be transmitting until 1 + 1.04 = 2.04. So the flow assigned to queue6 will not get transmitted at all, since it was scheduled to be transmitted from 1.04 to 1.76, right?
Thanks and regards, Ananya
Hello Ananya, regarding the first part, your understanding is correct.
I do not quite understand what you actually want to do with this concept. Do you want to change the PCP encoded in the frame at a boundary port of your network? This would require a modification of the in-control modules of 802.1Q/Qci (src/core4inet/linklayer/inControl/IEEE8021Q). Or do you want to configure a custom queueing method, where on certain devices the PCPs are interpreted in a different manner? This would require a custom queueing module (src/core4inet/linklayer/shaper/IEEE8021Qbv/queueing). The first option is, in my opinion, the more realistic approach to actually modify the frame priority instead of modifying how the queueing is handled in the switch.
Regarding the schedule calculation: that is exactly the point I was trying to make. It is common practice to introduce guard bands/red phases (all gates closed) of the size of a full Ethernet frame to ensure that the line is free when the high-priority scheduled message arrives. Another possibility is frame preemption, which, as I mentioned in my earlier comment, is currently not implemented in our simulation model. Usually, you would use time-synchronized hosts to ensure that the sending nodes hit their time slots precisely in such a tight schedule.
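As a purely illustrative sketch in the controlList syntax you are already using (8 gate states for queues 0..7, followed by the start time of the phase within the 97.72 us cycle; the durations are examples, not a recommendation):
# queue 7 opens at the start of the cycle for the scheduled frame
# the remaining queues may transmit afterwards
# from 96.4 us until the end of the cycle all gates are closed (guard band / red phase)
**.switches[0].phy[*].shaper.gateControlList.controlList = "C,C,C,C,C,C,C,o:0;o,o,o,o,o,o,o,C:0.00000104;C,C,C,C,C,C,C,C:0.0000964"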
Best, Timo
Hello Timo,
Thank you for the clarification. Regarding the point of changing the queue assignment at another switch: I am trying to create an offline schedule based on some formulations. I was trying to make the queue assignment/priority per switch an output of the problem, but now I see that once a flow is assigned to a queue, it needs to stay the same throughout. I have another observation. Suppose 8 flows with transmission delays of 0.32, 0.32, 0.272, 0.272, 0.272, 0.272, 0.256, and 0.256 us respectively and an end-to-end delay requirement of 400 us each are assigned to one queue at a switch port. The cycle duration is 98.76 us and the flow periods are very large (500 ms). The gate for the queue is opened for exactly 0.32 * 2 = 0.64 us (a duration equal to the two largest transmission delays). A guard band of 0.32 us is also considered after the gate is closed. Logically, all the flows should then be received within 4 cycles. But this is not the result: the flows are transmitted over 8 cycles. However, when the gate open duration is increased slightly, for example to 0.74 us, all the flows are out within 4 cycles. Do you see any reason for this behaviour? Can't two frames be transmitted back to back, or does there need to be a gap?
Configuration in omnetpp.ini:
**.scheduler.tick = 1ns
**.scheduler.numPeriods = 1
**.scheduler.period[0].cycle_ticks = sec_to_tick(98.76us)
**.scheduler.oscillator.max_drift = 200ppm
**.scheduler.oscillator.drift_change = uniform(-50ppm,50ppm)
**.precission = 1ns
Using the 6th queue in switch.ini: **.switches[0].phy[8].shaper.gateControlList.controlList = "C,C,C,C,C,C,o,C:0;C,C,C,C,C,C,C,C:0.00000064"
Thanks and regards, Ananya
Hello Timo, in addition to the previous query, I observe that the flows go through in 4 cycles even if I increase the gate open duration only to 0.68 us from 0.64 us. Is this because of the gate-control timing effects you mentioned earlier? In that case, should I change the scheduler tick, precission, max_drift and drift_change values? If so, to what granularity?
Thanks and regards, Ananya
Hello Ananya,
for the clock drift, you can check out our example simulations. Most of the time we use a precision of 500 ns, which is enough if your schedules are in the range of multiple microseconds.
Yes, there is a gap between Ethernet frames called the interpacket gap: https://en.wikipedia.org/wiki/Interpacket_gap
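At 1 Gbit/s this 96 bit (12 Byte) gap takes 12*8/1000 = 0.096 us, so two back-to-back frames of 0.32 us each occupy roughly 0.32 + 0.096 + 0.32 = 0.736 us of line time, noticeably more than the 0.64 us window you reserved.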
I suggest you try to find the reason why the packets are stuck in the queue using the simulation GUI, doing step-by-step simulation. Start your network, move into the switch and into the phy[] module that you expect to queue the packets, then move to the shaper module. Check which states the gates are in when the packets arrive and how the frames are selected. All the calls should be animated, and it should become very clear why a packet cannot be transmitted.
As those are very specialized research questions I hope this helps you in finding out what's wrong with your calculation.
Best regards, Timo
Dear Timo,
Thank you for explaining where to look. I think I understand the issue now. I believe there is a lower limit of 42 bytes on the payload size of a frame. A frame with a 42 byte payload gives a latency of 0.676 us (transmission delay 0.576 us = 72*8/1000, propagation delay 0.1 us). If I use a payload smaller than 42 bytes, the delay stays at 0.676 us as well. Is there a way to use a smaller payload size? The default value of the payload in TrafficSourceAppBase.ned was 46Byte; I have changed it to 1 byte but the issue persists. Kindly suggest.
Thanks and Regards, Ananya
Dear Ananya,
this lower limit on the frame size is enforced by the Ethernet protocol, see https://en.wikipedia.org/wiki/Ethernet_frame. If the payload is smaller than 46 Bytes (BE) / 42 Bytes (Q), it will be padded so that the whole frame is no smaller than 64 Bytes on the line.
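That also matches your measurement: a padded minimum-size frame is 64 Byte plus the 7 Byte preamble and 1 Byte SFD, i.e. 72 Byte on the wire, so 72*8/1000 = 0.576 us transmission delay plus 0.1 us propagation delay gives exactly the 0.676 us floor you observed.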
Regards, Timo