Closed by mounika-alavala 1 year ago
Also, using iperf3, I am able to run traffic from the UE to the AGW (uplink) and see the trace on the srsenb console, but I am not able to run traffic from the AGW to the UE.
Any idea why @ghislainbourgeois ?
Great news! Regarding iperf, do you happen to have a firewall running on the srsran machine? On Ubuntu, this would by default be UFW:
sudo ufw status verbose
No, no firewall is running! Since this is an OpenStack VM, the security group acts as the firewall, but it is set to allow all TCP/UDP traffic.
Looking back at the iperf output, it seems like it successfully connects but is not able to pass data through that connection. Does the behavior change if, at the same time, you run a ping from the UE to the AGW?
It would be interesting to see the network capture on the UE in this particular case.
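A capture like the following could be taken on the UE machine while reproducing the issue (a sketch only: `tun_srsue` is the default srsUE tunnel interface name and `ue.pcap` is an arbitrary output file; the command is printed here rather than executed, since it needs root and a live UE interface):

```shell
# Dry run: print the suggested capture command instead of running it.
# Filters on the iperf3 control/data port (5201) plus ICMP for the ping test.
echo "sudo tcpdump -i tun_srsue -w ue.pcap port 5201 or icmp"
```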
Did the following:
Hi, attached is the tcpdump of the UE: ue.log
I am seeing some weird traffic on port 5201 that looks like JSON, and we can also see that the connection is established properly. I was not able to find the cause, but I would suspect something with OVS.
I will be unavailable until next week, but will continue looking into this then unless you find the issue.
I was able to reproduce the same behavior you get with iperf on my deployment. It works properly when going UE -> AGW, but it does not work properly when going AGW -> UE. It turns out that when iperf is running as a client on the AGW, it detects a large MTU because of the network setup. Forcing the MSS on the command works:
sudo iperf3 -c 192.168.128.13 -b 10M -i 1 -t 60 -M 1360
With this, I assume we can close this ticket, can you confirm?
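For reference, the 1360 in `-M 1360` matches the usual TCP header arithmetic (a sketch; the 1400-byte effective MTU after GTP-U encapsulation is an assumption about this kind of setup, not a measured value):

```shell
# MSS = effective MTU minus the IPv4 and TCP headers.
TUNNEL_MTU=1400   # assumed effective user-plane MTU after GTP-U overhead
IP_HEADER=20      # IPv4 header, no options
TCP_HEADER=20     # TCP header, no options
MSS=$((TUNNEL_MTU - IP_HEADER - TCP_HEADER))
echo "MSS=$MSS"   # prints MSS=1360
```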
Hi, thanks for all the help! We are able to send traffic using iperf after forcing the MSS.
As a next step, we are planning to test QoS through the orchestrator APIs. As part of that, we plan to use a client as the UE and send traffic via the srsRAN eNodeB to eth1 of the AGW; from there it should reach a server hosted on another VM via eth0 of the AGW. Any idea how to implement this on the existing setup?
Hi, I have tested a similar setup on my side, and you will have to create static routes on the machine running the UE for that to work. You also might have to remove alternative routes to the target VM. I would suggest you start by ensuring that you have console access to the UE machine, because it will be easy to lock yourself out.
Let's say the target VM has an IP of 10.20.30.5, you would first get the original route:
ip route get 10.20.30.5
If there is more than the default route (especially a directly connected route), you will need to remove those routes.
Then, you have to add this route:
ip route add 10.20.30.5 via 192.168.128.1
This would then force traffic for that destination through the AGW. Note, however, that you will only be able to add that route while the UE is running.
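The route changes above can be sketched as a dry run (the addresses come from the example; the `run` helper prints the commands instead of executing them, since the real ones need root and the live UE setup):

```shell
TARGET=10.20.30.5     # example target VM from above
UE_GW=192.168.128.1   # AGW-side gateway as seen from the UE

run() { echo "+ $*"; }   # dry-run helper: print the command instead of executing it

run ip route get "$TARGET"                 # check which route currently applies
# If a directly connected route covers the target, remove it first, e.g.:
# run ip route del 10.20.30.0/24 dev eth0  # hypothetical subnet/interface
run ip route add "$TARGET" via "$UE_GW"    # force the target through the AGW
```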
If you need more specific help, please share the complete routing table of the machine running the UE and the target VM's IP and I will be able to give you the specific commands.
Hi, in our srsRAN VM both the eNodeB and the UE are running. We created another VM that runs the client. This client VM has no knowledge of the 192.168.128.0/24 subnet. When we removed the default route in the client VM we were unable to SSH, which is expected behavior; we then tried adding a route via 192.168.128.1, but it is not getting added. Our doubt is: in our case the client VM would be the UE, so how will the eNodeB recognize this client VM and attach it as a UE, so that it passes the traffic to the eNodeB? In other words, how do we separate the eNodeB and the UE onto two different machines?
The target VM would be the server. Target VM's internal IP --> 192.168.30.235; target VM's external IP --> 10.250.108.63
Routing table in client VM
ip route get
Error --> ip route add
I see that this is the reverse of what I had in mind. Usually, the UE would be the client and try to connect to the wider internet (your other VM in this case). The commands I gave you are meant to be run on the SRS machine.
As this is now out of the original scope for this ticket, I will close it. However, we will be glad to help you on our public Mattermost channel: https://chat.charmhub.io/charmhub/channels/telco. It will be a better forum for this kind of help and troubleshooting. Thank you!
Hi, we have installed the charmed Magma orchestrator and AGW services on OpenStack VMs with Ubuntu 20.04, running behind proxies.
Orchestrator: microk8s version = 1.23
AGW: version = 1.6.1
We are able to access the NMS UI and all the AGW services are in the active state. However, there are error logs in the AGW services; attached are screenshots of the same.
Even though the orchestrator services come to the "Active" and "Idle" states after installation, they tend to go to the "maintenance" state after a day or so and remain there. Even though the same proxy values are used, it is not always the same services that go to the "maintenance" state.
The orchestrator endpoints are accessible from the AGW; we used telnet to check this.
As part of the debugging section of the documentation, we ran a few Python scripts to confirm that every prerequisite is satisfied. When we executed the checkin_cli.py script, we found that the gateway certificate and gateway key are missing. Restarting the Magma services didn't help regenerate the certificate and key.
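A quick way to confirm which files are missing (a sketch; /var/opt/magma/certs is assumed to be the AGW certificate location on this install, adjust the paths if yours differs):

```shell
# Report whether the gateway certificate and key exist on disk.
# The paths below are an assumption about the default AGW layout.
for f in /var/opt/magma/certs/gateway.crt /var/opt/magma/certs/gateway.key; do
    if [ -f "$f" ]; then
        echo "present: $f"
    else
        echo "missing: $f"
    fi
done
```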
We tried checking in the AGW to the orchestrator with the correct hardware details, but it is not checking in and the status in the NMS UI is "Bad".
Any help will be appreciated. Thanks in advance.