Closed eulenleber closed 1 year ago
Hi, srsUE is not yet compatible with the new gNB, we are working on a fix for this and should have something soon. We will also update our docs with a guide on using the srsRAN 4G UE and srsRAN Project gNB when the changes have been made.
Thanks for the feedback. Shouldn't at least rx_ascii_art_dft show something, though?
After checking again with a COTS UE, it does not find any network either.
It's not clear from my initial post, but the gNB console output continuously shows lates.
Is that related to the issue?
I'm having the same issue here on band 41; I can't pick up any signal when I run gnb.
@eulenleber, you should first focus on getting the gNB running without lates and underflows. Take a look here in the documentation for tips on fixing this. We will have an app note soon on connecting a COTS UE to the gNB.
@gustavobsch, we recommend that you run the gNB with similar configurations to those given in the example configuration files. You can find them here. You should try to get a stable connection using either of these, then feel free to try customizing and changing the config.
Thanks for the hint, but even the example config yields the same results.
The linked documentation is not really helpful for me, as I do not understand what may cause lates and underflows (these specific problems are not mentioned there). I can, however, run the prototypical gNB with srsRAN 4G (at least I see something on the spectrum analyzer), but I cannot connect with srsUE or a COTS UE.
So maybe you can give me a stronger push in the right direction?
@eulenleber lates, underflows, and overflows occur due to data-buffering and processing issues, most likely a lack of processing power or a misconfiguration.
By setting the CPU to performance mode, reducing the level of logging, and adjusting the amount of processing the gNB has to do, you can reduce lates and underflows. By adjusting the amount of processing I mean reducing the overall bandwidth and throughput of the gNB: a setup that uses 50 PRBs is more computationally heavy than one that uses 25 PRBs. Lowering the log levels means your PC is not "wasting" resources writing logs when they're not needed and can devote more resources to actually running the gNB.
Excessive lates and underflows mean that a UE will not receive messages from the gNB on time and, as a result, will not connect.
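For reference, both of those knobs live in the gNB's YAML config. A minimal sketch, assuming the key names used by the srsRAN Project example configs of this era (double-check against the files shipped with your version):

```yaml
# Sketch only -- key names are assumptions based on the bundled example configs.
log:
  filename: /tmp/gnb.log
  all_level: warning          # lower verbosity frees CPU time for the signal path
cell_cfg:
  channel_bandwidth_MHz: 10   # smaller bandwidth -> fewer PRBs -> less processing
```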
Ah, I see. We have an Intel NUC with an i5-6260U. Am I correct to assume that this is way too weak to provide gNB functionality?
We expect to have some documentation soon about running the gNB on less powerful machines. For now, I suggest you try some of the above fixes.
@eulenleber I think the problem here is your host PC. As Brendan mentioned, underflows could be occurring because the host computer is too slow to process the data or because the CPU governor or other forms of power control are incorrectly set up. For further debugging, can you please send the output of lscpu?
Although I highly doubt it will fix the overflow problem, have you attempted connecting the B210 to a USB 3.0 port to see if it fixes the late issue?
Yes. As I said, the problem may indeed be the CPU (i5-6260U):
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i5-6260U CPU @ 1.80GHz
CPU family: 6
Model: 78
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 3
CPU max MHz: 2900.0000
CPU min MHz: 400.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Virtualization features:
Virtualization: VT-x
Caches (sum of all):
L1d: 64 KiB (2 instances)
L1i: 64 KiB (2 instances)
L2: 512 KiB (2 instances)
L3: 4 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerabilities:
Itlb multihit: KVM: Mitigation: VMX disabled
L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Meltdown: Mitigation; PTI
Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Retbleed: Mitigation; IBRS
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Srbds: Mitigation; Microcode
Tsx async abort: Not affected
Yes, as I said, the B210 is already connected via USB 3.
thanks for your support though :+1:
I already tried those fixes without success: the PRBs are calculated automatically, the channel bandwidth is 10 MHz, logs are disabled, and the CPU is in performance mode.
Intel(R) Core(TM) i5-6260U CPU @ 1.80GHz
Yeah. Just from looking at the specs, running it might be difficult. You can monitor the system performance in the background, which gives you a rough idea of how it handles the workload.
Could you reduce the bandwidth and adjust the sample rate accordingly? I'm just curious to see how much you can reduce this underflow/late problem.
How would I achieve that? This is what I just tested, unsuccessfully:
cell_cfg:
channel_bandwidth_MHz: 5 # Bandwidth in MHz. Number of PRBs will be automatically derived.
common_scs: 15
Are there other knobs to turn?
A different CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-7567U CPU @ 3.50GHz
CPU family: 6
Model: 142
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 9
CPU max MHz: 4000,0000
CPU min MHz: 400,0000
BogoMIPS: 6999.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Virtualization features:
Virtualization: VT-x
Caches (sum of all):
L1d: 64 KiB (2 instances)
L1i: 64 KiB (2 instances)
L2: 512 KiB (2 instances)
L3: 4 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerabilities:
Itlb multihit: KVM: Mitigation: VMX disabled
L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Meltdown: Mitigation; PTI
Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Retbleed: Mitigation; IBRS
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Srbds: Mitigation; Microcode
Tsx async abort: Not affected
Values seem to be better, but still not good:
Late: 2; Underflow: 2; Overflow: 0;
Late: 14; Underflow: 24; Overflow: 0;
Late: 0; Underflow: 1; Overflow: 0;
Late: 9; Underflow: 23; Overflow: 0;
Late: 0; Underflow: 1; Overflow: 0;
Late: 1; Underflow: 2; Overflow: 0;
Late: 9; Underflow: 23; Overflow: 0;
Late: 0; Underflow: 1; Overflow: 0;
Late: 6; Underflow: 13; Overflow: 0;
Late: 11; Underflow: 4; Overflow: 0;
Late: 11; Underflow: 20; Overflow: 0;
Late: 3; Underflow: 2; Overflow: 0;
Late: 11; Underflow: 26; Overflow: 0;
Late: 0; Underflow: 1; Overflow: 0;
Late: 14; Underflow: 16; Overflow: 0;
Late: 0; Underflow: 1; Overflow: 0;
So the question is: what are appropriate CPU specs for running the gNB?
Sorry to interject, but I'm also seeing underflows, and I have a pretty decent computer with an Intel(R) Core(TM) i9-10900 CPU @ 2.80GHz.
# gnb -c /etc/srsran/gnb.yaml
Available radio types: uhd.
--== srsRAN gNB (commit 0523be699) ==--
[INFO] [UHD] linux; GNU C++ version 9.2.1 20200304; Boost_107100; UHD_3.15.0.0-2build5
[INFO] [LOGGING] Fastpath logging disabled at runtime.
Making USRP object with args 'type=b200'
Cell pci=1, bw=10 MHz, dl_arfcn=499200 (n41), dl_freq=2496.0 MHz, dl_ssb_arfcn=499230, ul_freq=2496.0 MHz
==== gNodeB started ===
Type <t> to view trace
Late: 0; Underflow: 4; Overflow: 0;
Late: 0; Underflow: 2; Overflow: 0;
Late: 0; Underflow: 5; Overflow: 0;
Late: 0; Underflow: 6; Overflow: 0;
Late: 0; Underflow: 2; Overflow: 0;
Late: 0; Underflow: 6; Overflow: 0;
Late: 0; Underflow: 3; Overflow: 0;
# top
top - 10:27:59 up 25 days, 53 min, 1 user, load average: 3.15, 2.77, 3.21
Tasks: 1053 total, 1 running, 905 sleeping, 0 stopped, 147 zombie
%Cpu(s): 7.8 us, 3.5 sy, 0.0 ni, 87.7 id, 0.3 wa, 0.0 hi, 0.7 si, 0.0 st
MiB Mem : 96458.8 total, 1550.6 free, 68073.5 used, 26834.6 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 27189.6 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3895415 root 20 0 1849916 869552 26464 S 30.2 0.9 0:07.74 gnb <<<<<<<<<<<<<<<< using just 30% of one core
Using default config
# gnb -c /usr/local/share/srsran/gnb_rf_b200_tdd_n78_10mhz.yml
Available radio types: uhd.
--== srsRAN gNB (commit 0523be699) ==--
[INFO] [UHD] linux; GNU C++ version 9.2.1 20200304; Boost_107100; UHD_3.15.0.0-2build5
[INFO] [LOGGING] Fastpath logging disabled at runtime.
Making USRP object with args 'type=b200'
Cell pci=1, bw=10 MHz, dl_arfcn=632628 (n78), dl_freq=3489.42 MHz, dl_ssb_arfcn=632640, ul_freq=3489.42 MHz
==== gNodeB started ===
Type <t> to view trace
Late: 1; Underflow: 1; Overflow: 0;
Late: 0; Underflow: 3; Overflow: 0;
Late: 1; Underflow: 4; Overflow: 0;
Late: 0; Underflow: 2; Overflow: 0;
Late: 0; Underflow: 4; Overflow: 0;
Late: 0; Underflow: 1; Overflow: 0;
For me it's approximately 130% and 170% respectively, just FYI.
Hi @eulenleber , @gustavobsch ,
Thanks for your help debugging the issue. Could you try running this command and paste the output here, please?
sudo /usr/lib/uhd/examples/benchmark_rate --args type=b200 --rx_rate 23.04e6 --tx_rate 23.04e6
[INFO] [UHD] linux; GNU C++ version 9.2.1 20200304; Boost_107100; UHD_3.15.0.0-2build5
[00:00:00.000001] Creating the usrp device with: type=b200...
[INFO] [B200] Detected Device: B210
[INFO] [B200] Operating over USB 3.
[INFO] [B200] Initialize CODEC control...
[INFO] [B200] Initialize Radio control...
[INFO] [B200] Performing register loopback test...
[INFO] [B200] Register loopback test passed
[INFO] [B200] Performing register loopback test...
[INFO] [B200] Register loopback test passed
[INFO] [B200] Setting master clock rate selection to 'automatic'.
[INFO] [B200] Asking for clock rate 16.000000 MHz...
[INFO] [B200] Actually got clock rate 16.000000 MHz.
Using Device: Single USRP:
Device: B-Series Device
Mboard 0: B210
RX Channel: 0
RX DSP: 0
RX Dboard: A
RX Subdev: FE-RX2
RX Channel: 1
RX DSP: 1
RX Dboard: A
RX Subdev: FE-RX1
TX Channel: 0
TX DSP: 0
TX Dboard: A
TX Subdev: FE-TX2
TX Channel: 1
TX DSP: 1
TX Dboard: A
TX Subdev: FE-TX1
[00:00:01.006084] Setting device timestamp to 0...
[INFO] [B200] Asking for clock rate 23.040000 MHz...
[INFO] [B200] Actually got clock rate 23.040000 MHz.
[INFO] [B200] Asking for clock rate 23.040000 MHz...
[INFO] [B200] OK
[INFO] [B200] Asking for clock rate 23.040000 MHz...
[INFO] [B200] OK
[INFO] [B200] Asking for clock rate 23.040000 MHz...
[INFO] [B200] OK
[00:00:01.464820] Testing receive rate 23.040000 Msps on 1 channels
[INFO] [B200] Asking for clock rate 23.040000 MHz...
[INFO] [B200] OK
[INFO] [B200] Asking for clock rate 23.040000 MHz...
[INFO] [B200] OK
[INFO] [B200] Asking for clock rate 23.040000 MHz...
[INFO] [B200] OK
[INFO] [B200] Asking for clock rate 23.040000 MHz...
[INFO] [B200] OK
[00:00:01.489160] Testing transmit rate 23.040000 Msps on 1 channels
[00:00:01.589959] Tx timeouts: 1
[00:00:11.742031] Benchmark complete.
Benchmark rate summary:
Num received samples: 235554807
Num dropped samples: 0
Num overruns detected: 0
Num transmitted samples: 230462880
Num sequence errors (Tx): 0
Num sequence errors (Rx): 0
Num underruns detected: 0
Num late commands: 0
Num timeouts (Tx): 2
Num timeouts (Rx): 0
Done!
This runs a benchmark test between the CPU and the USRP. Note that this tool might be located elsewhere, depending on how you installed UHD.
After that, could you run the gnb binary again and send us the log, please?
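If the path above doesn't exist on your system, a quick probe of the usual install locations might help; the alternative paths below are guesses, not srsRAN or UHD documentation:

```shell
# Probe a few common UHD example directories for benchmark_rate.
# Only /usr/lib/uhd/examples appears in this thread; the others are guesses.
for p in /usr/lib/uhd/examples /usr/local/lib/uhd/examples /usr/share/uhd/examples; do
    if [ -x "$p/benchmark_rate" ]; then
        echo "found: $p/benchmark_rate"
    fi
done
```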
@eulenleber To change the cell bandwidth you need to adjust srate and channel_bandwidth_MHz.
However, there are other limits. For instance, the n78 band has a minimum 10 MHz bandwidth (and even then I could only get my phone to connect when using 20 MHz bandwidth). Also, you tried setting the SCS to 15 kHz, but srsRAN_Project only supports 30 kHz for n78.
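In config terms the two values have to move together. A sketch, with the caveat that the section and key names here are assumptions based on the example files of that period, so verify them against your own copy:

```yaml
# Sketch only -- verify key names against your srsRAN Project example configs.
cell_cfg:
  channel_bandwidth_MHz: 20   # n78: 10 MHz is the minimum; 20 MHz worked with my phone
  common_scs: 30              # srsRAN Project supports only 30 kHz SCS on n78
rf_driver:
  srate_MHz: 23.04            # sample rate chosen to cover the 20 MHz cell
```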
And about your earlier comment on not seeing the cell on the ASCII DFT: 5G cells are much more power efficient and really only TX when they have something to TX (as opposed to LTE, for instance, which always transmits pilot signals). That means they are "harder" to see when inactive. You'd need to use something like fosphor to make sure you don't miss them.
Error connecting to 127.0.1.100:38412
also means there is no connection to Open5GS, which in my experience also prevents the network from showing up.
@smunaut you are right, but the default values already seem to be CPU-friendly. I wouldn't know how to reduce the bandwidth even further in order to get better CPU results.
I already assumed that; thanks for the clarification.
We have to distinguish between the default config, which I only use to debug the Late/Underflow issue, and the non-default config, which uses the working Open5GS core. I just figured it doesn't make sense to go further while the Late/Underflow problem persists.
@eulenleber one quick question: are you using virtualbox or any kind of virtualization or is this bare metal?
And which operating system are you using?
@ismagom that's bare metal, running Ubuntu 22.04.1.
You mentioned you have a working setup where it successfully connects to Open5GS. Could you send us the console stdout, logs, and config file for that setup, please?
@ismagom
That is just on the weaker i5-6260U:
And on the powerful one?
Hi again @eulenleber. We've been testing your same commit on a variety of CPUs, using a B210 and UHD versions 3.15 and 4.1. Unfortunately we haven't been able to reproduce your issue. To give some explanation of what is happening here, all these log messages:
Radio realtime event:...
Unavailable data to transmit for sector...
Received UL_TTI.request message out of time...
are all real-time processing errors at different stages of the pipeline. Essentially, the CPU is late to produce a given result. On a small CPU you can expect some of these messages, especially if you are running a GUI, have Wi-Fi enabled, have a NIC doing internet traffic, a GPU, etc. All these devices interrupt the CPU and can cause this sort of issue. However, in your case these errors appear continuously from the start, even with no users in the system, which is not typical.
We have tested the following CPUs: i7-8550U, i7-7700, i7-8559U and i7-6770HQ. We don't have anything as small as your i5-6260U, which is a dual-core. On all four of these i7 CPUs we can run a cell up to 20 MHz TDD and see only a few of these RT errors.
My latest recommendations to try: make sure the CPU governor is set to maximum performance (GUI tools sometimes don't set it correctly), disable the Wi-Fi card, maybe run without a GUI (in headless mode), and make sure you don't have background processes that could be using CPU time. I assume you have also compiled in Release mode (it is the default).
If I think of something else to try, I'll let you know.
Thanks for this input. I moved my testing to another dedicated bare-metal node with an Intel(R) Core(TM) i7-3770S CPU @ 3.10GHz, and now I don't see underflows anymore.
Ensure the CPU governor is set to performance; I think Ubuntu defaults to powersave.
# echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
Here's the output of the benchmark
[00:00:20.805364459] Benchmark complete.
Benchmark rate summary:
Num received samples: 236749621
Num dropped samples: 0
Num overruns detected: 0
Num transmitted samples: 236223840
Num sequence errors (Tx): 0
Num sequence errors (Rx): 0
Num underruns detected: 0
Num late commands: 0
Num timeouts (Tx): 0
Num timeouts (Rx): 0
Done!
And the output of gnb with the default config: no more underflows! I let it run for a couple of minutes.
# gnb -c /usr/local/share/srsran/gnb_rf_b200_tdd_n78_10mhz.yml
Available radio types: uhd.
--== srsRAN gNB (commit 0523be699) ==--
[INFO] [UHD] linux; GNU C++ version 11.2.0; Boost_107400; UHD_4.1.0.5-3
Making USRP object with args 'type=b200'
Cell pci=1, bw=10 MHz, dl_arfcn=632628 (n78), dl_freq=3489.42 MHz, dl_ssb_arfcn=632640, ul_freq=3489.42 MHz
==== gNodeB started ===
Type <t> to view trace
^CStopping ..
This node has a slower CPU and doesn't see this issue. I think the other worker node gets too many CPU interrupts because of the thousand or so tasks running.
The bad news is that I still can't see the network from the UE or any activity on the GQRX waterfall. With the eNB I was able to see it.
I'm waiting for the GPSDO to arrive; it might be related to that.
GQRX does very infrequent sampling, so you're unlikely to see it.
As for seeing it on the phone, I'm not sure which UE you use, but: (1) Did you ever see your 5G network on it? COTS UEs are VERY fussy... (2) Even on a known-working setup I was never able to see an n78 10 MHz network; I had to make it 20 MHz wide for the phone to see it... (3) Indeed, a proper clock can be critical; it depends a bit on the "luck of the draw" of how far your on-board clock is off nominal.
I have a Galaxy A22 and I have not been able to make it work yet. This phone only supports bands 28 and 41, so it's kind of limited there.
My testing has been limited to band 41, catching errors like this underflow issue and monitoring the waterfall when testing other bands. With the eNB I see a lot of activity on different bands as soon as I run the program, even without any UE connecting to it... so I was using that behavior as a baseline. Do you know if gnb behaves differently?
I purchased a 'One Plus Nord N10' UE, which supports more 5G bands, and an internal GPSDO, and am waiting for them to arrive to test again. At least the underflow issue is resolved now.
Do you know if fosphor is better than GQRX for monitoring?
Yes, as I mentioned before, gnb behaves very differently. In LTE, even if there is no phone active, pilot carriers are always transmitted, and they "fill out" enough of the spectrum, enough of the time, to make the cell very visible. In 5G/NR those don't exist, which means that when no phone is active only the SSB is transmitted, and it's very short, doesn't fill much bandwidth, and doesn't happen very often (as a percentage of time).
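To put a rough number on how sparse that is: assuming the default 20 ms SSB periodicity, a single 4-symbol SSB per burst, and ~35.7 µs per OFDM symbol at 30 kHz SCS (these figures are assumptions, not measured from the gnb), the idle cell is on air well under 1% of the time:

```shell
# Back-of-the-envelope SSB duty cycle: 4 symbols * 35.7 us, once every 20 ms.
awk 'BEGIN { printf "SSB on-air time: %.2f%%\n", 4 * 35.7e-6 / 20e-3 * 100 }'
# prints: SSB on-air time: 0.71%
```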
And fosphor will undoubtedly catch the cell, since it's designed to catch short events.
I'm assuming you mean the Galaxy A22 5G? (Since the plain A22 doesn't have 5G at all.) And AFAICT it supports more bands. However, as I mentioned, COTS phones are fussy... I tried 3 of them before finding one that worked... And even then it only works with PLMN 001/01 and only in SIM1 (same phone and same SIM in the SIM2 slot, and it doesn't work).
You also need to make sure your SIM is properly setup for 5G.
What UE are you using?
Correct, Galaxy A22 5G. I was using the information from https://cacombos.com/device/SM-A226B, and they only list bands 28 and 41 as supported... but I see other websites list more bands, as you mentioned. That's great news; I will test other bands.
I will also test with PLMN 001/01 in the SIM1 slot, monitoring with fosphor.
Well, that site also only lists 5G NR NSA bands, no SA bands at all... (and gnb is an SA network).
I'm using a One Plus Nord CE 5G (EB2103) and that works. I also tried a One Plus 8 without success.
For reference, we have tested with the One Plus Nord 5G model AC2003 and also works well in both TDD and FDD in 5G SA.
However, the minimum bandwidth for FDD is 10 MHz and for TDD is 20 MHz.
We'll gather a list of supported COTS UE and publish it soon.
Well, thanks for the advice. I could at least reduce the values; I guess the biggest impact was indeed the Release build 🫣 But I'm still not able to eliminate the underflows completely.
I disabled most of the systemd services, isolated multi-user, and killed unnecessary processes. @gustavobsch do you also have an Open5GS core running on the same machine? (Without the Open5GS core running I can improve the performance a little bit more :D)
But thank you all for the explanations.
Other things you can look at:
Just for documentation purposes:
sudo dmidecode -t 17 | grep Locator
Locator: ChannelA-DIMM0
Bank Locator: BANK 0
Locator: ChannelB-DIMM0
Bank Locator: BANK 2
Since there are 2 channels (A and B), the memory runs in dual channel.
I'm not able to change the GPU.
I will change the system and report back if that does not work.
Thanks for the support
I don't have Open5GS installed on this node. The node giving me issues is also running GPU tasks, so I think it was just a matter of too many CPU interrupts.
I definitely recommend fosphor; it's so much more sensitive than GQRX.
Have you checked your USB bus usage? For reference, this is the usage I observe running band 41 with 10 MHz BW:
# lsusb
Bus 004 Device 002: ID 2500:0020 Ettus Research LLC USRP B210
# usbtop --bus usbmon4
Bus ID 4 (Raw USB traffic, bus number 4) To device From device
Device ID 1 : 0.00 kb/s 0.00 kb/s
Device ID 2 : 46363.56 kb/s 46770.42 kb/s
# top
top - 08:41:00 up 23 min, 2 users, load average: 0.49, 0.48, 0.35
Tasks: 177 total, 1 running, 176 sleeping, 0 stopped, 0 zombie
%Cpu(s): 6.8 us, 4.8 sy, 0.0 ni, 86.0 id, 0.0 wa, 0.0 hi, 2.4 si, 0.0 st
MiB Mem : 15911.0 total, 14141.8 free, 1121.7 used, 647.5 buff/cache
MiB Swap: 1956.0 total, 1956.0 free, 0.0 used. 14525.6 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
34548 root 20 0 1955060 868904 25088 S 78.1 5.3 0:20.92 gnb <<<< one core Intel i7-3770S
# cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
performance
performance
performance
performance
performance
performance
performance
performance
# gnb -c /root/gnb.yml
Available radio types: uhd.
--== srsRAN gNB (commit 0523be699) ==--
[INFO] [UHD] linux; GNU C++ version 11.2.0; Boost_107400; UHD_4.1.0.5-3
Making USRP object with args 'type=b200'
Cell pci=1, bw=10 MHz, dl_arfcn=499200 (n41), dl_freq=2496.0 MHz, dl_ssb_arfcn=499230, ul_freq=2496.0 MHz
==== gNodeB started ===
Type <t> to view trace
Hello @gustavobsch, did you manage to connect this UE to the gnb? One question: should it be possible to see the network with a standard SIM from a telecom provider? If so, it would be easier to identify which UEs are able to see the network. I have tried with a Xiaomi Redmi Note 10 5G and a Poco F3, and I don't see the network on either of them.
@alvaroalfaro612 I haven't been able to see the network or connect using the Galaxy A22.
I do see the network with OnePlus Nord N10 5G, even with wrong SIM inserted, but it's not connecting to it.
I'm still waiting for the GPSDO; maybe that's why the Galaxy is not working.
I just changed the system to:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 10
CPU max MHz: 4100,0000
CPU min MHz: 800,0000
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp sgx_lc md_clear flush_l1d arch_capabilities
Virtualization features:
Virtualization: VT-x
Caches (sum of all):
L1d: 192 KiB (6 instances)
L1i: 192 KiB (6 instances)
L2: 1,5 MiB (6 instances)
L3: 9 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerabilities:
Itlb multihit: KVM: Mitigation: VMX disabled
L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Meltdown: Mitigation; PTI
Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Retbleed: Mitigation; IBRS
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Srbds: Mitigation; Microcode
Tsx async abort: Not affected
sudo /usr/bin/gnb -c ~/5g/srsran-config/gnb.conf
Available radio types: uhd.
--== srsRAN gNB (commit 0523be699) ==--
[INFO] [UHD] linux; Clang version 14.0.0 ; Boost_107400; UHD_4.4.0.0-47-gd18647dd
[INFO] [LOGGING] Fastpath logging disabled at runtime.
Making USRP object with args 'type=b200'
Cell pci=1, bw=10 MHz, dl_arfcn=632628 (n78), dl_freq=3489.42 MHz, dl_ssb_arfcn=632640, ul_freq=3489.42 MHz
==== gNodeB started ===
Type <t> to view trace
Late: 0; Underflow: 2; Overflow: 0;
Late: 0; Underflow: 1; Overflow: 0;
Late: 0; Underflow: 5; Overflow: 0;
Late: 0; Underflow: 3; Overflow: 0;
The number and frequency of the underflows are heavily reduced, but they are still not eliminated completely.
Is that normal behaviour?
Just an update on my issue: after using an external GPSDO I can attach correctly to the network on both b3 and b78 using the One Plus Nord N10. The Galaxy A22 still will not see the network using the same SIM / srsRAN config combination.
I'm trying to connect srsue with the gNB (each running on a B210 SDR) but cannot achieve that. (For the core I basically followed https://open5gs.org/open5gs/docs/guide/01-quickstart/, but I guess that is irrelevant at this point, since the connection between gNB and UE is never even established.)
gNB
UE
spectrum analyzer
I would further expect a peak around 3489.42 MHz, but I only see noise.
But since the cell is found, I guess that is a wrong assumption.