Nuand / bladeRF

bladeRF USB 3.0 Superspeed Software Defined Radio Source Code
http://nuand.com

Enforce 12 bit range to prevent silent integer overflow #866

Open warnes opened 2 years ago

warnes commented 2 years ago

The current bladeRF driver/FPGA code doesn't prevent or detect overflow of the 12-bit DAC range.

The 12-bit DAC seems to accept the range [-2048, +2047], and the current driver/FPGA code doesn't range-check the input. When given complex floating-point values, it appears to convert them to 16-bit signed integers by multiplying by 2048 and then dropping the top 4 bits, silently turning 1.0 into +2048, which overflows to a negative value and yields very strange RF results.
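A toy illustration of that sign flip (assuming, for illustration only, that the hardware effectively consumes just the low 12 bits as two's complement; this is not the actual FPGA code):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Full-scale +1.0 scaled by 2048 lands one step outside [-2048, 2047]. */
    int16_t q = (int16_t)(1.0f * 2048.0f);                 /* +2048 */

    /* Keep only the 12 bits the converter consumes and reinterpret them as a
     * signed 12-bit two's-complement value: 0x800 reads back as -2048. */
    int16_t low12 = (int16_t)(q & 0x0FFF);
    int16_t seen  = (low12 & 0x0800) ? (int16_t)(low12 - 4096) : low12;

    printf("%d is transmitted as %d\n", q, seen);          /* 2048 -> -2048 */
    return 0;
}
```

Anything at or above +2048 (or below -2048) wraps to the opposite sign rather than clipping, which is what produces the strange RF output.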

As a workaround, my code explicitly converts the complex I/Q values to 16-bit integers and enforces the range limit, but it would be much friendlier if the driver/FPGA code performed this task.

One solution is to have the driver/FPGA code apply the [-2048, 2047] threshold and (ideally) generate a warning to the user.
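A minimal sketch of that host-side conversion, and of what a driver-side clamp with a warning could look like, assuming interleaved float I/Q in [-1.0, 1.0] headed for the SC16_Q11 sample format; the helper name is illustrative and not part of libbladeRF:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Convert interleaved float I/Q in [-1.0, 1.0] to the 12-bit SC16_Q11 range,
 * saturating at [-2048, 2047] instead of letting +1.0 wrap to -2048.
 * Returns true if any value had to be clipped so the caller can warn.
 * Illustrative helper, not part of libbladeRF. */
static bool float_to_sc16q11_clamped(const float *in, int16_t *out, size_t n)
{
    bool clipped = false;

    for (size_t i = 0; i < n; i++) {
        float s = in[i] * 2048.0f;

        if (s > 2047.0f) {
            s = 2047.0f;
            clipped = true;
        } else if (s < -2048.0f) {
            s = -2048.0f;
            clipped = true;
        }

        out[i] = (int16_t)s;
    }

    return clipped;
}
```

A driver-side variant could log a single warning the first time the function reports clipping rather than warning on every buffer.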

FWIW, the documentation for Ettus Research's (now part of NI) devices indicates that they use the most significant 12 bits of the int16s, so complex data is scaled by 2^15 and the lowest four bits are dropped when feeding the DAC. I suspect this approach leverages standard CPU hardware detection of integer overflow.

jenda122 commented 2 years ago

I use the higher bits for synchronization of my T/R switch -- I have extended the internal FPGA FIFO to 13 bits and mapped the new bits to the Expansion Header, so I now have TTL signals that I can trigger in sync with the samples I'm transmitting simply by setting the 13th bit to 1 (they are actually offset by a few samples because of the filtering and processing in the RFIC -- I have calibrated this delay).

Therefore, if you do end up masking these bits off, I would like that behavior to be configurable :)
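For context, a hypothetical host-side sketch of how samples might be tagged under such a custom bitstream, assuming bit 12 of the I word is the one routed to the expansion header; the bit position, word choice, and helper are assumptions, not stock bladeRF behavior:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical: with a custom bitstream whose TX FIFO is 13 bits wide and
 * whose extra bit is routed to the expansion header, raise that TTL flag for
 * a window of samples by setting bit 12 of each I word.  The bit position
 * and word choice are assumptions, not stock bladeRF behavior. */
static void tag_tx_window(int16_t *iq, size_t n_samples,
                          size_t start, size_t len)
{
    for (size_t i = start; i < start + len && i < n_samples; i++) {
        /* Keep the 12-bit two's-complement sample, then set the flag bit. */
        iq[2 * i] = (int16_t)((iq[2 * i] & 0x0FFF) | 0x1000);
    }
}
```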

nhw76 commented 2 years ago

> FWIW, the documentation for Ettus Research's (now part of NI) devices indicates that they use the most significant 12 bits of the int16s, so complex data is scaled by 2^15 and the lowest four bits are dropped when feeding the DAC. I suspect this approach leverages standard CPU hardware detection of integer overflow.

That makes sense - looking at how the Volk kernel that performs the float->int conversion is implemented, doing that would saturate the output to [SHRT_MIN, SHRT_MAX] and avoid this issue while still retaining good vector performance.
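A scalar sketch of that full-scale approach, assuming the host scales to int16 full range and the device discards the low four bits; the saturation mirrors what is attributed to the Volk conversion kernel above, and the helper name is illustrative:

```c
#include <limits.h>
#include <stdint.h>

/* Scale a float in [-1.0, 1.0] to int16 full range, saturate at the int16
 * limits (the behavior the Volk float->int conversion provides), then drop
 * the lowest four bits as the device would.  The result always falls in
 * [-2048, 2047], so the 12-bit wraparound cannot happen.
 * Illustrative helper, not from the bladeRF code base. */
static int16_t float_to_q11_fullscale(float x)
{
    float scaled = x * 32768.0f;              /* scale by 2^15 */

    if (scaled > (float)SHRT_MAX) {
        scaled = (float)SHRT_MAX;             /* saturate instead of wrapping */
    } else if (scaled < (float)SHRT_MIN) {
        scaled = (float)SHRT_MIN;
    }

    return (int16_t)((int16_t)scaled / 16);   /* discard the low four bits */
}
```

Because the saturated value can never exceed the int16 limits, dividing by 16 always lands in [-2048, 2047], so the silent sign flip described in the issue cannot occur.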