As a hardware engineer, I am developing an FPGA hardware design for an LDPC decoder, working from MATLAB code similar to this code. However, I am facing an issue with the processing of the Rx data: the samples arrive one at a time into the array "updated_llr_vec". In the code, the variable "l1" is computed from "updated_llr_vec(n2) - lasteta(i_m, n2)",
which means that if "n2" equals 136, I have to wait until 136 samples have been written into "updated_llr_vec" before the next processing stage can start. This results in latency.
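To make the dependency concrete, here is a minimal MATLAB sketch of the loop structure I mean. The names "updated_llr_vec", "lasteta", "l1", "i_m", and "n2" come from my code; the toy parity-check matrix "H", the loop bounds, and the sizes are assumptions added only for illustration.

% Toy parity-check matrix and inputs (assumed, for illustration only)
H = [1 0 1 1; 0 1 1 0; 1 1 0 1];        % parity-check matrix
updated_llr_vec = [0.9 -1.2 0.3 -0.5];  % Rx LLRs, filled one sample at a time
lasteta = zeros(size(H));               % check-to-variable values from the previous iteration

for i_m = 1:size(H, 1)                  % one check node (row of H) at a time
    cols = find(H(i_m, :));             % variable nodes connected to this check
    for k = 1:numel(cols)
        n2 = cols(k);
        % This subtraction needs updated_llr_vec(n2). If n2 = 136, it
        % cannot execute until the 136th Rx sample has been written in,
        % which is the stall I am describing.
        l1 = updated_llr_vec(n2) - lasteta(i_m, n2);
        % ... the rest of the check-node update using l1 would follow here
    end
end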
I am wondering if there is a way to avoid this latency issue and optimize the processing of the data. Is there anything that I am missing in my understanding of the code, or is there a better way to implement this process? Any insights or suggestions would be greatly appreciated.