tavildar / LDPC

C and MATLAB implementation for LDPC encoding and decoding

How to reduce the latency in terms of hardware #4

Open RashVm opened 1 year ago

RashVm commented 1 year ago

As a hardware engineer, I am working on an FPGA design for LDPC decoding. I have MATLAB code that is similar to this code. However, I am facing an issue with the processing of the received data. Currently, the data arrives one element at a time into the array "updated_llr_vec". According to the code, the variable "l1" is calculated from "updated_llr_vec(n2) - lasteta(i_m, n2)":

    n2 = obj.row_mat(i_m, i_n2);                     % column index of the i_n2-th entry in check row i_m
    l1 = (updated_llr_vec(n2) - lasteta(i_m, n2));   % remove this check row's previous message from the posterior LLR

which means that if "n2" is equal to 136, I have to wait until 136 entries of the "updated_llr_vec" array have been filled before the next processing step can start. This introduces latency.
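To make the dependency concrete, here is a minimal, self-contained MATLAB sketch (the names "row_mat" contents, "ready_at", and the example data below are placeholders of mine, not taken from this repo). It illustrates that each check row only reads the LLR entries at its own column indices, so with in-order arrival a row could in principle be scheduled as soon as its largest required index has been received, rather than after the whole vector fills:

    % Placeholder example: column indices of the nonzero entries per check row.
    row_mat = [1  5   9;      % depends only on samples 1, 5, 9
               2  6  10;      % depends only on samples 2, 6, 10
               3  7 136];     % must wait for sample 136
    % With in-order arrival, a row is ready once its largest column index
    % has been received.
    ready_at = max(row_mat, [], 2);
    for i_m = 1:size(row_mat, 1)
        fprintf('check row %d can start after input sample %d\n', i_m, ready_at(i_m));
    end

In this example, rows 1 and 2 could be processed while samples 11 through 136 are still arriving, so only the rows that depend on late samples would incur the full input latency. I am not sure whether the layered update order in this decoder permits such a schedule.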

I am wondering if there is a way to avoid this latency issue and optimize the processing of the data. Is there anything that I am missing in my understanding of the code, or is there a better way to implement this process? Any insights or suggestions would be greatly appreciated.

xofi commented 1 year ago

This is an automatic reply from QQ Mail. Hello, I have received your email.