Dolu1990 opened 7 months ago
For reference, the NuttX liteeth driver also had a similar pattern and very low performance. We (@g2gps) made similar changes and saw a big improvement, but I'm not sure those changes are upstream yet.
CCing @shenki (original author of the Linux liteeth driver) for situational awareness.
For my part, I'm happy to help develop and/or review any improvements to the Linux driver, but won't have any "spare cycles" for that until after mid-May of this year.
Yes, we are limited by memcpy_io throughput. On Microwatt (ppc64), a patch to unroll that operation gave us much better performance:
https://github.com/shenki/linux/commit/9bd268b987398a1e1da8457a4b0ce8431b3dd73f
This change never made it upstream. @antonblanchard might be interested in doing something similar for riscv these days.
Adding dma capability to liteeth would be ideal, maybe something similar to litesdcard using litedma. This is something we have discussed internally and with @enjoy-digital . We could provide some funding for the liteeth gateware dma integration.
Thanks everyone. It seems there is now quite a lot of interest in adding DMA capabilities to LiteEth, so there's no reason not to make it happen :) We have already received some contributions on this, but we also want to make sure the approach stays close to the LiteSDCard/LiteSATA DMA. I'll try to put together a first implementation based on the different discussions/PRs.
Initial PRs/implementations that need to be reviewed / could be used as a basis:
Thanks ^^ Let's keep ourselves in sync on this. I'm currently trying not to split myself between too many contexts. I may have some time to work on it in 1-2 months (I'm a bit in a rush right now).
Hi,
I was looking at the maximum bandwidth reachable with LiteEth under Linux (+ VexiiRiscv): https://github.com/litex-hub/linux/blob/master/drivers/net/ethernet/litex/litex_liteeth.c It seems there is some room for improvement in the Linux driver. Currently it caps out at around 20/25 Mbps (via iperf3). It seems the bottlenecks are:
Running iperf3 from localhost to localhost gets around 230 Mbps, so userspace isn't the bottleneck.
So I'm not really sure about it, but with some optimizations we might be able to saturate the 100 Mbps Ethernet <3