ximinity opened this issue 4 years ago
There are multiple culprits causing the timing problems (at least when using the Lattice toolchain and the LFE5UM-45F (non-5G) Versa board, which is what I've been doing).
done = 1 and value = 0
On a final note, both PHY status LEDs of the Ethernet interface on the FPGA turn off as soon as I connect the network cable (the orange link-state LED is on while the cable is disconnected).
This is the normal default behavior on my board as well. The link seems fine, so perhaps Lattice have designed the board in a strange way (it wouldn't be the first part... :) ). Indeed, the Versa user guide schematic around PHY1_LED seems a bit strange... See also this PDF for more info: marvell-phys-transceivers-alaska-88e151x-datasheet-2018-02.pdf
Another hacky way I'm using to make timing closure easier is to disable ICMP (by passing with_icmp=False to LiteEthUDPIPCore).
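For context, here is a minimal sketch of what that looks like (a fragment of a SoC's __init__, assuming a self.ethphy already exists; the addresses and sys_clk_freq below are placeholders, and constructor arguments may differ between LiteEth versions):

from liteeth.core import LiteEthUDPIPCore

# Hypothetical fragment, not code from this thread.
self.submodules.ethcore = LiteEthUDPIPCore(
    phy         = self.ethphy,        # e.g. a LiteEthECP5PHYRGMII instance
    mac_address = 0x10e2d5000000,
    ip_address  = "192.168.1.50",
    clk_freq    = sys_clk_freq,
    with_icmp   = False)              # skip the ICMP (ping) responder to ease timing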
The examples have been removed. The current issue is already tracked here: https://github.com/litex-hub/litex-boards/issues/40; we'll try to improve this soon.
@enjoy-digital This can be closed. With the 32-bit + buffered CDC there are no timing issues anymore.
I still saw this issue with $glbnet$eth_clocks_rx$TRELLIS_IO_IN failing well below 100 MHz after all those improvements, even on a Butterstick board using an ECP5 with speed grade 8. It took some time until I realised that the data_width of the Etherbone module still defaults to 8 instead of 32. The ECP5 examples in 'bench' should probably be updated, unless there are other side effects of using a wider Ethernet/Etherbone data_width by default.
Anyway, thank you all for fixing this!
@ozel Which version and config are you using? I get pretty stable timing closure with the current liteeth with this config:
phy: LiteEthECP5PHYRGMII
phy_tx_delay: 0e-9
phy_rx_delay: 2e-9
device: LFE5U-25F-6BG256C
vendor: lattice
toolchain: trellis
# Core -------------------------------------------------------------------------
clk_freq: 125e6
core: udp
mac_address: 0x10e2d5000000
ip_address: 172.30.0.1
tx_cdc_depth: 16
tx_cdc_buffered: True
rx_cdc_depth: 16
rx_cdc_buffered: True
udp_ports:
  raw:
    data_width: 32
    mode: raw
The CDC parameters are important for getting better timing closure.
@rowanG077 I meant the test bench folder projects, for example https://github.com/enjoy-digital/liteeth/blob/master/bench/butterstick.py
I see. Configuration at that level doesn't seem to be supported for the bench core. You could copy-paste the add_etherbone method defined in https://github.com/enjoy-digital/litex/blob/master/litex/soc/integration/soc.py#L1766 and change the config there so it passes timing.
The data width can be changed from the 8-bit default to 32 bits; this works:
self.add_etherbone(phy=self.ethphy, buffer_depth=255, data_width=32)
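For illustration, a sketch of where that call could sit in a bench-style target (CRG and other details omitted; only the add_etherbone arguments shown above are taken from this thread, the rest is an assumption about a typical LiteX ECP5 RGMII target):

from liteeth.phy.ecp5rgmii import LiteEthECP5PHYRGMII

# Hypothetical fragment of the target's __init__.
self.ethphy = LiteEthECP5PHYRGMII(
    clock_pads = platform.request("eth_clocks"),
    pads       = platform.request("eth"))
# Widening Etherbone's datapath from the 8-bit default eases timing closure.
self.add_etherbone(phy=self.ethphy, buffer_depth=255, data_width=32)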
I just wanted to highlight that current ECP5 test bench examples still report timing violations unless modified. There might be good reasons for that of course, but newcomers to LiteX might wonder what's going on...
I'm having trouble reaching 125 MHz even with this config on a Butterstick (LFE5UM5G-85F); the best I can get is about 104 MHz.
Are you sure you have the most recent liteeth? What is your system clock frequency? I'm using liteeth on an ECP5 part with the lowest speed grade, while the ECP5 on the Butterstick has the highest speed grade; it should have no trouble reaching 125 MHz. Looking just at the logic timing between the two, your part is almost twice as fast. Are you using 125 MHz for your system clock?
I'm on liteeth downloaded today, so that should be no problem, but I probably misunderstood the clocking. Since I set core_freq to 125 MHz I figured I should drive sys_clk at 125 MHz, but I guess that's not the case?
sys_clk can be any clock you want. What is important is that you use the RGMII clock for the RX clock. I run liteeth at 50 MHz. That's the reason you go to a wider data width: you process multiple bytes in a single cycle, so you can lower your clock frequency.
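To make the arithmetic concrete (my own numbers, not from the thread): gigabit Ethernet delivers one byte per 8 ns on the 125 MHz RGMII RX clock, so a 32-bit (4-byte) datapath only has to keep up with a quarter of that rate.

# Back-of-the-envelope minimum sys_clk for a given core data width,
# assuming the core must absorb gigabit line rate continuously.
RGMII_BYTE_RATE = 125e6        # bytes/s arriving on the 125 MHz RX clock
for data_width in (8, 32):
    bytes_per_cycle = data_width // 8
    min_sys_clk = RGMII_BYTE_RATE / bytes_per_cycle
    print(f"data_width={data_width:2d} -> sys_clk >= {min_sys_clk/1e6:6.2f} MHz")
# data_width= 8 -> sys_clk >= 125.00 MHz
# data_width=32 -> sys_clk >=  31.25 MHz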
Excellent, thanks for the quick reply!
The README file mentions the following:
I've tested the example in examples/targets/udp_loopback by performing the following steps:

$ ./versa_ecp5.py
Full output: timing.txt

Note that timing closure is not met for crg_clkout.

Versions:
yosys:   3c41599ee1f62e4d77ba630fa1a245ef3fe236fa
nextpnr: 247e18cf027334d5201be00735aa607250e6253d
trellis: e2e10bfdfaa29fed5d19e83dc7460be9880f5af4

$ ./versa_ecp5.py load
$ ifconfig
enp7s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.100  netmask 255.255.255.0  broadcast 192.168.1.255
        ether e8:6a:64:c7:84:3b  txqueuelen 1000  (Ethernet)
        RX packets 1699  bytes 1050972 (1.0 MiB)
        RX errors 0  dropped 529  overruns 0  frame 0
        TX packets 1641  bytes 119511 (116.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 33
$ sudo arp -s 192.168.1.50 10:e2:d5:00:00:00 -i enp7s0
$ arp -a
? (192.168.1.50) at 10:e2:d5:00:00:00 [ether] PERM on enp7s0
$ ping 192.168.1.50
PING 192.168.1.50 (192.168.1.50) 56(84) bytes of data.
$ sudo ./listener.py & ./sender.py
[4] 9009
2020-01-25 20:58:29
2020-01-25 20:58:30
2020-01-25 20:58:30
2020-01-25 20:58:31
2020-01-25 20:58:31
2020-01-25 20:58:32