Open matwey opened 6 years ago
First, remember how the SDRAM buffering works: There's the SDRAM_SINK which sinks 8-bit data from the cstream (aka "Whacker"), and writes it into SDRAM.
Then there's the SDRAM_Host_Read, which reads data from SDRAM, packetizes it to host_burst_length (= 16 * 2 bytes), and sends those packets over the "CmdProc", a.k.a. the thing that muxes together the different data sources for the return (FTDI) channel.
SDRAM_Sink and SDRAM_Host_Read know each other's read/write pointers so that we have flow control.
cstream (whacker's producer) receives data from ULPI (timestamped d/is_start/is_end/is_err/isovr) and formats it into the A0 packets.
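The pointer-based flow control between the two modules can be modeled roughly like this. This is a simplified sketch of the mechanism, not the actual Migen code; the class and method names are illustrative:

```python
class Ring:
    """Toy model of the SDRAM ring shared by SDRAM_Sink (the writer)
    and SDRAM_Host_Read (the reader). Each side can see the other's
    pointer, which is all that is needed for flow control."""

    def __init__(self, base=0, end=16):
        self.base, self.end = base, end
        self.wr_ptr = base          # advanced by SDRAM_Sink
        self.rd_ptr = base          # advanced by SDRAM_Host_Read
        self.mem = [0] * end

    def _next(self, ptr):
        ptr += 1
        return self.base if ptr == self.end else ptr

    def sink_write(self, byte):
        # The writer must stop one slot short of the reader (ring full).
        if self._next(self.wr_ptr) == self.rd_ptr:
            return False            # back-pressure: stall/drop
        self.mem[self.wr_ptr] = byte
        self.wr_ptr = self._next(self.wr_ptr)
        return True

    def host_read(self):
        # The reader must stop when it catches up with the writer (empty).
        if self.rd_ptr == self.wr_ptr:
            return None
        byte = self.mem[self.rd_ptr]
        self.rd_ptr = self._next(self.rd_ptr)
        return byte
```

The key property is that neither side needs any shared state beyond the two pointers, which is why resetting both pointers consistently matters so much when restarting the stream.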
Hello,
The motivation is simple. Any application can crash unexpectedly, so we cannot guarantee a graceful exit. The most reliable on-start behavior for us is therefore to force-stop streaming, reinitialize everything to its expected state (full reset), and then start streaming from scratch.
I've followed your advice and reordered the starting procedure, but still without success. Even when `CSTREAM` and `SINK` are OFF (and the `SDRAM` pointers are reset to 0), unexpected data is transmitted to the host after I trigger `SDRAM_HOST_READ`.
Could you please also review the following: https://github.com/matwey/ov_ftdi/commit/0ffe58f453fe70e08786f64027f241f3a6c0c96c Does it make sense?
The most reliable setup is the following:

```python
dev.regs.SDRAM_SINK_GO.wr(1)
dev.regs.SDRAM_HOST_READ_GO.wr(1)
dev.regs.CSTREAM_CFG.wr(1)
```
Using https://github.com/matwey/ov_ftdi/commit/0ffe58f453fe70e08786f64027f241f3a6c0c96c I still see an extra byte between `d0` and `a0`:
```
0x0000: 3260 d01f 27a0 1000 0000 7df6 69a0 0000
0x0010: 0300 3d43 7169 8498 a000 0001 00a5 4971
0x0020: 5aa0 0000 0300 6596 7869 8498 a000 0001
0x0030: 00cd 9c78 5aa0 0000 0300 8de9 7f69 8498
0x0040: a000 0001
```
Here `1f` is the size byte, and then `a0` should follow in theory.
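For reference, the outer framing implied by this dump (a `0xd0` magic byte, a length byte, then that many payload bytes) can be split with a small sketch like the following. This is illustrative only, not libopenvizsla code, and the exact packet details are assumptions from the dump:

```python
def split_outer(buf):
    """Split a raw byte stream into outer-packet payloads, assuming
    each packet is: 0xd0 magic, one length byte, `length` data bytes."""
    packets = []
    i = 0
    while i + 2 <= len(buf):
        if buf[i] != 0xD0:
            # A stray byte here is exactly the desync described in
            # this issue: there is no way to resynchronize reliably.
            raise ValueError(f"lost sync at offset {i:#x}")
        length = buf[i + 1]
        payload = buf[i + 2:i + 2 + length]
        if len(payload) < length:
            break               # incomplete trailing packet
        packets.append(payload)
        i += 2 + length
    return packets
```

With this framing, the concatenated payloads should begin with `a0`; the dump above instead shows the stray `27` first.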
Now I see that `self.fifo_fsm` from `SDRAM_sink` needs to be force-reset to `READ_LOW` at every sink restart. Otherwise, it may cache the last odd byte from the previous run.
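Here is a toy model of the failure mode as I understand it (a sketch, not the actual Migen FSM): the sink pairs incoming bytes into 16-bit SDRAM words, and if a capture run ends on an odd byte, that byte stays latched in the FSM. On restart it gets paired with the first byte of the new stream, producing exactly one stale byte in front of the first `a0`:

```python
class ByteToWordSink:
    """Toy model of SDRAM_Sink byte-pairing: READ_LOW latches the low
    byte, READ_HIGH completes the 16-bit word. Without forcing the
    state back to READ_LOW on restart, a stale low byte leaks out."""

    def __init__(self):
        self.state = "READ_LOW"
        self.low = 0
        self.words = []

    def push(self, byte):
        if self.state == "READ_LOW":
            self.low = byte
            self.state = "READ_HIGH"
        else:
            self.words.append(self.low | (byte << 8))
            self.state = "READ_LOW"

    def restart(self, force_reset):
        self.words = []
        if force_reset:
            self.state = "READ_LOW"     # the missing force-reset

sink = ByteToWordSink()
for b in [0xA0, 0x02, 0x27]:            # first run ends on an odd byte
    sink.push(b)
sink.restart(force_reset=False)
for b in [0xA0, 0x02, 0x55, 0x66]:      # second run starts with 0xa0
    sink.push(b)
# Without the reset, the first word of the new run pairs the stale
# 0x27 with the new 0xA0, i.e. the bytes 27 a0 on the wire.
print(hex(sink.words[0]))
```

Note that the byte sequence this model produces (`27` immediately before `a0`) matches the hexdump above.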
Patches to fix this kind of thing are welcome! (I haven't been able to find my board, so I can't do anything that requires HW-in-the-loop testing.)
I am working on a pure-C implementation of the OpenVizsla host software: https://github.com/matwey/libopenvizsla I ran into the following issue with the FPGA firmware last summer: I have not been able to make the FPGA reliably restart the sniffed-data transmission.
The issue itself is the following. The protocol has two encapsulation levels. The upper level consists of packets coming from the SDRAM buffering module. Each packet consists of a `0xd0` magic header, a `length` byte, and the `data`. In practice these packets all have the same length. The nested data is a stream of packets built from the captured data; each consists of an `0xa0` magic header, a `length`, and the USB `data`. These packets are not aligned with each other: one `0xd0`-packet may contain many `0xa0`-packets, and an `0xa0`-packet may be split between two consecutive `0xd0`-packets.

When I stop capturing and streaming and then start again, the first data byte of the first `0xd0`-packet is not `0xa0`, which it should be. This is a problem because there is no other reliable way to sync to the `0xa0`-packet stream. We cannot simply scan for the first `0xa0`, because an `0xa0` byte may occur inside the `data` itself (compare with the SLIP protocol).

My stop sequence is the following:
- write `0` to `SDRAM_HOST_READ_GO` (`0xC28`)
- write `0` to `SDRAM_SINK_GO` (`0xE11`)
- write `0` to `CSTREAM_CFG` (`0x800`)

My start sequence is the following (given that I have made sure the stream is stopped):
- write 32-bit `0` to `SDRAM_SINK_RING_BASE` (`0xE09`)
- write 32-bit `0x01000000` to `SDRAM_SINK_RING_END` (`0xE0D`)
- write 32-bit `0` to `SDRAM_HOST_READ_RING_BASE` (`0xC1C`)
- write 32-bit `0x01000000` to `SDRAM_HOST_READ_RING_END` (`0xC20`)
- write `0` to `SDRAM_SINK_PTR_READ` (`0xE00`)
- write `1` to `CSTREAM_CFG` (`0x800`)
- write `1` to `SDRAM_SINK_GO` (`0xE11`)
- write `1` to `SDRAM_HOST_READ_GO` (`0xC28`)

I've tried to add a `Reset` for `sdram_fifo` in `SDRAM_Sink` and `SDRAM_Host_Read` to reset the FIFOs on the `SDRAM_SINK_GO`/`SDRAM_HOST_READ_GO` switch, but this didn't help.

The issue is still present in the latest firmware from the `new_migen` branch.
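For completeness, the stop and start sequences above can be written in the `dev.regs.<NAME>.wr(...)` style used elsewhere in this thread. The register file here is mocked so the sketch is self-contained; the real libov/libopenvizsla API will differ:

```python
class _Reg:
    """One mocked register: records the last value written to it."""
    def __init__(self, store, name):
        self._store, self._name = store, name

    def wr(self, value):
        self._store[self._name] = value

class _Regs:
    """Mock register file: any attribute access yields a register."""
    def __init__(self):
        self._store = {}

    def __getattr__(self, name):
        return _Reg(self._store, name)

class Dev:
    def __init__(self):
        self.regs = _Regs()

def stop_stream(dev):
    dev.regs.SDRAM_HOST_READ_GO.wr(0)   # 0xC28
    dev.regs.SDRAM_SINK_GO.wr(0)        # 0xE11
    dev.regs.CSTREAM_CFG.wr(0)          # 0x800

def start_stream(dev):
    dev.regs.SDRAM_SINK_RING_BASE.wr(0x00000000)       # 0xE09
    dev.regs.SDRAM_SINK_RING_END.wr(0x01000000)        # 0xE0D
    dev.regs.SDRAM_HOST_READ_RING_BASE.wr(0x00000000)  # 0xC1C
    dev.regs.SDRAM_HOST_READ_RING_END.wr(0x01000000)   # 0xC20
    dev.regs.SDRAM_SINK_PTR_READ.wr(0)                 # 0xE00
    # Order as listed in the issue; note the earlier comment in this
    # thread found SINK_GO -> HOST_READ_GO -> CSTREAM_CFG more reliable.
    dev.regs.CSTREAM_CFG.wr(1)
    dev.regs.SDRAM_SINK_GO.wr(1)
    dev.regs.SDRAM_HOST_READ_GO.wr(1)

dev = Dev()
stop_stream(dev)
start_stream(dev)
```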