cjbe opened this issue 6 years ago
Remote TTL is faster than local TTL?
Yes - remote is faster than local. I was surprised by this too, but verified that when there is no underflow I get the correct sequence (number of pulses on a counter) out of both the master and the slave.
That's due to the analyzer interfering (it writes the full DMA sequence back to memory, using IO bandwidth and causing bus arbitration delays, DRAM page cycles, etc.). With the analyzer disabled I get 207mu instead of ~1150mu. No need to modify gateware, disabling it in the firmware is sufficient:
--- a/artiq/firmware/runtime/main.rs
+++ b/artiq/firmware/runtime/main.rs
@@ -223,8 +223,8 @@ fn startup_ethernet() {
io.spawn(16384, session::thread);
#[cfg(any(has_rtio_moninj, has_drtio))]
io.spawn(4096, moninj::thread);
- #[cfg(has_rtio_analyzer)]
- io.spawn(4096, analyzer::thread);
+ //#[cfg(has_rtio_analyzer)]
+ //io.spawn(4096, analyzer::thread);
let mut net_stats = ethmac::EthernetStatistics::new();
loop {
The KC705 is less affected because the wider DRAM words make linear transfers (which is what the DMA core and the analyzer are doing) more efficient. We could reach similar efficiency on Kasli by implementing optional long bursts in the DRAM controller, and supporting them in the DMA and analyzer cores.
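As a toy illustration of why wider words or longer bursts help linear transfers, here is a back-of-envelope model. The access widths below are illustrative assumptions, not the real KC705/Kasli DRAM parameters:

```python
# Toy model: number of single-beat bus accesses needed to stream one
# 18-byte RTIO event from DRAM, for different effective access widths.
# Each access potentially pays arbitration and DRAM page-cycle overhead,
# so fewer accesses per event means less overhead for linear transfers.
EVENT_BYTES = 18

def accesses_per_event(access_bytes: int) -> float:
    """Average accesses needed per event in a long linear stream."""
    return EVENT_BYTES / access_bytes

narrow = accesses_per_event(8)   # assumed narrow DRAM word (Kasli-like)
wide = accesses_per_event(32)    # assumed wide word or 4-beat burst
print(narrow, wide)              # 2.25 vs 0.5625: ~4x fewer accesses
```

With the assumed widths, the wider access needs roughly a quarter of the transactions, which is the kind of gain long bursts would buy.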
@sbourdeauducq I don't see how this should make local and remote TTL transactions take different time - could you reproduce this aspect?
Right - if I am reading the SDRAM core correctly, it currently does not buffer reads and writes, or optimise access patterns. So on Kasli during a DMA sequence, in the worst case of the DMA and analyser data being in the same bank:
So this broadly tallies with the opticlock 530ns/2 = 265ns per event = 33 cycles, but does not explain the ~1.1 us per event.
Whereas reading/writing a whole row would take 2+6+125+2 = 135 cycles for 2 KB, i.e. 111x 18-byte RTIO events, or just over 1 cycle per event. Hence, without the RTIO analyser, ~5 cycles per RTIO event taking into account the CRI write = 40 ns; or just a cycle or two extra for the RTIO analyser writeback, assuming it is cached similarly.
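The back-of-envelope numbers above can be reproduced directly (assuming 1 cycle = 8 ns, i.e. a 125 MHz clock, and 2000 bytes per row to match the 111-event figure; both are inputs taken from the comment, not measured values):

```python
# Reproduce the cycle-count estimates from the discussion above.
CYCLE_NS = 8  # assumed 125 MHz clock

# Observed opticlock figure: 530 ns per pulse+delay pair -> 265 ns/event.
per_event_ns = 530 / 2
per_event_cycles = per_event_ns / CYCLE_NS      # ~33 cycles per event

# Whole-row transfer estimate: 2+6+125+2 cycles for one 2 KB row.
row_cycles = 2 + 6 + 125 + 2                    # = 135 cycles
events_per_row = 2000 // 18                     # = 111 RTIO events
cycles_per_event = row_cycles / events_per_row  # just over 1 cycle/event

print(per_event_cycles, row_cycles, events_per_row, cycles_per_event)
```

This makes the gap concrete: ~33 cycles per event as measured versus ~1.2 cycles per event if whole rows were streamed, which is the motivation for long bursts.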
So, depending on the effort required, it seems well worth implementing long bursts.
Here are the results I got:
t_mu=650 for local, and t_mu=510 for remote. I see a difference but it is not as marked as in your case. Perhaps the difference is due to the CPUs making different DRAM accesses. With the analyzer disabled: t_mu=~210.
With the analyzer enabled and the patch below, also about the same time: t_mu=~670.
diff --git a/artiq/gateware/rtio/dma.py b/artiq/gateware/rtio/dma.py
index 735d52f54..fd37c2ed1 100644
--- a/artiq/gateware/rtio/dma.py
+++ b/artiq/gateware/rtio/dma.py
@@ -331,13 +331,15 @@ class DMA(Module):
flow_enable = Signal()
self.submodules.dma = DMAReader(membus, flow_enable)
        self.submodules.fifo = stream.SyncFIFO(
            self.dma.source.description.payload_layout, 16, True)
        self.submodules.slicer = RecordSlicer(len(membus.dat_w))
        self.submodules.time_offset = TimeOffset()
        self.submodules.cri_master = CRIMaster()
        self.cri = self.cri_master.cri
self.comb += [
Here is what I propose:
@pca006132 how is the DMA performance on Zynq? Does the ARM RAM controller give better performance?
There is some debug code and cache flushing in the current artiq-zynq master. With those removed (and the cache flush moved to another location), we can get to 65mu.
Note that this is because the handle is reused every time. Cache flushing is a pretty expensive operation, so the time it would take to get the handle is not negligible.
Note: This is not using ACP as it is not finished yet, I expect a bit better performance with ACP.
Edit: ACP would not be used for DMA due to low bandwidth.
Cool! That's a big step forward. Is that with the analyzer enabled? I remember there being quite a long tail to the underflow distribution, where we'd very occasionally find that sequences which would normally run with quite a bit of slack would underflow. If that's also reduced, it would be wonderful...
Yes, the analyzer is enabled; I could get some analyzer output:
OutputMessage(channel=4, timestamp=17094553753, rtio_counter=17094549496, address=0, data=1)
OutputMessage(channel=4, timestamp=17094553761, rtio_counter=17094549528, address=0, data=0)
OutputMessage(channel=4, timestamp=17094553818, rtio_counter=17094549560, address=0, data=1)
OutputMessage(channel=4, timestamp=17094553826, rtio_counter=17094549592, address=0, data=0)
OutputMessage(channel=4, timestamp=17094553883, rtio_counter=17094549624, address=0, data=1)
OutputMessage(channel=4, timestamp=17094553891, rtio_counter=17094549656, address=0, data=0)
OutputMessage(channel=4, timestamp=17094553948, rtio_counter=17094549688, address=0, data=1)
OutputMessage(channel=4, timestamp=17094553956, rtio_counter=17094549720, address=0, data=0)
OutputMessage(channel=4, timestamp=17094554013, rtio_counter=17094549752, address=0, data=1)
OutputMessage(channel=4, timestamp=17094554021, rtio_counter=17094549784, address=0, data=0)
OutputMessage(channel=4, timestamp=17094554078, rtio_counter=17094549816, address=0, data=1)
OutputMessage(channel=4, timestamp=17094554086, rtio_counter=17094549848, address=0, data=0)
OutputMessage(channel=4, timestamp=17094554143, rtio_counter=17094549880, address=0, data=1)
OutputMessage(channel=4, timestamp=17094554151, rtio_counter=17094549912, address=0, data=0)
So it should be working correctly I think.
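As a quick sanity check on the trace above, one can compute the slack (timestamp minus rtio_counter) and the spacing between events. The numbers below are copied from the first four quoted messages:

```python
# Check the analyzer trace: slack and inter-event spacing for the
# first few OutputMessages quoted above.
timestamps = [17094553753, 17094553761, 17094553818, 17094553826]
counters   = [17094549496, 17094549528, 17094549560, 17094549592]

slack = [t - c for t, c in zip(timestamps, counters)]
ts_deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
ctr_deltas = [b - a for a, b in zip(counters, counters[1:])]

print(slack)       # stays around +4200, so no underflow
print(ts_deltas)   # [8, 57, 8]: 8 mu pulse followed by the longer delay
print(ctr_deltas)  # [32, 32, 32]: events retired at a steady rate
```

The slack stays comfortably positive throughout, consistent with the sequence playing back correctly.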
The sustained DMA event rate is surprisingly low on Kasli. Using the below experiment, I find that the shortest pulse-delay time without underflow for a TTL output is:
For comparison, with the current KC705 gateware this is 128mu, and sb0 believes this should be closer to 48mu (3 clock cycles per event, https://irclog.whitequark.org/m-labs/2018-03-05)
(N.B. the RTIO clock for the DRTIO gateware is 150 MHz, vs 125 MHz for Opticlock)
Experiment: