sinara-hw / meta

Meta-Project for Sinara: Wiki, inter-board design, incubator for new projects

[RFC] new EEM: Banker #18

Closed: jordens closed this issue 5 years ago

jordens commented 5 years ago

Some experiments need a large number of digital outputs, more than you want to implement with DIO-BNC/RJ45/SMA boards connected directly to Kasli; say 3*32 = 96 digital outputs as a ballpark. This increase in the number of digital outputs can come at the expense of reduced timing flexibility.

The proposed EEM would occupy a single EEM port on Kasli and be an N times 32 bit SPI shift register (either 6x SPI with N=6 or 2x SPI with N=2). It would either (a) feed N*4 downstream EEMs of the DIO type or (b) itself expose those N*32 digital outputs at its front panel.

Such a banked digital output would make multi-output events much smaller (many channels can be switched at once), but would introduce a contention window of about ±0.5 µs (62.5 MHz SPI clock, 32 bit) around each event: if an output changes at time t, other outputs within the same bank can either change at exactly t or are blocked from changing within the window t ± 0.5 µs.
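The ±0.5 µs figure follows directly from the bank size and SPI clock quoted above; a minimal sketch of the arithmetic (constants taken from the post):

```python
# Back-of-envelope check of the contention window: updating one bank
# means shifting 32 bits out over SPI at 62.5 MHz.
SPI_CLOCK_HZ = 62.5e6   # SPI shift clock from the proposal
BANK_BITS = 32          # one shift-register bank

shift_time_s = BANK_BITS / SPI_CLOCK_HZ
print(f"bank update time: {shift_time_s * 1e9:.0f} ns")  # 512 ns, i.e. ~0.5 us
```

So any two events on the same bank closer together than one bank-update time (~0.5 µs) cannot be realised independently, which is the contention window described above.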

gkasprow commented 5 years ago

Shall we use a tiny CPLD, a low-cost FPGA, or discrete logic? With a PLD we can implement not only simple IOs but also addressable IO toggling, or even a simple sequencer that plays predefined patterns on many outputs at full speed. A basic version would be just an SPI register. QSPI would also be possible and would increase the timing resolution.

What connector should we use? SCSI? We can use the same connector as on the VHDCI carrier and in this way provide 128 signal and GND lines; 2x 48 channels would be feasible. Existing low-cost cabling is an advantage here. We could also use ruggedized stacked IDC connectors, say 2x 64 pins, and likewise provide 2x 48 channels + GND. We need plenty of GND pins, unlike in the LVDS case.

What signalling? 5V TTL @ 50R?

hartytp commented 5 years ago

@jordens Sounds like an interesting project. Forgive me for being nosy, but I'm curious what the use case for this is. One of the things I really like about ARTIQ/Sinara is how many TTLs/BNCs have disappeared from our labs, since components like switches, DACs, DDSes etc. are now controlled directly from Kasli. Is this for users who haven't fully switched to Sinara yet? Or do you have use cases in mind that still need a lot of DO, even in a "native" Sinara experiment?

jordens commented 5 years ago

Optical clocks and neutrals still need lots of shutters. Coil switching, ovens, cameras, and random other external hardware all adds up.

jordens commented 5 years ago

For the addressable EEM option (a) I'd go with a cheap FPGA since it's all LVDS, and for the HD-connector option (b) probably as well. Maybe one board to rule them both.

gkasprow commented 5 years ago

@jordens so the spec would look like this:

hartytp commented 5 years ago

> Optical clocks and neutrals still need lots of shutters. Coil switching, ovens, cameras, and random other external hardware all adds up.

Makes sense.

Anyway, a board with an EEM port, a cheap FPGA, and lots of DIO sounds like a useful addition to Sinara.

gkasprow commented 5 years ago

It looks like we also need this board for satellite test systems. VHDCI + breakout is the preferred choice.

jordens commented 5 years ago

I'd put the following on the board:

gkasprow commented 5 years ago

Actually, I wanted to propose an EEM board with upstream and downstream ports to extend the number of slow control signals. But there are no low-cost FPGAs with that many LVDS pairs; we would have to use a Kasli-type FPGA and essentially build a Kasli without SFPs :) With the biggest (but still very cheap) iCE40 we can have one upstream EEM port and 2 downstream EEM ports. Buying 8 DIO boards would be quite expensive, so I propose using LVCMOS to 5V TTL buffers on board, routed to the dual VHDCI, without isolation. Anyone who wants isolated ports can use the downstream EEM ports; I'm not sure there are many cases requiring all ports to be isolated. And then we have 2 breakout modules that convert VHDCI to screw terminal blocks, IDCs, SMAs, or spring-loaded terminal blocks.

We can even place the drivers/level converters on the breakout board to improve signal integrity over long cables.

gkasprow commented 5 years ago

What kind of breakout connectors would you use most often? I can prepare 2 VHDCI breakout boards with different connectivity.

jordens commented 5 years ago

OK, convinced. Maybe we can fit 2 pairs of stacked VHDCI connectors on there, if there is space and power for the drivers.

gkasprow commented 5 years ago

@jordens We have only 168 single-ended pins on the FPGA, so we won't have enough for 2x 96 IOs. We are 5 mm short of the EEM width, so we won't be able to fit two VHDCI connectors. Moreover, if we want to fan out all FPGA balls, we would need at least 6 layers. So I think we should keep it simple.

dtcallcock commented 5 years ago

> single VHDCI to another connector fanout. What connectors would be most feasible?

Something like these NI breakouts?

I like the mix of options and the 19" form factor. I'd also add a couple of 9-way D-subs. I think they are the most convenient way of hacking 8x slow TTL onto things in the lab.

sbourdeauducq commented 5 years ago
> What about updating it via I2C? We were discussing it in the case of Urukul, but in the case of such an FPGA I can imagine that someone would like to upgrade it much more often than the Urukul CPLD. A simple I2C -> SPI converter would do the job and allow access to the SPI config memory or even the FPGA.

I don't know how it works with Lattice, but Xilinx FPGAs can rewrite their flash and doing it through the FPGA would be preferable to adding potentially buggy circuits to the board.

gkasprow commented 5 years ago

@sbourdeauducq In the Lattice iCE40 (it was SiliconBlue before) the SPI FLASH is connected entirely to GPIO pins, so it should be possible to update the FLASH the same way. I don't know if the Lattice part can initiate reconfiguration by itself; I can always connect some IO to the CRESET_B pin via a MOSFET to enable this. I use this NXP chip to connect SPI ADCs to a common I2C bus, and they work fine, but I have never used it with FLASH. Direct access would be far quicker: writing a 1 Mbit FLASH over I2C takes a while.
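How much slower the I2C path is can be roughly estimated from raw bit-transfer time alone. A hedged sketch (the 400 kHz I2C and 25 MHz SPI clocks and the overhead factor are my assumptions, and protocol details like page writes and erase times are ignored):

```python
# Rough estimate of bitstream-update time over I2C vs. direct SPI.
# Assumed bus speeds; real numbers depend on the bridge chip and FLASH part.
FLASH_BITS = 1e6  # ~1 Mbit iCE40 configuration image, as in the post

def transfer_time_s(bus_hz, overhead=1.25):
    # overhead factor loosely accounts for ACK bits and command framing
    return FLASH_BITS * overhead / bus_hz

print(f"I2C @ 400 kHz: {transfer_time_s(400e3):.2f} s")    # seconds
print(f"SPI @ 25 MHz:  {transfer_time_s(25e6) * 1e3:.0f} ms")  # milliseconds
```

Even this optimistic estimate puts the I2C path at several seconds versus tens of milliseconds for direct SPI, consistent with the "takes a while" remark.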

gkasprow commented 5 years ago

@dtcallcock Since the VHDCI cable can already be 10 m long, does it make sense to make such a fanout in a 19" form factor? Maybe something smaller that one can put on an optical table? Maybe we can make a VHDCI to D-sub9 breakout, and then you can distribute it along the table directly to the equipment. How fast are these pulses? Is there any standard pinout for D-sub9? I see that 3 VHDCI breakouts would do the job.

gkasprow commented 5 years ago

> 96 IO? I thought the VHDCI had ~34 pairs, i.e. ~32 IO if we add adequate ground returns and watch the twisted-pairing of the cable, I2C.

Yes, with 5V TTL signalling and 2 VHDCI connectors we can have 2x 48 lines. We need much more than just a few GND contacts. But long 10 m VHDCI cables may introduce some crosstalk between neighbouring lines in the pairs. I thought you wanted to have single-ended DIO.

> IMHO 2x32 IO is fine here (i.e. a stacked VHDCI with adequate ground and I2C). That also matches the upstream SPI and EEM interface nicely. And we could just VHDCI-HD68 adapter this to the already existing HD68IDC 4xIDCBNC cards and get BNC breakouts for free.

Using 32 single-ended signals per VHDCI would be a better idea in terms of SI.

> Instead of the screw/spring/pluggable connector, maybe we should consider merging this with Humpback, since the FPGA, the EEM interface and the rest are so similar. But maybe they can share the design of the FPGA/EEM/power/I2C design.

I was thinking about it, but it's easier to make 2 designs. Screw/spring/D-sub9 is an idea of how to terminate the VHDCI on the bench.

> An I2C-SPI converter is fine, as would be indirect programming through the FPGA (I'd also like to have the FPGA on the I2C tree).
>
> If we only do 64 TTL, then maybe the power we can get from the upstream EEM is OK and we don't even need the DC jack.

It depends how many Banker EEMs you want to stack. With downstream ports you can stack them indefinitely...
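The "~32 IO per VHDCI with adequate ground returns" figure can be sanity-checked against the 68-pin connector. A minimal sketch; the exact allocation (one ground return per signal, an I2C pair) is my assumption based on the discussion above:

```python
# Hypothetical pin budget for one 68-pin VHDCI connector carrying
# 32 single-ended 5V TTL lines with a ground return per signal,
# plus an I2C pair, as discussed in the thread.
VHDCI_PINS = 68
signals, grounds, i2c = 32, 32, 2

spare = VHDCI_PINS - (signals + grounds + i2c)
print(f"spare pins: {spare}")  # pins left for e.g. extra GND or presence detect
```

With this allocation only 2 pins remain, which is why 32 IO (rather than 48) per connector is the comfortable single-ended figure.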

gkasprow commented 5 years ago

It looks like we are converging:

In this configuration, with a single Banker, one will be able to connect:

If there is interest in other breakouts, I can design one.

jordens commented 5 years ago

Yeah. I would share the 64 IOs between the IDC-BNC connectors and the VHDCI; otherwise it's probably too much unused, power-hungry driver logic. And maybe allocate the downstream I2C buses as: the FPGA/SPI-to-I2C/EEPROM, the two downstream EEMs, and the 2 VHDCIs. IO-wise, that's 3x2x8 = 48 FPGA pins for the 1 upstream and 2 downstream EEM connectors, 64 for the digital IOs, 8 for the direction switches, and ~8 for SPI flash and I2C, i.e. some 128 IOs total. Maybe an 8-way DIP switch, either for board-level direction switching or wired to the FPGA.
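The pin budget above adds up as claimed; a quick check (figures from the comment, the grouping names are mine):

```python
# Sanity check of the FPGA IO budget sketched in the comment above.
eem_ports = 3 * 2 * 8  # 1 upstream + 2 downstream EEM connectors, 8 LVDS pairs each
dio       = 64         # shared front-panel digital IOs
direction = 8          # per-group direction switches
misc      = 8          # ~SPI flash + I2C

total = eem_ports + dio + direction + misc
print(total)  # 128
```

That 128-pin total sits under the 168 single-ended pins quoted earlier for the FPGA, leaving some margin for the DIP switch and miscellaneous signals.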

Just as a side note, the datasheet specifically describes how to emulate LVDS outputs with LVCMOS on all banks, so there are more LVDS output pairs available. But AFAICT they all need external termination anyway, so it would be cumbersome, in addition to the points you raised.

gkasprow commented 5 years ago

Driver logic consumes just tens of µA when not driven. Sharing pins between two connection options would lead to confusion. I will try to route both options with 4 layers; a long time ago I succeeded in routing almost all BGA484 pins with 3 signal layers. The emulated LVDS works only as an output, so you could use it to connect only output EEMs. That may lead to confusion because it wouldn't be a full EEM. Let's keep it simple: if someone needs more isolated outputs, they can cascade Banker boards or use another Kasli.

jordens commented 5 years ago

Ack. Yes. I wouldn't prioritise the downstream lvds eem ports.

gkasprow commented 5 years ago

Discussion continues here