@dhslichter Take into account that to place such a big FPGA on the backplane, you need at least 12 layers. And such a big 19" board would be hellishly expensive. Moreover, DIN connectors are THT, so you cannot place a BGA under them and some slots would be wasted, at least 12-14 HP. It's much cheaper and more convenient (for maintenance) to keep the backplane simple and make Kasli complex. We don't have to use a DIN96 connector for Kasli. We can easily use any type of backplane connector, e.g. the 170-pin AMC one we use in Sayma/Metlino. We can stack them to get 340 or even 680 pins, and they are dedicated to high-speed signalling. These backplane connectors are cheap, and on the Kasli side you don't need any connector at all (just PCB fingers for the 170 pins). We can make the current Kasli compatible with such connectors. look here
ack @gkasprow, sounds very logical.
There is an interesting initiative here: http://easy-phi.ch/ They use a different approach, based on an active backplane with USB as the main interface. There is plenty of work they have done and it would be nice to find a common denominator between our EEM and their backplane. An example board schematic is here. On page 2, in the upper right corner, there is the DIN connector pinout. I see potential for marrying our solution with theirs. In the case of the backplane, we can simply use the pins which are grounded for our LVDS lanes. This way such a backplane would support both DRTIO boards and non-DRTIO boards with a USB interface, where a simple controller based on a USB hub or some RPi/Beaglebone does the job. For DRTIO extensions we simply plug in Kasli. Or we can mix the two kinds of boards within a crate. We could build all planned hardware to be compliant with this approach. It would be nice to combine our and their efforts and not duplicate the work. 5HP is not an issue; one can simply place 1HP dummy panels in between ours. This is just an idea, nothing more.
Pretty sure that's been dead for a while. I chatted with them back in '15 but haven't seen any movement since then.
@jordens at least we can reuse some ideas for slow control things :)
Yep.
@jordens I met a man from Unige at the quantum engineering workshop in London, who is using this Easy-phi stuff. They contacted me and it seems the project is still alive.
The connection between the panel board and the smart backplane could be 4x SATA plus a ribbon for all of USB/Power/SFP-Management/LEDs and maybe MMCX-SMA pigtail for the clock.
The question is whether we really want a smart backplane. It is the part that is most difficult to replace and also the most expensive. It's much easier to break an active backplane, while a passive one can only be broken mechanically. I'd keep the backplane dumb and put all the smartness into Kasli or another controller. Today we had a discussion with @marmeladapk and he asked a simple question: why do we really need this backplane at all? I know it looks nice and we get rid of the cable nest, but we add cost: all boards must be bigger to reach the backplane, so they will be more expensive; the backplane itself does not come for free; and we limit slot availability if we use 8HP boards. We can of course make some slots 8HP and some 4HP. Another aspect is crate size - I have seen users with different crate widths. Some use table-top crates, others prefer 19" rack-mount ones. We would need dedicated backplanes for these as well.
If we really want to go for a backplane, it would be nice to keep some compatibility with existing EEM boards. For the short boards it's easy - just add an extender with an IDC socket and a backplane connector.
We can do the same with the existing Kasli - extend the IDC cables to a backplane connector attached directly to the other end of the IDC cable. It has 64 pins (the 96-pin version also has only 64 connected pins), so it would split into 2 EEMs. But this means we need 4 additional such backplane IDC connectors!
If we go for the 96-pin ones, we would need only two of them, but a dedicated adapter board would be needed to translate eight 30-pin IDCs to two straight DIN 41612 connectors. Another issue is clock routing.
Yet another option, which does not use any cables, is to add two mezzanines to Kasli, with clock distribution and female connectors, that:
In this way we keep compatibility with all existing boards. Urukul, Sampler and Mirny would need to use 8 HP slots and dedicated mezzanines (EEM-to-backplane adapters).
I agree with that.
@gkasprow I agree completely: while the idea of a BP sounds nice, I'm not sure we'd use one even if it were available:
I'm going to (perhaps optimistically) close this issue for now. My reason for doing this is that currently there does not seem to be a clear plan for the BP, or even necessarily a clear use case for it. Moreover, no one is currently taking the lead on it and I'm not sure we have the resources to think about it properly. My suggestion is that we keep this closed until someone has the time/motivation to take an active lead on this.
Following on, I agree with the above that the backplane does not seem to provide important benefits, and introduces a lot of extra complication and cost, at this point. The big points of the Kasli ecosystem are low cost, flexibility, adaptability, and simplicity, and the backplane idea runs counter to all of them.
For future reference - there is a COTS backplane that seems to fit our needs perfectly: the CompactPCI Serial standard. In the most basic configuration it has 8x LVDS pairs, I2C, 12V and a management power rail. And it is a 3U Euro crate. But it supports only 8 cards. The modules are 4 HP. A short introduction is here. Of course it will be more expensive than the existing solution, mainly due to the connectors.
Let's continue the cPCI discussion here. A couple more pieces of info from Schroff and Hartmann
And some general info on backplane design in this article
The CERN DIOT project, with two presentations and the DIOT 24 IO board design
Management/standby power seems to be 5V instead of 3.3V.
That's not a big deal, a tiny 3.3V regulator does the job.
@gkasprow do I understand correctly that you want to use the 4 Ethernet, 2 USB3 and 2 SATA pairs as the second EEM connector on the "dual EEMs"? You would not use the fat pipe on the first two slots?
If we want to keep the same pinout between slots, it's a must. We won't use this backplane for PCIe anyway, so who cares. If we use only the first two slots with the double-width fat pipe, we still have no use for the second EEM slot.
I think there is one "slot 9: dual EEM" missing.
I didn't count slot 0 with the controller :) Let's number them correctly according to the specification, starting from 1.
How much would a custom backplane cost? Is that a 20 layer board in the 5k range or more like 1k?
It depends strongly on quantity, but I don't see a strong reason to do it. We don't need a full-mesh topology, only a single-star configuration. FR4 would also work, no need for 10G support. The price would be dominated by connectors, not the PCB. 10 layers would be sufficient.
IMO if we go down this route, we should just settle on 4 HP and try really hard to stick to the number of diff pairs easily available. That's 12 diff pairs (4-lane PCIe, 1 USB3, 1 SATA) for ports 3-8 and 20 pairs for ports 1-2.
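To make the counting explicit, here is a quick tally of those numbers (a sketch only; I'm assuming one TX plus one RX differential pair per PCIe lane, USB3 link, and SATA link, which matches the totals above):

```python
# Tally of differential pairs per CompactPCI Serial slot, matching the numbers
# quoted above (illustrative breakdown, not taken from the spec text itself).
pairs_standard = 4 * 2 + 2 + 2   # PCIe x4 (TX+RX per lane) + USB3 + SATA -> ports 3-8
pairs_fat_pipe = 8 * 2 + 2 + 2   # PCIe x8 fat pipe + USB3 + SATA -> ports 1-2
print(pairs_standard, pairs_fat_pipe)   # 12 20
```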
Add 4 Ethernet pairs to every slot + 1 USB 2.0. I'm in favour of such a 4HP approach. If someone wants Sampler with BNCs, they can add an IDC-to-BNC converter and use 2 neighbouring slots. I wouldn't change the number of LVDS lanes too much, and would keep signal compatibility with existing boards. If we use only 4HP modules, we can simply assign dual EEM to slots 2-5 and single EEM to slots 6-9. CPCI Serial has one more interesting feature: the same boards, equipped with shells that dissipate heat, can be used in a space environment, plugged into a CPCI Serial Space backplane. It adds a second redundant controller and also dual CAN support. All connectivity is dual star. I have the spec but didn't analyse it in detail.
Sidenote: interestingly, Schroff supports both connecting the chassis to the backplane ground and isolating backplane and chassis ground. I.e. connecting chassis and signal/power ground is neither required nor the only option.
In satellites and high reliability systems isolation is a must.
The rear IO stuff would also perfectly fit the idea of having BNC breakouts towards the back, e.g. for Sampler. Or DIO BNC, or Zotino, or Banker.
I am ok with using the Ethernet star from Kasli for four more diff pairs. The only downside is that it requires another connector and is physically far away from J1.
We would use it only on boards that require a second EEM. I'm a bit sceptical about using rear IO for analogue signals. There are other high-density connector standards that fit within 4HP: SMB, SMC, or micro BNC. We can always use BNC pigtails, which can be cheap when ordered in large quantities. @jordens any idea about cPCI pricing for a typical crate with power supply?
There are other high-density connector standards that fit within 4HP: SMB, SMC, or micro BNC. We can always use BNC pigtails, which can be cheap when ordered in large quantities.
My preference would be to scrap BNC across the whole of Sinara as it's just too big for Eurocard panels IMHO. If we do that then I think we should pick one connector to replace it (and also replace SMA in some places where high performance isn't required).
MMCX would be an obvious choice as we are already using it for the Sayma v2 AFE analog inputs and Stabilizer aux digital inputs. However, they are a bit on the small side and easy to disengage compared to SMB/MCX, so one might worry about mechanical robustness. SMC/Micro BNC/LEMO all seem somewhat less ubiquitous and more expensive. My slight preference would be for MCX, but it would be good to hear from people with more experience.
I'm not totally sure how you would isolate this kind of connector from the front panel though (just a clearance hole?).
Perhaps this should be a separate issue ('[RFC] Death to BNC'?).
One thing I worry about is whether there will be broad support and funding from the current user base to convert 10-20 boards from EEM to cPCI. Whilst it looks like it could be a nicer system in many ways, it doesn't add any killer new features and the lack of backwards compatibility with already-purchased hardware would be painful to swallow.
If the project were well funded by its own grant, that would probably make it more palatable. There seems to be a market, as for example Innsbruck's Quantum Flagship effort specifically has "control electronics developed to commercial level" as one of its milestones.
My preference would be to scrap BNC across the whole of Sinara as it's just too big for Eurocard panels IMHO.
I disagree; many groups have small systems and there is plenty of space for BNCs in an 84HP crate.
Yes, the backplane isn't very interesting and the backward compatibility problems are tough (and what if we need to replace or add a board in an existing EEM system?).
Backward compatibility would be ensured by installing both EEM and cPCI connectors. The EEM connector would not be mounted by default. Of course one would not be able to use previous-generation modules with the backplane. To continue support for 8HP panels, an LVDS bus mux on Kasli would do the job (to some extent) without losing EEM signals in unused slots. The nice thing about the existing system is that one can entirely fill a 19" rack with a mix of 8 and 4HP modules and utilise all 12 EEM channels without losing a single one. I'm not sure if it is really important. We have an idea to adapt existing Sinara HW to space applications, and a backplane is a must in such a case, so funding would be provided. And if the whole community could benefit from that, it would be great.
The cPCI connectors are press-fit mounted, so they can be installed after the boards are assembled, and the user can choose which connectivity they want when ordering.
Backward compatibility would be ensured by installing both EEM and cPCI connectors
There is still the issue with Kasli.
The nice thing about the existing system is that one can entirely fill a 19" rack with a mix of 8 and 4HP modules and utilise all 12 EEM channels without losing a single one. I'm not sure if it is really important.
Yes, that's very nice. Also, the cables work with all enclosures and without the frustrating issues that tend to pop up at every turn in mechanical systems (for example, Schroff front-panel screws mounted on some other brands' enclosures cannot be inserted fully due to their length and do not hold the panel properly).
@dtcallcock @sbourdeauducq That line of argument doesn't carry much weight for me. It's like comparing pdq and Shuttler. Shuttler also doesn't add any killer feature over pdq and is fully backwards incompatible. But Shuttler is a much better design that addresses the problems with pdq. The backplane is extremely interesting and AFAICT the only way forward, for the numerous reasons mentioned. BNC can go. There would be patch and break-out panels for people wanting low-density connectors.
You cannot compare one NIST internal design with a full line of products that is deployed in many dozens of labs. The backplane is not the only way forward as demonstrated by the successful deployment of the ribbon cable systems.
I disagree; many groups have small systems and there is plenty of space for BNCs in an 84HP crate.
Personally, I also like BNCs. Much quicker to swap in and out. Much easier to make one's own cables to length quickly and cheaply.
There would be patch and break-out panels for people wanting low-density connectors.
One can do that, but IMHO it's much less convenient.
The nice thing about the existing system is that one can entirely fill a 19" rack with a mix of 8 and 4HP modules and utilise all 12 EEM channels without losing a single one. I'm not sure if it is really important.
It's not a deal breaker, but it is a nice feature of the current system.
Ultimately, the BP is doable, but the question is going to be whether the benefits outweigh the downsides.
Anyway, since this issue keeps cropping up, if someone has time, I do think it's worth putting together a full proposal for what the BP system would look like in detail. Once we have a proposal and can see the trade offs, it's ultimately up to the users to decide which system they would prefer to use...
Possible scenario:
What we can also do is design an adapter for Kasli/Kasli Zynq that takes all signals from the EEMs and distributes them to the CPCIS connector using bus switches. But in that case Kasli v2 would have to be 160mm long and both boards would have to use the same EEM connector pitch.
The guys at CERN are also building something very, very similar. Also 3U, also cPCI Serial; they will need a system controller on Zynq. They also use 8x LVDS between it and the modules. They are thinking about a custom 19" crate with 8HP and 8 user slots to solve the issue with 4HP panels. They want to run JTAG on some spare pins and have central access to all FPGAs in the system. They need it radiation-hard for work in the accelerator tunnel, but also usable in non-radiation environments. So with the existing CPCIS backplane and a CERN 19" backplane with 8x 8HP slots, we would cover most use cases.
I thought that Kasli was supposed to be a cheap system with a low entry barrier for what it offers, and uTCA was the high-power solution which had everything figured out.
While scanning this thread and the Kasli Zynq one, I didn't notice many arguments for it; it's mostly comments on how we could do it. Arguments for easier assembly and not being a hack don't outweigh the cons IMO.
And you can even replace µTCA (and improve cost, debugging, sourcing, and, I would say, reliability) with a 6U eurorack, a fan tray, and a bunch of USB-C cables :)
The assumption is that the backplane won't be expensive. And the backplane is of course simple, simpler than properly routing and managing 12 ribbon/coax cables. Sure, you can compare it to the extremely successful PDQ solution that has been deployed and used in several different groups for much longer than Kasli. At least 50 boards are in use. It's not NIST-internal at all. You probably didn't know that. Obviously Shuttler will still be much better and is the way forward.
@sbourdeauducq you must think that everybody not using "a bunch of USB-C cables", not gluing together improperly fitting panels to boards, and everybody who looks beyond the hollow "works for me" because they have had actual experience working with these crates and backplanes is extremely stupid.
Not "stupid" but there is definitely a tendency to overengineer things. And I did not glue those panels.
@gkasprow to get back to technical questions. Would the switches be driven by some presence pin on the EEMs? I don't fully understand your backplane proposal, which diff pairs would be available to which EEMs, and how that meshes with the module width. Now we have three backplanes: your custom one, the CERN custom one, and the cPCI Serial one.
A few more words about the CERN approach. The Pentair crates they use have the backplane mounted at 160mm depth, but it can be easily moved to 220mm and they did so. One can order a 100, 160 or 220mm version. So we could use our Kasli/Kasli Zynq with an added adapter with bus switches and a CPCIS system controller connector. When demand is high, a dedicated version of the system controller could be designed. Moreover, we could make other adapters that enable a transition between existing boards and the backplane. So to summarize - if:
We would be able to support both connectivity approaches without increasing the price, maintaining both the low-cost and the backplane options. CERN is also developing a CPCIS rad-hard power supply.
My backplane would be exactly the same as the CERN one. The CERN one is not yet fully fixed; they are working on the specification. My idea is to make a controller with 16 slots where an additional 4 are shared using bus switches. They could be switched by detecting EEM slot insertion. So when you plug an 8HP module into a 4HP slot, the neighbouring slot's signals would be switched to another slot as a second EEM, so as not to waste signals.
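A rough sketch of the switching rule I have in mind (the slot representation and presence detection are placeholders, not a fixed design):

```python
# Rough sketch of the shared-slot rule described above. "modules" maps a
# populated slot number to the module width in HP (placeholder representation).
def shared_eem_routing(modules):
    routing = {}
    for slot, width_hp in sorted(modules.items()):
        neighbour = slot + 1
        # An 8HP module mechanically covers the neighbouring 4HP slot; instead
        # of wasting that slot's lanes, the bus switches steer them back to the
        # wide module as its second EEM.
        if width_hp == 8 and neighbour not in modules:
            routing[neighbour] = ("second EEM of slot", slot)
    return routing

# Example: an 8HP module in slot 3 picks up slot 4's lanes as its second EEM.
print(shared_eem_routing({1: 4, 3: 8}))   # {4: ('second EEM of slot', 3)}
```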
@dtcallcock @sbourdeauducq That line of argument doesn't carry much weight for me.
I'm not trying to argue against this - indeed I think it would be great. I'm just pointing out that there may not be a lot of appetite in the current community to hand over cash in order to obsolete hardware (they just bought) for no gain in physics capability. I like @gkasprow's idea of leveraging the space or CERN community, and I think attaching this to a future quantum computing mega-grant is also a possibility.
Yet another idea is to not touch the EEM boards at all. Just add 2 extra mounting holes where an IDC-to-cPCIS adapter would be attached. Those who want to use the boards as they are do nothing. Those who want to plug them into cPCIS would buy such a low-cost adapter, fasten it to the EEM board using 2 screws and plug it into the cPCIS chassis. We can modify the EEMs and use right-angle EEM connectors to make this step easier, without an IDC jumper. For some of them, just another RA connector type would be needed on the existing PCB. Moreover, an RA IDC connector makes installation of the IDC cables a bit easier.
Moving the BNC discussion to the Sampler repository as this is the only board it really affects right away.
Here is what I mean by a Kasli/Kasli_Zynq cPCIS adapter acting as a system controller.
And a cPCIS EEM module adapter attached to an existing board.
So with these two modules we could seamlessly transfer the existing ecosystem to the cPCIS mechanical standard. The EEM adapter module would have a balun to convert the differential clock to a single-ended clock, plus a 3.3V LDO. The Kasli adapter module would need a balun, a clock distribution IC and 6 bus switches.
CERN has already designed an interesting radiation-hard FMC carrier / system controller based on a ProASIC FPGA.
I don't see people with one of the current crates buying a new crate, backplane, and a bunch of adaptors just to replace the ribbon cables. Therefore, I personally wouldn't bother with the Kasli-cPCI adaptor. I don't think it's unreasonable to say that people who want the new backplane have to buy a cPCI controller.
Would the cPCI adaptor allow people to build complete cPCI crates if there is an interim period when not all cards are available in the cPCI form and/or reuse old-style peripherals in a new-style crate? The issue I can see is that if the cPCI controller is 160mm depth then the backplane can't be moved back to make space for an old-style board + adaptor.
The cPCI controller would be fixed at 220mm anyway. That is the role of the cPCIS adaptor: to use existing boards, or boards that have not yet been modified to support cPCI natively. Anyway, we will keep EEM connectors as the default and the cPCI version would be assembled on request. One can also press-fit the cPCI connector any time later. The Kasli-cPCI adaptor is a temporary solution. If there is enough interest in the community, we will design a Kasli version that supports cPCIS natively and has lower cost. CERN wants to participate in system development if we find a common denominator for both projects.
Yes. Using an existing backplane - CERN or standard - is very valuable.
What minimum number of layers does the connector require on Kasli and on the EEMs?
In our case 4 layers would do the job.
CPCIS uses a shared I2C bus that goes through all modules. In the Kasli ecosystem we use a different approach with an I2C mux/switch. We want to keep the backplane passive, so we have to choose a different approach. The CPCIS standard defines a few control lines:
I propose using PCIE_EN to enable the I2C bus switch. Normally this signal is pulled low by a 220R resistor. The controller detects module presence using a weak pull-up. Then it drives the line high, enabling the I2C bus switch and connecting the EEM I2C resources to the common I2C bus for module identification. CPCIS also defines a common SPI bus with geographical addressing, but so far I see no use cases for it.
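A minimal sketch of that sequence, with made-up GPIO/I2C helper names just to show the order of operations (the 0x50 identification address is an assumption, not part of the proposal):

```python
# Sketch of the proposed per-slot detection/enable sequence; pcie_en and i2c
# are hypothetical driver objects, not a real API.
def probe_slot(pcie_en, i2c, ident_addr=0x50):
    pcie_en.enable_weak_pullup()
    if pcie_en.read():        # line floats high: no 220R pull-down, slot is empty
        return None
    pcie_en.drive_high()      # overcome the 220R, enabling the module's I2C bus switch
    # The module's I2C resources are now on the common bus; read its identification.
    return i2c.read(ident_addr, register=0x00, length=16)
```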
Moving my comment from #204 to a new issue:
Might it be better in the long haul to have Kasli's functionality actually just integrated on the backplane? You would have a slim "dumb" card that brings SFP and coax connections to the front panel, and connects them to the backplane via some high-performance but lower-pin-count connector suitable for gigabit signals. Then the Artix FPGA, clock distribution, etc. all live on the backplane, and you don't need crazy fat connectors just to route all the EEM signals and clocks off the Kasli and onto the backplane. This would also free up more front panel space, because if you are adding 96-DIN mezzanines to Kasli then this obviously will make it wider, which may be undesirable for some.
The only downside I can see is that if your FPGA on the backplane goes bad for some reason, it's more work to change it out... but this seems like it should not be a very common occurrence.