cms-gem-daq-project / ctp7_modules


Minimal working version of the templated RPC methods #169

Closed · lpetre-ulb closed 4 years ago

lpetre-ulb commented 4 years ago

Description

This PR aims to provide a minimal working set of templated RPC methods.

The following changes were made:

[Requires https://github.com/cms-gem-daq-project/xhal/pull/144, so it is probably better to wait before merging.]

Types of changes

Motivation and Context

Port cmsgemos to the templated RPC modules in order, as a first step, to remove the direct IPBus dependency.

How Has This Been Tested?

A basic RPC call (e.g., vfat3::getVFAT3ChipIDs) runs successfully:

$ cat test.cpp
#include "xhal/common/rpc/call.h"
#include "ctp7_modules/common/vfat3.h"
#include <iostream>

int main(int argc, char **argv)
{
        // Connect to the CTP7 and load the RPC module to call into
        wisc::RPCSvc conn;
        conn.connect("eagle48");
        conn.load_module("vfat3", "vfat3 v1.0.1");

        // Templated RPC call: retrieve the VFAT3 chip IDs (here for OH0,
        // with an empty VFAT mask and decoded, non-raw IDs)
        for (auto &i: xhal::common::rpc::call<vfat3::getVFAT3ChipIDs>(conn, 0, 0x0, false))
                std::cout << i << std::endl;

        return 0;
}
$ ./run.sh
6241
6415
6210
6185
6534
[...]
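
For reference, a minimal sketch of what the declaration side of such a templated RPC method might look like, assuming the xhal convention that RPC methods are functors deriving from xhal::common::rpc::Method; the header path and exact argument types here are illustrative and may differ from the real ctp7_modules/common/vfat3.h:

```cpp
#include "xhal/common/rpc/common.h"  // assumed location of the Method base class

#include <cstdint>
#include <vector>

namespace vfat3 {

    // A templated RPC method is a functor shared between client and server:
    // its operator() signature defines the remote interface, and xhal takes
    // care of (de)serializing the arguments and the return value.
    struct getVFAT3ChipIDs : public xhal::common::rpc::Method
    {
        std::vector<uint32_t> operator()(uint32_t ohN, uint32_t vfatMask, bool rawID) const;
    };

}
```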

Checklist:

mexanick commented 4 years ago

> Note that since the templated RPC migration, the LMDB database is opened and closed for every LMDB register read;

Did you have a chance to estimate the performance impact? Would it make sense to migrate the LMDB opening to the rpcsvc and return a copy of the handles (although this can be quite tricky...)?
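
For illustration, one way to avoid reopening the database on every access would be a process-wide handle, e.g. a function-local static environment. A minimal sketch using the lmdb++ API; the path, map size, and flags are placeholders, and the actual ctp7_modules code may need to manage the environment differently:

```cpp
#include <lmdb++.h>

// Sketch: open the LMDB environment once per process and hand out a
// reference, instead of opening/closing it for every register read.
// Path and map size below are placeholders, not the production values.
lmdb::env &getLmdbEnv()
{
    static lmdb::env env = [] {
        auto e = lmdb::env::create();
        e.set_mapsize(50UL * 1024UL * 1024UL);  // 50 MiB, example value
        e.open("/mnt/persistent/gemdaq/address_table.mdb", 0, 0664);
        return e;
    }();
    return env;
}
```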

lpetre-ulb commented 4 years ago

> Note that since the templated RPC migration, the LMDB database is opened and closed for every LMDB register read;
>
> Did you have a chance to estimate the performance impact? Would it make sense to migrate the LMDB opening to the rpcsvc and return a copy of the handles (although this can be quite tricky...)?

Not yet, but I'll definitely do so, and open an issue if there is a significant impact (that would be my first guess).

mexanick commented 4 years ago

If there are no last-minute comments, I'll merge this in one hour. All further discussion can be handled in separate issues.

lpetre-ulb commented 4 years ago

> If there are no last-minute comments, I'll merge this in one hour. All further discussion can be handled in separate issues.

I would wait for https://github.com/cms-gem-daq-project/xhal/pull/144 to be finished. That PR works, but it uses a code path that could lead to serious problems.

lpetre-ulb commented 4 years ago

> Actually, optional is no longer experimental as of C++17, but I'm fine with the current solution ;)

Right, but the CTP7 cross-compiler does not even fully support C++11... And xDAQ is not certified for C++17...
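
As an aside, the distinction is easy to see in code. A minimal sketch with a feature-test guard (std::optional is standard only from C++17; earlier libstdc++ releases ship it as std::experimental::optional, and very old toolchains such as the GCC 4.9.2 cross-compiler mentioned below provide neither; compile with -std=c++14 or newer):

```cpp
#include <iostream>

// Pick whichever optional the toolchain provides.
#if __cplusplus >= 201703L
#include <optional>
using std::optional;
#else
#include <experimental/optional>
using std::experimental::optional;
#endif

int main()
{
    optional<int> chipID;  // disengaged until a value is assigned
    chipID = 6241;
    if (chipID)
        std::cout << *chipID << std::endl;
    return 0;
}
```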

mexanick commented 4 years ago

> Actually, optional is no longer experimental as of C++17, but I'm fine with the current solution ;)
>
> Right, but the CTP7 cross-compiler does not even fully support C++11... And xDAQ is not certified for C++17...

Which one? The old Linaro toolchain packaged by Xilinx, or a fresh Linaro GCC 7.5 (which I believe has full C++17 support)? Actually, even the older Linaro from Xilinx supports C++14, IIRC.

lpetre-ulb commented 4 years ago

> Which one? The old Linaro toolchain packaged by Xilinx, or a fresh Linaro GCC 7.5 (which I believe has full C++17 support)? Actually, even the older Linaro from Xilinx supports C++14, IIRC.

The old Linaro toolchain packaged by Xilinx (GCC 4.9.2). It indeed supports some C++14 features, but its C++11 standard library support is not complete/compliant.

I'm not sure we want to use a modern Linaro GCC release without thorough tests. We may trigger hard-to-debug run-time issues when linking against system-wide C++ libraries (particularly with exceptions). If we can ensure that we link only against system-wide C libraries, that should be fine (let's be careful though: rpcsvc is C++, but we will probably end up recompiling it...).

jsturdy commented 4 years ago

> Which one? The old Linaro toolchain packaged by Xilinx, or a fresh Linaro GCC 7.5 (which I believe has full C++17 support)? Actually, even the older Linaro from Xilinx supports C++14, IIRC.
>
> The old Linaro toolchain packaged by Xilinx (GCC 4.9.2). It indeed supports some C++14 features, but its C++11 standard library support is not complete/compliant.
>
> I'm not sure we want to use a modern Linaro GCC release without thorough tests. We may trigger hard-to-debug run-time issues when linking against system-wide C++ libraries (particularly with exceptions). If we can ensure that we link only against system-wide C libraries, that should be fine (let's be careful though: rpcsvc is C++, but we will probably end up recompiling it...).

Indeed, on the CTP7 I would stick with the compiler that everything else on the card is built with, because there very well might be difficult-to-debug issues. Otherwise, I'd say use the latest Arm toolchain compiler, which is (or was, at the time of my last update) 9.2-2019.12, I think (it should be installable in the Docker container with the getarm.sh script living in /opt/arm).

On the PC, devtoolset is safe, because that's explicitly how Red Hat designed it. In fact, the current xDAQ team recommendations are actually wrong, and moving forward I'd have no problem, for our code, even using devtoolset-9 for both CC7 and CC8 and targeting C++17 or C++2a compatibility. For anything we run on both the card and the PC, though, we might prefer to simply target C++14, incomplete though it may be on the Zynq (unless we convince Jes to update the image to a more recent version...).

jsturdy commented 4 years ago

> On the PC, devtoolset is safe, because that's explicitly how Red Hat designed it.

Well... with the proviso that the machine the code executes on runs the same or a later version of CentOS as the machine that compiled the code.