Geontech / meta-redhawk-sdr

REDHAWK SDR Layer for Yocto/OpenEmbedded -based deployments
http://geontech.com/getting-started-with-meta-redhawk-sdr/
GNU Lesser General Public License v3.0

How to interact with Redhawk Device Manager on AM57xx EVM phytec board #62

Closed: NayanaAnand closed this issue 3 years ago

NayanaAnand commented 3 years ago

My requirement: ping the hardware (AM57xx EVM board) through the Device Manager running on the hardware, from any desktop server.

Test setup: the Yocto BSP image build is complete with the meta-redhawk-sdr layer (branch: pyro) and the image is loaded on the hardware. On the hardware I can see the Device and Domain Managers running as background tasks. VLAN cables connect the hardware and the local desktop server within the same network.

I am trying to interact with the REDHAWK Device Manager running on the AM57xx EVM board from one of my desktops by using the ping utility.

But on the hardware I didn't observe any notification showing that it is responding to the ping request.

Is there any way to find out whether our ping request is reaching the hardware via the REDHAWK Device Manager?

If yes, can you guide me to where I can find the interaction interface between the hardware (Device Manager) and the local server?

But so far nothing is printed via the REDHAWK Device Manager on the board.

REDHAWK branch: pyro; Build: Yocto

Can anyone let me know how to interact with the REDHAWK Device Manager from any other desktop/laptop?

Thanks, Nayana

btgoodwin commented 3 years ago

Ping request reaching the hardware via the REDHAWK Device Manager -- no, I cannot think of any way to do this since it's really only sitting above the network stack.

The general Linux network stack isn't something we really deal with in this layer. However, are you certain the AM57xx's NIC is configured to be on the same network and subnet address as the NIC on your desktop server? The default NIC configuration (/etc/network/interfaces) in most Yocto releases is to use DHCP on eth0. If your desktop server (or intermediate layer-3 switch gear) isn't providing a response to DHCP, then though physically connected, they're not on the same logical network. And if you're using VLAN tagging you will need to make even more changes to that file. I did this in the last couple months on a different project and would be happy to help sort that out if need be.
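
For reference, the default ifupdown stanza in most Yocto images looks roughly like this (illustrative; your BSP may ship something slightly different):

# /etc/network/interfaces -- typical Yocto default (illustrative)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp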

Does the AM57xx have a serial UART you've been using for console access during the development of your BSP? From your other messages it sounds like it does (since you can see the Domain and DeviceManager processes in the background). If so, I would start there: edit /etc/network/interfaces to verify it either pulls a DHCP lease from your server or is configured with a static network address within the same subnet as your server's statically-assigned network address space (if that's your configuration, of course).
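
If you go the static route instead, a minimal stanza would look something like the following (the addresses are placeholders; substitute whatever matches your server's subnet):

auto eth0
iface eth0 inet static
    address 192.168.1.20    # placeholder -- pick an unused address in the server's subnet
    netmask 255.255.255.0
    gateway 192.168.1.1     # optional; only needed if you route off-subnet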

NayanaAnand commented 3 years ago

Hi Thomas,

My requirement/question: if a ping request is not possible, is there any other way to send/receive data to the hardware via the Device Manager from the desktop server?

Regards, Nayana

btgoodwin commented 3 years ago

Maybe I misunderstood your question. The question seemed to be asking for a way to determine from the DeviceManager the ability to ping the host on which it is running. The DeviceManager instance itself only needs access to the OmniNames service for finding the Domain identified in the DMD XML (or the override via the command line, nodeBooter). If those two services are co-located on the same host, and the /etc/omniORB.cfg is pointed at localhost (the default, 127.0.0.1), then it wouldn't need a functional network interface to be defined, only loopback, which is usually part of the default /etc/network/interfaces definition. There's otherwise nothing inherent in DeviceManager's API to reach outside of the addresses that OmniNames knows about (at least not that I'm aware of).
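
For reference, a minimal /etc/omniORB.cfg along those lines would look roughly like this (the ports shown are the usual omniNames/omniEvents defaults; a remote desktop wanting to reach this Domain would point its own InitRef entries at the board's address instead of 127.0.0.1):

# /etc/omniORB.cfg -- minimal sketch, services co-located on the board
InitRef = NameService=corbaname::127.0.0.1:2809
InitRef = EventService=corbaloc::127.0.0.1:11169/omniEvents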

Are you saying that when you're viewing this from the AM57xx, you ping <server address> and see no console output until you stop that process (e.g., CTRL+C)? If so, I have seen that before. If I remember correctly it's because the ping utility might be the slimmed-down version from Busybox, which gives no "ping timeout" messages if the destination is unreachable. Instead we would only see the message about 100% packet loss once we stopped the process. However, the real root cause was that the network hardware on the device wasn't fully initialized (the kernel driver couldn't communicate with the hardware). In my case this was an AXI Ethernet 1G IP core on the PL (Xilinx design), and we had to write a clock value to an onboard EEPROM in order for the associated PHY chip to function (also, we had to hand-edit the device tree source at the time because the blob related to the core wasn't being constructed correctly by Xilinx's device-tree-xlnx tooling). This took a lot of troubleshooting in the device's /var/log/messages, etc.
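
A few quick checks from the AM57xx console can help narrow that down (illustrative; exact log paths and interface names depend on your BSP):

dmesg | grep -i -e eth -e phy        # did the kernel driver bind the NIC and PHY?
ip link show eth0                    # is the link up, or reporting NO-CARRIER?
grep -i -e eth -e phy /var/log/messages
ping -c 4 <server address>           # -c makes even the Busybox ping report loss without CTRL+C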

Something else this might be is a firewall configuration on your server such that it doesn't respond to ICMP (ping) requests. You might try checking the status of the firewall or any NAT forwarding your server might be trying to do. Since VLANs are involved though, I can't help but wonder if that's more the root cause. If the AM57xx NIC is set up without VLAN tagging, it's most likely coming out as untagged PVID 0, and if the server ignores that PVID, then the server might see the traffic from the AM57xx via wireshark or something, but otherwise won't respond since technically it's not on that network (even if the address space matches).
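
On the server side, a couple of illustrative checks (adjust for whatever firewall tooling your distro actually uses):

sudo iptables -L INPUT -n -v | grep -i icmp    # is ICMP being dropped/rejected?
sudo tcpdump -e -i <server NIC> vlan or icmp   # -e prints the 802.1Q tag (if any) on incoming frames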

For VLAN, you need to configure your kernel to support 8021q by enabling it in the kernel config (it probably already is enabled, but it's worth double-checking). Mine was enabled as a module, so I also had to do these:

IMAGE_INSTALL_append = " vlan kernel-modules"
KERNEL_MODULE_AUTOLOAD_append = " 8021q"

The first line ensures the CLI tools for enabling/disabling VLANs are installed as well as ensuring kernel modules are in the image, and the second line ensures that module is automatically loaded on boot.
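
Once booted, you can sanity-check that the module actually made it into the image and loaded (illustrative; /proc/config.gz is only present if IKCONFIG is enabled in your kernel):

lsmod | grep 8021q                              # module loaded?
zcat /proc/config.gz | grep CONFIG_VLAN_8021Q   # built-in (=y) or module (=m)?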

Over in your /etc/network/interfaces, I found I needed to define the base interface (eth0, for example) and then all VLANs in dot notation, like eth0.15 for VLAN 15. What you would see is something like this:

auto eth0
iface eth0 inet static
    name  Untagged NIC
    # yadda yadda yadda

auto eth0.15
iface eth0.15 inet static
    name Tagged NIC 15
    # yadda yadda...
    ip_rp_filter 0
    pre-up ifconfig eth0 up
    pre-up vconfig add eth0 15
    post-down vconfig rem eth0.15

What this does is ensure that if eth0.15 is to come up (on boot, because of auto eth0.15), it will first ensure that eth0 came up, and then run vconfig to set up VLAN tag 15. Then, eth0.15 should be able to come up. On down, it uses vconfig to remove the eth0.15 NIC which, thanks to the dot syntax, translates to removing that VLAN tag ID. If you don't need the untagged interface (eth0), then you can probably get away with eliminating it (and the pre-up ifconfig... directive).
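
After bringing the interfaces up, a couple of quick checks confirm the tagged interface was created correctly (illustrative):

ifup eth0.15                  # or reboot, since both stanzas are marked auto
cat /proc/net/vlan/config     # should list eth0.15 | 15 | eth0
ip -d link show eth0.15       # -d shows the VLAN id and protocol details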