Closed: jbellister-slac closed this issue 1 year ago
In short: I am able to replicate what you report. This is a bug which I will look into.
I would still recommend designing your PVA networks to avoid relying on automatic forwarding of unicast searches.
tldr...
PVXS handles the `224.0.0.128` local multicast "hack" differently than pvAccessCPP or pvAccessJava. I never liked the design of this feature, and have long thought that it would eventually blow up on someone.
Prior to PVXS 0.3.0, unicast searches were never rebroadcast. Since 0.3.0, unicast searches are rebroadcast (with a `CMD_ORIGIN_TAG` prefix), but not in all situations. Specifically, I know that some versions of some implementations don't prefix forwarded messages with `CMD_ORIGIN_TAG`. I also don't trust that all implementations clear the Unicast flag bit on forwarded messages. So PVXS tries to be strict about only forwarding unicast searches without `CMD_ORIGIN_TAG` which arrive from an interface other than `127.0.0.1`. There is the added wrinkle that forwarding has proven difficult to unittest, and I have made mistakes in the past. Thus far these have been cases of being too strict.
Part of the problem now has to do with the addition of IPv6 support and the Linux-specific differences between `[::]` vs. `0.0.0.0`. The result is that the server is not joining `224.0.0.128`.
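To illustrate the `[::]` vs. `0.0.0.0` wrinkle with plain sockets (a sketch, unrelated to the PVXS source): on Linux, a dual-stack UDP socket bound to the IPv6 wildcard can also receive IPv4 datagrams as mapped addresses, but it does not implicitly join any IPv4 multicast group; membership in a group like `224.0.0.128` still requires an explicit `IP_ADD_MEMBERSHIP` on the IPv4 side.

```python
import socket

def make_dual_stack_udp(port: int = 0) -> socket.socket:
    """One UDP socket that accepts both IPv6 and IPv4-mapped traffic."""
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    # Linux defaults can vary; clear IPV6_V6ONLY to get dual-stack behavior.
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    s.bind(('::', port))
    return s

def join_local_multicast(group: str = '224.0.0.128') -> socket.socket:
    """Joining the IPv4 group needs a separate, explicit membership request."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # group address + interface (INADDR_ANY lets the kernel pick one)
    mreq = socket.inet_aton(group) + socket.inet_aton('0.0.0.0')
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s
```

A server that switches its bind from `0.0.0.0` to `[::]` therefore silently loses the multicast membership unless the join is re-done explicitly.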
To compound this, so far I haven't figured out a good way to unittest any of the behavior.
I had a good guess what that link would be before clicking it :)
Got it, that does sound like a pain, but thanks for taking a look!
> I would still recommend designing your PVA networks to avoid relying on automatic forwarding of unicast searches.
While this fix is in progress, is there a better-practice way of solving this on our end? If we, say, have multiple PVA servers running on a single Linux host, and a client on the same subnet wants to be able to retrieve PVs from any of them, what is the recommended approach for setting that up with PVA?
> ... is there a better practice way of solving this on our end?
Without specific knowledge of how your network is laid out (which probably shouldn't be posted here), I can't say more than to avoid situations requiring unicast UDP search. At simplest, this means relying on broadcast search. Other situations might involve adding PVA gateways between subnets, and/or utilizing IPv4 multicast when the desired scope for PVA searching crosses subnet boundaries.
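For the simple broadcast-search case, the client side is driven by the standard `EPICS_PVA*` environment variables; a sketch (the broadcast address below is a placeholder for your subnet, not anything from this issue):

```shell
# Rely on automatic per-interface broadcast search (the default):
export EPICS_PVA_AUTO_ADDR_LIST=YES

# Or pin the search destinations explicitly; 192.168.1.255 is a
# placeholder for your subnet's broadcast address:
# export EPICS_PVA_AUTO_ADDR_LIST=NO
# export EPICS_PVA_ADDR_LIST="192.168.1.255"
```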
Describe the bug
Hi! Not sure if it's a bug, I'm not configuring something properly, or maybe it's just not implemented yet. For some context, I'm attempting to update the version of P4P used at SLAC from 3.5.5 to a 4.x version (trying to go straight to 4.1.5 at the moment). When testing the update, we've noticed that existing P4P servers which serve PVs from a machine just fine with 3.5.5 can no longer be communicated with from a different machine, or even from the same machine itself. It seems to be an issue with the underlying change to use pvxs, and I've narrowed it down to the simple reproducible case below:
To Reproduce
Steps to reproduce the behavior:
Expected behavior
pvget should return the PV. (It does return correctly with the same `EPICS_PVA*` configuration using P4P 3.5.5, or with just the example from the pvAccess module from epics-base.)
Information (please complete the following):
Output of `pvxinfo -D` (this is from RHEL7):

Additional context
Running the server with `PVXS_LOG=*=DEBUG` shows debug output on server startup, but no debug output when a `pvget` is made. I can include other information and dig further if needed, but wanted to see if this was a known thing first.

Server startup debug output: