@evalott100 this is a bug in the IOC side. All the recent changes should only have happened on the PVI structures, but it looks like some changes crept in on the Table structures. I can see how this might have happened as they probably use the same QSRV helpers.
Will look into this this morning.
Before and after the change to the table structures, `pvget -v` shows:
Before:

```
PANDA1:DATA:DATASETS epics:nt/NTTable:1.0
    structure record
        structure _options
            boolean atomic true
    string[] labels ["Name", "Type"]
    structure value
        string[] name []
        string[] type []
```
After:

```
PANDA1:DATA:DATASETS epics:nt/NTTable:1.0
    structure record
        structure _options
            boolean atomic true
    string[] labels ["Name", "Type"]
    structure pvi
        structure name
            string[] r []
        structure type
            string[] r []
    structure value
        structure name
            string[] r []
        structure type
            string[] r []
```
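To make the regression concrete, here is a rough sketch (plain Python dicts standing in for the actual PVAccess structures, not real p4p objects) of what the `value` field unpacks to in each case: with the regression, every column gains an access-level wrapper and is no longer a flat string array.

```python
# Sketch only: approximating the two pvget structures above as plain dicts.

# Before the change: each NTTable column is a flat string array.
before_value = {
    "name": [],  # string[] name
    "type": [],  # string[] type
}

# After the change: each column is wrapped in a sub-structure carrying the
# access level, so the actual data now lives under an "r" member.
after_value = {
    "name": {"r": []},
    "type": {"r": []},
}

# Anything expecting NTTable-style columns (a mapping of column name to a
# flat array) handles the first shape but trips over the second.
assert all(isinstance(col, list) for col in before_value.values())
assert not any(isinstance(col, list) for col in after_value.values())
```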
The cause was passing in access here:
@jwlodek Just released the fix on the ioc side, let me know if you run into any more problems.
The `DATASETS` PV now does not have the access level:
```
In [9]: !pvget -v XF:31ID1-ES{PANDA:1}:DATA:DATASETS
XF:31ID1-ES{PANDA:1}:DATA:DATASETS epics:nt/NTTable:1.0
    structure record
        structure _options
            boolean atomic true
    string[] labels [Name, Type]
    structure pvi
        string[] name [TrigTs]
        string[] type [float64]
    structure value
        string[] name [TrigTs]
        string[] type [float64]
```
But I am now getting a pydantic `ValidationError` on `get_value`:
```
In [10]: await panda1.data.datasets.get_value()
An exception has occurred, use '%tb verbose' to see the full traceback.

ValidationError: 1 validation error for DatasetTable
type
  Extra inputs are not permitted [type=extra_forbidden, input_value=['float64'], input_type=list]
    For further information visit https://errors.pydantic.dev/2.9/v/extra_forbidden

See /home/xf31id/.cache/bluesky/log/bluesky.log for the full traceback.
```
This is because (I think after the signal typing changes) the column names in the table must match the init signature of `DatasetTable`, i.e. the `type` column needs to be `hdf5_type` so it unpacks correctly.
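As a minimal sketch of why the unpacking fails (a stand-in pydantic model rather than the real ophyd-async `DatasetTable`, with the element type simplified to `str` and defaults added so only the relevant error shows):

```python
from collections.abc import Sequence

from pydantic import BaseModel, ConfigDict, ValidationError


class DatasetTableSketch(BaseModel):
    # The real Table base class forbids extra fields, which is what produces
    # the extra_forbidden error above; mimic that here.
    model_config = ConfigDict(extra="forbid")

    name: Sequence[str] = ()
    hdf5_type: Sequence[str] = ()


# The IOC publishes columns named "name" and "type", so unpacking the table
# into the model fails: "type" is an unexpected (extra) field.
try:
    DatasetTableSketch(**{"name": ["TrigTs"], "type": ["float64"]})
except ValidationError as err:
    print(err)  # 1 validation error ... Extra inputs are not permitted
```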
I believe I've solved this in the PR to the IOC linked above.
@jwlodek did the IOC change from `type` to `hdf5_type` recently? Or did ophyd just start being stricter?
I believe the IOC was always `type`, and looking at the git history on the ophyd side it was always `hdf5_type`. I think this is a case of things just getting stricter on the ophyd side.
The IOC initialises the table with the column name `type`, as it always did, vs the `hdf5_type` expected on the ophyd side.
The only reason this is failing now is because `DatasetTable` changed from a `TypedDict` to a `Table`, which means its elements are now being checked by pydantic:
```python
class DatasetTable(Table):
    name: Sequence[str]
    hdf5_type: Sequence[PandaHdf5DatasetType]
```
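For illustration only (these are stand-ins, not the real ophyd-async classes, and the element type is simplified to `str`): a `TypedDict` is purely a static-typing construct, so the old definition accepted the IOC's `type` column at runtime without complaint, whereas the pydantic-based `Table` rejects it as shown above.

```python
from collections.abc import Sequence
from typing import TypedDict


# Roughly how the old TypedDict-based definition behaved: at runtime a
# TypedDict is just a dict, so the mismatched "type" key slips straight through.
class OldDatasetTable(TypedDict):
    name: Sequence[str]
    hdf5_type: Sequence[str]


old_table = OldDatasetTable(**{"name": ["TrigTs"], "type": ["float64"]})
print(old_table)  # no runtime error; only a static type checker would object

# The pydantic-based Table validates the same data and raises the
# extra_forbidden ValidationError shown earlier in the thread.
```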
I would initially be against changing the name of the column on the IOC side to support this, but since the header name needs to correspond to a Python variable name, and `type` conflicts with the builtin, I'd argue this column name should be changed in PandABlocks-ioc. @coretl
Currently, doing a `writer.open()` on a PandA writer will fail with an error instantiating the `DatasetTable` object. You can actually reproduce the issue by just reading the datasets table, as in the `get_value()` call shown above. This is because the pvget of the corresponding table PV will return an NTTable like the one shown above, which does not unpack into something that matches the init signature of `DatasetTable`.
See #638 for a potential solution.
Steps To Reproduce
Steps to reproduce the behavior:
panda1.writer.open()
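A rough reproduction sketch, assuming `panda1` is an already-connected PandA device as in the IPython session above (device creation and connection are omitted here):

```python
async def reproduce(panda1):
    # Reading the datasets table directly raises the pydantic ValidationError:
    await panda1.data.datasets.get_value()
    # writer.open() fails the same way, because it reads the same DATASETS
    # table when preparing the dataset descriptions.
    await panda1.writer.open()

# In an IPython session with a running event loop:
#   await reproduce(panda1)
```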
Acceptance Criteria