sanyakamra opened 2 months ago
Hi @sanyakamra, thank you for reporting. I think you found a bug.
Here is what I did. I started the databroker:
docker run -it --rm --net=host ghcr.io/eclipse-kuksa/kuksa-databroker:main --port 55555
then ran two clients like this:
docker run -it --rm --net=host ghcr.io/eclipse-kuksa/kuksa-python-sdk/kuksa-client:latest grpc://127.0.0.1:55555
In the first client
Test Client> subscribe Vehicle.CurrentLocation
{
"subscriptionId": "6f64fb5a-8097-4bb1-afbf-f3c155f0edf1"
}
In the second client
Test Client> setValue Vehicle.CurrentLocation.Latitude 2.0
OK
Results in
[
{
"entry": {
"path": "Vehicle.CurrentLocation.Latitude",
"value": {
"value": 2.0,
"timestamp": "2024-04-26T01:37:17.497904+00:00"
},
"metadata": {
"data_type": "UNSPECIFIED",
"entry_type": "UNSPECIFIED",
"unit": "degrees"
}
},
"fields": [
"VALUE"
]
}
]
That looks good. However, in this case:
first client
Test Client> subscribe Vehicle.OBD
{
"subscriptionId": "e6cd148e-68af-40e3-bc80-5ccabad0cec5"
}
second client
Test Client> setValue Vehicle.OBD.DTCList 1,2,3
OK
results in the subscription not being triggered, even though the value is set, as can be confirmed manually:
Test Client> getValue Vehicle.OBD.DTCList
{
"path": "Vehicle.OBD.DTCList",
"value": {
"value": {
"values": [
"1",
"2",
"3"
]
},
"timestamp": "2024-04-26T01:38:20.984240+00:00"
}
}
Can you confirm this, @argerus? Maybe fix it, assign somebody who can, or point out where things are going wrong?
@sanyakamra Generally, for your use case it is not very "convenient" yet to work with such signal "groups" that belong together. Even when you subscribe to the "branch", you still need to manually "get" the related values / wait for them during the subscription, which can lead to race conditions. To reduce the chance of an inconsistent view in the databroker, it is recommended to SET those values all at once, i.e. when using our Python SDK, doing something like
from kuksa_client.grpc import Datapoint
from kuksa_client.grpc.aio import VSSClient

async with VSSClient('127.0.0.1', 55555) as client:
    await client.set_target_values({
        'Vehicle.SomePosition.lat': Datapoint(1),
        'Vehicle.SomePosition.long': Datapoint(12),
        'Vehicle.SomePosition.something': Datapoint(42),
    })
This at least reduces the chances of "unfortunate accidents". Unfortunately, this is not how the GPS provider (https://github.com/eclipse-kuksa/kuksa-gps-provider/) is doing it currently, as it still retains some compatibility with the deprecated C++ version of KUKSA (but now, with the deprecation, this might change soon).
So those combined things currently work well either for very slow updates (i.e. you update a position every few seconds, but the updates of all values arrive in short order, so it is easy to tell what belongs together), OR for very fast updates (take the position example: if lat/long etc. were updated every 10 ms or so, they would not change much between updates, so "grouping" values from a few slightly different timeslots does not hurt / evens out, just like sampling at slightly different times).
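The "group values that arrive in short order" idea can be sketched as a small timestamp-window buffer. This is a hypothetical helper, not SDK functionality, and the 50 ms window is an assumption you would tune to your update rate:

```python
from datetime import datetime, timedelta

def group_by_window(samples, window=timedelta(milliseconds=50)):
    """Group (timestamp, path, value) samples whose timestamps lie within
    `window` of the first sample of the group. Assumes samples are sorted
    by timestamp, as they would be when drained from a subscription."""
    groups, current, start = [], {}, None
    for ts, path, value in samples:
        if start is None or ts - start > window:
            if current:
                groups.append(current)
            current, start = {}, ts
        current[path] = value
    if current:
        groups.append(current)
    return groups

# Example: lat/long arriving 10 ms apart belong together; a sample 5 s
# later starts a new group.
t0 = datetime(2024, 4, 26)
samples = [
    (t0, 'Vehicle.SomePosition.lat', 1),
    (t0 + timedelta(milliseconds=10), 'Vehicle.SomePosition.long', 12),
    (t0 + timedelta(seconds=5), 'Vehicle.SomePosition.lat', 2),
]
print(group_by_window(samples))
```

With the example input this yields two groups: one with the lat/long pair that arrived 10 ms apart, and one with the lone later sample.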
@SebastianSchildt I can reproduce the issue with the steps you provided. However, I also tested the same steps with the databroker-cli, and there it works fine. So while we do have an issue here, it might be in the SDK rather than in the databroker itself.
I will create a ticket over here for the scenario you described: https://github.com/eclipse-kuksa/kuksa-python-sdk
Steps I did:
Start a Databroker
docker run -it --rm --net=host ghcr.io/eclipse-kuksa/kuksa-databroker:main --port 55556
Start a Databroker CLI and subscribe to Vehicle.OBD
docker run -it --rm ghcr.io/eclipse-kuksa/kuksa-databroker-cli:main --server=192.168.5.2:55556
Using kuksa.val.v1
⠀⠀⠀⢀⣤⣶⣾⣿⢸⣿⣿⣷⣶⣤⡀
⠀⠀⣴⣿⡿⠋⣿⣿⠀⠀⠀⠈⠙⢿⣿⣦⠀
⠀⣾⣿⠋⠀⠀⣿⣿⠀⠀⣶⣿⠀⠀⠙⣿⣷
⣸⣿⠇⠀⠀⠀⣿⣿⠠⣾⡿⠃⠀⠀⠀⠸⣿⣇⠀⠀⣶⠀⣠⡶⠂⠀⣶⠀⠀⢰⡆⠀⢰⡆⢀⣴⠖⠀⢠⡶⠶⠶⡦⠀⠀⠀⣰⣶⡀
⣿⣿⠀⠀⠀⠀⠿⢿⣷⣦⡀⠀⠀⠀⠀⠀⣿⣿⠀⠀⣿⢾⣏⠀⠀⠀⣿⠀⠀⢸⡇⠀⢸⡷⣿⡁⠀⠀⠘⠷⠶⠶⣦⠀⠀⢠⡟⠘⣷
⢹⣿⡆⠀⠀⠀⣿⣶⠈⢻⣿⡆⠀⠀⠀⢰⣿⡏⠀⠀⠿⠀⠙⠷⠄⠀⠙⠷⠶⠟⠁⠀⠸⠇⠈⠻⠦⠀⠐⠷⠶⠶⠟⠀⠠⠿⠁⠀⠹⠧
⠀⢿⣿⣄⠀⠀⣿⣿⠀⠀⠿⣿⠀⠀⣠⣿⡿
⠀⠀⠻⣿⣷⡄⣿⣿⠀⠀⠀⢀⣠⣾⣿⠟ databroker-cli
⠀⠀⠀⠈⠛⠇⢿⣿⣿⣿⣿⡿⠿⠛⠁ v0.4.4
Successfully connected to http://192.168.5.2:55556/
kuksa.val.v1 > subscribe Vehicle.OBD
[subscribe] OK
Subscription is now running in the background. Received data is identified by [1].
[1] Vehicle.OBD.DTCList: [1, 2, 3, 4, 5]
Start a second Databroker CLI and publish new values for Vehicle.OBD.DTCList
docker run -it --rm ghcr.io/eclipse-kuksa/kuksa-databroker-cli:main --server=192.168.5.2:55556
Using kuksa.val.v1
Successfully connected to http://192.168.5.2:55556/
kuksa.val.v1 > publish Vehicle.OBD.DTCList [1,2,3,4,5,6]
[publish]
Check first Databroker CLI for update
kuksa.val.v1 >
[1] Vehicle.OBD.DTCList: [1, 2, 3, 4, 5, 6]
Update: Even if you mix the Databroker CLI and the Kuksa Client (one subscribes and the other publishes the changes), it is reproducible in the same way.
I assume this is fixed in HEAD. @erikbosch, should we make a new release of the SDK (Docker and PyPI) and then close here?
@SebastianSchildt - yes, I did some tests and even created a pypi pre-release at https://pypi.org/project/kuksa-client/0.4.3a1/ that @sanyakamra can test out, but noticed a small regression if using the deprecated Kuksa Server that needs to be fixed: https://github.com/eclipse-kuksa/kuksa-python-sdk/pull/29
Hello,
For my use case, I'll take cues from the GNSS location example:
I'm exploring a workaround where a parent signal, such as a GNSS location, has child signals like latitude and longitude. Whenever either latitude or longitude is updated, subscribing to the parent signal would automatically provide updates on the child signals. This would make it easier to process the data together or to establish a dependency between child and parent signals.
However, when the signal's datatype is string[], no updates arrive when I subscribe to the parent node. Strangely, it works fine if the datatype is string, uint, or any other type, but not an array.
Do you have any other suggestions or workarounds that might be worth considering?
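Until the SDK fix is released, one workaround worth trying is to enumerate the child signals you care about and subscribe to those concrete leaf paths instead of the parent branch. This is a sketch under the assumption that leaf subscriptions trigger correctly even for array-typed signals, as the databroker-cli test above suggests; `leaf_paths` is a hypothetical local helper:

```python
import asyncio

def leaf_paths(branch, children):
    """Expand a branch plus child names into fully qualified signal paths."""
    return [f"{branch}.{child}" for child in children]

async def watch_location():
    # Imported lazily so leaf_paths stays importable without the SDK installed.
    from kuksa_client.grpc.aio import VSSClient

    paths = leaf_paths('Vehicle.CurrentLocation', ['Latitude', 'Longitude'])
    async with VSSClient('127.0.0.1', 55555) as client:
        async for updates in client.subscribe_current_values(paths):
            # Each update maps changed paths to Datapoints; since lat/long
            # arrive through one subscription, they can be processed together.
            print({p: dp.value for p, dp in updates.items() if dp is not None})

# To run against a live broker:
#   asyncio.run(watch_location())
```

This sidesteps the branch subscription entirely, at the cost of having to keep the child list in sync with your VSS model.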