FZJ-INM1-BDA / siibra-api


on stable, receptor queries can no longer be performed #41

Closed: xgui3783 closed this issue 2 years ago

xgui3783 commented 3 years ago

url: https://siibra-api-stable.apps.hbp.eu/

commit hash: 9c12770

code to reproduce:

curl 'https://siibra-api-stable.apps.hbp.eu/v1_0/atlases/juelich%2Fiav%2Fatlas%2Fv1.0.0%2F1/parcellations/minds%2Fcore%2Fparcellationatlas%2Fv1.0.0%2F94c1125b-b87e-45e4-901c-00daee7f2579/regions/Area%20hOc1%20(V1%2C%2017%2C%20CalcS)%20right%20/features/ReceptorDistribution'

expected result:

1 or more receptor profiles returned

actual result:

0 receptor profiles returned

Possibly related to https://github.com/FZJ-INM1-BDA/siibra-python/issues/85 (change of authentication). Edit: as far as I could tell, this is not related to the change of authentication.

fsdavid commented 3 years ago

Related to https://github.com/FZJ-INM1-BDA/siibra-explorer/issues/1000

It intermittently returns the receptor distributions, but mostly an empty array.

xgui3783 commented 3 years ago

Upon further investigation, it seems two of the three pods have the wrong return value cached.

To reproduce, run the curl command above more than 10 times; only roughly 30% of the calls return a result.
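A minimal sketch of that repro loop, assuming an attempt count of 20 and that the bad cached value shows up as an empty JSON array (the URL is the one from the curl command above):

```shell
#!/bin/sh
# Repro sketch: call the endpoint repeatedly and tally non-empty vs.
# empty results (the report above saw roughly 30% non-empty).
# ASSUMPTIONS: 20 attempts; an empty body or "[]" counts as the bad
# cached value.

URL='https://siibra-api-stable.apps.hbp.eu/v1_0/atlases/juelich%2Fiav%2Fatlas%2Fv1.0.0%2F1/parcellations/minds%2Fcore%2Fparcellationatlas%2Fv1.0.0%2F94c1125b-b87e-45e4-901c-00daee7f2579/regions/Area%20hOc1%20(V1%2C%2017%2C%20CalcS)%20right%20/features/ReceptorDistribution'

# classify: print "empty" for a missing or empty-array body, "ok" otherwise
classify() {
    case "$1" in
        ""|"[]") echo empty ;;
        *)       echo ok ;;
    esac
}

# Pass "run" as the first argument to actually hit the live endpoint.
if [ "${1:-}" = run ]; then
    ok=0; empty=0; i=0
    while [ "$i" -lt 20 ]; do
        body=$(curl -s -m 10 "$URL" || true)
        if [ "$(classify "$body")" = ok ]; then
            ok=$((ok + 1))
        else
            empty=$((empty + 1))
        fi
        i=$((i + 1))
    done
    echo "non-empty: $ok  empty or failed: $empty"
fi
```

If the affected pods are the problem, repeated runs should show the ratio of non-empty responses drifting with whichever pod the router picks.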

A browser will always hit the same pod (OpenShift routes browser traffic to the same pod via a session cookie), which is why the behaviour looks consistent from a browser.

This should be caught by built-in health checks on the pods.
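A sketch of such a check, e.g. as the command an OpenShift readiness probe could exec inside the pod. The localhost port (5000) and the choice of this particular query as the canary are assumptions, not the project's actual configuration:

```shell
#!/bin/sh
# Health-check sketch: exit non-zero when a known-good region query
# returns the bad cached empty array, so the router stops sending
# traffic to this pod.
# ASSUMPTIONS: the app listens on localhost:5000; the query path is
# the region/feature path from the curl command above.

URL='http://localhost:5000/v1_0/atlases/juelich%2Fiav%2Fatlas%2Fv1.0.0%2F1/parcellations/minds%2Fcore%2Fparcellationatlas%2Fv1.0.0%2F94c1125b-b87e-45e4-901c-00daee7f2579/regions/Area%20hOc1%20(V1%2C%2017%2C%20CalcS)%20right%20/features/ReceptorDistribution'

# healthy: succeed only when the response body is a non-empty JSON array
healthy() {
    case "$1" in
        ""|"[]") return 1 ;;
        *)       return 0 ;;
    esac
}

# Pass "run" as the first argument to perform the actual probe.
if [ "${1:-}" = run ]; then
    body=$(curl -s -m 5 "$URL" || true)
    healthy "$body" || exit 1
fi
```

A probe like this would have marked the two pods with the stale cache as unready instead of letting them serve empty arrays.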

marcenko commented 2 years ago

@xgui3783 I wanted to work on this bug. Do I understand correctly that the main cause of this bug was pods caching a wrong value?

xgui3783 commented 2 years ago

Wrong cache being written is definitely the issue.

To add more madness to the issue, we have since added a Redis cache in front of all JSON responses (it can be bypassed with a header, however).
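For debugging, the Redis-cached response can be compared against a cache-bypassing request. Note that `x-bypass-cache` below is a purely hypothetical placeholder, since the actual bypass header is not named in this thread:

```shell
#!/bin/sh
# Compare the (possibly stale) cached response against a fresh one.
# NOTE: "x-bypass-cache: 1" is a hypothetical placeholder header;
# substitute the real bypass header used by the deployment.

URL='https://siibra-api-stable.apps.hbp.eu/v1_0/atlases/juelich%2Fiav%2Fatlas%2Fv1.0.0%2F1/parcellations/minds%2Fcore%2Fparcellationatlas%2Fv1.0.0%2F94c1125b-b87e-45e4-901c-00daee7f2579/regions/Area%20hOc1%20(V1%2C%2017%2C%20CalcS)%20right%20/features/ReceptorDistribution'

# same_body: succeed when two response bodies are identical
same_body() {
    [ "$1" = "$2" ]
}

# Pass "run" as the first argument to query the live endpoint twice.
if [ "${1:-}" = run ]; then
    cached=$(curl -s -m 10 "$URL" || true)
    fresh=$(curl -s -m 10 -H 'x-bypass-cache: 1' "$URL" || true)
    if same_body "$cached" "$fresh"; then
        echo "cache and origin agree"
    else
        echo "cache and origin differ: stale value cached?"
    fi
fi
```

A mismatch here would point at the Redis layer; identical-but-empty responses would point at siibra-api or siibra-python itself.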

I do not yet know if the instability is happening on siibra-api or siibra-python side.

marcenko commented 2 years ago

@xgui3783 @fsdavid I looked into this issue today and could not reproduce it after all the changes we have made. On the latest version, all three pods were used during routing and I always got a result.

Have you come across this issue lately?

xgui3783 commented 2 years ago

I believe this issue is fixed.

marcenko commented 2 years ago

I believe so as well. I will close it, and we can open a new one if we run into it again.