@NexZhu I added an error check to prevent the panic in https://github.com/cs3org/reva/pull/3430. It seems we get an error from the OS when trying to read the node. It might be a different errno from the internal stat or getxattr call. Maybe the filesystem does not support xattrs? We will know more when trying with that PR.
Yeah, I think you guessed right. I'm using Alibaba Cloud NAS and I think it doesn't support xattrs.
Is there a way to work around this limitation in the software layer?
Unfortunately with the current ocis storage interface we need xattrs to store file metadata and access permissions.
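If you want to double-check whether the NAS mount supports xattrs at all, a quick test from a shell on the volume should tell you. A minimal sketch, assuming the `attr` tools (`setfattr`/`getfattr`) are installed and that `/mnt/nas` is where the volume is mounted (both are assumptions, adjust to your setup):

```sh
# Create a scratch file on the volume under test (path is an assumption).
touch /mnt/nas/xattr-test

# Try to set a user-namespace extended attribute on it; filesystems
# without xattr support fail here with "Operation not supported" (ENOTSUP).
setfattr -n user.test -v works /mnt/nas/xattr-test

# Read the attribute back; prints user.test="works" when xattrs work.
getfattr -n user.test /mnt/nas/xattr-test

# Clean up the scratch file.
rm /mnt/nas/xattr-test
```

If `setfattr` fails there, the volume cannot store the metadata and access permissions the ocis storage interface needs.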
I tested with volumes which support xattrs; now the `storage-system` and `store` pods are pending forever and I'm getting these errors, could you please take a look?

The `storage-system` pod failed to start:

PodScheduled False 2022-11-03 23:51:27 SchedulerError running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition

The only log is:

{"code":"SERVER_ERROR_CODE","message":"Cannot invoke method getContent() on null object","requestId":"b6f922f7-5bb0-46ae-b771-ad0ed239b49a","successResponse":false}

For the `store` pod, the same:

{"code":"SERVER_ERROR_CODE","message":"Cannot invoke method getContent() on null object","requestId":"f50b4b50-def7-4587-bee3-2c08193b3468","successResponse":false}
To me this looks like an error in your underlying infrastructure: Kubernetes fails to schedule the pod because of some issue with the volumes, and I don't think there is much we can do about that. You might be able to find more detail about what failed by taking a look at the volumes and PVCs (using kubectl), or by looking at the lower-level Kubernetes logs (if you have access to them).
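In case it helps, these are the usual places to look. A rough sketch, assuming the release runs in a namespace called `ocis` (the namespace and resource names are assumptions, substitute your own):

```sh
# List the PVCs and their status; a claim stuck in "Pending" points
# at a provisioning problem rather than at ocis itself.
kubectl get pvc -n ocis

# The Events section at the bottom usually names the storage class or
# provisioner error behind the "binding volumes: timed out" message.
kubectl describe pvc <claim-name> -n ocis

# Check whether matching PersistentVolumes were created and bound.
kubectl get pv

# The pod's events repeat the scheduler's VolumeBinding failure with context.
kubectl describe pod <pod-name> -n ocis
```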
Describe the bug
I deployed the latest version `owncloud/ocis:2.0.0-beta.8` with the official Helm chart and configured it to authenticate users with an OIDC provider (an Authelia instance). Here's what happens:

1. I click `Accept` in the OAuth2 consent page.
2. The browser is redirected to `https://owncloud.example.com/oidc-callback.html?code=a0j4L6_ZuH2QbH5mPFJJbCVLH-OO9jIJg5mscuxPSiw.oqHE9PYJP1SnVboJth46PyRES17YKo9FT2WyONQhSTg&scope=openid+profile+email&state=9d6a21e638474d40bdaab11ebfb27088`.
3. That page requests `https://owncloud.example.com/ocs/v1.php/cloud/user` with the header `authorization: Bearer p9UAAo2wIMfb8RLlN_f7dW9u5mpxUl76IfUH3jAMEos.4b8WcEj1mLZOen34w719wsjoJa5tpq_ArXvub4M6NrY`, which results in a `500` status code.
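For debugging, the failing request can be replayed outside the browser. A sketch using the values from step 3 above (the token shown will have expired by now, so a fresh one from the same flow would be needed):

```sh
# Replay the request the callback page makes; -i prints the response
# status line so the 500 and any error body are visible.
curl -i \
  -H "authorization: Bearer p9UAAo2wIMfb8RLlN_f7dW9u5mpxUl76IfUH3jAMEos.4b8WcEj1mLZOen34w719wsjoJa5tpq_ArXvub4M6NrY" \
  "https://owncloud.example.com/ocs/v1.php/cloud/user"
```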
Error logs

`settings` pod:

`storage-system` pod: