-
## Problem description
During a high deployment load, it is possible that the healthcheck gives false negatives in the `checking for halted volumes` step.
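One way to reduce such false negatives is to only report a volume as halted if it stays halted across several polls. A minimal sketch, assuming a hypothetical `list_halted` callable that returns the set of volume names currently reported as halted (not the actual healthcheck API):

```python
import time

def check_halted_volumes(list_halted, retries=3, delay=2.0):
    """Report volumes as halted only if they remain halted across polls.

    `list_halted` is a hypothetical callable returning the set of volume
    names currently reported as halted; transiently-halted volumes (e.g.
    during a deployment) drop out of the intersection on a later poll.
    """
    halted = set(list_halted())
    for _ in range(retries):
        if not halted:
            return set()
        time.sleep(delay)
        # Keep only volumes that are still halted on the next poll.
        halted &= set(list_halted())
    return halted
```

Whether a few seconds of re-polling is acceptable depends on how strict the check's latency budget is.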
## Possible root of the problem
Rac…
-
## Problem description
Creating a back-end during the removal of a node freezes the back-end in the installing status.
I suspect it will generally break whenever the setup changes.
Also I …
-
On OVH we received this error for the last 3 vpool installations.
Packages (some scripts are manually patched):
```
^Croot@ovs05:~# dpkg -l | grep openvstorage
ii openvstorage …
-
## Problem description
Disk safety reports 1 disk as lost, but it isn't.
### Logs
```
[FAILED] Backend mybackend-global has lost 1 disk(s). Losing more disks will cause data loss!
[WARNING] Backend …
-
## Problem description
In an accelerated ALBA backend, objects can have a lower safety, but we are not interested in those safeties.
Namespaces named `{albaid}_{namespaceid}` can be ignored (cached object…
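A minimal sketch of filtering out such cache namespaces by name; the helper name and the id pattern (hex/UUID-style ALBA ids) are assumptions, not the actual healthcheck code:

```python
import re

def is_cache_namespace(name, alba_ids):
    """True for namespaces named `{albaid}_{namespaceid}`, which hold
    cached objects and whose safety can be ignored.

    `alba_ids` is the set of known accelerated-backend ids; the
    hex/UUID-style id pattern is an assumption for illustration.
    """
    match = re.match(r'^([0-9a-f-]+)_(.+)$', name)
    return bool(match) and match.group(1) in alba_ids
```

Matching against the known backend ids (rather than only the name pattern) avoids accidentally skipping a regular namespace that happens to contain an underscore.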
-
## Problem description
This happens when running the healthcheck in parallel on many nodes.
Occurs on Fargo.
## Possible root of the problem
Related to : https://github.com/openvstorage/openvstor…
-
## Problem description
Running multiple `alba proxy-test` invocations on one server leads to errors in the healthcheck.
Scenario:
The healthcheck runs via check_mk every 3 minutes.
If someone runs the healthc…
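Overlapping runs like this are commonly avoided with a non-blocking exclusive file lock, so a manual run and the check_mk run cannot execute concurrently. A sketch under that assumption (the lock path is hypothetical, and this is not the healthcheck's actual mechanism):

```python
import fcntl

def acquire_healthcheck_lock(path='/tmp/healthcheck.lock'):
    """Try to take an exclusive, non-blocking lock for this run.

    Returns the open file handle on success (keep it open for the
    duration of the run), or None if another healthcheck run already
    holds the lock, in which case the caller should skip or wait.
    """
    fh = open(path, 'w')
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fh
    except OSError:
        fh.close()
        return None
```

The lock is released automatically when the handle is closed or the process exits, so a crashed run cannot leave the lock stuck.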
-
After extending the vpool to the 4th node, suddenly the vPool configuration changed (Dedupe, Cache on Read, write buffer 960, ...). When removing the node, it changed back to normal settings.
Version…
-
## Problem description
The RabbitMQ/Celery port check is stuck for a long time.
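A port check like this can be given a hard timeout so it fails fast instead of blocking indefinitely. A minimal sketch (host and port are placeholders; this is not the healthcheck's actual implementation):

```python
import socket

def port_is_open(host, port, timeout=5.0):
    """Probe a TCP port with an explicit timeout.

    Returns True if a connection succeeds within `timeout` seconds,
    False on refusal or timeout, so the caller can never hang.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```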
### Logs
```
[INFO] Checking RabbitMQ/Celery ...
^C[EXCEPTION] Error during execution of the healthcheck
…
-
## Problem description
It takes a while to (re)load the back-end information in the vpool wizard. This is mainly due to the local_stack query that the API runs to gather the necessary information.…
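One candidate mitigation is to cache the expensive query result for a short time so repeated wizard reloads do not re-run it. A sketch with a hypothetical TTL decorator; the decorator, TTL value, and return shape are assumptions, not the actual OpenvStorage API:

```python
import time

def ttl_cached(ttl_seconds):
    """Cache a zero-argument function's result for `ttl_seconds`.

    A sketch for memoising an expensive query such as local_stack;
    the first call computes the value, later calls within the TTL
    return it without re-running the query.
    """
    def wrap(func):
        state = {'value': None, 'expires': 0.0}
        def inner():
            now = time.time()
            if now >= state['expires']:
                state['value'] = func()
                state['expires'] = now + ttl_seconds
            return state['value']
        return inner
    return wrap
```

The trade-off is staleness: a freshly added back-end would only show up once the TTL expires, so the wizard would still need an explicit refresh path.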