-
I unexpectedly lost all of a project's resources while building an ingestion pipeline to update the dataset metadata: the update action replaced the whole record. It turns out there is a `patch` variant of package and resource update:…
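A minimal sketch of the patch-style call, assuming a CKAN-style action API (where `package_update` replaces the entire dataset record and `package_patch` changes only the fields you send). The instance URL, API key, and dataset id below are placeholders:

```python
# Hedged sketch: metadata updates via a CKAN-style action API.
# `package_update` replaces the whole dataset record (unspecified fields,
# including resources, are dropped), while `package_patch` changes only
# the fields you send. URL, key, and dataset id are placeholders.
import json
import urllib.request

CKAN_URL = "https://demo.ckan.org"   # placeholder instance
API_KEY = "your-api-key"             # placeholder credential

def build_patch_request(dataset_id, **fields):
    """Build a POST to package_patch: only `fields` are changed."""
    payload = {"id": dataset_id, **fields}
    return urllib.request.Request(
        f"{CKAN_URL}/api/3/action/package_patch",
        data=json.dumps(payload).encode(),
        headers={"Authorization": API_KEY,
                 "Content-Type": "application/json"},
        method="POST",
    )

# Patch just the notes field; existing resources stay untouched.
req = build_patch_request("my-dataset", notes="refreshed by pipeline")
# urllib.request.urlopen(req) would send it against a live instance.
```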
-
When creating an inverted index on a large MV column (3,000 integer values on average) from a Parquet file with many rows (2 million), I get a `SIGSEGV` error:
```
#
# A fatal error has been d…
```
-
**Describe the problem**
When init'ing the tpce/customers=2m workload on 22.2, two large index backfills took a total of 40 hrs to add 10TB of replicated physical data to the cluster, which is doub…
-
### Description
The failure store is a new experimental data stream feature that captures documents that couldn't be indexed and stores them in a special index with a fixed mapping in the fail…
-
We are testing the Kusto connector to connect Confluent with Kusto.
We downloaded the module and uploaded the plugin into Confluent.
After filling in the necessary de…
-
_Describe the issue here._
This was raised by Manny/the SE team to provide a step-by-step process for users to follow who try to ingest their data but do not see the data on Cloud 2 when they browse thr…
-
### Which OpenObserve functionalities are the source of the bug?
alerts
### Is this a regression?
Yes
### Description
We configured rules as in the screenshot below, with log_processed_msg **Contains**…
-
## Tell us about the new destination you’d like to have
* I would like to have Azure Data Lake Storage Gen2 as a destination
## Describe the context around this new destination
* This destination wi…
-
When creating an ingestion job, the user should be able to specify a backup file to restore from and then start consuming a stream from the timestamp at which it restored. This enables users to have a…
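A hypothetical sketch of the requested behavior: the job spec restores from the given backup and then starts the stream at the backup's snapshot timestamp rather than at "earliest". Every name and field here is illustrative, not an existing API:

```python
# Hypothetical sketch of the requested feature: restore a backup, then
# resume stream consumption from the backup's snapshot timestamp so the
# consumer neither replays the full history nor skips events.
# All field names and the manifest shape are illustrative assumptions.

def plan_ingestion_job(backup_manifest: dict) -> dict:
    """Build an ingestion-job spec that restores the backup and starts
    the stream at the moment the backup was taken."""
    return {
        "restore_from": backup_manifest["path"],
        "stream_start": {
            # Resume at the snapshot time, not "earliest" or "latest".
            "mode": "timestamp",
            "timestamp_ms": backup_manifest["snapshot_ts_ms"],
        },
    }

manifest = {
    "path": "s3://backups/table/2024-01-01",
    "snapshot_ts_ms": 1704067200000,
}
job = plan_ingestion_job(manifest)
```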
-
This is for capturing needs not currently supported by the CEDS model. Please do not send or share actual data as examples in this issue or in attachments.
**Author(s)**
Deborah Donovan
**Autho…