-
Hi, I have been interested in this project for many years, and I have used it on several occasions.
Currently, I am using this project for a class I teach, to compare different RDBMS and test performa…
-
Hi Community,
First time engaging here on this project, looking forward to it and thanks in advance.
The spec currently defines 4 options for the `type` field: `sensor`, `actuator`, `attribute`, `b…
-
## SQL
You have done a very, very good job overall, with a lot of things that work well (and a few small things to improve).
The schema is very well done, simple to com…
-
Hi all,
I'm currently facing some trouble trying to write to a partitioned table due to differences between schema evolutions. For example, I have a DataFrame partitioned by the column `source` with thi…
-
This repository provides a good example of serializing events from PubSub to Avro in a GCS bucket with the `PubsubToGenericAvro` pipeline, but based on the concrete schema defined by `AvroPubsubMessa…
-
### Apache Iceberg version
1.3.1 (latest release)
### Query engine
Spark
### Please describe the bug 🐞
If you have ever used a column in write sort order (as it's called in the Spark Ic…
-
Hello,
Since I can never remember whether this has already been discussed / whether it's already planned, I'm opening a discussion:
is the possibility of using the import module with the monitoring module env…
-
Hi,
Say at time T0, I have the following struct:
```
#[derive(Archive, Serialize, Deserialize)]
struct A {
a: u32,
b: String,
}
```
Later, and at T1, I decide to add a field:
…
-
# Description
We might need to allow protocol changes during the write operation when people add timestampNtz during schema evolution.
Once the RFC for time/interval dtypes becomes part of the de…
-
This is useful when handling untrusted inputs. E.g. a client could pack extra fields into messages in order to increase server load. Enforcing a max size limit can help, but it wouldn't address the mo…
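The combination described here (a size limit plus rejecting fields the server does not expect) can be sketched with a simple pre-parse check. This is a minimal illustration in Python, not the project's actual API: the `MAX_MESSAGE_BYTES` limit, the `allowed` field set, and the use of JSON as the wire format are all hypothetical stand-ins.

```python
import json

# Hypothetical limit; a real service would tune this to its message sizes.
MAX_MESSAGE_BYTES = 1024


def parse_message(raw: bytes) -> dict:
    # Reject oversized payloads before doing any parsing work,
    # so a client cannot force the server to decode huge inputs.
    if len(raw) > MAX_MESSAGE_BYTES:
        raise ValueError("message exceeds size limit")

    msg = json.loads(raw)

    # Reject unknown fields, so a client cannot pack extra fields
    # into an otherwise size-compliant message.
    allowed = {"id", "payload"}  # hypothetical expected fields
    unknown = set(msg) - allowed
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return msg
```

The size check alone only bounds total work; the unknown-field check is what stops a client from smuggling extra data inside a message that is still under the limit.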