-
Now, FlinkSink requires developers to pass the schema parameter to build the DataStream, which means that once the schema is given, the TableSchema is determined and can never be changed, but in pr…
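
The fixed-schema constraint described above can be modeled with a minimal sketch. This is not the actual FlinkSink API; `SinkBuilder` and `FixedSchemaSink` are illustrative names, showing only the pattern of a builder that freezes its schema at build time so later records must conform to it:

```python
# Minimal model of a sink builder whose schema is fixed at build time.
# Names (SinkBuilder, FixedSchemaSink) are illustrative, not Flink's API.

class FixedSchemaSink:
    def __init__(self, schema):
        # The schema is captured once here; no setter exists,
        # so it cannot be changed after the sink is built.
        self._schema = tuple(schema)

    @property
    def schema(self):
        return self._schema

    def write(self, record):
        # Every record must match the schema decided at build time.
        if set(record) != set(self._schema):
            raise ValueError(
                f"record fields {sorted(record)} do not match "
                f"fixed schema {list(self._schema)}"
            )
        return record


class SinkBuilder:
    def __init__(self):
        self._schema = None

    def with_schema(self, schema):
        self._schema = schema
        return self

    def build(self):
        if self._schema is None:
            raise ValueError("schema is required to build the sink")
        return FixedSchemaSink(self._schema)


sink = SinkBuilder().with_schema(["id", "name"]).build()
print(sink.write({"id": 1, "name": "a"}))  # matches the fixed schema
try:
    sink.write({"id": 1, "name": "a", "age": 3})  # extra column: rejected
except ValueError as e:
    print("rejected:", e)
```

Under this model, supporting schema evolution would require rebuilding the sink with a new schema, which is exactly the limitation the issue raises.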
-
### Search before asking
- [X] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.
### Paimon version
Paimon 0.7
Hive 3.1.3 (On MR)
### Compute Engine
…
-
**Describe the bug**
Binding a readable stream on adddatatomap failed
**To Reproduce**
Steps to reproduce the behavior:
1) Getting the readable stream response as byte[] from the backend Python serve…
-
Hello,
I'm reaching out to discuss an issue we're facing while setting up the **Debezium-Iceberg** integration in our architecture. We followed the steps outlined in **Chris Cranford (Naros)'s** bl…
-
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/doris/issues?q=is%3Aissue) and found no similar issues.
### Version
Doris version 1.1.3-rc02-abceed6
…
-
The current vision is to document a single "data processor" - something that takes in 1..n data sources and emits 1..n data sinks. Extend this vision to include multiple "nodes", documenting the flows…
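
The extended vision above can be sketched as a small data model (all names here are illustrative, not an existing API): each node is a processor with 1..n sources and 1..n sinks, and a flow is an edge from one node's sink to another node's source.

```python
# Sketch of the documentation model described above: nodes with
# 1..n sources and 1..n sinks, plus flows connecting them.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    sources: list  # names of inputs this node consumes
    sinks: list    # names of outputs this node emits


@dataclass
class Pipeline:
    nodes: dict = field(default_factory=dict)
    # Each flow is (from_node, sink, to_node, source).
    flows: list = field(default_factory=list)

    def add_node(self, node):
        self.nodes[node.name] = node

    def connect(self, from_node, sink, to_node, source):
        # A flow is only valid between a declared sink and a declared source.
        assert sink in self.nodes[from_node].sinks
        assert source in self.nodes[to_node].sources
        self.flows.append((from_node, sink, to_node, source))


ingest = Node("ingest", sources=["kafka"], sinks=["raw"])
transform = Node("transform", sources=["raw"], sinks=["clean", "errors"])
p = Pipeline()
p.add_node(ingest)
p.add_node(transform)
p.connect("ingest", "raw", "transform", "raw")
print(p.flows)  # one documented flow between the two nodes
```

Documenting a deployment then amounts to listing its nodes and walking the `flows` edges between them.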
-
### Describe the bug
Build fails with no default features
### To Reproduce
```shell
cargo build --no-default-features
```
Produces
```
error[E0412]: cannot find type `ParquetSink` in thi…
-
**Is your feature request related to a problem? Please describe.**
With the default log configuration, CockroachDB crashes when it encounters an error writing to a log file sink. This is to prevent…
-
I discovered that sink() doesn't work on RCloud
-
in the source code: src/essentia/streaming/algorithms/vectoroutput.h

```cpp
class VectorOutput : public Algorithm {
 protected:
  Sink<TokenType> _data;
  std::vector<TokenType>* _v;
 public:
  VectorOutput(std::vector<TokenType>*…
```