-
Hi,
I have a Spark Structured Streaming application where I'd like to write streaming data to HBase using SHC. It reads data from a location where new CSV files are continuously being created. The …
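A minimal sketch of that kind of pipeline, assuming SHC's batch writer is driven from `foreachBatch`; the catalog JSON, schema, input path, and checkpoint location below are hypothetical placeholders:
```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog
import org.apache.spark.sql.types._

val spark = SparkSession.builder.appName("csv-to-hbase").getOrCreate()

// Hypothetical SHC catalog mapping DataFrame columns to an HBase table.
val catalog =
  """{
    |  "table":  {"namespace": "default", "name": "events"},
    |  "rowkey": "id",
    |  "columns": {
    |    "id":    {"cf": "rowkey", "col": "id",    "type": "string"},
    |    "value": {"cf": "d",      "col": "value", "type": "string"}
    |  }
    |}""".stripMargin

// Schema of the incoming CSV files (placeholder columns).
val schema = new StructType()
  .add("id", StringType)
  .add("value", StringType)

// Stream newly arriving CSV files from a directory.
val csvStream = spark.readStream
  .schema(schema)
  .csv("/data/incoming")

// SHC exposes a batch writer, so each micro-batch is written with foreachBatch.
def writeBatch(batch: DataFrame, batchId: Long): Unit =
  batch.write
    .options(Map(HBaseTableCatalog.tableCatalog -> catalog))
    .format("org.apache.spark.sql.execution.datasources.hbase")
    .save()

val query = csvStream.writeStream
  .foreachBatch(writeBatch _)
  .option("checkpointLocation", "/tmp/checkpoints/csv-to-hbase")
  .start()

query.awaitTermination()
```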
-
Spark Streaming provides a slightly different API for this kind of control.
Is there any plan to support starting and stopping stream processing?
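For reference, a minimal sketch of the start/stop control Structured Streaming itself exposes today, with a placeholder source and sink:
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("start-stop-demo").getOrCreate()

// Placeholder source; any streaming source works the same way.
val stream = spark.readStream.format("rate").load()

// start() returns a StreamingQuery handle for this particular sink.
val query = stream.writeStream
  .format("console")
  .queryName("demo")
  .start()

// Stop only this query; the SparkSession stays up, so the query can be started again later.
query.stop()

// Running queries can also be enumerated and stopped via the StreamingQueryManager.
spark.streams.active.foreach(_.stop())
```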
-
### Steps to reproduce the behavior (Required)
1. Create a Spark load:
```
LOAD LABEL pre_stream.test_load_ly_2 (
DATA FROM TABLE test_list_dup_sr_external_h2s_foit_820240510
INTO TABLE test_l…
```
-
Details to be added
-
I couldn't find a streaming fixture; is there anything coming up?
-
Hi @chrisbetz
Thanks for Sparkling, it's delicious. I just started working with it, and my employer Iris.tv is prepared to use it in production.
I am going to put some effort into Spark Streaming supp…
-
Hello Community,
We are facing an issue with the streaming pipeline with Hudi 0.15 (hoodie.metadata.enable=True) while migrating from 0.12 to 0.15.
The pipeline is running successfully with hoodie.metadat…
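A minimal sketch of the kind of streaming Hudi write under discussion, with the metadata table switched on; the table name, key/precombine fields, paths, and trigger interval are hypothetical placeholders:
```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder.appName("hudi-streaming").getOrCreate()

// Placeholder source: the rate source provides `timestamp` and `value` columns.
val source = spark.readStream
  .format("rate")
  .load()
  .withColumnRenamed("value", "id")

val query = source.writeStream
  .format("hudi")
  .option("hoodie.table.name", "events")
  .option("hoodie.datasource.write.recordkey.field", "id")
  .option("hoodie.datasource.write.precombine.field", "timestamp")
  .option("hoodie.metadata.enable", "true") // the setting being toggled between 0.12 and 0.15
  .option("checkpointLocation", "/tmp/checkpoints/events")
  .outputMode("append")
  .trigger(Trigger.ProcessingTime("1 minute"))
  .start("/tmp/hudi/events")

query.awaitTermination()
```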
-
We are currently missing these two Dataset methods:
- DataStreamWriter writeStream()
- Dataset withWatermark(String eventTime, String delayThreshold)
These require some understanding of Spark st…
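For reference, a minimal sketch of how those two methods are used in Spark's own Scala API; the rate source, window size, and console sink are just placeholders:
```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, window}

val spark = SparkSession.builder.appName("watermark-demo").getOrCreate()

// Placeholder event stream; the rate source emits an event-time column named `timestamp`.
val events = spark.readStream.format("rate").load()

// withWatermark(eventTime, delayThreshold) bounds how late data may arrive,
// letting Spark discard old state for windowed aggregations.
val counts = events
  .withWatermark("timestamp", "10 minutes")
  .groupBy(window(col("timestamp"), "5 minutes"))
  .count()

// writeStream returns the DataStreamWriter used to configure and start the sink.
val query = counts.writeStream
  .outputMode("append")
  .format("console")
  .option("checkpointLocation", "/tmp/checkpoints/watermark-demo")
  .start()

query.awaitTermination()
```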
-
### What happened
I am running table compaction using Spark Actions. My Spark action code is:
```scala
sparkActions
  .rewriteDataFiles(table)
  .option(RewriteDataFiles.PARTI…
```
-
**Is your feature request related to a problem?**
When working with Iceberg tables in Spark streaming jobs, the stream will terminate with an error if there are updated or deleted rows in the Icebe…
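For context, Iceberg's Spark streaming source has read options that skip delete/overwrite snapshots rather than failing the query; note that this drops the changed rows instead of emitting them, so it is a workaround rather than the behaviour requested here. A minimal sketch with placeholder catalog and table names:
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("iceberg-stream").getOrCreate()

// Stream appended rows from an Iceberg table.
// The two skip options ask the source to step over snapshots produced by
// deletes/overwrites instead of terminating the query with an error.
val stream = spark.readStream
  .format("iceberg")
  .option("streaming-skip-delete-snapshots", "true")
  .option("streaming-skip-overwrite-snapshots", "true")
  .load("my_catalog.db.events")

val query = stream.writeStream
  .format("console")
  .option("checkpointLocation", "/tmp/checkpoints/iceberg-stream")
  .start()

query.awaitTermination()
```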