-
I get the following error when running this code from Chapter 3 (Structured Streaming):
```
# in Python
streamingDataFrame = spark.readStream\
.schema(staticSchema)\
.option("maxFilesPe…
```
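For context, a minimal sketch of the shape this kind of streaming read usually takes (the path, schema variable, and option values here are placeholders, not the book's exact snippet):

```python
# Hedged sketch of a file-based streaming read in PySpark.
# "maxFilesPerTrigger" caps how many new files each micro-batch consumes.
stream_options = {
    "maxFilesPerTrigger": 1,   # one file per micro-batch
    "header": "true",          # CSV files have a header row
}

# With a live SparkSession and a staticSchema defined earlier:
# streamingDataFrame = (spark.readStream
#     .schema(staticSchema)
#     .options(**stream_options)
#     .format("csv")
#     .load("/path/to/retail-data/by-day/*.csv"))  # placeholder path
```

Passing an explicit schema is required for file streams by default, which is why the book defines `staticSchema` from a batch read first.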
-
## Feature Request
**Is your feature request related to a problem? Please describe:**
NA
**Describe the feature you'd like:**
Pass extra structured metadata (Ts, Schema, Table) per event in Con…
-
**Describe the issue**
The integration testing deployment of the Rubin alert stream is now available. Information for the connection can be found at
https://github.com/lsst-dm/sample_alert_info/bl…
-
## Question
#### Which Delta project/connector is this regarding?
- [X] Spark
- [ ] Standalone
- [ ] Flink
- [ ] Kernel
- [ ] Other (fill in here)
### Overview
We have some logic that lo…
-
**Describe the issue:**
If you ask a question in the frontend that triggers the content filter (e.g. "how do I make a bomb?"), you get a "type error" in the frontend.
![image](https://github.com/Azur…
-
Hi,
Is there a way to specify which partition(s) my Spark job's instance should read from, and only those?
By doing so, I would like to control/limit the partition(s) a Spark job will read…
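Assuming the source is Kafka, a minimal sketch of pinning a read to specific partitions via the `assign` option of Spark's Kafka integration (topic name and broker address here are hypothetical):

```python
import json

def kafka_partition_options(bootstrap, topic, partitions):
    """Build Kafka source options that restrict the read to given partitions.

    Spark's Kafka source accepts an "assign" option: a JSON string mapping
    topic name -> list of partition ids; only those partitions are consumed.
    """
    return {
        "kafka.bootstrap.servers": bootstrap,
        "assign": json.dumps({topic: sorted(partitions)}),
    }

# Usage (requires a SparkSession; shown for illustration only):
# df = (spark.readStream.format("kafka")
#         .options(**kafka_partition_options("host:9092", "events", [0, 2]))
#         .load())
```

Note that `assign` is mutually exclusive with `subscribe` and `subscribePattern`, so the job must manage which partitions it owns.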
-
## Feature request
### Overview
Today, when you pass "endingVersion" to DataStreamReader on a Delta table with Change Data Feed enabled, it is silently ignored.
The current supported options for …
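For context, a minimal sketch of how the analogous *batch* read accepts version bounds (option names follow Delta Lake's Change Data Feed documentation; the table name is hypothetical), which is the behavior this request asks DataStreamReader to match:

```python
def cdf_batch_options(starting_version, ending_version):
    """Options for a Delta Change Data Feed batch read.

    In batch mode, "endingVersion" bounds the feed; this request is for
    the streaming reader to honor it too instead of ignoring it.
    """
    return {
        "readChangeFeed": "true",
        "startingVersion": str(starting_version),
        "endingVersion": str(ending_version),
    }

# Usage (requires a SparkSession with Delta Lake; illustration only):
# df = (spark.read.format("delta")
#         .options(**cdf_batch_options(5, 10))
#         .table("my_table"))
```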
-
```
%dpl
index=1k_multi_uniq earliest="2023-06-06T00:00:00"
| teragrep exec kafka save "1k-multi-uniq-test"
```
results in
```
org.apache.spark.sql.AnalysisException: Failed to find data so…
eemhu updated 9 months ago
-
I'm testing Mage to see whether it could be a good fit for a project I'm working on.
I already have a pipeline in notebooks, using Spark Structured Streaming.
I tried to just copy the notebooks to Mage a…
-
Many of these exports are large. Instead of building the entire file in memory and then saving it, it should be possible to stream the Excel blob to a file.
This library is similarly structured to …