Generate relevant synthetic data quickly for your projects. The Databricks Labs synthetic data generator (aka `dbldatagen`) may be used to generate large simulated / synthetic data sets for testing, POCs, and other uses in Databricks environments, including in Delta Live Tables pipelines.
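For orientation, a minimal sketch of how `dbldatagen` is typically used; the column names, types, and row count below are illustrative assumptions, not taken from this repository's tests:

```python
from pyspark.sql import SparkSession

import dbldatagen as dg

# In a Databricks notebook `spark` is already defined; elsewhere, create one.
spark = SparkSession.builder.getOrCreate()

# Illustrative specification: columns, values, and sizes are made up for the example.
spec = (
    dg.DataGenerator(spark, name="example_data", rows=100_000, partitions=4)
    .withColumn("customer_id", "long", minValue=1, maxValue=1_000_000)
    .withColumn("plan", "string", values=["basic", "standard", "premium"], random=True)
    .withColumn("amount", "float", minValue=0.0, maxValue=500.0, random=True)
)

# build() materializes the specification as a regular Spark DataFrame, which
# can then be written to Delta tables or used in a Delta Live Tables pipeline.
df = spec.build()
df.show(5)
```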
Most Recent Ignore Conditions Applied to This Pull Request
| Dependency Name | Ignore Conditions |
| --- | --- |
| pyspark | [>= 3.2.a, < 3.3] |
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/databrickslabs/dbldatagen/network/alerts).
Bumps pyspark from 3.1.3 to 3.3.1.
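As an illustrative sanity check (not part of this PR), the resolved version can be verified at runtime once the bump lands:

```python
import pyspark

# Confirm the environment picked up the 3.3.x release line.
assert pyspark.__version__.startswith("3.3"), pyspark.__version__
print(pyspark.__version__)
```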
Commits
- `fbbcf94` Preparing Spark release v3.3.1-rc4
- `ca60665` [SPARK-40703][SQL] Introduce shuffle on SinglePartition to improve parallelism
- `27ca30a` [SPARK-40782][BUILD] Upgrade `jackson-databind` to 2.13.4.1
- `442ae56` [SPARK-8731] Beeline doesn't work with -e option when started in background
- `fdc51c7` [SPARK-40705][SQL] Handle case of using mutable array when converting Row to ...
- `9f8eef8` [SPARK-40682][SQL][TESTS] Set `spark.driver.maxResultSize` to 3g in `SqlBased...`
- `5a23f62` Preparing development version 3.3.2-SNAPSHOT
- `7c465bc` Preparing Spark release v3.3.1-rc3
- `5fe895a` [SPARK-40660][SQL][3.3] Switch to XORShiftRandom to distribute elements
- `5dc9ba0` [SPARK-40669][SQL][TESTS] Parameterize `rowsNum` in `InMemoryColumnarBenchmark`