gatear closed this 3 weeks ago
Thanks @snuyanzin for the review 🙏
I also have some questions about 1.x.
:warning: Attention: Patch coverage is 95.23810% with 2 lines in your changes missing coverage. Please review.

Project coverage is 91.91%. Comparing base (b37c566) to head (57e83c6). Report is 166 commits behind head on main.
| Files | Patch % | Lines |
|---|---|---|
| .../net/datafaker/transformations/sql/SqlDialect.java | 84.61% | 2 Missing :warning: |
Thanks for addressing feedback and for the valuable contribution!
In order to port this change to faker version 1.x, do I also need to open a PR against branch 1.x?
yep, need a backport PR for that
I would also like to invest in some documentation; where would you suggest is the best place for it?
Depending on how large a documentation update you want to make, would it be OK to add it as a subsection of the SQL transformation docs?
https://github.com/datafaker-net/datafaker/blame/f3fa54a4495fd8923e790701fab0570df8bd75b7/docs/documentation/schemas.md#L163
Or as an `### Advanced SQL types` heading at https://github.com/datafaker-net/datafaker/blame/981eaa714266b9246c127f3b84855270a3a00591/docs/documentation/schemas.md#L239
Add Spark SQL support. See the "INSERT INTO" spec: https://spark.apache.org/docs/3.2.1/sql-ref-syntax-dml-insert-into.html
There are some issues with the existing design when it comes to supporting all Spark types. Spark SQL has three complex data types: ARRAY, MAP, and STRUCT.
Insertions look like this:
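The original snippet showing the generated statement didn't survive extraction. As a hedged, self-contained sketch (the `events` table, the column values, and the `literal` helper are all hypothetical illustrations, not the PR's actual implementation), a Spark-style INSERT with complex-type literals could be rendered like this:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative only: renders plain Java values as Spark SQL literals,
// mapping java.util.List -> ARRAY(...) and java.util.Map -> MAP(...).
public class SparkSqlLiterals {

    static String literal(Object value) {
        if (value instanceof Map<?, ?> map) {
            // Spark SQL MAP literal: MAP('k1', v1, 'k2', v2)
            return map.entrySet().stream()
                    .map(e -> literal(e.getKey()) + ", " + literal(e.getValue()))
                    .collect(Collectors.joining(", ", "MAP(", ")"));
        }
        if (value instanceof List<?> list) {
            // Spark SQL ARRAY literal: ARRAY(v1, v2, ...)
            return list.stream()
                    .map(SparkSqlLiterals::literal)
                    .collect(Collectors.joining(", ", "ARRAY(", ")"));
        }
        if (value instanceof String s) {
            // Single-quote and escape embedded quotes, SQL style
            return "'" + s.replace("'", "''") + "'";
        }
        return String.valueOf(value);
    }

    public static void main(String[] args) {
        Map<String, Object> tags = new LinkedHashMap<>();
        tags.put("env", "prod");
        tags.put("team", "data");
        String sql = "INSERT INTO events VALUES ("
                + literal(List.of(1, 2, 3)) + ", "
                + literal(tags) + ")";
        System.out.println(sql);
        // INSERT INTO events VALUES (ARRAY(1, 2, 3), MAP('env', 'prod', 'team', 'data'))
    }
}
```

A STRUCT value would be rendered similarly with Spark's `named_struct('field', value, ...)` function syntax.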
A notable design change is mapping `java.util.Map` to the Spark SQL `MAP` type.

I tested the generated SQL on the latest Databricks runtime and it works well 👍