catalyst-cooperative / pudl-usage-metrics

A dagster ETL for collecting and cleaning PUDL usage metrics.
MIT License

Update dagit requirement from ~=0.15.0 to >=0.15,<1.2 #99

Closed: dependabot[bot] closed this pull request 1 year ago

dependabot[bot] commented 1 year ago

Updates the requirements on dagit to permit the latest version.
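
For context, the change widens the allowed dagit version range. A minimal sketch of the before/after specifiers, assuming the pin lives in a setup.py install_requires list (in this repository it may equally live in a requirements file; the package metadata below is illustrative):

```python
# Illustrative setup.py excerpt; the actual location of the dagit pin
# in this repository may differ (e.g. requirements.txt or setup.cfg).
from setuptools import setup

setup(
    name="pudl-usage-metrics",  # hypothetical metadata, for illustration only
    install_requires=[
        # Before: "dagit~=0.15.0" accepted only 0.15.x patch releases.
        # After: any release from 0.15 up to, but not including, 1.2.
        "dagit>=0.15,<1.2",
    ],
)
```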

Changelog

Sourced from dagit's changelog.

1.1.15 (core) / 0.17.15 (libraries)

New

  • Definitions now accepts Executor instances in its executor argument, not just ExecutorDefinitions.
  • @multi_asset_sensor now accepts a request_assets parameter, which allows it to directly request that assets be materialized, instead of requesting a run of a job.
  • Improved the performance of instantiating a Definitions when using large numbers of assets or many asset jobs.
  • The job passed to build_schedule_from_partitioned_job no longer needs to have a partitions_def directly assigned to it. Instead, Dagster will infer the partitions from the assets it targets (see the sketch after this list).
  • OpExecutionContext.asset_partition_keys_for_output no longer requires an argument to specify the default output.
  • The “Reload all” button on the Code Locations page in Dagit will now detect changes to a pyproject.toml file that were made while Dagit was running. Previously, Dagit needed to be restarted in order for such changes to be shown.
  • get_run_record_by_id has been added to DagsterInstance to provide easier access to RunRecord objects which expose the start_time and end_time of the run.
  • [dagit] In the “Materialize” modal, you can now choose to pass a range of asset partitions to a single run rather than launching a backfill.
  • [dagster-docker] Added a docker_container_op op and execute_docker_container_op helper function for running ops that launch arbitrary Docker containers. See the docs for more information.
  • [dagster-snowflake-pyspark] The Snowflake I/O manager now supports PySpark DataFrames.
  • [dagster-k8s] The Docker images included in the Dagster Helm chart are now built on the most recently released python:3.x-slim base image.
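
To illustrate the build_schedule_from_partitioned_job change noted above: the schedule can now be built from an asset job whose partitioning comes from the assets it selects. A minimal sketch; the asset and job names here are hypothetical:

```python
from dagster import (
    DailyPartitionsDefinition,
    Definitions,
    asset,
    build_schedule_from_partitioned_job,
    define_asset_job,
)

# Hypothetical daily-partitioned asset, for illustration.
daily = DailyPartitionsDefinition(start_date="2023-01-01")

@asset(partitions_def=daily)
def usage_metrics():
    ...

# No partitions_def on the job itself: as of 1.1.15 it is inferred from
# the partitioned asset the job targets.
metrics_job = define_asset_job("metrics_job", selection="usage_metrics")
metrics_schedule = build_schedule_from_partitioned_job(metrics_job)

defs = Definitions(assets=[usage_metrics], schedules=[metrics_schedule])
```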

Bugfixes

  • Previously, the build_asset_reconciliation_sensor could time out when evaluating ticks over large selections of assets, or assets with many partitions. A series of performance improvements should make this much less likely.
  • Fixed a bug that caused a failure when using run_request_for_partition in a sensor that targeted multiple jobs created via define_asset_job.
  • The cost of importing dagster has been reduced.
  • Issues preventing “re-execute from failure” from working correctly with dynamic graphs have been fixed.
  • [dagit] In Firefox, Dagit no longer truncates text unnecessarily in some cases.
  • [dagit] Dagit’s asset graph now allows you to click “Materialize” without rendering the graph if you have too many assets to display.
  • [dagit] Fixed a bug that stopped the backfill page from loading when assets that had previously been backfilled no longer had a PartitionsDefinition.
  • [dagster-k8s] Fixed an issue where k8s_job_op raised an Exception when running pods with multiple containers.
  • [dagster-airbyte] Loosened credentials masking for Airbyte managed ingestion, fixing the Hubspot source, thanks @joel-olazagasti!
  • [dagster-airbyte] When using managed ingestion, Airbyte now pulls all source types available to the instance rather than the workspace, thanks @emilija-omnisend!
  • [dagster-airbyte] Fixed an issue which arose when attaching freshness policies to Airbyte assets and using the multiprocessing executor.
  • [dagster-fivetran] Added the ability to force assets to be output for all specified Fivetran tables during a sync in the case that a sync’s API outputs are missing one or more tables.

Breaking Changes

  • The asset_keys and asset_selection parameters of the experimental @multi_asset_sensor decorator have been replaced with a monitored_assets parameter. This helps disambiguate them from the new request_assets parameter.
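
A minimal before/after sketch of the rename, with hypothetical asset keys, assuming request_assets accepts an AssetSelection as described above:

```python
from dagster import AssetKey, AssetSelection, RunRequest, multi_asset_sensor

# Before (experimental API): @multi_asset_sensor(asset_keys=[AssetKey("raw_logs")], ...)
# After: monitored_assets names what the sensor watches, while the new
# request_assets parameter names what it asks Dagster to materialize.
@multi_asset_sensor(
    monitored_assets=[AssetKey("raw_logs")],
    request_assets=AssetSelection.keys("clean_logs"),
)
def clean_logs_sensor(context):
    records = context.latest_materialization_records_by_key()
    if any(records.values()):
        context.advance_all_cursors()
        # With request_assets set, a plain RunRequest targets the selected
        # assets directly instead of a job.
        return RunRequest()
```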

Community Contributions

  • A broken docs link in snowflake_quickstart has been fixed, thanks @clayheaton!
  • Troubleshooting help added to helm deployment guide, thanks @adam-bloom!
  • StaticPartitionMapping is now serializable, thanks @AlexanderVR!
  • [dagster-fivetran] build_fivetran_assets now supports group_name, thanks @toddy86!
  • [dagster-azure] AzureBlobComputeManager now supports authentication via DefaultAzureCredential, thanks @mpicard!

Experimental

  • [dagster-airflow] Added a new API, load_assets_from_airflow_dag, that creates graph-backed, partitioned assets based on the provided Airflow DAG.
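
A rough sketch of what using the new API might look like. The Airflow DAG here is hypothetical, and since the API is experimental its exact signature may differ:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from dagster_airflow import load_assets_from_airflow_dag

# Hypothetical Airflow DAG, for illustration only.
with DAG(
    dag_id="download_usage_logs",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
) as dag:
    BashOperator(task_id="fetch_logs", bash_command="echo fetch")

# Per the changelog, this creates graph-backed, partitioned assets
# based on the provided DAG.
airflow_assets = load_assets_from_airflow_dag(dag)
```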

1.1.14 (core) / 0.17.14 (libraries)

New

... (truncated)

Commits
  • 5d45afa 1.1.15
  • 0ca6358 1.1.15 changelog (#12051)
  • e025d31 fix(dbt-cloud): inherit generate docs settings for compile run (#12043)
  • 6da3aff feat(dbt-cloud): compile run only if job has environment variable cache (#12042)
  • aac1eb5 lint fix (#12044)
  • 7bf3b64 Change endpoints to the ones that are used by airbyte UI (#12012)
  • f2125d5 [dagit] Fix use of fragments causing Apollo caching error in partition health...
  • 0d9f907 [dagster-fivetran] Add option to force-create materializations for tables not...
  • d984b71 Fix broken doc link for Snowflake credential setup (#12017)
  • 4596372 [dagster-airflow] load_assets_from_airflow_dag (#11876)
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
dependabot[bot] commented 1 year ago

Superseded by #105.