ash-project / ash

A declarative, extensible framework for building Elixir applications.
https://www.ash-hq.org
MIT License

Updating through a JSON API gives an error from 3.4.33 #1541

Closed. MikaelFangel closed this issue 4 days ago

MikaelFangel commented 4 days ago

Describe the bug I'm not entirely sure what the bug is, but upgrading from ash 3.4.32 to 3.4.33 breaks PATCH updates through our JSON API. The create action works as usual, and I've narrowed it down to the ash upgrade as the likely cause of the broken update action.

The error message I get after upgrading:

  1) property JSON API PATCH valid alert update returns a 200 OK response (CaseManager.AlertExternalTest)
     test/api/alert_external_test.exs:60
     ** (ExUnitProperties.Error) failed with generated values (after 0 successful runs):

         * Clause:    alert_attr <- AlertGenerator.alert_attrs()
           Generated: %{alert_id: "7qP1h3gD", creation_time: "2024-10-21T12:42:14.192075Z", link: "󔢖", risk_level: :low, title: "󹸞"}

         * Clause:    additional_data <- JsonGenerator.json_map()
           Generated: %{}

     got exception:

         ** (Plug.Conn.WrapperError) ** (Postgrex.Error) ERROR 42804 (datatype_mismatch) column "additional_data" is of type jsonb but expression is of type text

             query: UPDATE "alert" AS a0 SET "additional_data" = s1."__new_additional_data", "updated_at" = s1."__new_updated_at" FROM (SELECT $1::timestamp::timestamp AS "__new_updated_at", $2 AS "__new_additional_data", sa0."id" AS "id" FROM "alert" AS sa0 WHERE (sa0."id"::uuid = $3::uuid) LIMIT $4) AS s1 WHERE (a0."id" = s1."id") RETURNING a0."id", a0."alert_id", a0."team_id", a0."title", a0."description", a0."creation_time", a0."link", a0."risk_level", a0."additional_data", a0."inserted_at", a0."updated_at"

             hint: You will need to rewrite or cast the expression.

To Reproduce

Expected behavior I would expect to be able to update my resource through the JSON API.

Runtime

Extensions:

{:ash_json_api, "~> 1.0"},
{:ash_postgres, "~> 2.3"},
{:ash_phoenix, "~> 2.1"},
{:ash_authentication, "~> 4.0"},
{:ash_authentication_phoenix, "~> 2.1"},
{:ash_state_machine, "~> 0.2.6"},
{:ash_admin, "~> 0.11.9"}

Additional context Some snippets from the project.

The routes defined:

  json_api do
    routes do
      base_route "/alerts", CaseManager.Alerts.Alert do
        post :create
        patch :update_additional_data
      end
    end
  end

The update action:

    update :update do
      primary? true
    end

    update :update_additional_data do
      accept [
        :additional_data
      ]
    end
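
For reference, additional_data is a plain :map attribute on the resource (confirmed further down in the thread). A minimal sketch of how such a declaration might look; the public? option is an assumption, only the :map type is confirmed:

    attributes do
      # a :map attribute is backed by a jsonb column when using ash_postgres
      attribute :additional_data, :map, public?: true
    end
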
zachdaniel commented 4 days ago

@MikaelFangel I've fixed something else, and I think I've likely fixed this in the process. Can you take main for a spin? Will reopen if not :) A new release will likely be out later today if it works for you.

MikaelFangel commented 4 days ago

@zachdaniel it still fails the test, and it also fails when I try using the API manually. Let me know if you need anything specific. (:

{:ash, git: "https://github.com/ash-project/ash.git", branch: "main", override: true},

I can provide some more context from the OpenAPI page:

# Postgrex.Error at PATCH /api/json/alerts/2f26eb65-c3ca-4beb-aea5-ff0e4281d5ca

Exception:

    ** (Postgrex.Error) ERROR 42804 (datatype_mismatch) column "additional_data" is of type jsonb but expression is of type text

        query: UPDATE "alert" AS a0 SET "additional_data" = s1."__new_additional_data", "updated_at" = s1."__new_updated_at" FROM (SELECT $1::timestamp::timestamp AS "__new_updated_at", $2 AS "__new_additional_data", sa0."id" AS "id" FROM "alert" AS sa0 WHERE (sa0."id"::uuid = $3::uuid) LIMIT $4) AS s1 WHERE (a0."id" = s1."id") RETURNING a0."id", a0."alert_id", a0."team_id", a0."title", a0."description", a0."creation_time", a0."link", a0."risk_level", a0."additional_data", a0."inserted_at", a0."updated_at"

        hint: You will need to rewrite or cast the expression.
        (ecto_sql 3.12.1) lib/ecto/adapters/sql.ex:1096: Ecto.Adapters.SQL.raise_sql_call_error/1
        (ecto_sql 3.12.1) lib/ecto/adapters/sql.ex:994: Ecto.Adapters.SQL.execute/6
        (ecto 3.12.4) lib/ecto/repo/queryable.ex:232: Ecto.Repo.Queryable.execute/4
        (ash_postgres 2.4.9) lib/data_layer.ex:1391: AshPostgres.DataLayer.update_query/4
        (ash 3.4.34) lib/ash/actions/update/bulk.ex:542: Ash.Actions.Update.Bulk.do_atomic_update/5
        (ash 3.4.34) lib/ash/actions/update/bulk.ex:266: Ash.Actions.Update.Bulk.run/6
        (ash_json_api 1.4.12) lib/ash_json_api/controllers/helpers.ex:327: anonymous fn/2 in AshJsonApi.Controllers.Helpers.update_record/2
        (ash_json_api 1.4.12) lib/ash_json_api/controllers/patch.ex:20: AshJsonApi.Controllers.Patch.call/2
        (case_manager 0.1.0) deps/plug/lib/plug/router.ex:246: anonymous fn/4 in CaseManagerWeb.AshJsonApiRouter.dispatch/2
        (telemetry 1.3.0) /Users/mikaelfangel/Programming/elixir/case_manager/deps/telemetry/src/telemetry.erl:324: :telemetry.span/3
        (case_manager 0.1.0) deps/plug/lib/plug/router.ex:242: CaseManagerWeb.AshJsonApiRouter.dispatch/2
        (case_manager 0.1.0) lib/case_manager_web/ash_json_api_router.ex:1: CaseManagerWeb.AshJsonApiRouter.plug_builder_call/2
        (phoenix 1.7.14) lib/phoenix/router/route.ex:42: Phoenix.Router.Route.call/2
        (phoenix 1.7.14) lib/phoenix/router.ex:484: Phoenix.Router.__call__/5
        (case_manager 0.1.0) lib/case_manager_web/endpoint.ex:1: CaseManagerWeb.Endpoint.plug_builder_call/2
        (case_manager 0.1.0) deps/plug/lib/plug/debugger.ex:136: CaseManagerWeb.Endpoint."call (overridable 3)"/2
        (case_manager 0.1.0) lib/case_manager_web/endpoint.ex:1: CaseManagerWeb.Endpoint.call/2
        (phoenix 1.7.14) lib/phoenix/endpoint/sync_code_reload_plug.ex:22: Phoenix.Endpoint.SyncCodeReloadPlug.do_call/4
        (bandit 1.5.7) lib/bandit/pipeline.ex:124: Bandit.Pipeline.call_plug!/2
        (bandit 1.5.7) lib/bandit/pipeline.ex:36: Bandit.Pipeline.run/4

Code:

`lib/ecto/adapters/sql.ex`

    1091     defp raise_sql_call_error(%DBConnection.OwnershipError{} = err) do
    1092       message = err.message <> "\nSee Ecto.Adapters.SQL.Sandbox docs for more information."
    1093       raise %{err | message: message}
    1094     end
    1095   
    1096>    defp raise_sql_call_error(err), do: raise(err)
    1097   
    1098     @doc false
    1099     def reduce(adapter_meta, statement, params, opts, acc, fun) do
    1100       %{pid: pool, telemetry: telemetry, sql: sql, opts: default_opts} = adapter_meta
    1101       opts = with_log(telemetry, params, opts ++ default_opts)

`lib/ecto/adapters/sql.ex`

    989     end
    990   
    991     @doc false
    992     def execute(prepare, adapter_meta, query_meta, prepared, params, opts) do
    993       %{num_rows: num, rows: rows} =
    994>        execute!(prepare, adapter_meta, prepared, params, put_source(opts, query_meta))
    995   
    996       {num, rows}
    997     end
    998   
    999     defp execute!(prepare, adapter_meta, {:cache, update, {id, prepared}}, params, opts) do

`lib/ecto/repo/queryable.ex`

    227             assocs: assocs,
    228             from: from
    229           } = select
    230   
    231           preprocessor = preprocessor(from, preprocess, adapter)
    232>          {count, rows} = adapter.execute(adapter_meta, query_meta, prepared, dump_params, opts)
    233           postprocessor = postprocessor(from, postprocess, take, adapter)
    234   
    235           {count,
    236            rows
    237            |> Ecto.Repo.Assoc.query(assocs, sources, preprocessor)

`lib/data_layer.ex`

    1386                   else
    1387                     Ecto.Query.exclude(query, :select)
    1388                   end
    1389   
    1390                 {_, results} =
    1391>                  with_savepoint(repo, query, fn ->
    1392                     repo.update_all(
    1393                       query,
    1394                       [],
    1395                       repo_opts
    1396                     )

`lib/ash/actions/update/bulk.ex`

    537            },
    538            atomic_changeset <- sort_atomic_changes(atomic_changeset),
    539            query <- handle_attribute_multitenancy(query),
    540            {:ok, data_layer_query} <-
    541              Ash.Query.data_layer_query(query) do
    542>        case Ash.DataLayer.update_query(
    543                data_layer_query,
    544                atomic_changeset,
    545                update_query_opts
    546              ) do
    547           :ok ->

`lib/ash/actions/update/bulk.ex`

    261                   },
    262                   data_layer_context: opts[:data_layer_context] || %{}
    263                 }
    264               )
    265             else
    266>              {:ok, do_atomic_update(query, atomic_changeset, has_after_batch_hooks?, input, opts)}
    267             end
    268             |> case do
    269               {:ok, bulk_result} ->
    270                 if opts[:return_notifications?] do
    271                   bulk_result

`lib/ash_json_api/controllers/helpers.ex`

    322   
    323             {:ok, filter, query} ->
    324               request = Request.assign(request, :query, query)
    325   
    326               query
    327>              |> Ash.bulk_update(
    328                 request.action.name,
    329                 attributes.(request),
    330                 Request.opts(request,
    331                   return_errors?: true,
    332                   notify?: true,

`lib/ash_json_api/controllers/patch.ex`

    15       route = options[:route]
    16       all_domains = options[:all_domains]
    17   
    18       conn
    19       |> Request.from(resource, action, domain, all_domains, route, options[:prefix])
    20>      |> Helpers.update_record()
    21       |> Helpers.fetch_includes()
    22       |> Helpers.fetch_metadata()
    23       |> Helpers.render_or_render_errors(conn, fn conn, request ->
    24         Response.render_one(conn, request, 200, request.assigns.result, request.assigns.includes)
    25       end)

`deps/plug/lib/plug/router.ex`

    241           try do
    242             :telemetry.span(
    243               [:plug, :router_dispatch],
    244               %{conn: conn, route: path, router: __MODULE__},
    245               fn ->
    246>                conn = fun.(conn, opts)
    247                 {conn, %{conn: conn, route: path, router: __MODULE__}}
    248               end
    249             )
    250           catch
    251             kind, reason ->

`/Users/mikaelfangel/Programming/elixir/case_manager/deps/telemetry/src/telemetry.erl`

    319           EventPrefix ++ [start],
    320           #{monotonic_time => StartTime, system_time => erlang:system_time()},
    321           merge_ctx(StartMetadata, DefaultCtx)
    322       ),
    323   
    324>      try SpanFunction() of
    325         {Result, StopMetadata} ->
    326             StopTime = erlang:monotonic_time(),
    327             execute(
    328                 EventPrefix ++ [stop],
    329                 #{duration => StopTime - StartTime, monotonic_time => StopTime},

`deps/plug/lib/plug/router.ex`

    237         @doc false
    238         def dispatch(%Plug.Conn{} = conn, opts) do
    239           {path, fun} = Map.fetch!(conn.private, :plug_route)
    240   
    241           try do
    242>            :telemetry.span(
    243               [:plug, :router_dispatch],
    244               %{conn: conn, route: path, router: __MODULE__},
    245               fn ->
    246                 conn = fun.(conn, opts)
    247                 {conn, %{conn: conn, route: path, router: __MODULE__}}

`lib/case_manager_web/ash_json_api_router.ex`

    1>  defmodule CaseManagerWeb.AshJsonApiRouter do
    2     @moduledoc """
    3     JsonApi Router for CaseManager
    4     """
    5     use AshJsonApi.Router,
    6       domains: [Module.concat(["CaseManager.Alerts"])],

`lib/phoenix/router/route.ex`

    37     @doc "Used as a plug on forwarding"
    38     def call(%{path_info: path, script_name: script} = conn, {fwd_segments, plug, opts}) do
    39       new_path = path -- fwd_segments
    40       {base, ^new_path} = Enum.split(path, length(path) - length(new_path))
    41       conn = %{conn | path_info: new_path, script_name: script ++ base}
    42>      conn = plug.call(conn, plug.init(opts))
    43       %{conn | path_info: path, script_name: script}
    44     end
    45   
    46     @doc """
    47     Receives the verb, path, plug, options and helper

`lib/phoenix/router.ex`

    479           :telemetry.execute([:phoenix, :router_dispatch, :stop], measurements, metadata)
    480           halted_conn
    481   
    482         %Plug.Conn{} = piped_conn ->
    483           try do
    484>            plug.call(piped_conn, plug.init(opts))
    485           else
    486             conn ->
    487               measurements = %{duration: System.monotonic_time() - start}
    488               metadata = %{metadata | conn: conn}
    489               :telemetry.execute([:phoenix, :router_dispatch, :stop], measurements, metadata)

`lib/case_manager_web/endpoint.ex`

    1>  defmodule CaseManagerWeb.Endpoint do
    2     use Phoenix.Endpoint, otp_app: :case_manager
    3   
    4     # The session will be stored in the cookie and signed,
    5     # this means its contents can be read but not tampered with.
    6     # Set :encryption_salt if you would also like to encrypt it.

`deps/plug/lib/plug/debugger.ex`

    131             case conn do
    132               %Plug.Conn{path_info: ["__plug__", "debugger", "action"], method: "POST"} ->
    133                 Plug.Debugger.run_action(conn)
    134   
    135               %Plug.Conn{} ->
    136>                super(conn, opts)
    137             end
    138           rescue
    139             e in Plug.Conn.WrapperError ->
    140               %{conn: conn, kind: kind, reason: reason, stack: stack} = e
    141               Plug.Debugger.__catch__(conn, kind, reason, stack, @plug_debugger)

`lib/case_manager_web/endpoint.ex`

    1>  defmodule CaseManagerWeb.Endpoint do
    2     use Phoenix.Endpoint, otp_app: :case_manager
    3   
    4     # The session will be stored in the cookie and signed,
    5     # this means its contents can be read but not tampered with.
    6     # Set :encryption_salt if you would also like to encrypt it.

`lib/phoenix/endpoint/sync_code_reload_plug.ex`

    17   
    18     def call(conn, {endpoint, opts}), do: do_call(conn, endpoint, opts, true)
    19   
    20     defp do_call(conn, endpoint, opts, retry?) do
    21       try do
    22>        endpoint.call(conn, opts)
    23       rescue
    24         exception in [UndefinedFunctionError] ->
    25           case exception do
    26             %UndefinedFunctionError{module: ^endpoint} when retry? ->
    27               # Sync with the code reloader and retry once

`lib/bandit/pipeline.ex`

    119       end
    120     end
    121   
    122     @spec call_plug!(Plug.Conn.t(), plug_def()) :: Plug.Conn.t() | no_return()
    123     defp call_plug!(%Plug.Conn{} = conn, {plug, plug_opts}) when is_atom(plug) do
    124>      case plug.call(conn, plug_opts) do
    125         %Plug.Conn{} = conn -> conn
    126         other -> raise("Expected #{plug}.call/2 to return %Plug.Conn{} but got: #{inspect(other)}")
    127       end
    128     end
    129   

`lib/bandit/pipeline.ex`

    31         conn = build_conn!(transport, method, request_target, headers, opts)
    32         span = Bandit.Telemetry.start_span(:request, measurements, Map.put(metadata, :conn, conn))
    33   
    34         try do
    35           conn
    36>          |> call_plug!(plug)
    37           |> maybe_upgrade!()
    38           |> case do
    39             {:no_upgrade, conn} ->
    40               %Plug.Conn{adapter: {_mod, adapter}} = conn = commit_response!(conn)
    41               Bandit.Telemetry.stop_span(span, adapter.metrics, %{conn: conn})

## Connection details

### Params

    %{"data" => %{"attributes" => %{"additional_data" => %{}}, "id" => "string", "relationships" => %{}, "type" => "alert"}, "fields" => %{"alert" => "id"}}

### Request info

  * URI: http://localhost:4000/api/json/alerts/2f26eb65-c3ca-4beb-aea5-ff0e4281d5ca
  * Query string: fields%5Balert%5D=id

### Headers

  * accept: application/vnd.api+json
  * accept-encoding: gzip, deflate
  * accept-language: en-US,en;q=0.9
  * connection: keep-alive
  * content-length: 139
  * content-type: application/vnd.api+json
  * cookie: _case_manager_key=SFMyNTY.g3QAAAACbQAAAAtfY3NyZl90b2tlbm0AAAAYR1F0NWdRVGZTU2VEYnNOQ2dWX3hiOHpKbQAAAAR1c2VybQAAACx1c2VyP2lkPTNhMTc0ZmI2LTE3MzQtNDI4Mi1iNWVlLTNjOWYyMTBiZmQwYg._gQO1gEIWilbTv5bwOPzfsfQmosmZ9FF3Hb7bJYPakk; lb_user_data=eyJuYW1lIjoiTWlrYWVsIiwiaGV4X2NvbG9yIjoiI0Y4NzE3MSJ9; lb:user_data=eyJpZCI6InhrYm0zbmU0ZTZ2dnE1aWl2djNtbHEyeHFjMmhpdDJrIiwibmFtZSI6bnVsbCwiZW1haWwiOm51bGwsImhleF9jb2xvciI6IiNGQTgwNzIifQ==
  * host: localhost:4000
  * origin: http://localhost:4000
  * priority: u=3, i
  * referer: http://localhost:4000/api/json/swaggerui
  * sec-fetch-dest: empty
  * sec-fetch-mode: cors
  * sec-fetch-site: same-origin
  * user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.0.1 Safari/605.1.15
  * x-csrf-token: MiAZCzQyWC8bNUABRzY4WwkNURQWBghdkxv3MghvyM7n-fHh_n8KQiX-

### Session

    %{"_csrf_token" => "GQt5gQTfSSeDbsNCgV_xb8zJ", "user" => "user?id=3a174fb6-1734-4282-b5ee-3c9f210bfd0b"}
zachdaniel commented 4 days ago

Hm... I haven't been able to reproduce this yet. Can you ensure that all of your ash dependencies, like ash_postgres and ash_sql, are updated to the latest?
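
For reference, updating just the Ash-family packages (rather than everything) would look something like this; the exact package list depends on your mix.lock:

    mix deps.update ash ash_postgres ash_sql ash_json_api
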

One question:

what is the type of the additional_data attribute? Is it :map?

MikaelFangel commented 4 days ago

I did run mix deps.update --all before testing with main.

And yes, additional_data is a map.
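
Since ash_json_api goes through Ash.bulk_update for this route (see the stack trace above), the same code path can presumably be exercised directly from IEx; a minimal sketch using the resource and action names from this issue, with alert_id standing in for an existing record's id:

    require Ash.Query

    CaseManager.Alerts.Alert
    # filter down to a single existing record, as the PATCH route does by id
    |> Ash.Query.filter(id == ^alert_id)
    # run the atomic bulk update with a map payload for additional_data
    |> Ash.bulk_update!(:update_additional_data, %{additional_data: %{"key" => "value"}},
      return_errors?: true,
      notify?: true
    )
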

MikaelFangel commented 4 days ago

@zachdaniel - I found some time to make a minimal reproducible version of the bug. (:

To Reproduce

MikaelFangel commented 4 days ago

I ran a git bisect and, for me, the first bad commit was bb06803525dfb34e2c4dc4888c0459fcb53a982a.

zachdaniel commented 4 days ago

Thank you for the detailed report and reproduction 🙇