ameyapathare opened this issue 10 months ago
@jamiezieziula i've assigned this to you to take a look
At a high level, here are some pointers that should get you started:
- Think of this provider as a jazzed-up API client, so there will be code to add in the `/internal/api` section of this repo to set up a new section that interacts with the `/work_pools/{pool_name}/queues` API. This is the best starting point - we're setting up the interface between the provider and the Prefect Cloud API.
- In `/internal/api`, we'll set up new structs: data models for the new client (eg. `WorkQueuesClient`), the representation of the resource (eg. `WorkQueue`), and optionally the create/update payloads (eg. `WorkQueueCreate` or `WorkQueueUpdate`). See this existing example for `WorkPool`. (A rough sketch of these models follows this list.)
- In `/internal/client`, we'll set up the actual API methods that correspond with `GET`, `POST`, `DELETE`, etc. See this existing example for `WorkPool` - customarily, we'll create the `List`, `Get`, `Create`, `Update`, and `Delete` methods -- essentially, CRUD + a `List` method (which hits `POST /filter`). The new client also needs to be exposed on the `PrefectClient` interface. (A client sketch follows this list.)
- When looking at our API schema, you might see 2 different "groups" of work-queue API endpoints. For this provider, we'll want to use the one that is a part of the `/work_pools/{pool_name}/queues` API (so the one always associated with a particular work pool) and NOT the one inside of the `/work_queue` API (this is for the old agent workloads, which is on its way out).
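A minimal sketch of what those `/internal/api` additions could look like, modeled on the existing `WorkPool` pattern. The field names, JSON tags, and method signatures below are assumptions based on the work-queue API schema, so check the real `WorkPool` models rather than treating this as the final shape:

```go
package api

import (
	"context"
	"time"

	"github.com/google/uuid"
)

// WorkQueue is the API representation of a work queue, which always belongs
// to a specific work pool.
type WorkQueue struct {
	ID      uuid.UUID  `json:"id"`
	Created *time.Time `json:"created"`
	Updated *time.Time `json:"updated"`

	Name             string  `json:"name"`
	Description      *string `json:"description"`
	IsPaused         bool    `json:"is_paused"`
	ConcurrencyLimit *int64  `json:"concurrency_limit"`
	Priority         *int64  `json:"priority"`
	WorkPoolName     string  `json:"work_pool_name"`
}

// WorkQueueCreate is the request payload for POST /work_pools/{pool_name}/queues.
type WorkQueueCreate struct {
	Name             string  `json:"name"`
	Description      *string `json:"description,omitempty"`
	IsPaused         bool    `json:"is_paused"`
	ConcurrencyLimit *int64  `json:"concurrency_limit,omitempty"`
	Priority         *int64  `json:"priority,omitempty"`
}

// WorkQueueUpdate is the request payload for PATCH /work_pools/{pool_name}/queues/{name}.
type WorkQueueUpdate struct {
	Description      *string `json:"description,omitempty"`
	IsPaused         *bool   `json:"is_paused,omitempty"`
	ConcurrencyLimit *int64  `json:"concurrency_limit,omitempty"`
	Priority         *int64  `json:"priority,omitempty"`
}

// WorkQueuesClient is the interface the rest of the provider codes against;
// the concrete implementation lives in /internal/client.
type WorkQueuesClient interface {
	Create(ctx context.Context, data WorkQueueCreate) (*WorkQueue, error)
	List(ctx context.Context, names []string) ([]*WorkQueue, error) // POST .../queues/filter
	Get(ctx context.Context, name string) (*WorkQueue, error)
	Update(ctx context.Context, name string, data WorkQueueUpdate) error
	Delete(ctx context.Context, name string) error
}

// The PrefectClient interface would also gain a constructor for this
// sub-client, something like:
//
//	WorkQueues(accountID, workspaceID uuid.UUID, workPoolName string) (WorkQueuesClient, error)
```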
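And a rough sketch of the corresponding concrete client in `/internal/client`, showing one of the HTTP methods. The struct fields, auth header handling, and the module import path are assumptions here; the existing `WorkPool` client is the pattern to copy:

```go
package client

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"

	"github.com/prefecthq/terraform-provider-prefect/internal/api"
)

// WorkQueuesClient is scoped to a single work pool, since every endpoint
// lives under /work_pools/{pool_name}/queues.
type WorkQueuesClient struct {
	hc          *http.Client
	apiKey      string
	routePrefix string // eg. {endpoint}/accounts/{aid}/workspaces/{wid}/work_pools/{pool_name}/queues
}

// Get retrieves a single work queue by name: GET {routePrefix}/{name}.
func (c *WorkQueuesClient) Get(ctx context.Context, name string) (*api.WorkQueue, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, fmt.Sprintf("%s/%s", c.routePrefix, name), http.NoBody)
	if err != nil {
		return nil, fmt.Errorf("error creating request: %w", err)
	}
	req.Header.Set("Authorization", "Bearer "+c.apiKey)

	resp, err := c.hc.Do(req)
	if err != nil {
		return nil, fmt.Errorf("http error: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}

	var queue api.WorkQueue
	if err := json.NewDecoder(resp.Body).Decode(&queue); err != nil {
		return nil, fmt.Errorf("failed to parse response: %w", err)
	}

	return &queue, nil
}

// Create (POST), Update (PATCH), Delete (DELETE), and List (POST .../filter)
// follow the same request/decode shape.
```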
As far as the actual Terraform resources go, we'll want to support `datasource`, `resource`, and `import` functionality for this new Work Queue object type. See the matrix here.
- `datasource` allows querying existing resources, eg. `data "some_object" "my_object_name" {}`, so you can read down attributes (like the `id` or others)
- `resource` allows fully managing the resource in Terraform
- `import` allows linking existing resources into the TF lifecycle

The way we structured this repo, you'll have to set up a separate:
- `datasource` in the `/internal/provider/datasources` directory, in a `work_queue.go` file
- `resource` in the `/internal/provider/resources` directory, in a `work_queue.go` file
- `import` support goes into the associated resources file, inside of a reserved `ImportState` method on the resource class/struct. See this example for a `service_account` import, via a commonly used pattern where we allow passing a resource ID or a resource name via the prefix `name/<your resource name here>` (a hedged `ImportState` sketch is included after this list)
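For the import piece, here's a rough sketch of what that `ImportState` hook could look like, following the ID-or-name pattern described above. The attribute names and the empty resource struct are assumptions; the `service_account` resource is the reference implementation:

```go
package resources

import (
	"context"
	"strings"

	"github.com/hashicorp/terraform-plugin-framework/path"
	"github.com/hashicorp/terraform-plugin-framework/resource"
)

// WorkQueueResource is the resource implementation struct (client handles and
// other fields elided in this sketch).
type WorkQueueResource struct{}

// ImportState lets `terraform import` adopt an existing work queue, accepting
// either a resource ID or the name/<queue name> form described above.
func (r *WorkQueueResource) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) {
	if name, found := strings.CutPrefix(req.ID, "name/"); found {
		// Imported by name: seed the `name` attribute and let Read fill in the rest.
		resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("name"), name)...)

		return
	}

	// Otherwise treat the import identifier as the resource ID.
	resource.ImportStatePassthroughID(ctx, path.Root("id"), req, resp)
}
```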
For `datasource` and `resource` configurations in the code - the top-level function methods are special + reserved, so they represent the logic that the Terraform provider framework will expect + call. You can pull those over from an existing, corresponding `datasources/<name>.go` / `resources/<name>.go`. The logic to update will predominantly be:

- the `Schema` methods + the `*Model` structs
- the `Read` method, as we'll be calling the client methods you defined first + have the flexibility to perform certain kinds of checks (a rough `Read` sketch is included below)

Finally, there are root-level configurations where you'll need to add your newly created `datasources.NewWorkQueueDataSource` and `resources.NewWorkQueueResource` (see the registration sketch below).
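To make the `Read` part concrete, here's a rough sketch of the resource's `*Model` struct and `Read` method using the plugin framework. The attribute set and the client wiring are assumptions, so mirror the existing work pool resource rather than this:

```go
package resources

import (
	"context"
	"fmt"

	"github.com/hashicorp/terraform-plugin-framework/resource"
	"github.com/hashicorp/terraform-plugin-framework/types"

	"github.com/prefecthq/terraform-provider-prefect/internal/api"
)

// WorkQueueResourceModel mirrors the schema attributes (illustrative subset).
type WorkQueueResourceModel struct {
	ID               types.String `tfsdk:"id"`
	Name             types.String `tfsdk:"name"`
	WorkPoolName     types.String `tfsdk:"work_pool_name"`
	Description      types.String `tfsdk:"description"`
	IsPaused         types.Bool   `tfsdk:"is_paused"`
	ConcurrencyLimit types.Int64  `tfsdk:"concurrency_limit"`
	Priority         types.Int64  `tfsdk:"priority"`
}

// WorkQueueResource holds the API client, which would be built in Configure.
type WorkQueueResource struct {
	client api.WorkQueuesClient
}

// Read refreshes Terraform state from the Prefect API.
func (r *WorkQueueResource) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) {
	var state WorkQueueResourceModel

	resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
	if resp.Diagnostics.HasError() {
		return
	}

	// Call the client method defined in /internal/client.
	queue, err := r.client.Get(ctx, state.Name.ValueString())
	if err != nil {
		resp.Diagnostics.AddError(
			"Error refreshing work queue state",
			fmt.Sprintf("Could not read work queue: %s", err),
		)

		return
	}

	// Copy the API response back onto the state model.
	state.ID = types.StringValue(queue.ID.String())
	state.Description = types.StringPointerValue(queue.Description)
	state.IsPaused = types.BoolValue(queue.IsPaused)
	state.ConcurrencyLimit = types.Int64PointerValue(queue.ConcurrencyLimit)
	state.Priority = types.Int64PointerValue(queue.Priority)

	resp.Diagnostics.Append(resp.State.Set(ctx, &state)...)
}
```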
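Those root-level hooks are the provider's `DataSources` and `Resources` methods. A sketch of the registration, using the constructor names above (the provider struct name and file layout here are assumptions):

```go
package provider

import (
	"context"

	"github.com/hashicorp/terraform-plugin-framework/datasource"
	"github.com/hashicorp/terraform-plugin-framework/resource"

	"github.com/prefecthq/terraform-provider-prefect/internal/provider/datasources"
	"github.com/prefecthq/terraform-provider-prefect/internal/provider/resources"
)

// PrefectProvider stands in for the provider struct already defined in this
// package.
type PrefectProvider struct{}

// DataSources lists every data source constructor the provider exposes.
func (p *PrefectProvider) DataSources(_ context.Context) []func() datasource.DataSource {
	return []func() datasource.DataSource{
		// ...existing data sources...
		datasources.NewWorkQueueDataSource,
	}
}

// Resources lists every resource constructor the provider exposes.
func (p *PrefectProvider) Resources(_ context.Context) []func() resource.Resource {
	return []func() resource.Resource{
		// ...existing resources...
		resources.NewWorkQueueResource,
	}
}
```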
hi, what's up with this? seems kinda closed, kinda open?
Thanks for reaching out @ondramie, it's definitely open and part of our milestone for parity with the Prefect API. We're working on prioritizing that work and getting more of our team up to speed on the provider to speed up development.
hi @mitchnielsen, so is it open for anybody to tackle?
@ondramie definitely, we haven't started on this one yet so if you're interested that'd be great!
It would be very helpful to be able to set up/configure our work queues via Terraform. Right now, we create an 'on-demand', 'priority', and 'default' work queue for each work pool. We are also contemplating creating a separate work queue per data source, which would involve setting up hundreds of work queues.
Work queues are the last remaining resource before we switch over to using this terraform provider to provision our prefect infrastructure!