pulumi / pulumi-yaml

YAML language provider for Pulumi
Apache License 2.0

Support calling resource methods #354

Open AaronFriel opened 2 years ago

AaronFriel commented 2 years ago

It is almost possible to call resource methods manually today, as in this program, which calls the "getKubeconfig" method on a cluster:

variables:
  kubeconfig:
    Fn::Invoke:
      Function: google-native:container/v1:Cluster/getKubeconfig
      Arguments:
        __self__: ${someClusterResource}

However, this generates an Invoke rather than a Call RPC. The syntax also leaves something to be desired: the __self__ parameter is an implementation detail that other Pulumi languages do not expose to users.

lukehoban commented 2 years ago

The simplest “raw” thing to do might be to just expose the Call concept directly (much like we do for Invoke today)?

variables:
  kubeconfig:
    Fn::Call:
      Function: google-native:container/v1:Cluster/getKubeconfig
      Self: ${someClusterResource}
      Arguments:
        …

Are there other more “sugar” options worth considering?

viveklak commented 2 years ago

Just adding a note that we would also need to support the above in docs gen: https://github.com/pulumi/pulumi-google-native/issues/709

justinvp commented 2 years ago

> Are there other more “sugar” options worth considering?

Maybe something like the following, where you only have to specify the resource instance and name of the method?

variables:
  kubeconfig:
    Fn::Method:
      Resource: ${someClusterResource}
      Method: getKubeconfig
      Arguments:
        ...

Or, could we get fancy and grab the method out of an expression?

variables:
  kubeconfig:
    Fn::Method:
      Method: ${someClusterResource.getKubeconfig}
      Arguments:
        ...

jaxxstorm commented 1 year ago

For those who stumble across this (particularly for GKE): you can work around it by building a kubeconfig variable and referencing that:

name: gke-yaml-cluster
runtime: yaml
description: A GKE cluster
resources:
  cluster:
    type: google-native:container/v1beta1:Cluster
    properties: 
      clusterTelemetry:
        type: ENABLED
      defaultMaxPodsConstraint:
        maxPodsPerNode: 100
      initialNodeCount: 1
      ipAllocationPolicy:
        clusterIpv4CidrBlock: /14
        servicesIpv4CidrBlock: /20
        useRoutes: false
      location: us-west2
      resourceLabels:
        env: lbriggs
  provider:
    type: pulumi:providers:kubernetes
    properties:
      kubeconfig: ${kubeconfig}
  nginx-ingress:
    type: kubernetes:helm.sh/v3:Release
    properties: # The arguments to resource properties.
      chart: "ingress-nginx"
      repositoryOpts:
        repo: https://kubernetes.github.io/ingress-nginx
      cleanupOnFail: true
      createNamespace: true
      description: "Main load balancer"
      lint: true
      name: "ingress-nginx"
      namespace: "ingress-nginx"
      version: "4.7.1"
      values:
        ingressClass: "internet"
    options:
      provider: ${provider}
variables:
  kubeconfig:
    fn::toJSON:
      apiVersion: v1
      clusters:
        - cluster:
            certificate-authority-data: ${cluster.masterAuth.clusterCaCertificate}
            server: https://${cluster.endpoint}
          name: ${cluster.name}
      contexts:
        - context:
            cluster: ${cluster.name}
            user: ${cluster.name}
          name: ${cluster.name}
      current-context: ${cluster.name}
      kind: Config
      users:
        - name: ${cluster.name}
          user:
            exec:
              apiVersion: client.authentication.k8s.io/v1beta1
              command: gke-gcloud-auth-plugin
              installHint: Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
              provideClusterInfo: true
outputs:
  kubeconfig: ${kubeconfig}
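
Once the workaround stack above is deployed, the exported kubeconfig can be pulled from the stack outputs and handed straight to kubectl. A minimal sketch, assuming the `pulumi` and `kubectl` CLIs are installed and the stack has been brought up with `pulumi up`:

```shell
# Fetch the "kubeconfig" stack output into a file; add --show-secrets if the
# output has been marked secret in the program.
pulumi stack output kubeconfig > kubeconfig.json

# Point kubectl at the generated file for a one-off command.
KUBECONFIG=$PWD/kubeconfig.json kubectl get nodes
```

Note that the exec-based auth in the generated kubeconfig requires gke-gcloud-auth-plugin to be installed on the machine running kubectl, per the installHint above.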