scolladon / sfdx-git-delta

Generate the sfdx content in source format from two git commits

Improve diff outputs #870

Closed AllanOricil closed 5 months ago

AllanOricil commented 6 months ago

Is your proposal related to a problem?


CDK has a really nice diff output. It can be used to let people know exactly what is going to be deployed to an AWS account. It is very useful in pipelines that have approval processes.

(screenshot: CDK diff output showing resource changes)

Besides showing resources changes, CDK also summarizes changes to IAM Roles and User permissions in a table.

(screenshot: CDK summary table of IAM role and permission changes)

I believe that both features would make reviews in pipeline approval processes easier.

Describe a solution you propose

Describe alternatives you've considered

Currently people can use git diffs or just print the generated manifests. However, neither is easy to analyze. Plus, because metadata is serialized as XML, git diff algorithms don't work well with it and the output is really hard to read.

Additional context

N/A

scolladon commented 6 months ago

Hi @AllanOricil !

Thanks for raising this issue and thanks for contributing in making this project better!

I'm not very familiar with the CDK output. I don't know if it follows a standard or if it is just nicely formatted. I need to spike a bit in order to know how to implement that kind of feature.

I think it could be driven by a parameter, pretty much like --json works today, maybe something like --cdk? Or using a format parameter could be convenient if we need more formats later.
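A rough sketch of what such a format flag could look like in an oclif-based command (the flag name, values and command class here are hypothetical, nothing that exists in sgd today):

import { SfCommand, Flags } from '@salesforce/sf-plugins-core';

export default class Delta extends SfCommand<void> {
  public static readonly flags = {
    // hypothetical flag: choose how the delta is rendered
    'output-format': Flags.string({
      summary: 'Format used to render the delta',
      options: ['json', 'cdk'],
      default: 'json',
    }),
  };

  public async run(): Promise<void> {
    const { flags } = await this.parse(Delta);
    if (flags['output-format'] === 'cdk') {
      this.log('rendering CDK-like diff...'); // placeholder for a CDK-style outputter
    }
  }
}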

I also wonder if this could be the output of a deploy command instead, or maybe it should be its own command located in the CLI directly (or in another plugin)? It could be great to have another command that looks into a folder and displays the metadata content in the CDK format (to segregate responsibility further)?

AllanOricil commented 6 months ago

In cdk, there is a special command for that output with both resource diffs and role/profile diffs:

cdk diff

cdk deploy displays the role/profile diffs only.

Neither command needs output configuration. That output is the "human" one.

AllanOricil commented 6 months ago

In my opinion it makes sense to create a lib that does this, based on the output of sgd. Once this lib is ready, a pre-deploy hook and a command would need to be created to consume it.
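A purely illustrative sketch of what that lib's contract could look like (all names below are hypothetical):

// Hypothetical contract for a lib that turns sgd output into a reviewable diff
export interface MetadataDiffEntry {
  type: string;                    // e.g. 'CustomObject'
  member: string;                  // e.g. 'Account'
  operation: 'add' | 'change' | 'delete';
}

export interface MetadataDiff {
  entries: MetadataDiffEntry[];
  summary: Record<string, number>; // number of changed members per metadata type
}

// Reads the package.xml / destructiveChanges.xml produced by sgd and builds the diff model
export async function buildDiff(sgdOutputDir: string): Promise<MetadataDiff> {
  // parse the manifests found in sgdOutputDir, compare and build the model (left for the PoC)
  return { entries: [], summary: {} };
}

A pre-deploy hook and a command would then only have to call buildDiff and render the result.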

aheber commented 6 months ago

Thinking out loud. This could be a CLI plugin that looks at a package and destructive changes XML files and essentially pretty-prints them?

In the past I've simply printed their contents as part of the CI job and called it good; if there are lots of lines I might have a threshold for summarization (X CustomObject, Y PermissionSet).
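A minimal sketch of that summarization idea, assuming fast-xml-parser is available (the manifest path is just an example):

import { readFileSync } from 'node:fs';
import { XMLParser } from 'fast-xml-parser';

// Counts members per metadata type in a package.xml, e.g. { CustomObject: 3, PermissionSet: 1 }
function summarizeManifest(manifestPath: string): Record<string, number> {
  const xml = readFileSync(manifestPath, 'utf8');
  const parsed = new XMLParser().parse(xml);
  // <types> is parsed as an object when there is a single entry, as an array otherwise
  const rawTypes = parsed?.Package?.types ?? [];
  const types: any[] = Array.isArray(rawTypes) ? rawTypes : [rawTypes];
  const summary: Record<string, number> = {};
  for (const t of types) {
    const members = Array.isArray(t.members) ? t.members : [t.members];
    summary[t.name] = members.length;
  }
  return summary;
}

console.log(summarizeManifest('package/package.xml'));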

AllanOricil commented 6 months ago

> Thinking out loud. This could be a CLI plugin that looks at a package and destructive changes XML files and essentially pretty-prints them?

Yes. It has to show a tree-like view of what has changed.

> In the past I've simply printed their contents as part of the CI job and called it good; if there are lots of lines I might have a threshold for summarization (X CustomObject, Y PermissionSet).

I think that showing a summary of the current state and what it will become can give more confidence when approving deployments than simply displaying the contents of the manifests. What do you think?

scolladon commented 6 months ago

The more I think of it, the more I think it deserves its own plugin. It could definitely be helpful for other scenarios than just incremental deployment.

From what I see in the "SDK", this plugin should consider using the MetadataResolver from SDR, then pass the result to a CDK transformer, and pass that result to a CDK outputter.

The bulk of the work would be to create this CDK transformer, IMHO (if the outputter already exists).
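A rough, untested sketch of that pipeline, assuming SDR's MetadataResolver (the "CDK transformer" and "outputter" parts below are only placeholders):

import { MetadataResolver } from '@salesforce/source-deploy-retrieve';

// Resolve the components contained in the sgd output (or any source folder)
const resolver = new MetadataResolver();
const components = resolver.getComponentsFromPath('output/force-app');

// Placeholder "CDK transformer": group component names by metadata type
const byType = new Map<string, string[]>();
for (const component of components) {
  const members = byType.get(component.type.name) ?? [];
  members.push(component.fullName);
  byType.set(component.type.name, members);
}

// Placeholder "outputter": print a CDK-like tree
for (const [type, members] of byType) {
  console.log(`[~] ${type}`);
  for (const member of members) {
    console.log(`    └─ ${member}`);
  }
}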

AllanOricil commented 6 months ago

I was not thinking about using constructs. But maybe using constructs would be the most accurate way of implementing this feature, since AWS CDK works on top of that standard. However, it would be necessary to create constructs for all metadata types, then use CDK or Terraform to synthesize manifests and perform deployments using the Tooling API. As an example, MongoDB and GitHub have constructs to create resources for their products. If someone did it for SF, the sf CLI could even be deprecated, in my opinion. It would also be a good thing to do since SF products can now be purchased from the AWS Marketplace.

I do believe it would be cool if one of you who works for SF created a demo showing how SF deployments could be done with IaC, representing metadata as constructs, using CDK, Terraform or even Pulumi. The people who signed this partnership with AWS would probably buy the idea of having a single tool, especially because the constructs modules and tools have way more people working on them.

I just realized that deploying metadata with constructs would also easily allow the creation of resources across different clouds that seamlessly work with Salesforce. For example, if one needs to interact with AI models that are not part of Salesforce's offerings, or even just a Lambda function to overcome Salesforce limits, Salesforce constructs could be used to do the provisioning and integration between services. These constructs would result in resources that are ready to interact with Salesforce services through a proprietary connector. Salesforce would profit from it.

AllanOricil commented 6 months ago

A simple demo would be creating one stack with constructs that create the following resources:

  1. One object
  2. One validation rule
  3. One trigger
  4. One class with a simple transaction
  5. One flow with the same transaction as the class
  6. A static site using Experience Cloud with 2 buttons to call items 4 and 5

This would demonstrate how companies can leverage SF to do what they do with Vercel, for example.

This demo can go one step further if the stack includes resources from AWS and Heroku. For example, deploy a simple service on Heroku, or call a Lambda function or an AI model from AWS.

That would be freaking cool. And I think it could save companies money because the same person who is an expert in IaC could now easily talk with an SF developer. Of course SF resources would still be serialized as metadata, but now teams working in different stacks would be able to speak the same language using constructs.

AllanOricil commented 6 months ago

Anybody interested in writing a PoC? I don't have time to do it alone. I would just like to find someone important to whom we can demo it. I don't want to go through some assholes who steal ideas and give no credit.

scolladon commented 6 months ago

> Anybody interested in writing a PoC? I don't have time to do it alone. I would just like to find someone important to whom we can demo it. I don't want to go through some assholes who steal ideas and give no credit.

IMHO the right person to contact for this kind of idea is Philippe Ozil (@pozil). If you guys build a team to think and work on this idea, please count me in ! I'll be glad to contribute.

AllanOricil commented 6 months ago

This is how MongoDB Atlas did it:

They created constructs for their resources

https://github.com/mongodb/awscdk-resources-mongodbatlas

And enabled CloudFormation to deploy them

https://github.com/mongodb/mongodbatlas-cloudformation-resources

I'm just not sure if they are deploying resources in AWS. I think it works for them because their resources are deployed on AWS infrastructure. For Salesforce it wouldn't work, I think 😕 it would be necessary to check with AWS if this is possible. For me it makes sense because it could facilitate the integration between AWS resources and Salesforce.

It would have worked for that Salesforce product for lambda-style functions. Instead of deploying to Salesforce, they could've been deployed to AWS. But that product failed.

AllanOricil commented 6 months ago

After reading this https://docs.aws.amazon.com/prescriptive-guidance/latest/best-practices-cdk-typescript-iac/constructs-best-practices.html

I understood we could create a "custom resource" to instruct CloudFormation to deploy our custom construct to Salesforce instead of AWS. So it could work, I think.

AllanOricil commented 6 months ago

ChatGPT agrees it is possible haha

Sure, you can create a custom resource in AWS CloudFormation that uses a Lambda function to interact with the Salesforce API to create or manage objects. Here's a detailed guide on how to achieve this:

Step-by-Step Implementation

  1. Define the Custom Resource in CDK
  2. Implement the Lambda Function
  3. Create the CDK Stack

1. Define the Custom Resource in CDK

First, we will create a custom resource construct in CDK that triggers a Lambda function. This Lambda function will handle interactions with Salesforce.

lib/salesforce-custom-resource.ts

import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import * as cr from '@aws-cdk/custom-resources';

interface SalesforceCustomResourceProps {
  clientId: string;
  clientSecret: string;
  username: string;
  password: string;
  securityToken: string;
  salesforceObject: {
    type: string;
    fields: Record<string, any>;
    uniqueField: string;
    uniqueFieldValue: any;
  };
}

export class SalesforceCustomResource extends cdk.Construct {
  constructor(scope: cdk.Construct, id: string, props: SalesforceCustomResourceProps) {
    super(scope, id);

    const onEvent = new lambda.Function(this, 'OnEventHandler', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
      environment: {
        CLIENT_ID: props.clientId,
        CLIENT_SECRET: props.clientSecret,
        USERNAME: props.username,
        PASSWORD: props.password,
        SECURITY_TOKEN: props.securityToken,
        SALESFORCE_OBJECT: JSON.stringify(props.salesforceObject),
      },
    });

    const provider = new cr.Provider(this, 'SalesforceProvider', {
      onEventHandler: onEvent,
    });

    new cdk.CustomResource(this, 'SalesforceResource', {
      serviceToken: provider.serviceToken,
    });
  }
}

2. Implement the Lambda Function

The Lambda function will be responsible for authenticating with Salesforce and creating or updating the specified object.

lambda/index.js

const axios = require('axios');

exports.handler = async function(event, context) {
  const {
    CLIENT_ID,
    CLIENT_SECRET,
    USERNAME,
    PASSWORD,
    SECURITY_TOKEN,
    SALESFORCE_OBJECT
  } = process.env;

  const salesforceObject = JSON.parse(SALESFORCE_OBJECT);
  const baseUrl = 'https://login.salesforce.com';

  try {
    // Authenticate with Salesforce
    const authResponse = await axios.post(`${baseUrl}/services/oauth2/token`, null, {
      params: {
        grant_type: 'password',
        client_id: CLIENT_ID,
        client_secret: CLIENT_SECRET,
        username: USERNAME,
        password: `${PASSWORD}${SECURITY_TOKEN}`,
      },
    });

    const { access_token, instance_url } = authResponse.data;

    // Check the event type (Create/Update/Delete)
    if (event.RequestType === 'Create' || event.RequestType === 'Update') {
      // Fetch the current state of the object from Salesforce
      const existingObjectResponse = await axios.get(`${instance_url}/services/data/v52.0/query`, {
        params: {
          q: `SELECT Id, ${Object.keys(salesforceObject.fields).join(', ')} FROM ${salesforceObject.type} WHERE ${salesforceObject.uniqueField} = '${salesforceObject.uniqueFieldValue}'`,
        },
        headers: {
          Authorization: `Bearer ${access_token}`,
        },
      });

      const existingObjects = existingObjectResponse.data.records;

      if (existingObjects.length > 0) {
        // Update the existing object if differences are found
        const existingObject = existingObjects[0];
        const differences = findDifferences(existingObject, salesforceObject.fields);

        if (Object.keys(differences).length > 0) {
          await axios.patch(`${instance_url}/services/data/v52.0/sobjects/${salesforceObject.type}/${existingObject.Id}`, differences, {
            headers: {
              Authorization: `Bearer ${access_token}`,
              'Content-Type': 'application/json',
            },
          });
          return { PhysicalResourceId: existingObject.Id };
        } else {
          console.log('No differences found, no update necessary.');
          return { PhysicalResourceId: existingObject.Id };
        }
      } else {
        // Create a new object if it doesn't exist
        const createResponse = await axios.post(`${instance_url}/services/data/v52.0/sobjects/${salesforceObject.type}/`, salesforceObject.fields, {
          headers: {
            Authorization: `Bearer ${access_token}`,
            'Content-Type': 'application/json',
          },
        });
        return { PhysicalResourceId: createResponse.data.id };
      }
    } else if (event.RequestType === 'Delete') {
      // Handle delete operation if necessary
      // (Not implemented here but can be added as needed)
    }

    return { PhysicalResourceId: event.PhysicalResourceId };
  } catch (error) {
    console.error('Error:', error);
    throw error;
  }
};

function findDifferences(existingObject, desiredFields) {
  const differences = {};
  for (const [key, value] of Object.entries(desiredFields)) {
    if (existingObject[key] !== value) {
      differences[key] = value;
    }
  }
  return differences;
}

3. Create the CDK Stack

Now, let's create the CDK stack that uses our custom resource.

bin/salesforce-app.ts

#!/usr/bin/env node
import * as cdk from '@aws-cdk/core';
import { SalesforceStack } from '../lib/salesforce-stack';

const app = new cdk.App();
new SalesforceStack(app, 'SalesforceStack', {
  env: { account: process.env.CDK_DEFAULT_ACCOUNT, region: process.env.CDK_DEFAULT_REGION },
});

lib/salesforce-stack.ts

import * as cdk from '@aws-cdk/core';
import { SalesforceCustomResource } from './salesforce-custom-resource';

export class SalesforceStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    new SalesforceCustomResource(this, 'SalesforceCustomResource', {
      clientId: 'YOUR_CLIENT_ID',
      clientSecret: 'YOUR_CLIENT_SECRET',
      username: 'YOUR_USERNAME',
      password: 'YOUR_PASSWORD',
      securityToken: 'YOUR_SECURITY_TOKEN',
      salesforceObject: {
        type: 'Account',
        fields: {
          Name: 'New Account Name',
          // Add other fields as needed
        },
        uniqueField: 'Name',
        uniqueFieldValue: 'New Account Name',
      },
    });
  }
}

Synthesize and Deploy

Run the following commands to synthesize and deploy your stack:

cdk synth
cdk deploy

Explanation

  1. Custom Resource: The SalesforceCustomResource class defines a custom resource that uses a Lambda function to interact with the Salesforce API.
  2. Lambda Function: The Lambda function contains the logic to call Salesforce APIs, handling create and update operations based on CloudFormation lifecycle events.
  3. CDK Stack: The stack instantiates the SalesforceCustomResource with the necessary properties to configure the Salesforce object.

When you deploy the stack, the custom resource will trigger the Lambda function, which will authenticate with Salesforce and create or update the specified Salesforce object based on the defined fields. This approach leverages AWS CDK's ability to integrate with external systems using custom resources and Lambda functions.

AllanOricil commented 6 months ago

I hope ChatGPT-4o isn't hallucinating

Yes, you can create a custom resource with a specific namespace like Salesforce::Something. This is useful for creating custom resource types that resemble AWS native resource types. To achieve this, you need to define a custom resource provider and specify the custom resource type when creating the custom resource in your CDK stack.

Steps to Create a Custom Resource with a Custom Namespace

  1. Create the Lambda function to handle the custom resource.
  2. Create a custom resource provider.
  3. Define the custom resource with a custom namespace.

1. Create the Lambda Function to Handle the Custom Resource

This Lambda function will handle the creation, update, and deletion of Salesforce metadata.

lambda/index.js

const fs = require('fs');
const path = require('path');
const jsforce = require('jsforce');
const archiver = require('archiver');

exports.handler = async function(event, context) {
  const {
    CLIENT_ID,
    CLIENT_SECRET,
    USERNAME,
    PASSWORD,
    SECURITY_TOKEN
  } = process.env;

  try {
    // Authenticate with Salesforce
    const conn = new jsforce.Connection({
      oauth2: {
        clientId: CLIENT_ID,
        clientSecret: CLIENT_SECRET,
        loginUrl: 'https://login.salesforce.com'
      }
    });

    await conn.login(USERNAME, PASSWORD + SECURITY_TOKEN);

    if (event.RequestType === 'Create' || event.RequestType === 'Update') {
      const metadataDir = path.join(__dirname, 'metadata');
      const zipFilePath = '/tmp/package.zip';

      // Zip the metadata directory
      await zipDirectory(metadataDir, zipFilePath);

      // Read the zip file into a buffer
      const zipBuffer = fs.readFileSync(zipFilePath);

      // Deploy the metadata
      const deployResult = await conn.metadata.deploy(zipBuffer, { singlePackage: true }).complete();

      if (deployResult.success) {
        console.log('Deployment succeeded.');
        return { PhysicalResourceId: deployResult.id };
      } else {
        console.error('Deployment failed:', deployResult.details);
        throw new Error('Deployment failed.');
      }
    } else if (event.RequestType === 'Delete') {
      // Handle delete operation if necessary
      // (Not implemented here but can be added as needed)
    }

    return { PhysicalResourceId: event.PhysicalResourceId };
  } catch (error) {
    console.error('Error:', error);
    throw error;
  }
};

async function zipDirectory(sourceDir, outPath) {
  const archive = archiver('zip', { zlib: { level: 9 } });
  const stream = fs.createWriteStream(outPath);

  return new Promise((resolve, reject) => {
    archive
      .directory(sourceDir, false)
      .on('error', err => reject(err))
      .pipe(stream);

    stream.on('close', () => resolve());
    archive.finalize();
  });
}

2. Create a Custom Resource Provider

The custom resource provider is defined using the @aws-cdk/custom-resources module. This provider will use the Lambda function created above.

lib/salesforce-stack.ts

import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import * as cr from '@aws-cdk/custom-resources';
import { SalesforceMetadataConstruct } from './salesforce-metadata-construct';

export class SalesforceStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Define Salesforce metadata
    const metadataConstruct = new SalesforceMetadataConstruct(this, 'SalesforceMetadata', {
      metadata: [
        {
          type: 'CustomObject',
          members: ['MyCustomObject__c']
        },
        // Add more metadata types and members as needed
      ],
    });

    // Create the Lambda function
    const onEvent = new lambda.Function(this, 'OnEventHandler', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
      environment: {
        CLIENT_ID: 'YOUR_CLIENT_ID',
        CLIENT_SECRET: 'YOUR_CLIENT_SECRET',
        USERNAME: 'YOUR_USERNAME',
        PASSWORD: 'YOUR_PASSWORD',
        SECURITY_TOKEN: 'YOUR_SECURITY_TOKEN',
      },
    });

    const provider = new cr.Provider(this, 'SalesforceProvider', {
      onEventHandler: onEvent,
    });

    // Define the custom resource with a specific namespace
    new cdk.CustomResource(this, 'SalesforceResource', {
      serviceToken: provider.serviceToken,
      resourceType: 'Custom::Salesforce::Metadata',
      properties: {
        // Add any properties required for your custom resource
      },
    });
  }
}

3. Define the Custom Resource with a Custom Namespace

In the SalesforceStack class, you can define the custom resource with a custom namespace like Custom::Salesforce::Metadata.

Full Example of CDK Code

Here's the complete example to create a custom resource with the namespace Custom::Salesforce::Metadata:

lib/salesforce-stack.ts

import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import * as cr from '@aws-cdk/custom-resources';
import { SalesforceMetadataConstruct } from './salesforce-metadata-construct';

export class SalesforceStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Define Salesforce metadata
    const metadataConstruct = new SalesforceMetadataConstruct(this, 'SalesforceMetadata', {
      metadata: [
        {
          type: 'CustomObject',
          members: ['MyCustomObject__c']
        },
        // Add more metadata types and members as needed
      ],
    });

    // Create the Lambda function
    const onEvent = new lambda.Function(this, 'OnEventHandler', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
      environment: {
        CLIENT_ID: 'YOUR_CLIENT_ID',
        CLIENT_SECRET: 'YOUR_CLIENT_SECRET',
        USERNAME: 'YOUR_USERNAME',
        PASSWORD: 'YOUR_PASSWORD',
        SECURITY_TOKEN: 'YOUR_SECURITY_TOKEN',
      },
    });

    const provider = new cr.Provider(this, 'SalesforceProvider', {
      onEventHandler: onEvent,
    });

    // Define the custom resource with a specific namespace
    new cdk.CustomResource(this, 'SalesforceResource', {
      serviceToken: provider.serviceToken,
      resourceType: 'Custom::Salesforce::Metadata',
      properties: {
        // Add any properties required for your custom resource
      },
    });
  }
}

Synthesize and Deploy

Run the following commands to synthesize and deploy your stack:

cdk synth
cdk deploy

Explanation

  1. Custom Resource Type: By setting resourceType to Custom::Salesforce::Metadata, you create a custom resource with a specific namespace.
  2. Lambda Function and Provider: The Lambda function handles the custom resource logic, and the provider ensures the Lambda function is invoked for custom resource lifecycle events (create, update, delete).
  3. Custom Resource Definition: The custom resource uses the service token from the provider and specifies the custom resource type.

When you deploy this stack, the CloudFormation template will include the custom resource with the specified namespace, and the Lambda function will handle deploying the Salesforce metadata.

scolladon commented 6 months ago

It seems to be a lot of work... I wonder how generic we could be by using describe or the SDR repository directly.

AllanOricil commented 6 months ago

It would be a huge amount of work to make it work really well, because it would be necessary to create L1 constructs for every single metadata type, including constraints. The latter is not strictly necessary because the Metadata API would fail if wrong metadata is provided. However, implementing it would improve DX because it would allow devs to know what is wrong during synthesis, meaning they wouldn't have to wait for the API to return the error.

I think the minimum can be done for a PoC, just to enable a super simple deployment, like an object with a field.
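As an illustration, a minimal sketch of what an L1-style construct with synth-time validation could look like (everything below is hypothetical, not an existing library):

import { Construct } from 'constructs';

// Hypothetical props for a Salesforce CustomObject construct
export interface CustomObjectProps {
  readonly apiName: string;                 // e.g. 'Invoice__c'
  readonly fields?: Record<string, string>; // field API name -> field type
}

export class CustomObject extends Construct {
  constructor(scope: Construct, id: string, props: CustomObjectProps) {
    super(scope, id);

    // Fail at synth time instead of waiting for the Metadata API to reject the deployment
    if (!props.apiName.endsWith('__c')) {
      throw new Error(`Custom object API names must end with "__c": ${props.apiName}`);
    }
  }
}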

I will try ChatGPT's suggestion this week and share the repo with you and whoever else is interested.

AllanOricil commented 5 months ago

@scolladon I started it here

https://github.com/AllanOricil/cdk-salesforce-iac-poc

but I'm already facing some problems

https://github.com/aws/aws-cdk/issues/30339

scolladon commented 5 months ago

I'm closing this enhancement as it is being dealt with somewhere else 👍