dbhowell / pino-cloudwatch

AWS CloudWatch Logs transport for pino
MIT License
38 stars, 13 forks

Allow config.json for options #7

Open dbhowell opened 7 years ago

arvi commented 6 years ago

Hello @dbhowell, do you have any guide or tips on how to do this? I don't want to expose my AWS access key ID and secret key in my repository if I do something like:

start: "cross-env NODE_ENV=dev nodemon bin/dev | pino-cloudwatch --aws_access_key_id=ABCDE12345 --aws_secret_access_key=12EFGHI --aws_region=us-east-2 --group=/var/log/messages",

So far this setup works, but I don't know how to configure it so that my credentials aren't exposed in the repo.

Thanks a lot.

arvi commented 6 years ago

I also tried doing something like this, but it's not working:

cloudwatch.js

const { exec } = require('child_process');
require('dotenv').config();

exec(
  `pino-cloudwatch --aws_access_key_id=${
    process.env.AWS_CLOUDWATCH_ACCESS_ID
  } --aws_secret_access_key=${process.env.AWS_CLOUDWATCH_ACCESS_KEY} --aws_region=${
    process.env.AWS_CLOUDWATCH_REGION
  } --group=${process.env.AWS_CLOUDWATCH_LOG_GROUP_NAME}`,
  (err, stdout, stderr) => {
    if (err) {
      console.log(err);
      // node couldn't execute the command
      return;
    }

    // the *entire* stdout and stderr (buffered)
    console.log(`stdout: ${stdout}`);
    console.log(`stderr: ${stderr}`);
  },
);

package.json

start: "cross-env NODE_ENV=dev nodemon bin/dev | node cloudwatch.js"
dbhowell commented 6 years ago

Hi @arvi , sorry if you've already tried this but if you omit the aws_secret_access_key and aws_access_key_id does it pick up the environment variables?
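For reference, the script would then look something like this (region and group values taken from the earlier example; whether the credentials are picked up depends on the AWS SDK's default credential chain):

```
start: "cross-env NODE_ENV=dev nodemon bin/dev | pino-cloudwatch --aws_region=us-east-2 --group=/var/log/messages",
```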

arvi commented 6 years ago

Hello @dbhowell,

It's okay now after n attempts 😅. I finally figured out how to avoid exposing them without having to set my environment variables one by one before the start script, e.g. AWS_SECRET_ACCESS_KEY=12345 node index.js.

I used the better-npm-run package, which reads my .env file before the start script runs, so I can use those environment variables in the start script itself.

Sharing my implementation in case others don't want to expose their pino-cloudwatch options in their git repository.

  "scripts": {
    "start": "better-npm-run build:dev"
  },
  "betterScripts": {
    "build:dev": "DEBUG=dbg:* cross-env NODE_ENV=dev nodemon bin/dev | pino-cloudwatch --aws_access_key_id=${AWS_CLOUDWATCH_ACCESS_ID} --aws_secret_access_key=${AWS_CLOUDWATCH_ACCESS_KEY} --aws_region=${AWS_CLOUDWATCH_REGION} --group=${AWS_CLOUDWATCH_LOG_GROUP_NAME}"
  },
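For completeness, the matching .env file would look something like this (placeholder values; this file stays out of the repository, e.g. via .gitignore):

```
AWS_CLOUDWATCH_ACCESS_ID=your-access-key-id
AWS_CLOUDWATCH_ACCESS_KEY=your-secret-access-key
AWS_CLOUDWATCH_REGION=us-east-2
AWS_CLOUDWATCH_LOG_GROUP_NAME=/var/log/messages
```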

Thanks for your work on pino-cloudwatch. Great job ❤️

dbhowell commented 6 years ago

I'm glad you got around it. Would it help if pino-cloudwatch used process.env.AWS_SECRET_ACCESS_KEY and process.env.AWS_ACCESS_KEY_ID by default, then you wouldn't need to put them in as arguments in the build:dev script?
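A minimal sketch of what that default could look like (resolveCredentials is a hypothetical helper, not current pino-cloudwatch code):

```javascript
// Hypothetical sketch: prefer the explicit CLI options, and fall back
// to the standard AWS environment variables when they are omitted.
function resolveCredentials(opts, env) {
  return {
    accessKeyId: opts.aws_access_key_id || env.AWS_ACCESS_KEY_ID,
    secretAccessKey: opts.aws_secret_access_key || env.AWS_SECRET_ACCESS_KEY,
  };
}
```

With that in place, the build:dev script could drop the two key arguments entirely.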

arvi commented 6 years ago

Hello @dbhowell, I created a pull request in which the user can also choose to load credentials from a config.json or from the shared ini credentials file. I have tried my forked package with these changes and both options work as expected.

With regards to your process.env comment: based on my understanding of the AWS SDK, it looks for credentials in the environment variables when none are supplied explicitly, and otherwise falls back to the IAM role attached to the EC2 instance. I haven't tried that personally, though.
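For anyone unfamiliar with the shared ini credentials option, the file the AWS SDK reads by default lives at ~/.aws/credentials and looks like this (placeholder values):

```
[default]
aws_access_key_id = your-access-key-id
aws_secret_access_key = your-secret-access-key
```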

ulver2812 commented 6 years ago

> I'm glad you got around it. Would it help if pino-cloudwatch used process.env.AWS_SECRET_ACCESS_KEY and process.env.AWS_ACCESS_KEY_ID by default, then you wouldn't need to put them in as arguments in the build:dev script?

This would be great. I need this in my project :-)

arvi commented 6 years ago

@ulver2812, I already created a pull request for that feature, but it has not yet been merged by the author. In the meantime I use another npm package called better-npm-run, and my package.json looks like this.

  "scripts": {
    "start": "better-npm-run build:dev"
  },
  "betterScripts": {
    "build:dev": {
      "command": "DEBUG=dbg:* nodemon bin/dev | pino-cloudwatch --aws_access_key_id=${AWS_CLOUDWATCH_ACCESS_ID} --aws_secret_access_key=${AWS_CLOUDWATCH_ACCESS_KEY} --aws_region=${AWS_CLOUDWATCH_REGION} --group=${AWS_CLOUDWATCH_LOG_GROUP_NAME} --prefix=${AWS_CLOUDWATCH_PREFIX}",
      "env": {
        "NODE_ENV": "development"
      }
    }
  },

The AWS config values are in my .env file :smile:

tgouala commented 4 years ago

I found another way that doesn't need an extra package such as better-npm-run. I'm not sure it's the most conventional way of using child processes, so don't hesitate to comment if necessary.

  1. I wrote a script, cloudwatchLaunch.js, that spawns the pino-cloudwatch process with the right environment variables and pipes stdin from the parent process to the child process.
import { spawn } from "child_process";

const child = spawn("pino-cloudwatch", [
  `--aws_access_key_id=${process.env.AWS_ACCESS_KEY_ID}`,
  `--aws_secret_access_key=${process.env.AWS_SECRET_ACCESS_KEY}`,
  `--aws_region=${process.env.AWS_REGION}`,
  `--group=${process.env.AWS_CLOUDWATCH_LOG_GROUP_NAME}`,
]);

process.stdin.pipe(child.stdin);
  2. I ensure my environment variables are correctly loaded by calling this script in package.json:
    "scripts": {
      "serve": "node -r dotenv/config ./server | node -r dotenv/config ./cloudwatchLaunch"
    }

One inconvenience of this method is that I miss some logs during the time it takes the child process to start. I have not yet tested what happens if the child process fails.