edmunds / shadowreader

Serverless load testing for replaying website traffic. Powered by AWS Lambda.
Apache License 2.0

Running into an error trying to play back parsed logs #57

Open lkamenkovich opened 5 years ago

lkamenkovich commented 5 years ago

Hello, I have followed the instructions to parse logs from the local file, and it successfully parsed the logs into the local-parsed-logs S3 bucket. Now, when I try to send traffic from the parsed logs, I run into this error: An error occurred: SrDataBucketS3Bucket - local-parsed-logs already exists. How can I run ShadowReader so that it skips the parsing step (since that was already done from the local file) and only does the playback? Thank you very much.

ysawa0 commented 5 years ago

Hi @lkamenkovich, thanks for reporting this. Can you post here the contents of your shadowreader.yml and serverless.yml files? I suspect what is happening is that Serverless is trying (and failing) to create a local-parsed-logs bucket because that is what parsed_data_bucket is set to in serverless.yml. Most likely if you set parsed_data_bucket to something else, it should deploy.
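For reference, that value lives under the custom section of serverless.yml and is referenced as ${self:custom.parsed_data_bucket} by the bucket resource. A minimal sketch of the setting, with a hypothetical placeholder name:

  custom:
    # Must be a globally unique S3 bucket name that this stack is allowed
    # to create (placeholder shown; substitute your own).
    parsed_data_bucket: my-unique-parsed-logs-bucket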

lkamenkovich commented 5 years ago

Hello, Yuki,

Thank you for your fast response.

Yes, the error happens when I point it to local-parsed-logs, because this is where I have the previously parsed logs stored. If I put some other value in serverless.yml, then it creates an empty bucket, but does it play back the logs from local-parsed-logs? The output then looks like this:

  Serverless: Stack update finished...
  Service Information
  service: sr
  stage: dev
  region: us-west-2
  stack: sr-dev
  resources: 15
  api keys:
    None
  endpoints:
    None
  functions:
    orchestrator-past: sr-dev-orchestrator-past
    producer: sr-dev-producer
    consumer-worker: sr-dev-consumer-worker
    consumer-master-past: sr-dev-consumer-master-past
  layers:
    None
  Serverless: Removing old service artifacts from S3...

Attached are my serverless.yml and shadowreader.yml files.

Thank you very much for your help.


ysawa0 commented 5 years ago

I don't think the files were attached successfully. Could you try it again?

lkamenkovich commented 5 years ago

Here are the yml files in text format, since I could not attach .yml files directly. So, if I set parsed_data_bucket to local-parsed-logs (where the parsed local logs from the previous run of the local parser.py are stored), it gives me the error: SrDataBucketS3Bucket - local-parsed-logs already exists. If I set it to something else, it creates an empty bucket and no traffic is played back.

Thank you, Larissa

serverless-yml.txt
Shadoreader-yml.txt

ysawa0 commented 5 years ago

Thanks for posting the configs. Can you try this? Set parsed_data_bucket to local-parsed-logs, then comment out this whole section in serverless.yml:

  SrDataBucketS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: ${self:custom.parsed_data_bucket}
      Tags:
        -
          Key: "Name"
          Value: "shadowreader"

      LifecycleConfiguration:
        Rules:
        - Id: ExpireDatain30Days
          Prefix: ''
          Status: Enabled
          ExpirationInDays: '30'

This will prevent Serverless from trying to deploy a bucket with the same name.
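For reference, the commented-out version would start like this (comment out every line of the block, down through ExpirationInDays; only the first few lines are shown):

  #  SrDataBucketS3Bucket:
  #    Type: AWS::S3::Bucket
  #    Properties:
  #      BucketName: ${self:custom.parsed_data_bucket}
  #      ...

After that, running serverless deploy again should update the stack without attempting to create local-parsed-logs, and playback can use the existing bucket.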