**Open** · jjohnson1994 opened this issue 3 years ago
When I start serverless offline (`serverless offline start`), any requests from previous sessions are replayed. I think this then leads Node to run out of memory, though Node doesn't run out of memory every time.
My lambda consuming the DynamoDB stream:
```js
export const handler = async (event, context) => {
  try {
    const promises = event.Records.map(record => {
      const { eventName } = record;
      const { S: model } = {
        ...record.dynamodb.NewImage,
        ...record.dynamodb.OldImage
      }.model;
      const topicArn = generateTopicArn(
        eventName,
        model,
        awsRegion,
        IS_OFFLINE === 'true' ? '123456789012' : awsAccountId
      );

      return SNS.publish({
        Message: JSON.stringify(record),
        TopicArn: topicArn
      }).promise();
    });

    await Promise.all(promises);

    return 200;
  } catch (error) {
    console.error('error in stream', error);
    throw error;
  }
};
```
My serverless config:
```yaml
org: jjohnson1994
app: test_app
service: test_app

plugins:
  - serverless-dotenv-plugin
  - serverless-webpack
  - serverless-dynamodb-local
  - serverless-offline-sns
  - serverless-offline-dynamodb-streams
  - serverless-offline

custom:
  tableName: 'test_app'
  serverless-offline:
    httpPort: 3001
  webpack:
    webpackConfig: 'webpack.config.js'
  dynamodb:
    stages: dev
    start:
      migrate: true
  serverless-offline-dynamodb-streams:
    apiVersion: '2013-12-02'
    endpoint: http://0.0.0.0:8000
    region: ${env:AWS_REGION}
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false
    readInterval: 500
  serverless-offline-sns:
    port: 4002
    debug: false

package:
  individually: true

provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: eu-west-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
        - sns:Publish
        - sns:Subscribe
        - s3:PutObject
        - s3:PutObjectAcl
      Resource:
        - "*"
  environment:
    DB: ${self:custom.tableName}
    NODE_ENV: ${env:NODE_ENV}
    AWS_ACCOUNT_ID: ${env:AWS_ACCOUNT_ID}

functions:
  app:
    handler: index.handler
    events:
      - http:
          method: ANY
          path: /
          cors: true
      - http:
          method: ANY
          path: '{proxy+}'
          cors: true
  stream:
    handler: functions/stream/stream.handler
    events:
      - stream:
          enabled: true
          type: dynamodb
          batchSize: 1
          startingPosition: TRIM_HORIZON
          arn:
            Fn::GetAtt:
              - DynamoDBTestApp
              - StreamArn

resources:
  Resources:
    DynamoDBTestApp:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        TableName: ${self:custom.tableName}
        ProvisionedThroughput:
          ReadCapacityUnits: 20
          WriteCapacityUnits: 20
        StreamSpecification:
          StreamViewType: NEW_AND_OLD_IMAGES
        AttributeDefinitions:
          - AttributeName: hk
            AttributeType: S
          - AttributeName: sk
            AttributeType: S
          - AttributeName: model
            AttributeType: S
          - AttributeName: slug
            AttributeType: S
        KeySchema:
          - AttributeName: hk
            KeyType: HASH
          - AttributeName: sk
            KeyType: RANGE
        GlobalSecondaryIndexes:
          - IndexName: gsi1
            KeySchema:
              - AttributeName: model
                KeyType: HASH
              - AttributeName: sk
                KeyType: RANGE
            Projection:
              ProjectionType: ALL
            ProvisionedThroughput:
              ReadCapacityUnits: 5
              WriteCapacityUnits: 5
          - IndexName: gsi2
            KeySchema:
              - AttributeName: model
                KeyType: HASH
              - AttributeName: slug
                KeyType: RANGE
            Projection:
              ProjectionType: ALL
            ProvisionedThroughput:
              ReadCapacityUnits: 5
              WriteCapacityUnits: 5
```
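One detail worth noting in the config above: the stream event uses `startingPosition: TRIM_HORIZON`, which in DynamoDB Streams means "start from the oldest record still in the stream". If the offline streams plugin honors this setting, that may explain why records from previous sessions are re-read on every restart. A possible mitigation (a guess, not a confirmed fix for this issue) would be to only read records written after startup:

```yaml
    events:
      - stream:
          enabled: true
          type: dynamodb
          batchSize: 1
          startingPosition: LATEST  # only records written after the consumer starts
          arn:
            Fn::GetAtt:
              - DynamoDBTestApp
              - StreamArn
```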
When starting the offline server, the terminal logs each repeated request:
Then running out of memory:
The expected behavior is that each record is only delivered once (unless the lambda returns an error response, in which case it can be replayed?).
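As a stopgap while replays happen, the handler could skip records it has already processed by tracking stream `eventID`s. A minimal sketch, assuming replayed records keep their original `eventID`; `seenEventIds` and `dedupeRecords` are illustrative names, not part of any plugin:

```javascript
// Illustrative workaround: drop stream records whose eventID was already seen.
// NOTE: an in-memory Set only lives as long as the process; surviving restarts
// would require a persistent store (e.g. a DynamoDB table keyed by eventID).
const seenEventIds = new Set();

const dedupeRecords = (records) =>
  records.filter((record) => {
    if (seenEventIds.has(record.eventID)) {
      return false; // replayed record, skip it
    }
    seenEventIds.add(record.eventID);
    return true;
  });
```

The handler would then call `dedupeRecords(event.Records)` before mapping records to SNS publishes, so replays become no-ops instead of piling up work.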
Seeing similar with S3. Config in serverless.yml:
```yaml
serverless-offline-s3:
  endpoint: http://localstack:4566
```