christophgysin opened 5 years ago
Yeah, I'm going to make this the default, and add a monkey-patch to the remove command for cases where an existing REST API has already been migrated, so it should play nicely for new and existing users. I'll try to get this out today.
Adding `serverless-associate-waf` to the list of plugins this is causing issues with.
We too have encountered this problem when using `serverless-plugin-split-stacks` together with `serverless-domain-manager`, and I have been able to work around the immediate error by adding the following to `stacks-map.js`:

```js
module.exports = {
  'AWS::ApiGateway::Resource': false,
  'AWS::ApiGateway::RestApi': false
};
```
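For completeness, the plugin picks up a `stacks-map.js` from the service root automatically; enabling the plugin itself looks roughly like this (a sketch based on the plugin's documented `custom` config, not taken from this thread):

```yaml
# serverless.yml (sketch): enable the plugin; per-type splitting is its default
plugins:
  - serverless-plugin-split-stacks

custom:
  splitStacks:
    perType: true
```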
However, we now have 27 endpoints in the API we have constructed with Serverless and are still breaching the 200-resource limit. The deploy sequence produces the following output (scaled back to a point where it works; one more API and it fails):
```
Serverless: [serverless-plugin-split-stacks]: Summary: 35 resources migrated in to 2 nested stacks
Serverless: [serverless-plugin-split-stacks]: Resources per stack:
Serverless: [serverless-plugin-split-stacks]: - (root): 200
Serverless: [serverless-plugin-split-stacks]: - PermissionsNestedStack: 8
Serverless: [serverless-plugin-split-stacks]: - VersionsNestedStack: 27
```
It's clear that the root stack already has 200 items in it! I understand that my adjustment of the plugin configuration changes the way the stacks are broken down, but making `serverless-domain-manager` work with the aforementioned configuration leaves the plugin rather conservative in its splitting. Is there a cunning way to achieve this, perhaps a different configuration, which would be ideal for us?
We are also using `serverless-aws-documentation`, which I've seen mentioned in some tickets regarding similar problems. Could a config adjustment there also make this work?
I've been able to fix our problem with config such as...

`serverless.yml`:

```yaml
splitStacks:
  perType: true
```

`stacks-map.js`:

```js
module.exports = (resource, logicalId) => {
  if (
    logicalId.startsWith('ServerlessDeploymentBucket') ||
    logicalId.startsWith('ApiGatewayResource') ||
    logicalId.startsWith('ApiGatewayRestApi')
  )
    return false;
  else if (logicalId.endsWith('LogGroup')) return { destination: 'LogGroup' };
  else if (logicalId.endsWith('LambdaExecution') || logicalId.endsWith('LambdaFunction'))
    return { destination: 'Lambda' };
  else if (logicalId.startsWith('ApiGatewayMethod')) return { destination: 'ApiGatewayMethod' };
};
```
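The routing logic in that `stacks-map.js` can be sanity-checked outside of Serverless; this is a standalone sketch, and the logical IDs used below are made-up examples, not resources from the deployment above:

```javascript
// Standalone copy of the stacks-map routing function. It receives
// (resource, logicalId) and returns false (keep in root stack),
// { destination } (move to that nested stack), or undefined (plugin default).
const route = (resource, logicalId) => {
  if (
    logicalId.startsWith('ServerlessDeploymentBucket') ||
    logicalId.startsWith('ApiGatewayResource') ||
    logicalId.startsWith('ApiGatewayRestApi')
  )
    return false;
  else if (logicalId.endsWith('LogGroup')) return { destination: 'LogGroup' };
  else if (logicalId.endsWith('LambdaExecution') || logicalId.endsWith('LambdaFunction'))
    return { destination: 'Lambda' };
  else if (logicalId.startsWith('ApiGatewayMethod')) return { destination: 'ApiGatewayMethod' };
};

console.log(route({}, 'ApiGatewayRestApi'));        // false → stays in root
console.log(route({}, 'HelloLambdaFunction'));      // { destination: 'Lambda' }
console.log(route({}, 'ApiGatewayMethodUsersGet')); // { destination: 'ApiGatewayMethod' }
```

The key point is that the API Gateway REST API and its path resources return `false`, which is what keeps `serverless-domain-manager` working.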
In our case, we completely removed the deployment. When re-deploying using this configuration, the output is as follows:
Serverless: [serverless-plugin-split-stacks]: Summary: 179 resources migrated in to 5 nested stacks
Serverless: [serverless-plugin-split-stacks]: Resources per stack:
Serverless: [serverless-plugin-split-stacks]: - (root): 59
Serverless: [serverless-plugin-split-stacks]: - ApiGatewayMethodNestedStack: 46
Serverless: [serverless-plugin-split-stacks]: - LambdaNestedStack: 28
Serverless: [serverless-plugin-split-stacks]: - LogGroupNestedStack: 27
Serverless: [serverless-plugin-split-stacks]: - PermissionsNestedStack: 51
Serverless: [serverless-plugin-split-stacks]: - VersionsNestedStack: 27
AWS was a bit painful with the deployment removal: I had to manually remove LogGroups that the `sls remove -s dev` command had failed to delete. Prior to that I was getting some deployment failures, always around LogGroups.
I encountered the same issue today as TS.

When using `serverless-plugin-split-stacks` together with `serverless-domain-manager`, deployment fails: `serverless-domain-manager` expects the resource `AWS::ApiGateway::RestApi` to be in the root stack.

A simple fix is to prevent it from being moved to a nested stack, using `stacks-map.js`.

It seems that similar issues are caused by moving `AWS::ApiGateway::RestApi` to a nested stack; see #24 and #9.