neebs12 opened 1 year ago
For issue: tester count is hardcoded; it should instead be determined from the VU numbers.
```typescript
import * as config from './config.json';

/**
 * Reads the VU count for a region straight from config and returns the
 * per-container VU count and the desired container count for that region.
 * @param region string
 * @returns {VU: number, desiredCount: number}
 */
const getVUAndDesiredCountByRegion = (
  region: keyof typeof config.REMOTE_REGIONS
): { VU: number, desiredCount: number } => {
  const MAX_VU_PER_CONTAINER = 200;
  const regionVU = config.REMOTE_REGIONS[region];

  // First case: MAX_VU_PER_CONTAINER >= regionVU, so all VU fit in one container.
  if (MAX_VU_PER_CONTAINER >= regionVU) {
    return {
      VU: regionVU,
      desiredCount: 1,
    };
  }

  // Second case: MAX_VU_PER_CONTAINER < regionVU, so we split the VU evenly
  // across multiple containers, which can introduce a small VU error.
  const desiredCount = Math.ceil(regionVU / MAX_VU_PER_CONTAINER);
  const specificVU = Math.ceil(regionVU / desiredCount);
  return {
    VU: specificVU,
    desiredCount: desiredCount,
  };
};
```
See tests:
```typescript
/*
for the given config.json ...
{
  "DURATION": 2000,
  "HOME_REGION": "us-east-1",
  "REMOTE_REGIONS": {
    "ap-northeast-1": 5,
    "us-east-2": 200,
    "ca-central-2": 201,
    "ca-central-3": 203,
    "ca-central-4": 204,
    "us-west-1": 199,
    "ca-central-1": 1000,
    "eu-west-3": 2219
  }
}
*/
const regions = Object.keys(config.REMOTE_REGIONS) as Array<keyof typeof config.REMOTE_REGIONS>;
regions.forEach(region => {
  const { VU, desiredCount } = getVUAndDesiredCountByRegion(region);
  const finalVU = VU * desiredCount;
  const actualVU = config.REMOTE_REGIONS[region];
  console.log("------------------");
  console.log(`Region: ${region}, VU per container: ${VU}, desiredCount: ${desiredCount}, finalVU: ${finalVU}, actualVU: ${actualVU}, error: ${finalVU - actualVU}`);
});
```
Outputs are below; note the "error" column. I think this is acceptable: the error can reach `desiredCount - 1` at most. Considering the size of `MAX_VU_PER_CONTAINER`, this is proportionally acceptable, but we should take note of it.
```
------------------
Region: ap-northeast-1, VU per container: 5, desiredCount: 1, finalVU: 5, actualVU: 5, error: 0
------------------
Region: us-east-2, VU per container: 200, desiredCount: 1, finalVU: 200, actualVU: 200, error: 0
------------------
Region: ca-central-2, VU per container: 101, desiredCount: 2, finalVU: 202, actualVU: 201, error: 1
------------------
Region: ca-central-3, VU per container: 102, desiredCount: 2, finalVU: 204, actualVU: 203, error: 1
------------------
Region: ca-central-4, VU per container: 102, desiredCount: 2, finalVU: 204, actualVU: 204, error: 0
------------------
Region: us-west-1, VU per container: 199, desiredCount: 1, finalVU: 199, actualVU: 199, error: 0
------------------
Region: ca-central-1, VU per container: 200, desiredCount: 5, finalVU: 1000, actualVU: 1000, error: 0
------------------
Region: eu-west-3, VU per container: 185, desiredCount: 12, finalVU: 2220, actualVU: 2219, error: 1
```
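The `desiredCount - 1` error bound can be checked exhaustively. The sketch below re-derives the same split logic as `getVUAndDesiredCountByRegion` (standalone, without `config.json`, with `MAX_VU_PER_CONTAINER = 200` as above) and verifies the bound for every total VU count from 1 to 10000:

```typescript
// Standalone check of the VU-splitting error bound: for every total VU
// from 1 to 10000, split as in getVUAndDesiredCountByRegion and verify
// that finalVU - actualVU stays within [0, desiredCount - 1].
const MAX_VU_PER_CONTAINER = 200;

const split = (totalVU: number): { VU: number; desiredCount: number } => {
  if (totalVU <= MAX_VU_PER_CONTAINER) return { VU: totalVU, desiredCount: 1 };
  const desiredCount = Math.ceil(totalVU / MAX_VU_PER_CONTAINER);
  return { VU: Math.ceil(totalVU / desiredCount), desiredCount };
};

let maxError = 0;
for (let totalVU = 1; totalVU <= 10000; totalVU++) {
  const { VU, desiredCount } = split(totalVU);
  const error = VU * desiredCount - totalVU;
  if (error < 0 || error > desiredCount - 1) {
    throw new Error(`bound violated at totalVU=${totalVU}`);
  }
  maxError = Math.max(maxError, error);
}
console.log(`max observed error for totalVU in [1, 10000]: ${maxError}`);
```

The bound follows from `ceil(T / d) < T / d + 1`, so `d * ceil(T / d) < T + d`, i.e. the error is at most `d - 1`, and it is actually reached (e.g. `totalVU = 9801` splits into 50 containers of 197 VU, an error of 49).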
Ran into the following issue when running `npm run deploy:parallel:all`. Solved it by going to the home region's CloudFormation console and deleting the CDKToolkit stack.
CDK bootstrapping also fails sometimes for remote regions even though it may appear to be successful upon initial deployment.
Ran into the following error message after re-running `npm run deploy:parallel:all`. To resolve it, run `npm run destroy:parallel:all`, then go into CloudFormation and delete the CDKToolkit stack for the region that failed the bootstrapping process.
Throttling error when running `node ./src/index.js teardown-all`. AWS also appears to not allow my account to deploy to eu-central-2; no idea why.

Note: the error name is "TooManyRequestsException". See the config file (the culprit).
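One way to survive intermittent `TooManyRequestsException` throttling during teardown is to retry the offending SDK call with exponential backoff. A minimal sketch (`withBackoff` and `sendCommand` are hypothetical names, not part of the repo; the retryable error names match what AWS throttling errors report):

```typescript
// Sketch: retry an AWS SDK call with exponential backoff when throttled.
// `sendCommand` is a stand-in for any SDK v3 `client.send(...)` thunk.
const sleep = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

const withBackoff = async <T>(
  sendCommand: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 500,
): Promise<T> => {
  for (let attempt = 0; ; attempt++) {
    try {
      return await sendCommand();
    } catch (err: unknown) {
      const name = (err as { name?: string }).name;
      const retryable =
        name === "TooManyRequestsException" || name === "ThrottlingException";
      if (!retryable || attempt >= maxAttempts - 1) throw err;
      // Exponential backoff: 500ms, 1s, 2s, ... before the next attempt.
      await sleep(baseDelayMs * 2 ** attempt);
    }
  }
};
```

Each teardown call could then be wrapped, e.g. `await withBackoff(() => client.send(new DeleteTableCommand({...})))`, instead of letting the first throttle abort the whole teardown.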
Outside of just reducing the polling rate (which should itself be a future-work topic: replace polling with server-sent events or pub/sub), we can probably put finding and expanding performance bounds for just about everything in future work.
Overview
Errors and Resolution
`aws/lambda/orchestrator/index.js` is commented out while we are still developing; uncomment it if you want to test. Resolved.

Used a `for` loop instead of `.forEach` for a delayed loop. Also added a further 500ms delay for each table-creation write to ensure RateLimitError is not hit. Implemented a solution in a soon-to-be-made PR; the specific file is `/scripts/utils/clearTimestream.js`. @A-Thresher feel free to test again. I have tested with 4 regions (thus 8 tables in the current schema) and it works as required.

A `.ts` script has a fn which specifies max VU per container; see the solution in the comments above.

`await timestreamWriteClient.send(new DeleteTableCommand({...}))` in `/scripts/utils/clearTimestream.js` is not guaranteed to delete a table, only to request its deletion. Consider applying a recursive fn with a base case of `noOfTables` being `undefined`, with a sleep of 500 to 1000ms, to also avoid any potential rate-limit errors from queries to the client.
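The recursive delete-and-verify idea for `clearTimestream` could be sketched as follows. This is written against a minimal structural client interface so it runs without AWS credentials; in the real script the client would be a `TimestreamWriteClient` and the calls would be `DeleteTableCommand` / `ListTablesCommand` sends from `@aws-sdk/client-timestream-write`:

```typescript
// Minimal structural stand-ins for the Timestream Write client calls.
interface TableSummary { TableName?: string }
interface TimestreamLike {
  listTables(databaseName: string): Promise<TableSummary[] | undefined>;
  deleteTable(databaseName: string, tableName: string): Promise<void>;
}

const sleep = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

// DeleteTable only *requests* deletion, so after issuing the requests we
// re-list the tables and recurse until none remain (the base case),
// sleeping between calls to stay under the API rate limit.
const deleteAllTables = async (
  client: TimestreamLike,
  databaseName: string,
): Promise<void> => {
  const tables = await client.listTables(databaseName);
  if (!tables || tables.length === 0) return; // base case: all tables gone

  for (const t of tables) {
    if (t.TableName) await client.deleteTable(databaseName, t.TableName);
    await sleep(500); // spread out delete requests to avoid throttling
  }
  await sleep(1000); // give requested deletions time to complete
  return deleteAllTables(client, databaseName);
};
```

The recursion terminates once the list comes back empty, so the script no longer reports success while deletions are still only pending.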