quipucords / yupana

Service app to process & upload data from quipucords and Satellite to the host-based inventory
GNU General Public License v3.0

User Story: Performance testing and improvements to support 500 customers #224

Closed: kholdaway closed this issue 4 years ago

kholdaway commented 5 years ago

Feature description:

Performance testing and improvements to support 500 customers. This is not necessarily a large story, but we need to make sure we can support the load of a large customer base.

Performance Testing Spreadsheet: https://docs.google.com/spreadsheets/d/1WXdm8mOYzeELt56gnoZupulEeM47z3Z-cMgSe-vBVBk/edit?usp=sharing

myersCody commented 4 years ago

At the 200K report level we hit:

Payload too large: 20159581. Should not exceed 11010048 bytes

We need to see if CI is the same; it may not be.

CI/QA/Prod are set to 100 MB.
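For context, a quick back-of-the-envelope check (a hypothetical snippet, not part of either repo) shows why the 200K report fails locally but would pass in CI/QA/Prod:

import os

LOCAL_MAX_LENGTH = 11010048   # insights_upload local default (~10.5 MB)
PROD_MAX_LENGTH = 100000000   # CI/QA/Prod ingress limit (100 MB)

def payload_fits(path, limit):
    """Return True if the archive at `path` is within the given limit."""
    size = os.path.getsize(path)
    print(f"{path}: {size} bytes (limit {limit})")
    return size <= limit

# The 200K report above was 20159581 bytes: over the local default,
# but well under the prod limit.
assert 20159581 > LOCAL_MAX_LENGTH
assert 20159581 < PROD_MAX_LENGTH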

Within the E2E repo, the variable that controls the max file size is called INGRESS_MAXSIZE. If you are running locally, the variable is called MAX_LENGTH and lives in insights_upload/utils/config.py.

Although the insights_upload repo is set up to let you override the defaults with environment variables:

MAX_LENGTH = int(os.getenv('MAX_LENGTH', 11010048))

we cannot simply add MAX_LENGTH to our .host_env, because the insights_upload Dockerfile is not currently set up to accept all environment variable changes (see the Dockerfile).

For example, with the current docker-compose setup you would have to add MAX_LENGTH to the environment section below; a sketch of the modified section follows the snippet.

environment:
      - AWS_ACCESS_KEY_ID=$MINIO_ACCESS_KEY
      - AWS_SECRET_ACCESS_KEY=$MINIO_SECRET_KEY
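For illustration, the modified section might look like this (a sketch of an assumed change, not a committed config; the 100000000 value mirrors the 100 MB prod limit discussed above):

environment:
      - AWS_ACCESS_KEY_ID=$MINIO_ACCESS_KEY
      - AWS_SECRET_ACCESS_KEY=$MINIO_SECRET_KEY
      # hypothetical addition: override the 11010048-byte default in config.py
      - MAX_LENGTH=100000000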

Instead, I just changed the value in config.py:

MAX_LENGTH = int(os.getenv('MAX_LENGTH', 100000000))

It was easier for me to automate with sed, replacing 11010048 => 100000000, just to test local yupana performance; see the one-liner below.
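Something along these lines would do it (a sketch, assuming GNU sed and the config.py path mentioned above):

# Bump the local upload size limit from the ~10.5 MB default to 100 MB.
sed -i 's/11010048/100000000/' insights_upload/utils/config.py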

We discovered that the upload setup we were using for local development is outdated, so I have transitioned to working on this issue to get it back up to date with the current workflow: https://github.com/quipucords/yupana/issues/270

abaiken commented 4 years ago

We tested locally and documented the findings here: https://docs.google.com/spreadsheets/d/1WXdm8mOYzeELt56gnoZupulEeM47z3Z-cMgSe-vBVBk/edit?usp=sharing