I've created a simple welcome bot (a demo bot) and tested it.
We performed load testing of the Converse API using a benchmarking tool (hosted on a separate AWS EC2 t2.medium instance) and analyzed the results below.
Number of users -> 100
Messages per user -> 5
Number of servers/nodes -> 2
Instance type -> t2.medium
Total number of messages -> 500
Average latency (ms) -> 2221
Minimum latency (ms) -> 535
Maximum latency (ms) -> 3486
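For reference, a minimal sketch of the kind of harness behind these numbers (the send function, user count, and message text here are assumptions for illustration, not the actual benchmark tool):

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_load_test(send_fn, n_users=100, messages_per_user=5):
    """Simulate n_users concurrent users, each sending messages_per_user
    sequential messages. send_fn(user_id, text) performs one Converse API
    call. Returns per-message latencies in milliseconds."""
    def user_session(uid):
        latencies = []
        for i in range(messages_per_user):
            t0 = time.perf_counter()
            send_fn(uid, f"hello {i}")  # one request/response round trip
            latencies.append((time.perf_counter() - t0) * 1000.0)
        return latencies

    all_latencies = []
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        futures = [pool.submit(user_session, u) for u in range(n_users)]
        for fut in as_completed(futures):
            all_latencies.extend(fut.result())
    return all_latencies

def summarize(latencies_ms):
    """Compute the same summary stats reported above, plus p95."""
    return {
        "total": len(latencies_ms),
        "avg_ms": statistics.mean(latencies_ms),
        "min_ms": min(latencies_ms),
        "max_ms": max(latencies_ms),
        "p95_ms": statistics.quantiles(latencies_ms, n=20)[-1],
    }
```

A percentile such as p95 is often more informative than the average alone, since a few slow outliers (e.g. NLU cold starts) can dominate the mean.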
We need your input on the following:
We believe the above performance of the Converse API is below expectations for this configuration. Please correct us if that assessment is wrong, or guide us on how to improve it.
We are performing the load testing of the Converse API with the following configuration:
Server configuration:
EC2 instance | Instance type: t2.medium, vCPU: 2 cores, RAM: 4 GB, Storage: 25 GB
RDS instance (PostgreSQL) | Instance type: db.t3.medium, vCPU: 2, RAM: 4 GB, Storage: 19 GiB
ElastiCache (Redis) | Node type: cache.t2.micro
Application Load Balancer | Server selection strategy: round robin with sticky sessions
Environment variables:
DATABASE_URL=postgres://xxxxx:xxxx@xxxx.us-east-2.rds.amazonaws.com:5432/botpress
PRO_ENABLED=true
CLUSTER_ENABLED=true
REDIS_URL=redis://xxxxxx.ng.0001.use2.cache.amazonaws.com:6379
BP_REDIS_SCOPE=staging
BPFS_STORAGE=database
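For context, a single benchmarked call looks roughly like the following sketch (the host, bot ID, and user ID are placeholders; the path assumes the Botpress Converse endpoint `POST /api/v1/bots/{botId}/converse/{userId}`):

```python
import json
from urllib.request import Request

def converse_request(host, bot_id, user_id, text):
    """Build one Converse API request: a POST with a JSON text payload.
    Host, bot_id, and user_id are caller-supplied placeholders."""
    url = f"http://{host}/api/v1/bots/{bot_id}/converse/{user_id}"
    body = json.dumps({"type": "text", "text": text}).encode("utf-8")
    return Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Each such request exercises the full pipeline (load balancer, Botpress node, NLU, Redis, and PostgreSQL), so latency reflects the whole chain, not just the bot logic.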
Your quick response would be highly appreciated.