mobile-n10n:eventconsumer to CODE
mobile-n10n:schedule to CODE
mobile-n10n:football to CODE
mobile-n10n:fakebreakingnewslambda to CODE
mobile-n10n:reportextractor to CODE
mobile-n10n:report to CODE
mobile-n10n:notification to CODE
mobile-n10n:slomonitor to CODE
mobile-n10n:registration to CODE
mobile-n10n:notificationworkerlambda to CODE
What does this change?
Editorial reported that they sent a breaking news notification via ed tools, but no notifications were received. The notification record was not found on the Ophan dashboard either.
After investigation, it was found that the breaking news tool sent an HTTP request to our notification endpoint but received a 503 response.
The CloudWatch metric (ELB 5xx) for the notification API's load balancer showed that a 5xx response was served by the load balancer at that time. One of the EC2 instances was also terminated due to a health check failure around the same minute.
We noticed that the EC2 instances of the notification service failed health checks from time to time. The application logs of an unhealthy instance did not show any exceptions or error messages, but the OS syslog indicated an out-of-memory error at the process level.
I believe the JVM of the service ran out of system memory while it was expanding its heap: we run on a t4g.micro instance, which has only 1G of memory, yet the notification service and the AWS Kinesis agent have max heap sizes of 256M and 512M respectively. That is 768M of heap alone, leaving little headroom for off-heap memory, the OS and other processes. This PR changes the CloudFormation stack to use a bigger EC2 instance, t4g.small, which has 2G of memory.
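The actual change is the instance type in the repo's CloudFormation template. Purely as an illustration, the sketch below shows how the same bump would look if the autoscaling group were defined with AWS CDK in TypeScript; the construct names, VPC setup, AMI choice and capacity values are hypothetical, not the real stack.

```typescript
// Illustrative only: the real stack is a CloudFormation template in this repo.
// Names, capacities and the VPC/AMI choices below are hypothetical.
import { App, Stack } from "aws-cdk-lib";
import { AutoScalingGroup } from "aws-cdk-lib/aws-autoscaling";
import {
  AmazonLinuxCpuType,
  InstanceClass,
  InstanceSize,
  InstanceType,
  MachineImage,
  Vpc,
} from "aws-cdk-lib/aws-ec2";

const app = new App();
const stack = new Stack(app, "NotificationStack");
const vpc = new Vpc(stack, "NotificationVpc", { maxAzs: 2 });

new AutoScalingGroup(stack, "NotificationAsg", {
  vpc,
  // t4g is Graviton (ARM), so pick an ARM64 image.
  machineImage: MachineImage.latestAmazonLinux2023({
    cpuType: AmazonLinuxCpuType.ARM_64,
  }),
  // Was InstanceSize.MICRO (1G of RAM). t4g.small gives 2G, leaving headroom
  // for the 256M + 512M max heaps plus off-heap memory and the OS.
  instanceType: InstanceType.of(InstanceClass.T4G, InstanceSize.SMALL),
  minCapacity: 1,
  maxCapacity: 2,
});

app.synth();
```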
How to test
I applied this bigger EC2 instance type on CODE and the instance has stayed healthy for more than a day. It may be good to apply it on PROD too and see whether the problem of instances becoming unhealthy goes away.
How can we measure success?
No instances become unhealthy.
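One way to keep an eye on this, sketched below in the same hypothetical CDK style as above, would be an alarm on the load balancer's UnHealthyHostCount metric so we are notified if instances start failing health checks again. The dimension values and threshold are assumptions; a classic ELB would use the AWS/ELB namespace and a LoadBalancerName dimension instead.

```typescript
// Illustrative only: an alarm that fires if any instance behind the
// notification load balancer is reported unhealthy. Dimension values are
// hypothetical and would need to match the real load balancer.
import { Duration, Stack } from "aws-cdk-lib";
import { Alarm, ComparisonOperator, Metric } from "aws-cdk-lib/aws-cloudwatch";

export function addUnhealthyHostAlarm(stack: Stack): Alarm {
  const unhealthyHosts = new Metric({
    namespace: "AWS/ApplicationELB",
    metricName: "UnHealthyHostCount",
    dimensionsMap: {
      TargetGroup: "targetgroup/notification/abc123", // hypothetical
      LoadBalancer: "app/notification/def456",        // hypothetical
    },
    statistic: "Maximum",
    period: Duration.minutes(1),
  });

  return new Alarm(stack, "NotificationUnhealthyHosts", {
    metric: unhealthyHosts,
    threshold: 0,
    evaluationPeriods: 1,
    comparisonOperator: ComparisonOperator.GREATER_THAN_THRESHOLD,
    alarmDescription:
      "An EC2 instance behind the notification load balancer is failing health checks",
  });
}
```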