aramalipoor / aws-cost-saver

A tiny CLI tool to help save costs in development environments when you're asleep and don't need them!
MIT License

Issue: Error on stop-fargate-ecs-services #34

Open karthikholla opened 3 years ago

karthikholla commented 3 years ago

Version - aws-cost-saver@0.1.0 Service/Trick - stop-fargate-ecs-services

I have around 23 ECS tasks running. The latest version throws this error even though stop-fargate-ecs-services succeeds: in the AWS console I can see that the desired count of every task is set to 0. I guess this is caused by some timeout setting.

Error - ✔ partially finished, with 1 failed tricks out of 3.

◎ conserve: stop-fargate-ecs-services
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
  ✔ service/*******
   ✔ desired count
   set desired count to zero
   ↓ auto scaling
    ↓ no scalable targets defined
************************
✔ partially finished, with 1 failed tricks out of 3.
    Error: ConservePartialFailure
aramalipoor commented 3 years ago

Hi @karthikholla, based on your comment, are you seeing this only on the latest version, 0.2.1, and not on 0.1.0?

karthikholla commented 3 years ago

@aramalipoor Just an update on the same issue. I have around 30+ Fargate tasks that get stopped and later started again as part of the ECS trick. The problem is that the trick waits for all of the Fargate tasks to be scaled down, which hits the 1-hour timeout limit. Could the trick just change the desired count and let AWS handle the scale-down at its own pace, rather than waiting for it?

  Dry run: no
❯ running in the background, the summary will be printed at the end...
ERROR: Job failed: execution took longer than 1h0m0s seconds
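For reference, the fire-and-forget approach suggested above could look roughly like the sketch below. This uses boto3-style ECS calls; the function name and the previous-count bookkeeping are illustrative assumptions, not aws-cost-saver's actual implementation (which is written in TypeScript).

```python
def scale_services_to_zero(ecs_client, cluster, service_names):
    """Set each service's desired count to 0 without waiting for the
    running tasks to drain; ECS scales them down asynchronously.

    Returns the previous desired counts so a later "restore" step can
    reapply them (mirroring the conserve/restore flow of the CLI).
    """
    previous_counts = {}
    for name in service_names:
        # Look up the current desired count before changing it.
        svc = ecs_client.describe_services(
            cluster=cluster, services=[name]
        )["services"][0]
        previous_counts[name] = svc["desiredCount"]

        # Fire-and-forget: UpdateService returns immediately; there is
        # no waiter here, so the 1-hour job timeout is never reached.
        ecs_client.update_service(
            cluster=cluster, service=name, desiredCount=0
        )
    return previous_counts
```

The trade-off is that the command reports success as soon as the desired counts are updated, while tasks may still be draining in the background for a few more minutes.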