raycast / extensions

Everything you need to extend Raycast.
https://developers.raycast.com
MIT License
5.28k stars · 2.98k forks

[Amazon AWS]... can't see any of pipelines... #13304

Closed · arabshapt closed this 2 months ago

arabshapt commented 3 months ago

Extension

https://raycast.com/Falcon/aws

Raycast Version

1.77.3

macOS Version

No response

Description

Error:

```
ThrottlingException: Rate exceeded
    at throwDefaultError (/Users/user/.config/raycast/extensions/cdcb4b69-7df7-4f7e-977b-1406967eb3e7/codepipeline.js:13:5039)
    at /Users/user/.config/raycast/extensions/cdcb4b69-7df7-4f7e-977b-1406967eb3e7/codepipeline.js:13:5204
    at de_CommandError (/Users/user/.config/raycast/extensions/cdcb4b69-7df7-4f7e-977b-1406967eb3e7/codepipeline.js:38:97251)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /Users/user/.config/raycast/extensions/cdcb4b69-7df7-4f7e-977b-1406967eb3e7/codepipeline.js:2:11136
    at async /Users/user/.config/raycast/extensions/cdcb4b69-7df7-4f7e-977b-1406967eb3e7/codepipeline.js:3:6280
    at async /Users/user/.config/raycast/extensions/cdcb4b69-7df7-4f7e-977b-1406967eb3e7/codepipeline.js:13:15725
    at async /Users/user/.config/raycast/extensions/cdcb4b69-7df7-4f7e-977b-1406967eb3e7/codepipeline.js:1:7426
    at async /Users/user/.config/raycast/extensions/cdcb4b69-7df7-4f7e-977b-1406967eb3e7/codepipeline.js:69:11342
    at async Promise.all (index 0)
```

Steps To Reproduce

Just open the CodePipeline command in Raycast with an AWS profile that has multiple pipelines.

Current Behaviour

No response

Expected Behaviour

No response

raycastbot commented 3 months ago

Thank you for opening this issue!

🔔 @victor-falcon @Hodglim @JonathanWbn @gebeto @momme-rtf @duboiss @hexpl0it @crisboarna @sidhant92 @DorukAkinci @frese @nagauta @vineus @jfkisafk @srikirank @b0lle you might want to have a look.

💡 Author and Contributors commands

The author and contributors of `Falcon/aws` can trigger bot actions by commenting:

- `@raycastbot close this issue` Closes the issue.
- `@raycastbot rename this issue to "Awesome new title"` Renames the issue.
- `@raycastbot reopen this issue` Reopens the issue.
- `@raycastbot assign me` Assigns yourself to the issue.
- `@raycastbot good first issue` Adds the "Good first issue" label to the issue.
- `@raycastbot keep this issue open` Makes sure the issue won't go stale and will be kept open by the bot.

arabshapt commented 3 months ago

I tried deleting the extension and adding it anew, but it didn't fix the issue.

jfkisafk commented 3 months ago

How was it working before, @arabshapt? B/c we literally just reverted to the previous pagination-handling logic, ditching the Raycast native pagination. Did the number of pipelines increase considerably? This was the previous code:

https://github.com/raycast/extensions/blob/7e982250975b4f53ced43a2c11447dad73e04221/extensions/amazon-aws/src/codepipeline.tsx#L15-L33

There was a `cachedPromise` for each ListPipelineExecutions call, but multiple calls were still made to get the latest execution status. This is the same thing we are doing now:

https://github.com/raycast/extensions/blob/de59e58ca342947f20f0da99922817ca213bb599/extensions/amazon-aws/src/hooks/use-codepipeline.ts#L75-L103

jfkisafk commented 3 months ago

In any case, I use cellular accounts, so I create one account per service-pipeline stage (I will only ever have 1-2 pipelines per account). It would be hard (and expensive) for me to test what's going on with a multiple-pipeline setup, so given my limited bandwidth I would recommend you test the modifications after forking the extension, @arabshapt. Or I can slowly try to mitigate it in a separate repo and guide you through testing it?

I could not find any API TPS quotas in the service limits either, so I can't accurately guess a wait time (in ms) that will work for you. My initial estimate was that loading with at most 50 sequential ListPipelineExecutions calls + 1 ListPipelines call would have enough network overhead to avoid throttling within a given second. Here is something you can try. Replace:

https://github.com/raycast/extensions/blob/de59e58ca342947f20f0da99922817ca213bb599/extensions/amazon-aws/src/hooks/use-codepipeline.ts#L80-L91

with:

```typescript
const client = new CodePipelineClient({});
const pipelines: Pipeline[] = [];

// Fetch execution details sequentially, one pipeline at a time,
// to keep the request rate below the throttling limit.
for (const p of pipelineSummaries ?? []) {
  if (p.name) {
    const { pipelineExecutionSummaries } = await client.send(
      new ListPipelineExecutionsCommand({ pipelineName: p.name }),
    );

    const executions = (pipelineExecutionSummaries ?? []).filter((e) => !!e.pipelineExecutionId);
    const pipeline = { ...p, ...(executions.length > 0 && { latestExecution: executions[0] }) } as Pipeline;

    pipelines.push(pipeline);
  }
}
```

and see if that works?
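(A middle ground between the fully parallel production code and this fully sequential loop would be bounded concurrency: only a handful of requests in flight at once. A minimal sketch — `mapWithConcurrency` is a made-up helper, not part of the extension, and the limit of 5 is a guess to tune:)

```typescript
// Hypothetical helper: run `fn` over `items` with at most `limit` requests
// in flight at any moment. Results are returned in input order.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;

  // Each worker repeatedly claims the next unclaimed index until none remain.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }

  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}
```

With e.g. `limit = 5` and ~400 ms per call, the request rate stays around 12 TPS instead of the 50 concurrent calls that triggered the throttling, while finishing much faster than a strictly sequential loop.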

arabshapt commented 3 months ago

Thank you for your help :) I will try that in a fork. When I run a fork of this I get "Terminated Connection interrupted" for the whole AWS extension. Terminated or closed:

[screenshot]

I see at the bottom: "Loading pipelines · 50 Pipelines" (screenshot attached).

jfkisafk commented 3 months ago

@arabshapt please try this: https://github.com/raycast/extensions/pull/13004#issuecomment-2190015021 Please let me know how it goes and I can help you with the review process.

arabshapt commented 3 months ago

Thank you for the link :) I was able to run it locally and your code snippet has fixed the issue. The loading takes some time (maybe 10 s?), then I see the "50 pipelines" loading message, and after some more time all 73 pipelines are loaded. I think they get cached, because after that initial load I can search immediately. The question is how long it takes for a new pipeline to appear there. Thank you for the fix 👍

jfkisafk commented 3 months ago

Perfect, @arabshapt. Thanks for testing! So we root-caused it to 50 async calls being triggered concurrently (instead of sequentially), which raised the TPS past the throttling limit.

I think they get cached, because that initial loading i can search immediately.

Yes, we are using a cached promise, so you will see stale results at first, but every time you reload the command it will revalidate.

The question is how long does it take for a new pipeline to appear there.

This is what we need to optimize next. Can you add logging after each ListPipelineExecutions call to track the average latency spent there? Based on that we can rebalance the max items loaded to fit both the TPS and low-latency thresholds.
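(For reference, the instrumentation could be as simple as a generic timing wrapper around each call — a sketch only; `timed` is a made-up helper, not existing extension code:)

```typescript
// Hypothetical timing wrapper: awaits a promise-returning thunk and
// logs the elapsed wall-clock time in milliseconds.
async function timed<T>(label: string, thunk: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await thunk();
  } finally {
    console.log(`${label} latency: ${Date.now() - start}ms`);
  }
}

// Usage inside the loop might look like (call shape assumed from the snippet above):
// const { pipelineExecutionSummaries } = await timed("ListPipelineExecutions", () =>
//   client.send(new ListPipelineExecutionsCommand({ pipelineName: p.name })),
// );
```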

jfkisafk commented 2 months ago

Hey @arabshapt do you have some data around the avg. latency you are seeing per ListPipelineExecutions call?

arabshapt commented 2 months ago

Hi @jfkisafk, sorry for the late response. Here is the latency (93 pipelines):

```
19:16:10.948 ListPipelineExecutions latency: 371ms 19:16:10.948 ListPipelineExecutions latency: 387ms 19:16:11.642 ListPipelineExecutions latency: 698ms 19:16:11.690 ListPipelineExecutions latency: 739ms 19:16:12.291 ListPipelineExecutions latency: 649ms 19:16:12.337 ListPipelineExecutions latency: 648ms 19:16:12.670 ListPipelineExecutions latency: 381ms 19:16:12.706 ListPipelineExecutions latency: 369ms 19:16:13.269 ListPipelineExecutions latency: 598ms 19:16:13.395 ListPipelineExecutions latency: 690ms 19:16:13.850 ListPipelineExecutions latency: 582ms 19:16:13.932 ListPipelineExecutions latency: 536ms 19:16:14.241 ListPipelineExecutions latency: 390ms 19:16:14.326 ListPipelineExecutions latency: 393ms 19:16:14.657 ListPipelineExecutions latency: 417ms 19:16:14.739 ListPipelineExecutions latency: 414ms 19:16:15.024 ListPipelineExecutions latency: 367ms 19:16:15.107 ListPipelineExecutions latency: 369ms 19:16:15.399 ListPipelineExecutions latency: 373ms 19:16:15.562 ListPipelineExecutions latency: 455ms 19:16:15.859 ListPipelineExecutions latency: 462ms 19:16:15.952 ListPipelineExecutions latency: 390ms 19:16:16.241 ListPipelineExecutions latency: 381ms 19:16:16.336 ListPipelineExecutions latency: 383ms 19:16:16.601 ListPipelineExecutions latency: 360ms 19:16:16.705 ListPipelineExecutions latency: 369ms 19:16:17.160 ListPipelineExecutions latency: 558ms 19:16:17.233 ListPipelineExecutions latency: 528ms 19:16:17.661 ListPipelineExecutions latency: 501ms 19:16:17.722 ListPipelineExecutions latency: 488ms 19:16:18.021 ListPipelineExecutions latency: 359ms 19:16:18.096 ListPipelineExecutions latency: 374ms 19:16:18.400 ListPipelineExecutions latency: 379ms 19:16:18.489 ListPipelineExecutions latency: 391ms 19:16:18.771 ListPipelineExecutions latency: 372ms 19:16:18.858 ListPipelineExecutions latency: 374ms 19:16:19.564 ListPipelineExecutions latency: 792ms 19:16:19.683 ListPipelineExecutions latency: 822ms 19:16:20.158 ListPipelineExecutions latency: 595ms
19:16:20.314 ListPipelineExecutions latency: 631ms 19:16:20.525 ListPipelineExecutions latency: 367ms 19:16:20.705 ListPipelineExecutions latency: 391ms 19:16:21.186 ListPipelineExecutions latency: 660ms 19:16:21.321 ListPipelineExecutions latency: 614ms 19:16:21.807 ListPipelineExecutions latency: 622ms 19:16:21.909 ListPipelineExecutions latency: 590ms 19:16:22.187 ListPipelineExecutions latency: 380ms 19:16:22.289 ListPipelineExecutions latency: 380ms 19:16:22.576 ListPipelineExecutions latency: 390ms 19:16:22.639 ListPipelineExecutions latency: 350ms 19:16:22.952 ListPipelineExecutions latency: 372ms 19:16:23.006 ListPipelineExecutions latency: 361ms 19:16:23.382 ListPipelineExecutions latency: 433ms 19:16:23.435 ListPipelineExecutions latency: 435ms 19:16:23.737 ListPipelineExecutions latency: 355ms 19:16:23.796 ListPipelineExecutions latency: 361ms 19:16:24.188 ListPipelineExecutions latency: 451ms 19:16:24.258 ListPipelineExecutions latency: 462ms 19:16:24.655 ListPipelineExecutions latency: 468ms 19:16:24.713 ListPipelineExecutions latency: 455ms 19:16:25.020 ListPipelineExecutions latency: 364ms 19:16:25.092 ListPipelineExecutions latency: 378ms 19:16:25.774 ListPipelineExecutions latency: 752ms 19:16:25.821 ListPipelineExecutions latency: 730ms 19:16:26.271 ListPipelineExecutions latency: 499ms 19:16:26.337 ListPipelineExecutions latency: 516ms 19:16:26.969 ListPipelineExecutions latency: 697ms 19:16:27.030 ListPipelineExecutions latency: 693ms 19:16:27.606 ListPipelineExecutions latency: 637ms 19:16:27.632 ListPipelineExecutions latency: 602ms 19:16:28.258 ListPipelineExecutions latency: 652ms 19:16:28.290 ListPipelineExecutions latency: 657ms 19:16:28.929 ListPipelineExecutions latency: 669ms 19:16:29.010 ListPipelineExecutions latency: 723ms 19:16:29.294 ListPipelineExecutions latency: 369ms 19:16:29.371 ListPipelineExecutions latency: 359ms 19:16:29.660 ListPipelineExecutions latency: 364ms 19:16:29.733 ListPipelineExecutions latency: 362ms
19:16:30.348 ListPipelineExecutions latency: 689ms 19:16:30.422 ListPipelineExecutions latency: 688ms 19:16:30.941 ListPipelineExecutions latency: 592ms 19:16:31.001 ListPipelineExecutions latency: 579ms 19:16:31.326 ListPipelineExecutions latency: 385ms 19:16:31.362 ListPipelineExecutions latency: 361ms 19:16:32.119 ListPipelineExecutions latency: 793ms 19:16:32.127 ListPipelineExecutions latency: 766ms 19:16:32.807 ListPipelineExecutions latency: 681ms 19:16:32.838 ListPipelineExecutions latency: 710ms 19:16:33.177 ListPipelineExecutions latency: 377ms 19:16:33.207 ListPipelineExecutions latency: 369ms 19:16:33.630 ListPipelineExecutions latency: 452ms 19:16:33.652 ListPipelineExecutions latency: 445ms 19:16:34.018 ListPipelineExecutions latency: 364ms 19:16:34.024 ListPipelineExecutions latency: 396ms 19:16:34.387 ListPipelineExecutions latency: 361ms 19:16:34.392 ListPipelineExecutions latency: 378ms 19:16:34.736 ListPipelineExecutions latency: 349ms 19:16:35.104 ListPipelineExecutions latency: 368ms 19:16:35.752 ListPipelineExecutions latency: 1358ms 19:16:35.891 ListPipelineExecutions latency: 375ms 19:16:36.097 ListPipelineExecutions latency: 345ms 19:16:36.267 ListPipelineExecutions latency: 375ms 19:16:36.754 ListPipelineExecutions latency: 487ms 19:16:36.881 ListPipelineExecutions latency: 374ms 19:16:37.218 ListPipelineExecutions latency: 463ms 19:16:37.245 ListPipelineExecutions latency: 365ms 19:16:37.614 ListPipelineExecutions latency: 397ms 19:16:37.757 ListPipelineExecutions latency: 512ms 19:16:38.024 ListPipelineExecutions latency: 410ms 19:16:38.243 ListPipelineExecutions latency: 486ms 19:16:38.392 ListPipelineExecutions latency: 368ms 19:16:38.595 ListPipelineExecutions latency: 352ms 19:16:38.770 ListPipelineExecutions latency: 378ms 19:16:38.980 ListPipelineExecutions latency: 384ms 19:16:39.139 ListPipelineExecutions latency: 371ms 19:16:39.359 ListPipelineExecutions latency: 383ms 19:16:39.566 ListPipelineExecutions latency: 425ms
19:16:39.726 ListPipelineExecutions latency: 364ms 19:16:39.938 ListPipelineExecutions latency: 372ms 19:16:40.097 ListPipelineExecutions latency: 372ms 19:16:40.291 ListPipelineExecutions latency: 352ms 19:16:40.514 ListPipelineExecutions latency: 417ms 19:16:40.653 ListPipelineExecutions latency: 364ms 19:16:40.889 ListPipelineExecutions latency: 375ms 19:16:41.262 ListPipelineExecutions latency: 372ms 19:16:41.346 ListPipelineExecutions latency: 691ms 19:16:41.618 ListPipelineExecutions latency: 356ms 19:16:41.916 ListPipelineExecutions latency: 571ms 19:16:42.269 ListPipelineExecutions latency: 353ms 19:16:42.298 ListPipelineExecutions latency: 680ms 19:16:42.633 ListPipelineExecutions latency: 364ms 19:16:42.843 ListPipelineExecutions latency: 545ms 19:16:42.992 ListPipelineExecutions latency: 359ms 19:16:43.211 ListPipelineExecutions latency: 367ms 19:16:43.471 ListPipelineExecutions latency: 479ms 19:16:43.586 ListPipelineExecutions latency: 376ms 19:16:43.937 ListPipelineExecutions latency: 463ms 19:16:44.019 ListPipelineExecutions latency: 430ms 19:16:44.287 ListPipelineExecutions latency: 353ms 19:16:44.473 ListPipelineExecutions latency: 456ms 19:16:44.650 ListPipelineExecutions latency: 362ms 19:16:44.931 ListPipelineExecutions latency: 458ms 19:16:45.057 ListPipelineExecutions latency: 408ms 19:16:45.298 ListPipelineExecutions latency: 367ms 19:16:45.550 ListPipelineExecutions latency: 492ms 19:16:45.653 ListPipelineExecutions latency: 355ms 19:16:45.904 ListPipelineExecutions latency: 355ms 19:16:46.028 ListPipelineExecutions latency: 376ms 19:16:46.288 ListPipelineExecutions latency: 379ms 19:16:46.541 ListPipelineExecutions latency: 512ms 19:16:46.637 ListPipelineExecutions latency: 353ms 19:16:46.910 ListPipelineExecutions latency: 369ms 19:16:46.997 ListPipelineExecutions latency: 360ms 19:16:47.264 ListPipelineExecutions latency: 353ms 19:16:47.403 ListPipelineExecutions latency: 406ms 19:16:47.623 ListPipelineExecutions latency: 356ms
19:16:47.779 ListPipelineExecutions latency: 376ms 19:16:47.981 ListPipelineExecutions latency: 360ms 19:16:48.144 ListPipelineExecutions latency: 365ms 19:16:48.391 ListPipelineExecutions latency: 410ms 19:16:48.535 ListPipelineExecutions latency: 389ms 19:16:48.746 ListPipelineExecutions latency: 356ms 19:16:48.908 ListPipelineExecutions latency: 375ms 19:16:49.115 ListPipelineExecutions latency: 370ms 19:16:49.265 ListPipelineExecutions latency: 358ms 19:16:49.473 ListPipelineExecutions latency: 356ms 19:16:49.646 ListPipelineExecutions latency: 380ms 19:16:49.913 ListPipelineExecutions latency: 438ms 19:16:50.264 ListPipelineExecutions latency: 354ms 19:16:50.399 ListPipelineExecutions latency: 753ms 19:16:50.622 ListPipelineExecutions latency: 356ms 19:16:50.911 ListPipelineExecutions latency: 511ms 19:16:51.273 ListPipelineExecutions latency: 361ms 19:16:51.277 ListPipelineExecutions latency: 655ms 19:16:51.648 ListPipelineExecutions latency: 376ms 19:16:51.799 ListPipelineExecutions latency: 523ms 19:16:52.156 ListPipelineExecutions latency: 355ms 19:16:52.325 ListPipelineExecutions latency: 676ms 19:16:52.517 ListPipelineExecutions latency: 362ms 19:16:52.702 ListPipelineExecutions latency: 377ms 19:16:53.167 ListPipelineExecutions latency: 650ms 19:16:53.305 ListPipelineExecutions latency: 602ms 19:16:53.541 ListPipelineExecutions latency: 373ms 19:16:53.667 ListPipelineExecutions latency: 361ms 19:16:54.129 ListPipelineExecutions latency: 588ms 19:16:54.482 ListPipelineExecutions latency: 357ms
```

arabshapt commented 2 months ago

Can't we send these requests in parallel with Promise.allSettled, and use whatever was successful without throwing an error? I mean, we can throw an error (to indicate that something is wrong), but still use whatever we get as successful responses.

arabshapt commented 2 months ago

Please use the following code:)

```typescript
const fetchPipelines = async (toast: Toast, nextToken?: string, aggregate: Pipeline[] = []): Promise<Pipeline[]> => {
  const client = new CodePipelineClient({});

  try {
    const { pipelines: pipelineSummaries = [], nextToken: cursor } = await client.send(
      new ListPipelinesCommand({ nextToken, maxResults: 100 }),
    );

    // Only pipelines with a name can be queried; filter once up front so the
    // indices of the summaries and their settled results stay aligned.
    const namedSummaries = pipelineSummaries.filter((p) => p.name);

    const pipelineExecutionResults = await Promise.allSettled(
      namedSummaries.map((p) => client.send(new ListPipelineExecutionsCommand({ pipelineName: p.name }))),
    );

    const pipelines = namedSummaries.map((p, index) => {
      const result = pipelineExecutionResults[index];
      const executions =
        result.status === "fulfilled"
          ? (result.value.pipelineExecutionSummaries ?? []).filter((e) => e.pipelineExecutionId)
          : [];

      return {
        ...p,
        ...(executions.length > 0 && { latestExecution: executions[0] }),
      } as Pipeline;
    });

    const updatedAggregate = [...aggregate, ...pipelines];
    updateToast(toast, updatedAggregate.length, cursor ? "Loading" : "Success");

    return cursor ? await fetchPipelines(toast, cursor, updatedAggregate) : updatedAggregate;
  } catch (error) {
    console.error("Error fetching pipelines:", error);
    toast.style = Toast.Style.Failure;
    toast.title = "❌ Failed to load pipelines";
    toast.message = "An error occurred while fetching pipelines";
    throw error;
  }
};

const updateToast = (toast: Toast, pipelineCount: number, status: "Loading" | "Success") => {
  toast.message = `${pipelineCount} pipelines`;
  if (status === "Success") {
    toast.style = Toast.Style.Success;
    toast.title = "✅ Loaded pipelines";
  }
};
```

arabshapt commented 2 months ago

Hmm, some of the promises are rejected... I need to think about how to solve this. Resending the requests for the rejected ones with an exponential delay could work.

jfkisafk commented 2 months ago

hmm, some of the promises are rejected... need to think about how to solve this... resending the requests for the rejected ones with an exponential delay could work

Hey @arabshapt, we can also change the SDK's default retry behavior and try exponential-backoff retries. But I think Promise.allSettled will lead to inconsistent UX when there is no cache: some pipelines will report their status and some won't. B/c at its core this does not solve the TPS-spike problem and still creates 51 TPS.

For your testing, make sure to clear the assets cache, local storage, and cache for the command. Then it will not display the cached result and you'd see results as a new user of the command would.

I was thinking we can still keep the await Promise.all (no changes to production code) and, based on the latency analysis, change maxResults to between 3 and 10 (starting with 5), rather than the current value of 50. Then it will perform faster and won't lead to throttling.
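(For reference, a hand-rolled exponential-backoff retry — the alternative to relying on the SDK's built-in `maxAttempts`/`retryMode` client config — could look like this sketch. `retryWithBackoff` is a made-up helper, and the attempt count and delays are guesses to tune:)

```typescript
// Hypothetical helper: retry an async thunk with exponential backoff.
// The delay doubles on each failed attempt (e.g. 200, 400, 800 ms, ...).
async function retryWithBackoff<T>(
  thunk: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 200,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await thunk();
    } catch (error) {
      // Give up once the final attempt has failed.
      if (attempt + 1 >= maxAttempts) throw error;
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Each rejected ListPipelineExecutions call could be wrapped in this, though as noted above it only spreads out the spike rather than removing it.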

arabshapt commented 2 months ago

The most interesting pipelines are the ones most likely not to be up to date, and those are the most recent ones (by execution date). I would get all the pipelines at once without pagination, since that call doesn't cause any problems. Then I would create chunks of 5 items myself, but before chunking I would sort all the pipelines by last execution timestamp and get their details first. What do you think?
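(The chunking part of this is straightforward — a sketch, with `chunk` as a made-up helper; the sort key for "last execution timestamp" is the open question, since the ListPipelines summaries don't carry one:)

```typescript
// Hypothetical helper: split an array into consecutive chunks of at most `size` items.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Sketch of the proposed flow: sort the full pipeline list first, then fetch
// execution details chunk by chunk (e.g. Promise.allSettled per chunk of 5),
// so the most interesting pipelines resolve first.
```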

jfkisafk commented 2 months ago

Would it be possible to just use older version of the sdk for this usecase?

If they removed it, it was probably b/c the CodePipeline team asked them to. Any batch call is a serious consumption of resources, I'd imagine, so my preference would be to stick to the latest SDK.

Then I would create chunks of 5 items myself, but before chunking I would sort all the pipelines by last execution timestamp and get their details first. What do you think?

We can definitely get all the pipelines at once, but I am not sure how you'd get the last execution timestamp from the ListPipelines response for chunking. Is the updated field updated per execution? (I believe it is just the timestamp of the last update to the pipeline structure.)

jfkisafk commented 2 months ago

That's why I was recommending slowing down by chunking the ListPipelines output itself. But given that this will also add network overhead, I think we can start with maxResults 10 and see how that performs.

arabshapt commented 2 months ago

Would it be possible to just use older version of the sdk for this usecase?

If they removed it, it was probably b/c the CodePipeline team asked them to. Any batch call is a serious consumption of resources, I'd imagine, so my preference would be to stick to the latest SDK.

Then I would create chunks of 5 items myself, but before chunking I would sort all the pipelines by last execution timestamp and get their details first. What do you think?

We can definitely get all the pipelines at once, but I am not sure how you'd get the last execution timestamp from the ListPipelines response for chunking. Is the updated field updated per execution? (I believe it is just the timestamp of the last update to the pipeline structure.)

You are right... updated doesn't mean last execution time. So then, as you said, we need to reduce the page size to 10 or even 5. As long as it is not too slow but is reliable, that would be 👍