backstage / backstage

Backstage is an open framework for building developer portals
https://backstage.io/
Apache License 2.0

🐛 Bug Report: `publish:github:pull-request` fails with `HttpError: You have exceeded a secondary rate limit.` #17188

Open · rmartine-ias opened this issue 1 year ago

rmartine-ias commented 1 year ago

📜 Description

The scaffolder publish:github:pull-request action fails to make a PR with the error Pull request creation failed; caused by HttpError: You have exceeded a secondary rate limit. Please wait a few minutes before you try again.

We think this is because the underlying plugin Backstage uses to create pull requests issues one API request per file, so the number of requests scales with the number of files in the sourcePath.

Here is what I think is happening:

Backstage sends type: Files to createPullRequest, which calls createTree, which runs Promise.all over Object.keys(changes.files) (it creates a new promise for each file passed to it and fires them all at the same time, if I understand correctly). Each promise calls valueToTreeObject. Annotated and condensed:

export async function valueToTreeObject(
  ...
  value: string | File // It's a File!
) {
  ...
  if (typeof value === "string") {
    // Nope!
  }

  // This is what gets called!
  const { data } = await octokit.request(
    "POST /repos/{owner}/{repo}/git/blobs",
    {
      owner,
      repo,
      ...value,
    }
  );
  ...
}

So this POSTs to GitHub for every file (critically, every file in the sourcePath, NOT just every changed file). Ordinarily this would just take a long time to complete, but Backstage disables octokit throttling, so instead of retrying it fails immediately: firing ~500 POST requests at the same time means some fail with the rate limit error, and a single failure rejects the whole Promise.all.
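For illustration only (this is not Backstage code), here is the failure mode in miniature -- one rejected upload rejects the whole batch:

```ts
// Stand-in for ~500 concurrent blob POSTs; one of them hits the secondary
// rate limit. Promise.all rejects as soon as any single promise rejects,
// even though the other uploads would have succeeded.
const fakeUploads = Array.from({ length: 500 }, (_, i) =>
  i === 42
    ? Promise.reject(new Error("You have exceeded a secondary rate limit."))
    : Promise.resolve(`blob ${i} created`),
);

Promise.all(fakeUploads).catch(err => {
  // With octokit throttling disabled there is no retry, so this is fatal.
  console.error("Pull request creation failed:", err.message);
});
```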

👍 Expected behavior

  1. PRs should take longer to complete instead of failing
  2. PRs should not take long to complete, where possible

For 1), I think throttling should be re-enabled (i.e., the PR that disabled it should be reverted) -- it is important to respect GitHub's rate limits. If they are ignored, templates fail halfway through their run, potentially after some resources have already been created. It would be more robust for them to take longer instead of failing. Still, if a PR takes ten minutes, that is a sign that something is wrong, and the underlying usage of the API should be fixed.
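For reference, a minimal sketch of what re-enabling throttling could look like with @octokit/plugin-throttling; where exactly this gets wired up inside Backstage's GitHub integration, and the token handling, are assumptions here:

```ts
import { Octokit } from "@octokit/core";
import { throttling } from "@octokit/plugin-throttling";

// Hedged sketch: the throttling plugin waits and retries instead of letting
// the request fail fast on a rate-limit response.
const ThrottledOctokit = Octokit.plugin(throttling);

const octokit = new ThrottledOctokit({
  auth: process.env.GITHUB_TOKEN,
  throttle: {
    onRateLimit: (retryAfter, options, kit) => {
      kit.log.warn(`Primary rate limit hit for ${options.method} ${options.url}`);
      return true; // retry after `retryAfter` seconds instead of throwing
    },
    // Older plugin versions call this handler onAbuseLimit.
    onSecondaryRateLimit: (retryAfter, options, kit) => {
      kit.log.warn(`Secondary rate limit hit for ${options.method} ${options.url}`);
      return true; // this is the error the scaffolder step currently surfaces
    },
  },
});
```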

For 2), because 10-minute PRs are not viable, one of the following should also be done:

  1. The publish:github:pull-request docs should state very clearly that the action gets slower with many files, and explain the best practice (making the PR from a sparse file tree containing only the changes). The step should also log a warning if called with >100 (number picked randomly) files.
  2. octokit-plugin-create-pull-request should be updated to use a more efficient way of querying GitHub -- maybe the GraphQL API? Or better yet, it should not have to call an API at all to make a blob.
  3. fetch:plain followed by publish:github:pull-request should be updated to send only the diff to octokit-plugin-create-pull-request -- this could maybe be done by pulling the .git directory as well and doing an actual diff to filter down to only the changed files.
  4. publish:github:pull-request should send strings instead of Files to octokit-plugin-create-pull-request, working around the issue there (see the sketch after this list). I don't know if this will work, because you do some custom things with the mode.
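A hedged sketch of what option 4 could look like against octokit-plugin-create-pull-request's API. Repo names, branch, and file contents are placeholders, and whether the scaffolder can safely assume utf-8 for a given file is exactly the open question:

```ts
import { Octokit } from "@octokit/core";
import { createPullRequest } from "octokit-plugin-create-pull-request";

// Sketch: text files passed as plain strings are inlined into the tree by
// the plugin; only base64 File objects force a POST /git/blobs per file.
const PrOctokit = Octokit.plugin(createPullRequest);
const octokit = new PrOctokit({ auth: process.env.GITHUB_TOKEN });

await octokit.createPullRequest({
  owner: "myorg",
  repo: "myrepo",
  title: "PR Title",
  body: "Automated change",
  head: "some-branch-name",
  changes: [
    {
      commit: "Message",
      files: {
        // string value: sent as `content` in the tree, no extra blob POST
        "folder/somefile.yml": "key: value\n",
        // File object: still needs one POST /git/blobs for this file
        "images/logo.png": {
          content: Buffer.from("...binary data...").toString("base64"),
          encoding: "base64",
        },
      },
    },
  ],
});
```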

I can make a PR to re-enable throttling, if that is desired. I could make a PR for 2.1 if you let me know where it should go (the JSON Schema description? Here somewhere?). I don't think I'll have the time to make a PR for 2.2 or 2.3. I can possibly make a PR for 2.4.

👎 Actual Behavior with Screenshots

In development and production, PRs fail with:

2023-03-30T22:34:15.277Z Beginning step Onboarding PR: <MYREPO>
2023-03-30T22:34:17.739Z GithubResponseError: Pull request creation failed; caused by HttpError: You have exceeded a secondary rate limit. Please wait a few minutes before you try again.
    at Object.handler (/app/node_modules/@backstage/plugin-scaffolder-backend/dist/index.cjs.js:3673:15)
    at runMicrotasks (<anonymous>)
    at runNextTicks (node:internal/process/task_queues:61:5)
    at listOnTimeout (node:internal/timers:528:9)
    at processTimers (node:internal/timers:502:7)
    at async NunjucksWorkflowRunner.execute (/app/node_modules/@backstage/plugin-scaffolder-backend/dist/index.cjs.js:4831:11)
    at async TaskWorker.runOneTask (/app/node_modules/@backstage/plugin-scaffolder-backend/dist/index.cjs.js:5042:26)
    at async run (/app/node_modules/p-queue/dist/index.js:163:29)

👟 Reproduction steps

Set up GitHub integration with a personal access token (org application works too).

Create a scaffolder template that fetches a repo, manipulates a file, and PRs back to the repo:

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template

metadata:
  name: test-template
  title: Test template

spec:
  owner: myTeam
  type: service

  parameters:
    - title: Test
      properties:
        test:
          title: Test
          type: string

  steps:
    # Fetch a remote repo with a good number (>500) files
    - id: fetch-remote
      name: Fetch Remote Repo
      action: fetch:plain
      input:
        url: https://github.com/myorg/myrepo.git
        targetPath: myrepo

    # Manipulate a file inside
    - id: do-something
      name: Update file
      action: some:action
      input:
        path: myrepo/folder/somefile.yml

    # PR back the change
    - id: pr-repo
      name: PR to repo
      action: publish:github:pull-request
      input:
        repoUrl: github.com?repo=myrepo&owner=myorg
        branchName: some-branch-name
        gitCommitMessage: Message
        title: PR Title
        sourcePath: myrepo
        targetPath: '.'

Execute the template. If you're not getting rate limit errors, duplicate the PR step with a second branch; sometimes we only hit the limit on the second PR, with a total of ~1000 files across both PRs.

To show that this is indeed caused by ignoring the rate limit headers, edit node_modules/@backstage/plugin-scaffolder-backend/dist/index.cjs.js and remove the line ...{ throttle: { enabled: false } }. Re-run the template. For me, the first PR then took ~3 minutes and the second ~11 minutes, but both succeeded.

To show that the issue is resolved by not running a PR from the full repo directory, amend the template to something more like:

Mostly duplicated template:

```yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template

metadata:
  name: test-template
  title: Test template

spec:
  owner: myTeam
  type: service

  parameters:
    - title: Test
      properties:
        test:
          title: Test
          type: string

  steps:
    # Fetch a remote repo with a good number (>500) files
    - id: fetch-remote
      name: Fetch Remote Repo
      action: fetch:plain
      input:
        url: https://github.com/myorg/myrepo.git
        targetPath: myrepo

    # Manipulate a file inside
    - id: do-something
      name: Update file
      action: some:action
      input:
        path: myrepo/folder/somefile.yml
        outputPath: myrepo-bare/folder/somefile.yml # amend syntax to action in use

    # PR back the change
    - id: pr-repo
      name: PR to repo
      action: publish:github:pull-request
      input:
        repoUrl: github.com?repo=myrepo&owner=myorg
        branchName: some-branch-name
        gitCommitMessage: Message
        title: PR Title
        sourcePath: myrepo-bare
        targetPath: '.'
```

Refresh the template to make sure you're using the updated one, and run it again. It should succeed quickly for each PR (<10s).

📃 Provide the context for the Bug.

The template we are building needs to make pull requests into two other repositories -- one to onboard the new repo to our CICD platform, and one to onboard it to our Flux admin repository. Concretely, we use roadiehq:utils:jsonata:yaml:transform to manipulate some YAML files to add the repo name and some metadata about it.

We've worked around this for now by cloning to one directory, reading and manipulating the files, writing the changed files to a second, bare directory, and then running the pull request from the bare directory. This works, but is unintuitive because it differs greatly from a CLI git workflow. (Clone, edit, commit, push, PR -- not Clone, edit, copy edited file to a blank directory, commit, push, PR).

Until we figured out the cause, this was slowing down development for ~2 weeks.

🖥️ Your Environment

yarn backstage-cli info:

yarn run v1.22.19
$ /Users/rmartine/dev/ias-backstage/node_modules/.bin/backstage-cli info
OS:   Darwin 22.4.0 - darwin/arm64
node: v16.18.1
yarn: 1.22.19
cli:  0.22.3 (installed)
backstage:  1.8.0

Dependencies:
  @backstage/app-defaults                          1.1.0
  @backstage/backend-app-api                       0.4.1
  @backstage/backend-common                        0.18.3
  @backstage/backend-dev-utils                     0.1.1
  @backstage/backend-plugin-api                    0.4.0, 0.5.0
  @backstage/backend-tasks                         0.4.3, 0.5.0
  @backstage/catalog-client                        1.4.0
  @backstage/catalog-model                         1.2.1
  @backstage/cli-common                            0.1.12
  @backstage/cli                                   0.22.3
  @backstage/config-loader                         1.1.9
  @backstage/config                                1.0.7
  @backstage/core-app-api                          1.4.0
  @backstage/core-components                       0.12.4
  @backstage/core-plugin-api                       1.4.0
  @backstage/errors                                1.1.5
  @backstage/eslint-plugin                         0.1.1
  @backstage/integration-aws-node                  0.1.2
  @backstage/integration-react                     1.1.10
  @backstage/integration                           1.4.3
  @backstage/plugin-api-docs                       0.8.14
  @backstage/plugin-app-backend                    0.3.42
  @backstage/plugin-auth-backend                   0.18.0
  @backstage/plugin-auth-node                      0.2.12
  @backstage/plugin-catalog-backend-module-github  0.2.5
  @backstage/plugin-catalog-backend                1.8.0
  @backstage/plugin-catalog-common                 1.0.12
  @backstage/plugin-catalog-graph                  0.2.26
  @backstage/plugin-catalog-import                 0.9.4
  @backstage/plugin-catalog-node                   1.3.4
  @backstage/plugin-catalog-react                  1.3.0
  @backstage/plugin-catalog                        1.7.2
  @backstage/plugin-events-node                    0.2.3
  @backstage/plugin-kubernetes-backend             0.9.3
  @backstage/plugin-kubernetes-common              0.6.0
  @backstage/plugin-kubernetes                     0.7.8
  @backstage/plugin-org                            0.6.4
  @backstage/plugin-permission-backend             0.5.17
  @backstage/plugin-permission-common              0.7.4
  @backstage/plugin-permission-node                0.7.6
  @backstage/plugin-permission-react               0.4.10
  @backstage/plugin-proxy-backend                  0.2.35
  @backstage/plugin-scaffolder-backend             1.12.0
  @backstage/plugin-scaffolder-common              1.2.6
  @backstage/plugin-scaffolder-node                0.1.1
  @backstage/plugin-scaffolder-react               1.1.0
  @backstage/plugin-scaffolder                     1.11.0
  @backstage/plugin-search-backend-module-pg       0.5.3
  @backstage/plugin-search-backend-node            1.1.3
  @backstage/plugin-search-backend                 1.2.2
  @backstage/plugin-search-common                  1.2.2
  @backstage/plugin-search-react                   1.4.0
  @backstage/plugin-search                         1.0.7
  @backstage/plugin-tech-radar                     0.5.20
  @backstage/plugin-techdocs-backend               1.5.2
  @backstage/plugin-techdocs-module-addons-contrib 1.0.9
  @backstage/plugin-techdocs-node                  1.4.5
  @backstage/plugin-techdocs-react                 1.1.2
  @backstage/plugin-techdocs                       1.4.3
  @backstage/plugin-user-settings                  0.5.1
  @backstage/release-manifests                     0.0.8
  @backstage/test-utils                            1.2.4
  @backstage/theme                                 0.2.17
  @backstage/types                                 1.0.2
  @backstage/version-bridge                        1.0.3
Done in 0.77s.

👀 Have you spent some time to check if this bug has been raised before?

🏢 Have you read the Code of Conduct?

Are you willing to submit PR?

Yes I am willing to submit a PR!

benjdlambert commented 1 year ago

@rmartine-ias thanks for the great issue report!

Hmm, it's unfortunate that it's hard-coded to disable the throttling. I wonder if we should at least open up the ability to configure that, so basically switch around how the options are spread so that we can enable it like the original PR did.

I wonder if we should also raise an issue similar to this one against the upstream library we're using under the hood, to see if there's any interest in building the improvements there too?

Otherwise I'm happy to keep this issue open if anyone else from the community wants to pick it up and give it a go! :pray:

rmartine-ias commented 1 year ago

Thank you for the great project!

I wonder if we should open up the ability to configure that at least...

My main objection is "it is never preferable to hard-fail instead of slowing down", but this is complicated by the fact that you can get away with ignoring rate limits a little. Re-enabling throttling will slow everyone down (at least somewhat -- it makes ~1s PRs take ~10s in testing), but it prevents some runs from failing outright. Keeping throttling disabled makes everything as fast as possible, right up to the point where it fails -- and unfortunately I don't know exactly where that point is, and it seems to be variable.

I would prefer robustness over speed. It is annoying to have to structure scaffolder templates so that all PRs happen before objects are created in other systems, just to avoid manual cleanup if a PR step fails. If disabling throttling stays available as an option, I think it should be opt-in.

raise an issue similar to this one to the upstream library

Done: https://github.com/gr2m/octokit-plugin-create-pull-request/issues/121

Looking at the Backstage code again, would it be possible to send non-symlink, non-executable files as strings instead of base64-encoded Files? I think that would largely resolve the speed issue.

rmartine-ias commented 1 year ago

(@benjdlambert -- forgot to @)

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

rmartine-ias commented 1 year ago

Still an issue, still willing to submit patches pending above discussion

rmartine-ias commented 1 year ago

Came back to this, read the source more deeply, and read a bit more about git -- I'm pretty sure the issue is caused by Backstage sending all files base64-encoded, which the Octokit plugin treats as binary files. The plugin uses the GitHub REST API rather than the git protocol itself, and there is no way to batch-upload binary files through that API. Thus, one request per file. Given its constraints, there is nothing else the plugin can do.

The tree that must be sent to GitHub requires, for each entry, either a content string[0] (which creates a blob) or a sha that references an already-created blob, so sha must be used for binary files. The plugin creates those blobs by POSTing to the API. There does not seem to be another way to use the GitHub API to make a PR that includes binary files.
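To make that concrete, a sketch of the two tree-entry shapes the REST endpoint accepts (the octokit instance, SHAs, and paths are placeholders, not the plugin's actual code):

```ts
import { Octokit } from "@octokit/core";

// Sketch only: how a text entry can carry inline `content` while a binary
// entry must reference a blob created beforehand.
async function createTreeExample(octokit: Octokit, baseTreeSha: string, blobSha: string) {
  return octokit.request("POST /repos/{owner}/{repo}/git/trees", {
    owner: "myorg",
    repo: "myrepo",
    base_tree: baseTreeSha,
    tree: [
      // Text file: `content` is inlined, no separate blob request needed.
      { path: "folder/somefile.yml", mode: "100644", type: "blob", content: "key: value\n" },
      // Binary file: must reference a blob created beforehand via
      // POST /repos/{owner}/{repo}/git/blobs -- one request per file.
      { path: "images/logo.png", mode: "100644", type: "blob", sha: blobSha },
    ],
  });
}
```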

Backstage sends all (non-symlink) files base64-encoded. The Octokit plugin does not do any magic to figure out which ones are safe to decode and send as text, so all Files are assumed to be binary -- and binary files are expensive to send.

I think this could be avoided by using a tool like isbinaryfile, or the lighter-weight isbinarypath (two of the first I found while searching -- there may be better options), in Backstage to determine which files actually need to be base64-encoded and which can be sent to the plugin as utf-8. That saves an API call for every non-binary, non-executable, non-symlink file (i.e., most of them).
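A minimal sketch of that idea, assuming isbinaryfile's Buffer-based API and a hypothetical helper name (this is not the actual Backstage action code):

```ts
import { readFile } from "node:fs/promises";
import { isBinaryFile } from "isbinaryfile";

// Hypothetical helper: decide how a file should be handed to
// octokit-plugin-create-pull-request.
async function toFileChange(absolutePath: string) {
  const buffer = await readFile(absolutePath);
  if (await isBinaryFile(buffer, buffer.length)) {
    // Binary: must stay a base64 File, which costs one blob POST.
    return { content: buffer.toString("base64"), encoding: "base64" as const };
  }
  // Text: send as a plain string so the plugin can inline it in the tree.
  return buffer.toString("utf-8");
}
```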

The upstream plugin has since made the requests sequential instead of concurrent, but with throttling disabled that doesn't help here. There isn't anything else they can do, and they've closed the issue.

I can work on a PR to:

  1. Determine which files should be sent as binary automatically
    • I'd pick something like isbinaryfile for robustness, I think; it is closer to what git does under the hood. I could also do exactly what git does and look for a null byte, which would avoid adding a dependency, so that may be preferred.
    • It may be a good idea to have an override option that makes Backstage send matching files as binary or text. Ordinarily this would be handled by .gitattributes, so maybe support for that? It may be out of scope, though. I do not want to break anyone's workflow, which this change would do if they use .gitattributes to mark files as binary that git would otherwise detect as text.
    • This needs either a patch upstream to treat utf-8 100644 Files as strings instead of blobs, or a patch to Backstage to send strings instead of Files in that case. I asked whether we could do the former, and the latter seems like the fix to go with.
  2. Re-enable throttling, pending the above preventing long PR times.

Is this something you would want?

[0]: Which I assume must be utf-8 encoded; I tried sending a .gif base64-encoded as content and GitHub failed to recognize it as a binary file.

benjdlambert commented 1 year ago

@rmartine-ias sorry, I seem to have missed your last ping on this.

Great work digging into this and coming up with some suggestions as to what we want to do here, really awesome job.

We've actually had a bit of a discussion about these pull request actions, and we're leaning towards providing some more generic git actions in the future instead of provider-specific ones. The thinking is that we could reuse the generic actions to create branches/forks and push to these sources, and then have an action that creates a PR or Merge Request using the provider's API with the base and target branches.

With that said, that's not coming any time soon, and I think we want to explore it a little more before committing to it, so I'm happy to proceed with some of your suggestions in the short term to get this working!

I'd pick something like isbinaryfile for robustness

Let's do this, seems sane!

Ordinarily this would be handled by .gitattributes, so maybe support for that? This may be out of scope.

I wonder if we can make this behaviour opt-in or something for now, as you suggested, so that you can test it in anger once it gets released without breaking anyone's workflow, and then we can look at options if we want to mimic .gitattributes.

Hope this is OK? :pray:

rmartine-ias commented 1 year ago

@benjdlambert Thanks for the reply! And no worries, the workarounds we have are doing just fine.

Generic git actions seem like a great idea. Protocols > APIs, imo. Depending on how you end up doing it, that could have avoided this issue entirely. (For example, we have a Jenkins step that creates a pull request, and it shells out to the git binary for everything -- shallow cloning, making branches, committing, and pushing. Since it has the whole git tree, it's able to send just the diff. Then, like you said, it uses the GitHub API to make the PR.) Also, that way you'd get some support for Codeberg or Sourcehut or whatever people are using, without having to write additional code. A rough sketch of that flow is below.
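For what it's worth, here is that flow sketched out, assuming a local clone with edits already in place and a plain token in the environment (not any existing Backstage action):

```ts
import { execFileSync } from "node:child_process";
import { Octokit } from "@octokit/rest";

// Sketch of "git protocol for pushing, provider API only for the PR".
// Paths, branch names, and auth handling are illustrative placeholders.
async function openPrViaGit() {
  const cwd = "/tmp/workspace/myrepo"; // an existing clone with local edits
  const git = (...args: string[]) => execFileSync("git", args, { cwd });

  git("checkout", "-b", "some-branch-name");
  git("add", "-A"); // only the actual diff ends up in the commit
  git("commit", "-m", "Message");
  git("push", "origin", "some-branch-name");

  // The provider API is only needed for the final PR creation.
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  await octokit.rest.pulls.create({
    owner: "myorg",
    repo: "myrepo",
    title: "PR Title",
    head: "some-branch-name",
    base: "main",
  });
}
```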

Opt-in seems like a good idea! Is the action's schema the right place to put this knob? I don't know how stable you try to keep that.

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

rmartine-ias commented 1 year ago

Not stale, haven't been able to find the time yet.

pedronastasi commented 1 year ago

At Expedia Group, we discovered this issue because it was causing a memory leak... has the memory leak happened to any other team? It makes a huge number of calls: /api/v3/repos/<org>/<repo>/git/blobs 460 times, and /api/v3/repositories/89693/git/blobs 300 times.

benjdlambert commented 1 year ago

@pedronastasi I'm not sure that it's a memory leak; as mentioned earlier in this issue, a lot of requests are made when creating PRs against a GitHub repo if you have a lot of files in the working directory.

It might be worth having a look into the octokit plugin to see if something is going wrong there that leads to too many requests, but I feel the best way to solve a lot of these issues is for us to have actions that use the git protocol directly to push changes to a branch on the repo or a fork, and then use the provider-specific APIs to create Pull Requests or Merge Requests from a source branch and a target branch.

pedronastasi commented 1 year ago

I was able to reproduce the issue locally and measure it with clinic.js. This change in the OSS code is what is causing the memory leak. As we can see from the attached clinic.js profile, it makes 1500 requests through a Promise.all(). Even though all the requests are handled after 6 minutes, RSS remains high, causing the leak.

If I comment out that change, this is what happens (second clinic.js profile): RSS remains stable and it makes around 5 requests at a time.

However, with that option commented out the action takes forever, and after 15 minutes it hits the scaffolder timeout.

benjdlambert commented 1 year ago

I'm still not convinced this is a memory leak, though. The heap size is stable, and that's the amount of memory actually in use after GC and everything else that's going on. What you're seeing is the amount of memory allocated, and it's going to allocate more memory when it does everything in parallel without any throttling.

Reading the clinic docs here, this looks pretty normal, right? https://clinicjs.org/documentation/doctor/04-reading-a-profile/#memory-usage-mb

pedronastasi commented 1 year ago

Yes, the heap remains stable. However, at the moment the 1500 requests run in parallel, RSS increases up to ~1500 MB, and once all the requests are handled it is never released, which seems consistent with the concept of a memory leak, right?

benjdlambert commented 1 year ago

@pedronastasi what do you have set for --max-old-space-size? I'm wondering if it just hasn't been claimed back by GC yet. It's possible that it's down to use of Buffer or some other things. It could also be worth opening another issue to discuss this, as I feel like we're straying a little off topic here 😅

pedronastasi commented 1 year ago

Thanks very much @benjdlambert for your help. I'll try to fix this issue with my team. It doesn't sound like this is something the Backstage dependencies are causing.

luis-guts commented 1 year ago

Hi @rmartine-ias, I'm facing this issue in some PRs during the day. Do you have any workaround to suggest to avoid this? I figure it's better to ask the person who dug into all these details than to search by myself.

rmartine-ias commented 1 year ago

@luis-guts Our current workaround is to only send the files that have changed. Internally we refer to this as "the skeleton/flesh pattern", and it looks something like this:

template.yaml excerpt:

```yaml
- id: fetch-flux-platform-admin
  name: Pull latest flux-platform-admin
  action: fetch:plain
  input:
    url: https://github.com/${{ (parameters.repoUrl | parseRepoUrl).owner }}/flux-platform-admin.git
    targetPath: flesh/flux-platform-admin

- id: template-flux-platform-admin
  name: Fetch flux-platform-admin skeleton
  action: fetch:template
  input:
    url: skeleton/flux-platform-admin
    targetPath: './flux-platform-admin'
    values:
      repo: ${{ parameters.repoUrl | parseRepoUrl }}
      suffix: ${{ r/repo=flux-tenant-(.*?)&/g.exec(parameters.repoUrl)[1] }}
      owningTeam: ${{ steps['fetch-group'].output.entity.metadata.name }}

- id: update-flux-platform-admin-dev-kustomization
  name: Update flux-platform-admin dev kustomization.yaml
  action: roadiehq:utils:jsonata:yaml:transform
  input:
    path: flesh/flux-platform-admin/tenants/dev-cluster-path/kustomization.yaml
    expression: |-
      (
        $suffix := $match("${{ (parameters.repoUrl | parseRepoUrl).repo }}", /^flux-tenant-(.*)+$/).groups[0];
        $ ~> | $ | {
          "resources": [ resources, $join(["../base/", $suffix]) ],
          "patchesStrategicMerge": [ patchesStrategicMerge, $join([$suffix, "-patch.yaml"]) ]
        } |
      )

- id: write-dev-kustomization
  name: Write dev kustomization changes
  action: roadiehq:utils:fs:write
  input:
    path: flux-platform-admin/tenants/dev-cluster-path/kustomization.yaml
    content: ${{ steps["update-flux-platform-admin-dev-kustomization"].output.result }}

- id: pr-flux-platform-admin
  name: 'Onboarding PR: flux-platform-admin'
  action: publish:github:pull-request
  input:
    repoUrl: "github.com?repo=flux-platform-admin\
      &owner=${{ (parameters.repoUrl | parseRepoUrl).owner }}"
    description: ${{ parameters.repoDescription }}
    branchName: >-
      onboard-tenant-${{ (parameters.repoUrl | parseRepoUrl).repo }}
    gitCommitMessage: >-
      [${{ parameters.jiraTicket }}] Onboarding ${{ (parameters.repoUrl | parseRepoUrl).repo }}
    title: >-
      [${{ parameters.jiraTicket }}] Onboarding ${{ (parameters.repoUrl | parseRepoUrl).repo }}
    sourcePath: './flux-platform-admin/'
    targetPath: '.'
```

We have three directories: skeleton/ (the files we want to add or change), flesh/ (a full clone of the target repo, used only for reading), and the output directory the PR is run from (repo/, or flux-platform-admin/ in the excerpt above).

The naming scheme isn't the most clear, but I think you get the gist. Only repo/ is passed off to publish:github:pull-request. It contains only the changeset.

The real fix is me (or someone else) submitting a patch to Backstage, and maybe the plugin author accepting my patch there. I have some time to work on this today and will see what I can do.

luis-guts commented 1 year ago

@rmartine-ias Thanks for the reply, it fixed my problem! Good job compiling all this information, it helped me a lot.

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

rmartine-ias commented 1 year ago

Still an issue, still working on solution

vinzscam commented 1 year ago

assigned to you @rmartine-ias 🙏

sblausten commented 1 year ago

Thanks for this great issue @rmartine-ias - we found another workaround that seems to work well for some use cases:

The issue we work around is that the fetch:plain action pulls the whole repo down into the default workspace, so when the PR is opened all of those files are added. We instead use fs:rename as described here: https://roadie.io/docs/scaffolder/writing-templates/#pull-request-creation-failed-caused-by-httperror-you-have-exceeded-a-secondary-rate-limit---publishgithubpull-request so that the PR is opened with only the files that have actually changed.

On this theme, we considered another couple of potential solutions as well, but your fix sounds like the best option if that's what you are working on.

Thanks again!

github-actions[bot] commented 10 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

ZacHigi commented 10 months ago

Not stale

tudi2d commented 10 months ago

Hej @rmartine-ias, would you still be interested in picking this up? Otherwise I would unassign you & open it for someone else to tackle! :)

Alok650 commented 8 months ago

Hi, I'm facing the same issue when trying to use the confluence-to-markdown template, i.e. the publish:github:pull-request action fails with: Pull request creation failed; caused by HttpError: You have exceeded a secondary rate limit. I have tried using a new token just for this template, but that didn't solve it either.

Were we able to find a solution or temporary workaround (within Backstage) for this?

Xantier commented 8 months ago

@Alok650 , here is a workaround for that: https://roadie.io/docs/scaffolder/troubleshooting/#pull-request-creation-failed-caused-by-httperror-you-have-exceeded-a-secondary-rate-limit---publishgithubpull-request

github-actions[bot] commented 6 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions[bot] commented 4 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

Xantier commented 4 months ago

Still valid

billyatroadie commented 2 months ago

Sheesh... over a year later and we haven't been able to fix this yet? I wish I could help, but the changes are so far down in the bowels of Backstage...

benjdlambert commented 2 months ago

@billyatroadie yeah, there's a path forward, which I think is best highlighted in https://github.com/backstage/backstage/issues/22244, but I need to find some time to close out that RFC and start implementing things.

rmartine-ias commented 1 week ago

Sorry about that... been swamped with other work, and I'm unsure if I'll be able to get back to this soon. This should be 90% there: https://github.com/backstage/backstage/pull/19709 (now all that's left is for someone to pick up the other 90%...)