kubernetes-client / javascript

Proposal to swap deprecated request for node-fetch #754

Open OliverMKing opened 2 years ago

OliverMKing commented 2 years ago

Proposal to swap deprecated request for node-fetch

Request is fully deprecated, requiring us to switch libraries (see #414 for more information).

@brendandburns outlined the clear options forward. This change should ideally be a long-term solution, leaving us with the choice of either modifying the node-typescript open-api-generator to use a different request library or migrating to one of the other generator options.

Modifying the outdated node-typescript generator does not seem like the right path forward. It would still require us to adopt a new request library, which would likely change the API and be a breaking change anyway. We might as well migrate to a newer version of openapi-generator while we do that and gain the advantages of a more up-to-date version.

Choosing a new library

@d-cs posted a nice summary of the different choices. We narrowed it down to Axios or Fetch; the other libraries are neither as widely adopted nor as focused.

Fetch is simply the standard that JavaScript as a whole is moving to, which brings more long-term support and aligns better with our goal of migrating to a long-term solution. node-fetch has significantly more weekly downloads than axios, and its package size is also noticeably smaller.

Additionally, future improvements could likely allow us to also support the browser (#165), since Fetch is native to the browser.

Node-fetch

Fetch is a native browser API that is not implemented in Node.js by default. Node-fetch is the natural choice for the Node implementation because it is the most widely adopted.

The openapi-generator uses the browser implementation of Fetch, so we must substitute node-fetch manually.

Implementation steps

@davidgamero and I will work to implement the following changes.

Other repositories

Kubernetes-client repository

import fetch, { Headers, Request, Response } from 'node-fetch';

// inject node-fetch into the global scope for the generated (browser-style) fetch code
if (!globalThis.fetch) {
    // @ts-ignore
    globalThis.fetch = fetch;
    // @ts-ignore
    globalThis.Headers = Headers;
    // @ts-ignore
    globalThis.Request = Request;
    // @ts-ignore
    globalThis.Response = Response;
}
brendandburns commented 2 years ago

This approach is going to introduce breaking changes for consumers of this client library (there were 151k downloads last week alone).

We need to have a more careful plan for how we roll out this change.

I would like to support existing users for at least a few versions (perhaps 3?) while we make this transition.

Perhaps the right way to do this is to introduce a new major version to signify the new generator?

e.g.

0.17.x == old generator, Kubernetes 1.23 API
0.18.x == old generator, Kubernetes 1.24 API
0.19.x == old generator, Kubernetes 1.25 API
Support for the old generator stops after 1.25.

1.0.x == new generator, Kubernetes 1.23 API
1.1.x == new generator, Kubernetes 1.24 API
1.2.x == new generator, Kubernetes 1.25 API
Support for subsequent Kubernetes versions continues with the new generator.

This will require introducing a new branch for the 1.x series of releases; that's probably necessary anyway, since we don't want to make this change in a single PR.

Please note that this is going to be a big change that will touch nearly the entire library, I think

OliverMKing commented 2 years ago

That phasing plan makes sense to me. It's in line with npm semantic versioning.

It might make sense to deprecate the 0.x releases with a warning telling users that the old client API will no longer be supported after a certain version and encouraging them to upgrade. That should alert existing users to the changes.

I'm prepared to take on the responsibility associated with this upgrade.

dominykas commented 2 years ago

Is using both generators and fetch/request in parallel not an option during the transition? I haven't thought through all the implications or the workload of doing that, just thinking out loud, but it would be nice if you could do something like makeApiClient(CoreV1Api, { fetch: true }) for the next 3 releases or so, and only make the breaking change then?

davidgamero commented 2 years ago

I agree it would be nice if we could keep the user experience that cohesive, but unfortunately these changes seem too broad and breaking for that to be doable. Given the extent of the proposed changes, it could introduce significant overhead and complexity to have two major versions bundled in a single release.

I've taken a shot at incrementing OPENAPI_GENERATOR_COMMIT as proposed, and it appears that several response types have changed; one example is the removal of the .body property:

(await api.listNamespacedPod(namespace)).body -> (await api.listNamespacedPod({ namespace }))
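
For illustration, a minimal before/after sketch of that shape change (it reuses the listNamespacedPod example above and is a sketch of the direction, not the final surface):

// Old generator: positional arguments, result wrapped with a .body property
const oldPods = (await api.listNamespacedPod(namespace)).body;

// New generator: a single named-parameter object, resolving directly to the list
const newPods = await api.listNamespacedPod({ namespace });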

Another predictable area of major changes is the error interfaces, since we are changing the network library, and the proposal also includes incrementing TypeScript by a major version.

This may be a config option that I've inadvertently changed, but IMO it does come across cleaner.

This could be a concern if we had { fetch: true } as an option, since the option wouldn't just change functionality under the hood, but also several parts of the api interfaces.

I'll keep checking it out. There could be an easier way tweaking the generator options that I'm not familiar with.

brendandburns commented 2 years ago

One other option would be to package the old generated code and the new generated code into different directories and effectively have:

import api_old from @kubernetes/node-client

or

import api_new from @kubernetes/node-client

e.g. keep everything in one place from a release perspective, but introduce a fork in the imports that you can use.

davidgamero commented 2 years ago

That could definitely work if we introduce a named export for api_old. I'm not sure how we could do that without potentially breaking existing default import compatibility unless we nest the api_new inside it.

Another possibility would be to have

import api_old from @kubernetes/node-client

or

import api_new from @kubernetes/node-client/fetch

to retain existing import compatibility for import k8s from @kubernetes/node-client

  1. Current Top Level Exports

    // ✅  `import { CoreV1Api } from @kubernetes/node-client`
    // ✅ `import k8s from @kubernetes/node-client`
    @kubernetes/node-client
    ├── CoreV1Api
    ...
    └── KubeConfig
  2. Named Level Exports

    // ✅ `import { api_new } from @kubernetes/node-client`
    // ✅ `import { api_old } from @kubernetes/node-client`
    // ❌ `import k8s from @kubernetes/node-client` <- no longer works
    @kubernetes/node-client
    ├── api_old
    │    ├─ CoreV1Api
    │    ...
    │    └─ KubeConfig
    │
    └── api_new
         ├─ CoreV1Api
         ...
         └─ KubeConfig
  3. Messy Nested Export

    // ✅ `import { api_new } from @kubernetes/node-client`
    // ✅ `import { api_old } from @kubernetes/node-client`
    // ✅ `import k8s from @kubernetes/node-client` <- works still, but gross way to do it
    @kubernetes/node-client
    ├── CoreV1Api
    ...
    ├── KubeConfig
    └── api_new
         ├─ CoreV1Api
         ...
         └─ KubeConfig
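
For illustration, a minimal sketch of what option 3 could look like as the package's top-level index.ts, assuming the two generator outputs live in separate directories (the './api_old' and './api_new' paths are placeholders):

// Keep CoreV1Api, KubeConfig, ... at the top level so existing imports continue to work
export * from './api_old';
// Expose the new generator's output behind a named export for opt-in use
export * as api_new from './api_new';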
brendandburns commented 2 years ago

I think there are different kinds of breaking changes. I feel like changing an import statement is a much lower-cost breaking change than changing the entire API surface area. So I would be OK to require

import { api_new as api } from @kubernetes/node-client

or

import { api_old as api } from @kubernetes/node-client

And I think that is feasible...

OliverMKing commented 2 years ago

Putting both the new API and the old API in the same package / directory doesn't make sense to me.

The fact that it's a breaking change between them means that someone can't just swap out APIs when making the change.

// switching from this
import { api_new as api } from @kubernetes/node-client
// to this would still require code changes in lots of places where the api is used
import { api_old as api } from @kubernetes/node-client

Someone migrating to the new API would have to change every single node-client import statement in their repo, which would be time-consuming and tedious. From a user standpoint it's simpler to just upgrade the version and then fix any errors in the code.

By having both the old and new API in the same package, we also wouldn't be able to take advantage of an npm deprecation warning, which would help warn users of the upcoming drop of support for the old API.

Due to npm semantic versioning, users wouldn't be upgraded to the breaking API without doing it manually (or changing their package.json to allow major release updates).

From a user standpoint I don't think it's easier to change the import, and I believe one of our goals should be gently pushing consumers to migrate when they get the chance (or at least making them aware of upcoming drops in support). From our standpoint it's more difficult to work with a directory layout that has two APIs rather than just different branches. Storing the new and old API in the same package would make more sense to me if there were some reason a user would want to use both, but I can't think of one.

davidgamero commented 2 years ago

The major version suggestion seems to be the cleanest way.

Putting the changes in a new major version would make upgrading a clear decision without interfering with existing code.

That way we don't mix code from two generators in a single package.

ivanstanev commented 2 years ago

Completely supporting the major version upgrade. Our team doesn't mind spending some time adjusting code to the new API, as long as we can stop using deprecated or regularly vulnerable packages! 😄

brendandburns commented 2 years ago

Ok, sounds like the consensus is for a major version upgrade. I have created a release-0.x branch, which we will use for the releases of the old generated client for the next ~9 months.

We can start sending PRs to the master branch now to start making these changes.

brendandburns commented 2 years ago

We should also probably turn this discussion into a markdown document that describes our approach and is linked from the main README.md.

OliverMKing commented 2 years ago

Quick update: @davidgamero and I have been working on this for the last two weeks. We're having to make some changes to the openapi-generator to support the way we need to do auth. @davidgamero has been working on identifying the exact improvements necessary.

davidgamero commented 2 years ago

Update 2: Two PRs have been completed against OpenAPITools/openapi-generator (the typescript generator template specifically), and a third is in progress.

Complete (PR)

In Progress (Issue) [REQ][typescript] Enable passing custom SecurityAuthentication to API

With this last PR, the ability to pass certificates to the fetch client should be exposed in the typescript generator, clearing the way for full auth migration

davidgamero commented 2 years ago

Update 3: Custom Authentication is now complete in the typescript generator. Currently migrating the request implementations outside the generated code (in /src)

https://github.com/OpenAPITools/openapi-generator/pull/11400

loynoir commented 2 years ago

fetch

https://nodejs.org/en/blog/release/v17.5.0/

Adds experimental support to the fetch API. This adds a --experimental-fetch flag that installs the fetch, Request, Response and Headers globals.

node 17's fetch is using undici

https://github.com/nodejs/node/commit/76a229c4ff

OliverMKing commented 2 years ago

fetch

  • node >= 17.5.0 has --experimental-fetch, which uses undici
  • node >= 16 has undici support
  • node < 16: maybe fall back to node-fetch

https://nodejs.org/en/blog/release/v17.5.0/

Adds experimental support to the fetch API. This adds a --experimental-fetch flag that installs the fetch, Request, Response and Headers globals.

node 17's fetch is using undici

nodejs/node@76a229c4ff

Hopefully the TypeScript generator swaps over to this way.
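
A rough sketch of what a version-aware bootstrap could look like under that approach, only pulling in node-fetch when the runtime has no native fetch (the helper name ensureFetch and the dynamic import are assumptions for illustration, not the actual migration code):

async function ensureFetch(): Promise<void> {
    // Node >= 18, or Node 17.5+ with --experimental-fetch, already provides fetch via undici
    if (typeof globalThis.fetch !== 'undefined') {
        return;
    }
    // Older runtimes: fall back to node-fetch and install its globals
    const nodeFetch = await import('node-fetch');
    // @ts-ignore - node-fetch's types differ slightly from the DOM lib types
    globalThis.fetch = nodeFetch.default;
    // @ts-ignore
    globalThis.Headers = nodeFetch.Headers;
    // @ts-ignore
    globalThis.Request = nodeFetch.Request;
    // @ts-ignore
    globalThis.Response = nodeFetch.Response;
}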

haoxins commented 2 years ago

https://nodejs.org/en/blog/announcements/v18-release-announce/#fetch-experimental Node 18 is here~

davidgamero commented 2 years ago

This will be great! It should be simpler to switch from node-fetch to the included fetch in the future (since their syntax is very similar)

chadbr commented 2 years ago

Hello,

If I understand correctly - Here:

https://github.com/kubernetes-client/javascript/blob/b4ccd6e6588960edb02b89bc0a3b32e797d71554/FETCH_MIGRATION.md?plain=1#L62

The latest status is: Fix errors in /src folder (due to new generated api)

Is this something I can help with? Is there a branch with source I can help get compiling? I'm happy to try to help.

Thanks, Chad

davidgamero commented 2 years ago

Hi @chadbr, the current working branch is https://github.com/kubernetes-client/javascript/tree/release-1.x and the next step is migrating all the auth providers and their unit tests. It's mostly mapping headers and query params to use the new RequestContext; for an example, see PR https://github.com/kubernetes-client/javascript/pull/790/files

Any help migrating those would be very appreciated! I'm finishing up the log.ts migration atm, as it's one of the last places we still directly call the request API outside the generated code. As for auth providers, I've only done azure_auth so far, so the rest all need a migration PR.
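
For anyone picking up one of the remaining auth providers, here is a rough, illustrative sketch of the shape of such a migration (the class is made up; the applySecurityAuthentication/setHeaderParam pattern follows the RequestContext usage shown in the PR above, but verify against the actual interfaces):

import { RequestContext } from '@kubernetes/client-node';

// Illustrative only: what used to be `request` header/query options are now set
// directly on the generated RequestContext before the call is dispatched.
class ExampleTokenAuthProvider {
    constructor(private readonly token: string) {}

    public applySecurityAuthentication(context: RequestContext): void {
        context.setHeaderParam('Authorization', `Bearer ${this.token}`);
    }
}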

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

dpryden commented 1 year ago

What is the current status of this? It looks like the tests in the release-1.x branch are passing. Is the FETCH_MIGRATION.md document up to date? What work remains before this branch can be released?

davidgamero commented 1 year ago

The scope of the initial release was reduced to get us to a testable state faster, so the watch and a couple of other kubectl-style wrappers are not fully implemented; however, the generated API parts are all passing tests and are functional for a testable release at the moment.

Nokel81 commented 1 year ago

/remove-lifecycle stale

maddie-j commented 1 year ago

Heya, just wondering what tasks are left before request is gone for good? Our team really wants request out of our repo, with all its associated security risks, so I can probably bug my manager to give me a few days to help out here and get things moving.

brendandburns commented 1 year ago

@maddie-j see the comment here:

https://github.com/kubernetes-client/javascript/issues/754#issuecomment-1147618317

The branch is there and generally works, but it is incomplete. If you try it for your use cases and send PRs where it doesn't work, that will help push things along.

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

Nokel81 commented 1 year ago

/remove-lifecycle stale

dweitzman-codaio commented 1 year ago

There's a new CVE in request (https://github.com/advisories/GHSA-p8p7-x288-28g6), so folks who audit their node deps for security issues may notice today that @kubernetes/client-node is one of the libraries that still depends on request

brendandburns commented 1 year ago

For anyone who arrives here from that CVE, there is a 1.x release branch that you can try that uses fetch instead. It is incomplete though (help is welcome!)

mstruebing commented 1 year ago

@brendandburns is there a good overview of the tasks that need to be done for that branch, so I can help efficiently?

brendandburns commented 1 year ago

@mstruebing much of the core functionality should work; I think the main task would be to go through all of the example code, port it to the new APIs, and verify that it all works correctly.

Alternatively, just trying it in your application and seeing if it works correctly and then filing bugs (or fixing them in PRs :) would also be very helpful.

cc @davidgamero who has been mostly driving this effort.

davidgamero commented 1 year ago

@mstruebing I just put up PR #1014 with a more detailed update on the status of the 1.x release TODOs. I'm continuing work on auth; the two main open items are 'src/log.ts' and 'src/watch.ts', which need significant engineering to update their async types and implementations to fully remove 'request'.

@brendandburns is correct: the examples also need to be updated to fit the new method signatures, which shouldn't be too bad, with a couple of exceptions for examples relying on request manipulation directly.

If you’d like to take any of the items I can start making issues for them and we can manage the remaining work that way. I don’t remember if we have a dedicated release-1.x tag already but that could keep things tidy.

Fryuni commented 1 year ago

Since the 1.x branch is working for most operations, would it be possible to release a 1.0.0-alpha flagged as a pre-release on NPM? This way, those that depend only on what is already working may opt to update.

Depending on a Git branch doesn't work when the code is in TypeScript.

brendandburns commented 1 year ago

@Fryuni I have just pushed 1.0.0-rc1 out to NPM.

https://www.npmjs.com/package/@kubernetes/client-node/v/1.0.0-rc1

Please give it a try and report any issues.

Note that it will require rewriting your client code as the API has changed somewhat.

Nokel81 commented 1 year ago

I have given it a try for Lens (https://github.com/lensapp/lens/pull/7418), and it works great. Though we don't use any of the authorization flow from the JS library, so that might be why we encountered no issues.

davidgamero commented 1 year ago

I'm so glad to see that it's working for you!

rbnayax commented 1 year ago

> @Fryuni I have just pushed 1.0.0-rc1 out to NPM.
>
> https://www.npmjs.com/package/@kubernetes/client-node/v/1.0.0-rc1
>
> Please give it a try and report any issues.
>
> Note that it will require rewriting your client code as the API has changed somewhat.

I see that request is still listed as a dependency, but not being used, is it possible to remove it?

brendandburns commented 1 year ago

@rbnayax happy to take a PR to update package.json or we'll get to it eventually.

rbnayax commented 1 year ago

> @rbnayax happy to take a PR to update package.json or we'll get to it eventually.

Gladly - https://github.com/kubernetes-client/javascript/pull/1033

briandealwis commented 1 year ago

@brendandburns would it be possible to cut an 1.0.0-rc2 with #1033 to have a version without any references to request?

Saeger commented 1 year ago

Feedback, Tests, Considerations

Hi, first of all, thanks for all the hard work here. I have given the RC1 a try here at @digitalroute and it has mostly worked in my tests. I would like to provide some considerations from those tests and usage as a form of contribution to this important refactor.

About the breaking change

The breaking changes so far are simple to solve, TBH (in our case, all K8s communication is done by one internal module which wraps @kubernetes-client/javascript; this made our life easier, and I totally recommend this type of approach when using a microservice architecture). Back to what matters!

Considerations / Issues

Below are some considerations and issues; I would like to make sure I haven't done anything wrong on my side @brendandburns @davidgamero @OliverMKing:

  1. The patchNamespacedConfigMap is missing the header parameter. This means one can't use the application/strategic-merge-patch+json content type.
  2. The deleteNamespacedService now returns a V1Status instead of a V1Service. I guess this was actually wrong before and may have been addressed in a later version, since we were using 0.18.1, but I wanted to confirm this change is not related to the RC1 itself.
  3. The patchNamespacedService and patchNamespacedCustomObject have the same issue as item 1.
  4. Not really an issue, but I wonder whether methods like createNamespaceCustomObject will become Promise from now on? And should I assume that their interface will follow the new ones, returning only the body instead of both response & body? I think this needs to be mentioned in the migration documentation somehow.
  5. Still a bit related to item 4: I think methods like deleteNamespacedCustomObject should still return Promise, since this follows the pattern of the library; it doesn't matter that it's a Custom Object, the interface of the return message is still the same and shouldn't be changed to Promise.
  6. I understand that now, since we're moving away from the "request" library, the HttpError type is no longer valid. Will we have an alternative? Should this be covered in the migration documentation? My interest here is how to understand the HTTP errors thrown by the library, so I can detect the 404 when resources are not found (and treat those differently).

I am not sure how I should report these issues, so I decided to post them here.

Kudos

I would like to also give some kudos because of the named parameters! No more "this.namespace, undefined, undefined, undefined, undefined, propagationPolicy" 💯

brendandburns commented 1 year ago

@briandealwis sorry for the delay, an rc2 has been pushed to npm. Please give it a try.

felix-gohla commented 1 year ago

EDIT: The issue down below already exists at #1075. Sorry for bothering.

Not sure what I am doing wrong, but trying out both of the RCs I get "Error: None of the given media types are supported: " on various operations (for example, when trying to create a namespace).

This is the bit that calls the ObjectSerializer with an empty array as parameter.

https://github.com/kubernetes-client/javascript/blob/147d11e70fffe019fdf0f389bdf512e0b7da50fa/src/gen/apis/CoreV1Api.ts#L2778-L2779

And here's the implementation of the ObjectSerializer.

https://github.com/kubernetes-client/javascript/blob/147d11e70fffe019fdf0f389bdf512e0b7da50fa/src/gen/models/ObjectSerializer.ts#L1745-L1766

It seems like the generator produces code that always fails (or am I missing something here?).

For reference: I am using node v18.13.0.

pauliusg commented 12 months ago

As @Saeger has mentioned, with @kubernetes/client-node@1.0.0-rc2 the patch methods do not work, as there is no way to pass headers. I tried to use middleware, but maybe I'm doing that the wrong way. My TS example, which fails to patch a pod:

import * as k8s from '@kubernetes/client-node';
import {
    createConfiguration,
    Middleware,
    RequestContext,
    ResponseContext,
    ApiException,
} from '@kubernetes/client-node';

(async () => {
    const kc = new k8s.KubeConfig();
    kc.loadFromDefault();

    const k8sApi = kc.makeApiClient(k8s.CoreV1Api);

    const namespace = 'test-ns';
    const podName = 'test-pod';
    const middleWare: Middleware = {
        async pre(context: RequestContext) {
            context.setHeaderParam('Content-type', k8s.PatchUtils.PATCH_FORMAT_JSON_PATCH);
            return context;
        },
        async post(context: ResponseContext) {
            return context;
        },
    };
    const configuration = createConfiguration({ promiseMiddleware: [middleWare] });

    try {
        const podResponse = await k8sApi.readNamespacedPod({
            name: podName,
            namespace,
        });
        const annotationsObject = podResponse.metadata?.annotations || {};
        const costAnnotationName = 'testAnnotation';
        const costOldValue = annotationsObject[costAnnotationName];
        const newAnnotationValue = Date.now();
        const costNewValue = newAnnotationValue.toString();

        if (costOldValue !== costNewValue) {
            annotationsObject[costAnnotationName] = costNewValue;

            const patch = [
                {
                    op: 'replace',
                    path: '/metadata/annotations',
                    value: annotationsObject,
                },
            ];

            await k8sApi.patchNamespacedPod(
                {
                    name: podName,
                    namespace,
                    body: patch,
                },
                configuration
            );
        }
    } catch (e: any) {
        const logData = e instanceof ApiException ? e.body : e;
        console.log('Failed to update pod annotation.', logData);
    }
})();

I get this in console:

Failed to update pod annotation. TypeError [ERR_INVALID_URL]: Invalid URL                                                           
    at new NodeError (node:internal/errors:399:5)                                                                                   
    at new URL (node:internal/url:560:13)                                                                                           
    at new RequestContext (/tmp/node_modules/@kubernetes/client-node/dist/gen/http/http.js:44:20)                                   
    at ServerConfiguration.makeRequestContext (/tmp/node_modules/@kubernetes/client-node/dist/gen/servers.js:46:16)                 
    at CoreV1ApiRequestFactory.patchNamespacedPod (/tmp/node_modules/@kubernetes/client-node/dist/gen/apis/CoreV1Api.js:8004:51)    
    at ObservableCoreV1Api.patchNamespacedPod (/tmp/node_modules/@kubernetes/client-node/dist/gen/types/ObservableAPI.js:8977:59)   
    at ObjectCoreV1Api.patchNamespacedPod (/tmp/node_modules/@kubernetes/client-node/dist/gen/types/ObjectParamAPI.js:2598:25)      
    at /tmp/test.ts:51:26                                                                                                           
    at Generator.next (<anonymous>)                                                                                                 
    at fulfilled (/tmp/test.ts:28:58) {                                                                                             
  input: '/api/v1/namespaces/test-ns/pods/test-pod',                                    
  code: 'ERR_INVALID_URL'                                                                                                           
}                                                                                                                                   

When I do not pass configuration I get this:

Failed to update pod annotation. Error: None of the given media types are supported: application/json-patch+json, application/merge-patch+json, application/strategic-merge-patch+json, application/apply-patch+yaml
    at Function.getPreferredMediaType (/tmp/node_modules/@kubernetes/client-node/dist/gen/models/ObjectSerializer.js:1760:19)
    at CoreV1ApiRequestFactory.patchNamespacedPod (/tmp/node_modules/@kubernetes/client-node/dist/gen/apis/CoreV1Api.js:8023:65)
    at ObservableCoreV1Api.patchNamespacedPod (/tmp/node_modules/@kubernetes/client-node/dist/gen/types/ObservableAPI.js:8977:59)
    at ObjectCoreV1Api.patchNamespacedPod (/tmp/node_modules/@kubernetes/client-node/dist/gen/types/ObjectParamAPI.js:2598:25)
    at /tmp/test.ts:51:26
    at Generator.next (<anonymous>)
    at fulfilled (/tmp/test.ts:28:58)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)

@Saeger instead of HttpError you should use ApiException.
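
To illustrate that suggestion, a minimal sketch of 404 handling with ApiException (the code field is assumed from the generated ApiException, and the podExists helper is made up; verify against your client version):

import { ApiException, CoreV1Api } from '@kubernetes/client-node';

async function podExists(api: CoreV1Api, name: string, namespace: string): Promise<boolean> {
    try {
        await api.readNamespacedPod({ name, namespace });
        return true;
    } catch (e) {
        // Treat "not found" differently from other HTTP errors
        if (e instanceof ApiException && e.code === 404) {
            return false;
        }
        throw e;
    }
}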

joshperry commented 9 months ago

@pauliusg The problem with ERR_INVALID_URL is that passing a configuration to a method overrides the whole client configuration, and if you create a configuration without specifying a particular property, it uses a default. So the baseServer config property gets set to a default empty value, and the error gets thrown when the URL lib attempts to parse the host-less URL.

I worked around that like this:


const patchconf = client => createConfiguration({
  promiseMiddleware: [
    {
      pre: async (context) => {
        context.setHeaderParam('Content-type', k8s.PatchUtils.PATCH_FORMAT_JSON_PATCH);
        return context;
      },
      post: async (context) => {
        return context;
      },
    },
  ],
  baseServer: client.api.configuration.baseServer,
});

The problem is that once this error is resolved, the media type error returns. It turns out that problem is caused by a bug in the generated code that can't be worked around.

In the action method it attempts to automate the request content-type selection:

        // Body Params
        const contentType = ObjectSerializer_1.ObjectSerializer.getPreferredMediaType([
            "application/json-patch+json",
            "application/merge-patch+json",
            "application/strategic-merge-patch+json",
            "application/apply-patch+yaml"
        ]);

That function looks like this:

const supportedMediaTypes = {
    "application/json": Infinity,
    "application/octet-stream": 0,
    "application/x-www-form-urlencoded": 0
};

...

    static getPreferredMediaType(mediaTypes) {
        /** According to OAS 3 we should default to json */
        if (mediaTypes.length === 0) {
            return "application/json";
        }
        const normalMediaTypes = mediaTypes.map(this.normalizeMediaType);
        let selectedMediaType = undefined;
        let selectedRank = -Infinity;
        for (const mediaType of normalMediaTypes) {
            if (supportedMediaTypes[mediaType] > selectedRank) {
                selectedMediaType = mediaType;
                selectedRank = supportedMediaTypes[mediaType];
            }
        }
        if (selectedMediaType === undefined) {
            throw new Error("None of the given media types are supported: " + mediaTypes.join(", "));
        }
        return selectedMediaType;
    }

I think it should be obvious why this is doomed to permanently fail: none of the patch media types appear in supportedMediaTypes, so selectedMediaType stays undefined and the function always throws.

I've spent half the day working on getting a local dev setup to iterate on a potential fix, but haven't yet succeeded. Until then, I've told my security people that the package update date is unfortunately indeterminate.
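
For reference, a minimal TypeScript sketch of one possible fix along the lines of that diagnosis (simplified: media-type normalization is omitted, and this is not necessarily the change that was merged upstream): rank declared-but-unknown media types at 0 instead of leaving them unselectable, so the first declared patch content type is picked rather than throwing.

const supportedMediaTypes: { [mediaType: string]: number } = {
    'application/json': Infinity,
    'application/octet-stream': 0,
    'application/x-www-form-urlencoded': 0,
};

function getPreferredMediaType(mediaTypes: string[]): string {
    // According to OAS 3 we should default to json when nothing is declared
    if (mediaTypes.length === 0) {
        return 'application/json';
    }
    let selectedMediaType = mediaTypes[0];
    let selectedRank = -Infinity;
    for (const mediaType of mediaTypes) {
        // Unknown media types fall back to rank 0 instead of `undefined > rank` (always false)
        const rank = supportedMediaTypes[mediaType] ?? 0;
        if (rank > selectedRank) {
            selectedMediaType = mediaType;
            selectedRank = rank;
        }
    }
    return selectedMediaType;
}

// e.g. getPreferredMediaType(['application/json-patch+json', 'application/merge-patch+json'])
// now returns 'application/json-patch+json' instead of throwing.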

brendandburns commented 9 months ago

@joshperry if the bug is indeed in the generated code, the code generator is here:

https://github.com/OpenAPITools/openapi-generator/tree/master/modules/openapi-generator/src/main/resources/typescript-fetch

They're usually pretty good about merging fixes, and then we can regenerate.

davidgamero commented 9 months ago

Which media type are you missing? It's possible we just need to bump the gen hash to include https://github.com/OpenAPITools/openapi-generator/pull/16386

joshperry commented 9 months ago

@davidgamero The problem is that getPreferredMediaType throws if it's called with a list that does not contain any of the entries in supportedMediaTypes; for the patch* methods, that means it always throws.

@brendandburns Yeah, I got the three projects cloned, just struggling to weave them together so I can generate locally. Looks like the gen project actually can't be pointed at a fork with ENV vars using the tooling as it is. Perhaps there are some docs on setting up local dev workflow that I haven't found.

ETA: I see, I can set GEN_ROOT to my local clone path.