kubernetes-client / javascript

Javascript client
Apache License 2.0

1.0.0-rc4 Example for overriding the content-type header of patching operations does not work #1499


tugtugtug commented 10 months ago

Describe the bug The configuration passed into the patching APIs does not honor middleware overrides. The generated code never looks at the passed-in configuration's middlewares; it only uses the middlewares of its member configuration object, e.g.:

    patchNamespacedStatefulSetWithHttpInfo(name, namespace, body, pretty, dryRun, fieldManager, fieldValidation, force, _options) {
        const requestContextPromise = this.requestFactory.patchNamespacedStatefulSet(name, namespace, body, pretty, dryRun, fieldManager, fieldValidation, force, _options);
        // build promise chain
        let middlewarePreObservable = (0, rxjsStub_1.from)(requestContextPromise);
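        // NOTE: only this.configuration.middleware is consulted here; a middleware list on the passed-in _options is ignored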
        for (let middleware of this.configuration.middleware) {
            middlewarePreObservable = middlewarePreObservable.pipe((0, rxjsStub_2.mergeMap)((ctx) => middleware.pre(ctx)));
        }

Client Version 1.0.0-rc4

Server Version 1.25.1

To Reproduce Steps to reproduce the behavior: run the linked example code below, which patches a resource using a configuration that carries a Content-Type-overriding middleware; the override is never applied.

Expected behavior The middleware should be called to override the request.

Example Code: see https://github.com/kubernetes-client/javascript/blob/62e5ab1701cb5659656f1941ef11eb748e626c25/examples/patch-example.js


Additional context Related to https://github.com/kubernetes-client/javascript/issues/1398

Workaround

        return createConfiguration({
            baseServer: baseServerConfig,
            middleware: [mw],
            authMethods: {
                default: {
                    applySecurityAuthentication: async (req) => {
                        await mw.pre(req).toPromise();
                        await kc.applySecurityAuthentication(req);
                    }
                },
            },
        });
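
This works because the generated code does honor the `authMethods` of a configuration passed per call, even though it ignores its `middleware` list, so the middleware's `pre()` is piggy-backed on the `default` auth hook. A minimal sketch of how the resulting configuration might be used on a patch call, with positional arguments matching the rc4 signature above (`createWorkaroundConfiguration` is a hypothetical name for a helper wrapping the snippet above, and the patch body is illustrative):

    import { KubeConfig, AppsV1Api, type Configuration } from "@kubernetes/client-node";

    // hypothetical helper wrapping the createConfiguration(...) snippet above (it closes over mw, kc and baseServerConfig)
    declare function createWorkaroundConfiguration(): Configuration;

    const kc = new KubeConfig();
    kc.loadFromDefault();
    const appsApi = kc.makeApiClient(AppsV1Api);

    // illustrative partial object for a strategic-merge patch
    const patchBody = { spec: { replicas: 2 } } as any;

    // pass the workaround configuration as the trailing _options argument; its default
    // authMethod runs mw.pre(), so the Content-Type override is actually applied
    await appsApi.patchNamespacedStatefulSet(
        "my-statefulset", "default", patchBody,
        undefined, undefined, undefined, undefined, undefined,
        createWorkaroundConfiguration(),
    );
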
brendandburns commented 10 months ago

I think that you will need to fix this in the upstream generator: https://github.com/OpenAPITools/openapi-generator/tree/master/modules/openapi-generator/src/main/resources/typescript-fetch

Once the fix is merged there, we can regenerate the client.

tugtugtug commented 10 months ago

thanks @brendandburns, related to https://github.com/OpenAPITools/openapi-generator/issues/14549

tugtugtug commented 9 months ago

The above issue was created a year ago, and given that the repo's open issue count keeps increasing and is now at 4k, I don't have faith this will get addressed soon. I'm okay with my workaround for now; I'll keep this open so people can find the workaround here.

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 4 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-client/javascript/issues/1499#issuecomment-2171767796):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

impatient commented 2 months ago

Can confirm this is still happening. I had to use the workaround mentioned above on 1.0.0-rc6:

https://github.com/nullplatform/k8s-lease-lock/commit/6b189fd6fa8f001f96d61a52f835cce691edff3a#diff-d8eee3b9c50488b328e4b9805b642354ccbaefadf361cbd9bd763925ed61ebeaR50

I can confirm that it was indeed a pain to work with. Is there any interest in documenting the workaround? For how small the PR was, there was a not-insignificant amount of frustration. Is there a preferred way this could be handled, like passing in a patchType?

Anyhow, it was nice to get rid of the 4 or 5 undefineds on the patch call.
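
For reference, with the rc6-style single request-object signatures (as implied by the comment above), the same workaround is applied by passing the configuration as the optional second argument of the patch call. A sketch, assuming `workaroundConfig` is a configuration built as in the original workaround earlier in this issue and the patch body is illustrative:

    import { KubeConfig, AppsV1Api, type Configuration } from "@kubernetes/client-node";

    // a configuration built as in the workaround earlier in this issue (hypothetical name)
    declare const workaroundConfig: Configuration;

    const kc = new KubeConfig();
    kc.loadFromDefault();
    const appsApi = kc.makeApiClient(AppsV1Api);

    // object-parameter call: no positional undefineds; the workaround configuration goes in
    // as the second argument, whose default authMethod runs the header-overriding middleware
    await appsApi.patchNamespacedStatefulSet(
        { name: "my-statefulset", namespace: "default", body: { spec: { replicas: 2 } } as any },
        workaroundConfig,
    );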

brendandburns commented 2 months ago

/reopen
/lifecycle frozen

k8s-ci-robot commented 2 months ago

@brendandburns: Reopened this issue.

In response to [this](https://github.com/kubernetes-client/javascript/issues/1499#issuecomment-2288929334):

> /reopen
> /lifecycle frozen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

hax commented 1 month ago

Note that not only the middlewares but also the httpApi is never used, so the configuration param is generally broken.

In fact, I really feel the configuration param is not easy to use.

This is my current workaround. It changes the API signature, drops the configuration parameter, and adds { signal: AbortSignal } to support aborting requests, which solves #1613 (by using AbortSignal.timeout). It should also be possible to add headers the same way.

import fetch from "node-fetch"

import { type ApiType, type KubeConfig, type Configuration, createConfiguration, ServerConfiguration, ResponseContext, wrapHttpLibrary } from "@kubernetes/client-node"

export type AbortableV1Api<T> = {
    [K in keyof T]: T[K] extends (param: infer Param) => infer Return
        ? (param: Param, options?: { signal?: AbortSignal }) => Return : never
}

export function makeAbortableApiClient<T extends ApiType>(kubeConfig: KubeConfig, apiClientType: new (config: Configuration) => T) {

    const cluster = kubeConfig.getCurrentCluster();
    if (!cluster) {
         throw new Error('No active cluster!');
    }
    const baseServer = new ServerConfiguration(cluster.server, {})

    // custom http library: forwards an AbortSignal attached to the RequestContext on to node-fetch
    const httpApi = wrapHttpLibrary({
        async send(request) {
            console.log("send", request)
            const signal = (request as any).signal
            const response = await fetch(request.getUrl(), {
                method: request.getHttpMethod(),
                headers: request.getHeaders(),
                body: request.getBody(),
                agent: request.getAgent(),
                signal,
            })
            return new ResponseContext(
                response.status,
                Object.fromEntries(response.headers.entries()),
                {
                    text() {
                        return response.text()
                    },
                    async binary() {
                        return Buffer.from(await response.arrayBuffer())
                    },
                },
            )
        },
    })

    const config = createConfiguration({ httpApi, authMethods: { default: kubeConfig } })
    const api = new apiClientType(config)

    const methodCache = new WeakMap()
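    // proxy the API client so each method accepts an optional { signal } second argument and injects it per call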
    return new Proxy(api, {
        get(target, prop, receiver) {
            const orig = Reflect.get(target, prop, receiver)
            if (typeof orig != "function") return orig
            if (methodCache.has(orig)) return methodCache.get(orig)
            console.log("create method", orig.name)
            const method = async function (this: any, ...args: any[]) {
                args[1] = withSignal(args[1]?.signal)
                // console.log(orig.name, args.length, args[1])
                return Reflect.apply(orig, this, args)
            }
            Object.defineProperty(method, "name", { value: orig.name })
            methodCache.set(orig, method)
            return method
        }
    }) as AbortableV1Api<T>

    // build a per-call configuration whose baseServer attaches the AbortSignal to each RequestContext
    function withSignal(signal: AbortSignal | null | undefined) {
        return createConfiguration({
            baseServer: {
                makeRequestContext(endpoint, httpMethod) {
                    const req = baseServer.makeRequestContext(endpoint, httpMethod)
                    if (signal != null) {
                        (req as any).signal = signal
                    }
                    return req
                },
            },
        })
    }
}
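
A short usage sketch of the helper above, assuming `makeAbortableApiClient` is in scope and using the 1.0 request-object API (the namespace and timeout values are illustrative):

import { KubeConfig, CoreV1Api } from "@kubernetes/client-node"

const kc = new KubeConfig()
kc.loadFromDefault()

const core = makeAbortableApiClient(kc, CoreV1Api)

// abort the call if it has not completed within 5 seconds (the #1613 use case, via AbortSignal.timeout)
const pods = await core.listNamespacedPod({ namespace: "default" }, { signal: AbortSignal.timeout(5000) })
console.log(pods.items.length)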