Open · tugtugtug opened this issue 10 months ago
I think that you will need to fix this in the upstream generator: https://github.com/OpenAPITools/openapi-generator/tree/master/modules/openapi-generator/src/main/resources/typescript-fetch
Once the fix is merged there, we can regenerate the client.
Thanks @brendandburns, related to https://github.com/OpenAPITools/openapi-generator/issues/14549
The above issue was created a year ago, and given that the repo's open issue count keeps climbing and is now at 4k, I don't have faith this will get addressed soon. I'm okay with my workaround for now; I'll keep this open so people can find the workaround here.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Can confirm this is still happening; had to use the workaround mentioned here on 1.0.0-rc6.
Can confirm that it was indeed a pain to work with. Is there any interest in documenting the workaround? Given how small the PR was, it caused a not-insignificant amount of frustration. Is there a preferred way this could be handled, like passing in a `patchType`?
Anyhow, it was nice to get rid of the 4 or 5 `undefined`s on the patch call, as sketched below.
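For context, a sketch of the old call shape being referred to; the positional argument names are from memory of the 0.x request-based client and may differ slightly by version:

```typescript
import * as k8s from "@kubernetes/client-node"

const kc = new k8s.KubeConfig()
kc.loadFromDefault()
const apps = kc.makeApiClient(k8s.AppsV1Api)

// 0.x client: every optional positional parameter before `options` had to be
// padded with `undefined` just to reach the headers that set the patch type.
await apps.patchNamespacedStatefulSet(
    "web",
    "default",
    [{ op: "replace", path: "/spec/replicas", value: 3 }],
    undefined, // pretty
    undefined, // dryRun
    undefined, // fieldManager
    undefined, // fieldValidation
    undefined, // force
    { headers: { "Content-Type": k8s.PatchUtils.PATCH_FORMAT_JSON_PATCH } },
)
```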
/reopen
/lifecycle frozen
@brendandburns: Reopened this issue.
Note, not only `middlewares` but also `httpApi` are never used, so the `configuration` param is generally broken. In fact, I really feel the `configuration` param is not easy to use.

This is my current workaround. It changes the API signature: it drops `configuration` and adds `{ signal?: AbortSignal }` to support aborting requests, which solves #1613 (by using `AbortSignal.timeout`). It should also be possible to add `headers`.
```typescript
import fetch from "node-fetch"
import {
    type ApiType,
    type KubeConfig,
    type Configuration,
    createConfiguration,
    ServerConfiguration,
    ResponseContext,
    wrapHttpLibrary,
} from "@kubernetes/client-node"

// Retype every API method to accept an optional { signal } second argument
// in place of the generated per-call Configuration.
export type AbortableV1Api<T> = {
    [K in keyof T]: T[K] extends (param: infer Param) => infer Return
        ? (param: Param, options?: { signal?: AbortSignal }) => Return
        : never
}

export function makeAbortableApiClient<T extends ApiType>(
    kubeConfig: KubeConfig,
    apiClientType: new (config: Configuration) => T,
) {
    const cluster = kubeConfig.getCurrentCluster()
    if (!cluster) {
        throw new Error("No active cluster!")
    }
    const baseServer = new ServerConfiguration(cluster.server, {})

    // Custom http library: forwards the AbortSignal that withSignal() stashes
    // on the request context down to node-fetch.
    const httpApi = wrapHttpLibrary({
        async send(request) {
            console.log("send", request) // debug logging
            const signal = (request as any).signal
            const response = await fetch(request.getUrl(), {
                method: request.getHttpMethod(),
                headers: request.getHeaders(),
                body: request.getBody(),
                agent: request.getAgent(),
                signal,
            })
            return new ResponseContext(
                response.status,
                Object.fromEntries(response.headers.entries()),
                {
                    text() {
                        return response.text()
                    },
                    async binary() {
                        return Buffer.from(await response.arrayBuffer())
                    },
                },
            )
        },
    })

    const config = createConfiguration({ httpApi, authMethods: { default: kubeConfig } })
    const api = new apiClientType(config)
    const methodCache = new WeakMap()

    // Proxy every method so its second argument becomes { signal? } instead of
    // a Configuration; the signal is smuggled through a per-call configuration.
    return new Proxy(api, {
        get(target, prop, receiver) {
            const orig = Reflect.get(target, prop, receiver)
            if (typeof orig != "function") return orig
            if (methodCache.has(orig)) return methodCache.get(orig)
            console.log("create method", orig.name) // debug logging
            const method = async function (this: any, ...args: any[]) {
                args[1] = withSignal(args[1]?.signal)
                // console.log(orig.name, args.length, args[1])
                return Reflect.apply(orig, this, args)
            }
            Object.defineProperty(method, "name", { value: orig.name })
            methodCache.set(orig, method)
            return method
        },
    }) as AbortableV1Api<T>

    // Builds a per-call Configuration whose request contexts carry the signal.
    function withSignal(signal: AbortSignal | null | undefined) {
        return createConfiguration({
            baseServer: {
                makeRequestContext(endpoint, httpMethod) {
                    const req = baseServer.makeRequestContext(endpoint, httpMethod)
                    if (signal != null) {
                        (req as any).signal = signal
                    }
                    return req
                },
            },
        })
    }
}
```
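A usage sketch of the helper above; the import path is hypothetical, and `AbortSignal.timeout` requires Node 17.3+:

```typescript
import { KubeConfig, CoreV1Api } from "@kubernetes/client-node"
import { makeAbortableApiClient } from "./abortable-client" // hypothetical module path for the code above

const kc = new KubeConfig()
kc.loadFromDefault()

const core = makeAbortableApiClient(kc, CoreV1Api)

// Abort the call if the API server does not respond within 5s (the #1613 use case).
const pods = await core.listNamespacedPod(
    { namespace: "default" },
    { signal: AbortSignal.timeout(5000) },
)
console.log(pods.items.length)
```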
Describe the bug
The configuration passed into the patching APIs does not honor the middleware overrides. The generated code does not even look at the passed-in configuration for the middlewares; it only looks at the middlewares of its member configuration object (sketched below).
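A paraphrased mock of the generated shape (illustrative only, not the actual generated source), showing why per-call middlewares are dropped:

```typescript
// Minimal mock of the generated API wrapper (not real client code).
type Middleware = { pre(ctx: unknown): Promise<unknown>; post(ctx: unknown): Promise<unknown> }
type Configuration = { middleware: Middleware[] }

class ObservableAppsV1Api {
    constructor(private configuration: Configuration, private requestFactory: any) {}

    patchNamespacedStatefulSet(param: unknown, _options?: Configuration) {
        const ctx = this.requestFactory.patchNamespacedStatefulSet(param, _options)
        // The loop reads `this.configuration.middleware` (fixed at construction),
        // so middlewares supplied via the per-call `_options` are silently ignored.
        for (const middleware of this.configuration.middleware) {
            middleware.pre(ctx)
        }
        // ... send the request, then run `post` middlewares the same way ...
    }
}
```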
Client Version: 1.0.0-rc4
Server Version: 1.25.1
To Reproduce
Steps to reproduce the behavior: call `patchNamespacedStatefulSet` with a configuration that overrides the middleware.
Expected behavior
The middleware should be called to override the request.
Example Code: see https://github.com/kubernetes-client/javascript/blob/62e5ab1701cb5659656f1941ef11eb748e626c25/examples/patch-example.js
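A hedged sketch of the repro, assuming the 1.0.0-rc object-style method signatures and a promise-style middleware `pre`/`post` shape (the generated client may instead expect its observable-wrapped middleware interface):

```typescript
import { KubeConfig, AppsV1Api, createConfiguration, RequestContext, ResponseContext } from "@kubernetes/client-node"

const kc = new KubeConfig()
kc.loadFromDefault()
const apps = kc.makeApiClient(AppsV1Api)

// Middleware meant to force the JSON Patch content type on the request.
// NOTE: promise-returning pre/post is an assumption; adjust to the client's
// actual Middleware interface if it differs.
const setPatchContentType = {
    async pre(ctx: RequestContext): Promise<RequestContext> {
        ctx.setHeaderParam("Content-Type", "application/json-patch+json")
        return ctx
    },
    async post(ctx: ResponseContext): Promise<ResponseContext> {
        return ctx
    },
}

// Bug being reported: the middleware in this per-call configuration is never
// invoked; only middlewares baked into the client at construction time run.
await apps.patchNamespacedStatefulSet(
    { name: "web", namespace: "default", body: [{ op: "replace", path: "/spec/replicas", value: 3 }] },
    createConfiguration({ middleware: [setPatchContentType as any] }),
)
```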
Environment (please complete the following information):
Additional context Related to https://github.com/kubernetes-client/javascript/issues/1398
Workaround