Closed: tenzen-y closed this issue 8 months ago.
If you use GoLand (JetBrains), you can hit this error, and your local gopls and go-to-definition will stop working, since GoLand automatically runs the following command every time:
go list -modfile=${GOPATH}/src/sigs.k8s.io/kueue/go.mod -m -json -mod=mod all
I tried to work around the above error, but I couldn't find any way to resolve the issue completely.
However, I found that we can temporarily avoid the above error by disabling GoLand's Go Modules integration (Preferences > Languages & Frameworks > Go > Go Modules).
Also, you need to add the following replace directives to the go.mod:
replace (
k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.28.3
k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.28.3
k8s.io/kubectl => k8s.io/kubectl v0.28.3
k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.28.3
)
and then run go list -modfile=${GOPATH}/src/sigs.k8s.io/kueue/go.mod -m -json -mod=mod all
so that GoLand can gather the module indexes. After that, remove the replace directives added above.
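Putting the steps together, a minimal sketch of the whole workaround, assuming Kueue is checked out at ${GOPATH}/src/sigs.k8s.io/kueue and go.mod has no other local changes (so git can restore it):

# 1. Temporarily pin the unversioned k8s.io staging modules.
cat >> go.mod <<'EOF'
replace (
	k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.28.3
	k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.28.3
	k8s.io/kubectl => k8s.io/kubectl v0.28.3
	k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.28.3
)
EOF

# 2. Run the command GoLand issues, so it can build its module index.
go list -modfile=${GOPATH}/src/sigs.k8s.io/kueue/go.mod -m -json -mod=mod all

# 3. Drop the temporary replace directives again.
git checkout -- go.mod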
I'm on the fence about whether we should pin Kubernetes dependency versions in our go modules only for go list. @alculquicondor @trasc @kerthcet @mimowo WDYT?
...
replace (
	k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.28.3
	k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.28.3
	k8s.io/kubectl => k8s.io/kubectl v0.28.3
	k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.28.3
)
The dependency is coming from cluster-autoscaler, which imports k/k to reuse the scheduler logic.
I think the best solution is for cluster-autoscaler to put their APIs in a separate go module. cc @x13n @kisieland
I thought maybe we could put the provisioning-request controller in a separate go module, but that would lead to the same dependency tree.
Yeah, if CA APIs are imported in other components, it makes sense to extract them to a separate module with trimmed-down dependencies. We'd still keep CA & API modules versioning in sync, but it would address the dependency problem.
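For illustration, if such a trimmed-down module existed, Kueue could require just the API module instead of the whole cluster-autoscaler module; the module path and version below are assumptions for the sketch, not a published module:

require (
	// Hypothetical trimmed-down CA API module; today Kueue has to require
	// k8s.io/autoscaler/cluster-autoscaler itself, which drags in k/k.
	k8s.io/autoscaler/cluster-autoscaler/apis v0.1.0 // hypothetical path and version
)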
@tenzen-y is this something you could work on in k/autoscaling?
@alculquicondor Yes, I can.
it makes sense to extract them to a separate module with trimmed-down dependencies
@x13n That makes sense.
I'm facing the same problem in GoLand too. Hope this could be fixed soon.
@B1F030 I'm starting now: https://github.com/kubernetes/autoscaler/issues/6307 You can temporarily avoid this issue with this workaround: https://github.com/kubernetes-sigs/kueue/issues/1345#issuecomment-1818164133
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
What happened: I could not run
go list -modfile=./go.mod -m all
to list all dependencies; the command failed with errors.

What you expected to happen: The command succeeds.

How to reproduce it (as minimally and precisely as possible): Run
go list -modfile=./go.mod -m all
locally.

Anything else we need to know?: I guess the errors are caused by the cluster-autoscaler dependencies. Indeed, once I remove the cluster-autoscaler dependencies from our
go.mod
, the errors go away. Also, I found that we can avoid the errors by pinning the module versions as follows.
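Presumably the same replace directives quoted earlier in the thread:

replace (
	k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.28.3
	k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.28.3
	k8s.io/kubectl => k8s.io/kubectl v0.28.3
	k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.28.3
)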
Environment:
- Kubernetes version (use kubectl version):
- Kueue version (use git describe --tags --dirty --always): main branch
- OS (e.g.: cat /etc/os-release):
- Kernel (e.g. uname -a):