kubernetes / client-go

Go client for Kubernetes.
Apache License 2.0

How to force delete a pod #1249

Closed blueseller closed 3 months ago

blueseller commented 1 year ago

Sometimes a job gets stuck in a terminating state indefinitely. I want to detect when a job has been in that state for a long time and then force delete it directly, but I found no such option in client-go.

code example:

    return clientset.BatchV1().Jobs(namespace).Delete(
        context.Background(),
        job.ObjectMeta.Name,
        metav1.DeleteOptions{
            GracePeriodSeconds: &gracePeriodSeconds,
        },
    )

ljluestc commented 9 months ago
package main

import (
    "context"
    "fmt"
    "os"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    // Load the in-cluster Kubernetes configuration (this requires running inside a cluster)
    config, err := rest.InClusterConfig()
    if err != nil {
        fmt.Println("Error loading in-cluster configuration:", err)
        os.Exit(1)
    }

    // Create a Kubernetes clientset
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        fmt.Println("Error creating clientset:", err)
        os.Exit(1)
    }

    // Specify the namespace and pod name you want to delete
    namespace := "your-namespace"
    podName := "your-pod-name"

    // Define a grace period (in seconds) for the pod deletion
    gracePeriodSeconds := int64(0) // 0 for immediate deletion

    // Create a context with a timeout for the deletion operation
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    // Delete the pod immediately (grace period of 0 seconds)
    err = clientset.CoreV1().Pods(namespace).Delete(ctx, podName, metav1.DeleteOptions{
        GracePeriodSeconds: &gracePeriodSeconds,
    })

    if err != nil {
        fmt.Printf("Error deleting pod %s in namespace %s: %v\n", podName, namespace, err)
        os.Exit(1)
    }

    fmt.Printf("Pod %s in namespace %s deleted successfully\n", podName, namespace)
}
k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 3 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/client-go/issues/1249#issuecomment-2026961121):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.