kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Problem when running minikube service facturador-prueba #18812

Closed. LeidyMuffin closed this issue 1 month ago.

LeidyMuffin commented 6 months ago

The commands needed to reproduce the issue: minikube service facturador-prueba
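
For context, the log below shows that facturador-prueba is a NodePort service created by kubectl expose on port 8080. A minimal sketch that reproduces that shape (the deployment and the image name are assumptions; the issue only shows the resulting service):

```shell
# Sketch under assumptions: the deployment name matches the service selector
# (app=facturador-prueba) and the image name is a placeholder.
kubectl create deployment facturador-prueba --image=facturador-prueba:latest
kubectl expose deployment facturador-prueba --type=NodePort --port=8080 --target-port=8080
minikube service facturador-prueba
```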

The full output of the command that failed:

```
facturador@facturatech1:~$ cat /tmp/minikube_service_bcc29b78159d429103db4e013e8c5480b1333295_0.log
Log file created at: 2024/05/06 12:40:37
Running on machine: facturatech1
Binary: Built with gc go1.22.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0506 12:40:37.700023 15191 out.go:291] Setting OutFile to fd 1 ...
I0506 12:40:37.700258 15191 out.go:338] TERM=xterm,COLORTERM=, which probably does not support color
I0506 12:40:37.700262 15191 out.go:304] Setting ErrFile to fd 2...
I0506 12:40:37.700265 15191 out.go:338] TERM=xterm,COLORTERM=, which probably does not support color
I0506 12:40:37.700413 15191 root.go:338] Updating PATH: /home/facturador/.minikube/bin
W0506 12:40:37.700515 15191 root.go:314] Error reading config file at /home/facturador/.minikube/config/config.json: open /home/facturador/.minikube/config/config.json: no such file or directory
I0506 12:40:37.700640 15191 mustload.go:65] Loading cluster: minikube
I0506 12:40:37.700960 15191 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0506 12:40:37.701344 15191 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0506 12:40:37.716772 15191 host.go:66] Checking if "minikube" exists ...
I0506 12:40:37.717040 15191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0506 12:40:37.774263 15191 info.go:266] docker info: {ID:405b4e89-7b21-4639-a498-422163d497b2 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:false NGoroutines:51 SystemTime:2024-05-06 12:40:37.762445595 +0200 CEST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.15.0-25-generic OperatingSystem:Ubuntu 22.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:4116402176 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:facturatech1 Labels:[] ExperimentalBuild:false ServerVersion:26.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:}}
I0506 12:40:37.774400 15191 api_server.go:166] Checking apiserver status ...
I0506 12:40:37.774442 15191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0506 12:40:37.774490 15191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0506 12:40:37.801340 15191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/facturador/.minikube/machines/minikube/id_rsa Username:docker}
I0506 12:40:37.903450 15191 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1903/cgroup
W0506 12:40:37.912990 15191 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1903/cgroup: Process exited with status 1
stdout:

stderr:

I0506 12:40:37.913042 15191 ssh_runner.go:195] Run: ls
I0506 12:40:37.915758 15191 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0506 12:40:37.919552 15191 api_server.go:279] https://192.168.49.2:8443/healthz returned 200: ok
I0506 12:40:37.919568 15191 host.go:66] Checking if "minikube" exists ...
I0506 12:40:37.919847 15191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0506 12:40:37.969369 15191 service.go:214] Found service: &Service{ObjectMeta:{facturador-prueba default 4eb23c4d-cfdc-4806-a9c8-323de3f2ce58 1658 0 2024-05-06 12:39:34 +0200 CEST map[app:facturador-prueba] map[] [] [] [{kubectl-expose Update v1 2024-05-06 12:39:34 +0200 CEST FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:externalTrafficPolicy":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":8080,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:8080,TargetPort:{0 8080 },NodePort:32106,AppProtocol:nil,},},Selector:map[string]string{app: facturador-prueba,},ClusterIP:10.108.218.124,Type:NodePort,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.108.218.124],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I0506 12:40:37.973224 15191 host.go:66] Checking if "minikube" exists ...
I0506 12:40:37.973522 15191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0506 12:40:38.009386 15191 out.go:177] * Opening service default/facturador-prueba in default browser...
I0506 12:40:44.744034 15191 out.go:177]
W0506 12:40:44.748502 15191 out.go:239] X Exiting due to HOST_BROWSER: open url failed: [default facturador-prueba 8080 http://192.168.49.2:32106]: exit status 4
W0506 12:40:44.748534 15191 out.go:239] *
W0506 12:40:44.750042 15191 out.go:239]
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_service_bcc29b78159d429103db4e013e8c5480b1333295_0.log                 │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0506 12:40:44.753326 15191 out.go:177]
```

The output of the minikube logs command:

The operating system version you used: Ubuntu 22.04 LTS (attachment: logs_2.txt)
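
The exit in the log is HOST_BROWSER: minikube resolved the service URL (http://192.168.49.2:32106) but failed to open it in a browser, which typically happens on headless hosts with no graphical browser installed. Assuming that is the situation here, a sketch of a workaround is to ask minikube for the URL instead of having it launch a browser:

```shell
# Print the service URL without trying to open a browser (useful on headless hosts).
minikube service facturador-prueba --url

# Or hit the NodePort directly from the host, using the URL shown in the log.
curl http://192.168.49.2:32106/
```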

nnzv commented 6 months ago

Some context is missing here. As I understand it, Minikube only creates one service automatically when the cluster is created, and that is the "default" kubernetes service. If you have created a custom service, please provide more details using commands such as describe, etc. (see the commands sketched below the table).

```
% minikube service --all
|-----------|------------|-------------|--------------|
| NAMESPACE |    NAME    | TARGET PORT |     URL      |
|-----------|------------|-------------|--------------|
| default   | kubernetes |             | No node port |
|-----------|------------|-------------|--------------|
```
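
If a custom service does exist, standard kubectl commands along these lines would supply the missing detail (assuming it lives in the default namespace, as the log suggests):

```shell
# Inspect the custom service, its endpoints, and the pods it should select.
kubectl get svc facturador-prueba -o wide
kubectl describe svc facturador-prueba
kubectl get endpoints facturador-prueba
kubectl get pods -l app=facturador-prueba
```
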
k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle stale

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle rotten

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/close not-planned

k8s-ci-robot commented 1 month ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/minikube/issues/18812#issuecomment-2401387735):

>The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
>This bot triages issues according to the following rules:
>- After 90d of inactivity, `lifecycle/stale` is applied
>- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
>- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
>You can:
>- Reopen this issue with `/reopen`
>- Mark this issue as fresh with `/remove-lifecycle rotten`
>- Offer to help out with [Issue Triage][1]
>
>Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
>/close not-planned
>
>[1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.