yuxh opened 1 year ago
Ah, I get the same issue too: when using chartPath locally, it errors.
Got the same error, any workaround?
Happening to me, too. It was working just fine, then out of nowhere this has been stopping me in my tracks. A full removal and reinstall didn't fix it.
I hit this issue and it was due to changes in a local Helm chart being deployed by skaffold (using manifests.helm.releases in the skaffold spec).
I started undoing changes until the error went away; it seems to have been caused by me commenting out a cookie-cutter PVC Helm template (adding # to every line).
I found that if I remove the labels lines, I do not get the error. With them present, I get the same missing Resource metadata error. It appears some level of templating is still happening within these comments? Perhaps the YAML comments are not respected properly?
Reproduces error:
# kind: PersistentVolumeClaim
# apiVersion: v1
# metadata:
#   name: {{ include "test.fullname" . }}-test
#   labels:
#     {{- include "test.labels" . | nindent 4 }}
# spec:
#   accessModes:
#     {{- range .Values.storage.accessModes }}
#     - {{ . | quote }}
#     {{- end }}
#   resources:
#     requests:
#       storage: {{ .Values.storage.size | quote }}
#   {{- with .Values.storage.storageClassName }}
#   storageClassName: {{ . | quote }}
#   {{- end }}
Does not reproduce error / works as expected. (Label include lines removed)
# kind: PersistentVolumeClaim
# apiVersion: v1
# metadata:
#   name: {{ include "test.fullname" . }}-test
# spec:
#   accessModes:
#     {{- range .Values.storage.accessModes }}
#     - {{ . | quote }}
#     {{- end }}
#   resources:
#     requests:
#       storage: {{ .Values.storage.size | quote }}
#   {{- with .Values.storage.storageClassName }}
#   storageClassName: {{ . | quote }}
#   {{- end }}
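For what it's worth, the likely mechanism here: Helm renders the {{ ... }} actions before the output is ever parsed as YAML, so the # prefixes do not stop templating. The {{- include "test.labels" . | nindent 4 }} action emits the labels on fresh lines that are not prefixed with #, so the rendered output ends up containing uncommented fragments with no apiVersion/kind/metadata, which is what the "missing Resource metadata" error complains about; the other template actions still land on lines that begin with #, so they stay commented. If the goal is to disable the whole block, a Go template comment avoids this entirely (a minimal sketch, not from the original chart):

{{/*
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ include "test.fullname" . }}-test
  labels:
    {{- include "test.labels" . | nindent 4 }}
(rest of the PVC template)
*/}}

Everything between {{/* and */}} is dropped at render time, so nothing leaks into the rendered manifest.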
Hello, just bumping it up. It's 1 am and I just arrived here. I have the same error with the same root cause.
In my case the issue was that my manifests were using CRLF (Windows) line breaks. Switching to LF line breaks resolved the issue.
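A note for anyone hitting the CRLF variant on Windows: one way to keep manifests on LF regardless of checkout settings is a .gitattributes rule; the globs below are just an example, adjust them to wherever your manifests live:

*.yaml text eol=lf
*.yml  text eol=lf

After adding the rule, existing files still need to be re-normalized (for example with git add --renormalize .) or re-saved with LF line endings in the editor.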
I was facing a similar issue. I was trying to isolate each config separately and realised the issue might be happening on Windows with "ClusterRoleBinding" manifests (the ClusterRole roleRef). When I comment out that part of the YAML, everything gets loaded as usual. Could this be an issue?
@c3c @prestonyun @userbradley
have you found a solution?
I have tried many times and it is still the same; the problem is exactly the same as in the first post and I am able to reproduce it.
I received the same error. I was deploying PostgreSQL and used a ConfigMap to define the DB name and DB host, but when I removed that code and used the values directly, it worked like a charm! This was for a local setup.
Old Code:
# Removed the ConfigMap and the values that referenced it
apiVersion: v1
kind: ConfigMap
metadata:
  name: task-postgres-configmap
data:
  task-db-host: my_db_host_value
  task-db-name: my_db_name
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-postgres-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: task-postgres
  template:
    metadata:
      labels:
        app: task-postgres
    spec:
      containers:
        - name: task-postgres
          image: postgres:16-alpine
          ports:
            - containerPort: 5432
          env:
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: task-db-user-secret
                  key: DB_USER
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: task-db-password-secret
                  key: DB_PASSWORD
            - name: DB_HOST
              # Removed
              valueFrom:
                configMapKeyRef:
                  name: task-postgres-configmap
                  key: task-db-host
            - name: DB_NAME
              # Removed
              valueFrom:
                configMapKeyRef:
                  name: task-postgres-configmap
                  key: task-db-name
---
apiVersion: v1
kind: Service
metadata:
  name: task-postgres-srv
spec:
  selector:
    app: task-postgres
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
Updated Code:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-postgres-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: task-postgres
  template:
    metadata:
      labels:
        app: task-postgres
    spec:
      containers:
        - name: task-postgres
          image: postgres:16-alpine
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: task-db-user-secret
                  key: DB_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: task-db-password-secret
                  key: DB_PASSWORD
            - name: POSTGRES_HOST
              value: my_db_host_value
            - name: POSTGRES_DB
              value: my_db_name_value
---
apiVersion: v1
kind: Service
metadata:
  name: task-postgres-srv
spec:
  selector:
    app: task-postgres
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
Hi, does anyone have a solution for this? I'm currently using v4beta11; it worked properly some weeks ago, but now it suddenly returns the following message and exits:
Cleaning up...
 - No resources found
Pruning images...
missing Resource metadata
Here is my skaffold.yaml
apiVersion: skaffold/v4beta11
kind: Config
metadata:
  name: skaffold
manifests:
  rawYaml:
    - ./infra/k8s/auth-depl.yaml
build:
  local:
    push: false
  artifacts:
    - image: meccar/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: "src/**/*.js"
            dest: .
Here is the auth-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
  labels:
    app: auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: meccar/auth:latest
---
apiVersion: v1
kind: Service
metadata:
  name: auth-clusterip-srv
spec:
  selector:
    app: auth
  type: ClusterIP
  ports:
    - name: auth
      protocol: TCP
      port: 8001
      targetPort: 8001
I tried to reinstall skaffold using choco but the error still remains
My OS is Windows 11 Pro
Minikube version: v1.33.1
Skaffold version: v2.12.0
Kustomize version: v5.0.4-0.20230601165947-6ce0bf390ce3
Docker version: 26.1.4
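One way to narrow this kind of failure down (assuming a reasonably current skaffold release) is to ask skaffold for its parsed config and hydrated manifests without deploying anything:

skaffold diagnose
skaffold render

If skaffold render already fails with "missing Resource metadata", the problem is in one of the rawYaml files (comments, line endings, or document separators) rather than in the build or deploy steps.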
I received the same error, and resolved it by deleting the comments (#) in my service YAML!
I had this problem too; on my end it was because there was a Helm values file that skaffold tried to load as manifests.rawYaml, since I had defined it as
manifests:
  rawYaml:
    - *.yaml
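If you want to keep using a glob, scoping it to a directory that contains only Kubernetes manifests (and keeping Helm values files elsewhere) avoids this; the path below is just an example:

manifests:
  rawYaml:
    - ./k8s/*.yaml

Alternatively, listing the manifest files explicitly, as in the skaffold.yaml a few comments above, sidesteps the problem entirely.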
Expected behavior
skaffold dev running properly

Actual behavior
Exits with "missing Resource metadata"; no resources except images were created.

Information

Steps to reproduce the behavior
1. https://github.com/piomin/sample-spring-microservices-kubernetes.git
2. skaffold dev