Closed edisonwang closed 6 years ago
Hi @edisonwang fixed in 0c1c8b20cdd995b9e94c1f68fd0b9a1376997ea3
@camilb Thanks! Just tried it: Grafana works, but Prometheus still fails with the same error.
error: error validating "manifests/prometheus/prometheus-k8s.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
Check for uncommitted changes
OK! No uncommitted changes detected
Enter desired namespace to deploy prometheus [monitoring]:
Creating monitoring namespace.
Error from server (AlreadyExists): namespaces "monitoring" already exists
1) AWS
2) GCP
3) Azure
4) Custom
Please select your cloud provider:4
Deploying on custom providers without persistence
Setting components version
Enter Prometheus Operator version [v0.23.1]:
Enter Prometheus version [v2.3.2]:
Enter Prometheus storage retention period in hours [168h]:
Enter Prometheus storage volume size [10Gi]:
Enter Prometheus memory request in Gi or Mi [1Gi]:
Enter Grafana version [5.2.2]:
Enter Alert Manager version [v0.15.1]:
Enter Node Exporter version [v0.16.0]:
Enter Kube State Metrics version [v1.3.1]:
Enter Prometheus external Url [http://127.0.0.1:9090]:
Enter Alertmanager external Url [http://127.0.0.1:9093]:
Do you want to use NodeSelector to assign monitoring components on dedicated nodes?
Y/N [N]:
Do you want to set up an SMTP relay?
Y/N [N]:
Do you want to set up slack alerts?
Y/N [N]:
Removing all the sed generated files
Deploying Prometheus Operator
serviceaccount/prometheus-operator unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-operator configured
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator configured
service/prometheus-operator unchanged
deployment.apps/prometheus-operator configured
Waiting for Operator to register custom resource definitions...done!
Deploying Alertmanager
secret/alertmanager-main unchanged
service/alertmanager-main unchanged
alertmanager.monitoring.coreos.com/main unchanged
Deploying node-exporter
daemonset.extensions/node-exporter unchanged
service/node-exporter unchanged
Deploying Kube State Metrics exporter
serviceaccount/kube-state-metrics unchanged
clusterrole.rbac.authorization.k8s.io/kube-state-metrics configured
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics configured
role.rbac.authorization.k8s.io/kube-state-metrics-resizer unchanged
rolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged
deployment.apps/kube-state-metrics configured
service/kube-state-metrics unchanged
Deploying Grafana
configmap/grafana-dashboards unchanged
configmap/grafana-dashboard-k8s-cluster-rsrc-use unchanged
configmap/grafana-dashboard-k8s-node-rsrc-use unchanged
configmap/grafana-dashboard-k8s-resources-cluster unchanged
configmap/grafana-dashboard-k8s-resources-namespace unchanged
configmap/grafana-dashboard-k8s-resources-pod unchanged
configmap/grafana-dashboard-nodes unchanged
configmap/grafana-dashboard-pods unchanged
configmap/grafana-dashboard-statefulset unchanged
configmap/grafana-dashboard-deployments unchanged
configmap/grafana-dashboard-k8s-cluster-usage unchanged
configmap/grafana-datasources unchanged
deployment.apps/grafana created
serviceaccount/grafana unchanged
service/grafana unchanged
Grafana default credentials
user: admin, password: admin
Deploying Prometheus
serviceaccount/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-k8s configured
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s configured
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules unchanged
servicemonitor.monitoring.coreos.com/alertmanager unchanged
servicemonitor.monitoring.coreos.com/kube-dns unchanged
servicemonitor.monitoring.coreos.com/kube-state-metrics unchanged
servicemonitor.monitoring.coreos.com/kubelet unchanged
servicemonitor.monitoring.coreos.com/node-exporter unchanged
servicemonitor.monitoring.coreos.com/prometheus-operator unchanged
servicemonitor.monitoring.coreos.com/prometheus unchanged
service/prometheus-k8s unchanged
error: error validating "manifests/prometheus/prometheus-k8s.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
servicemonitor.monitoring.coreos.com/kube-apiserver unchanged
servicemonitor.monitoring.coreos.com/kube-controller-manager unchanged
servicemonitor.monitoring.coreos.com/kube-scheduler unchanged
Self hosted
service/kube-controller-manager-prometheus-discovery unchanged
service/kube-dns-prometheus-discovery unchanged
service/kube-scheduler-prometheus-discovery unchanged
servicemonitor.monitoring.coreos.com/kube-apiserver unchanged
servicemonitor.monitoring.coreos.com/kube-controller-manager unchanged
servicemonitor.monitoring.coreos.com/kube-scheduler unchanged
Removing local changes
Done
@edisonwang Tested on macOS and Ubuntu and it works fine. Can you please run this command in the repo directory: sed -i -e '1,8d;32,45d' manifests/prometheus/prometheus-k8s.yaml
then check whether the manifests/prometheus/prometheus-k8s.yaml
file looks like this?
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  labels:
    prometheus: k8s
spec:
  replicas: 2
  version: PROMETHEUS_VERSION
  externalUrl: PROMETHEUS_EXTERNAL_URL
  serviceAccountName: prometheus-k8s
  serviceMonitorSelector:
    matchExpressions:
    - {key: k8s-app, operator: Exists}
  ruleSelector:
    matchLabels:
      role: alert-rules
      prometheus: k8s
  nodeSelector:
    node_label_key: node_label_value
  resources:
    requests:
      memory: PROMETHEUS_MEMORY_REQUEST
  alerting:
    alertmanagers:
    - namespace: CUSTOM_NAMESPACE
      name: alertmanager-main
      port: web
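As a sanity check, the range-delete syntax used in that sed expression can be reproduced on a toy file (the file and its contents here are hypothetical, just to illustrate that `M,Nd` deletes lines M through N):

```shell
# Build a small five-line demo file
printf 'line1\nline2\nline3\nline4\nline5\n' > /tmp/sed-demo.txt

# Delete lines 1-2 and line 4, mirroring the '1,8d;32,45d' idea
sed -e '1,2d;4d' /tmp/sed-demo.txt
# -> line3
# -> line5
```

If the ranges are off by even one line, the manifest can lose its `apiVersion`/`kind` header, which is exactly what kubectl then complains about.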
Previously, the first 2 lines were removed by the script.
I see, it complains about the first 2 lines, apiVersion
and kind.
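A quick way to confirm the header survived the sed edit before applying the manifest (a hedged sketch; the demo file written here is hypothetical, standing in for manifests/prometheus/prometheus-k8s.yaml):

```shell
# Stand-in for the real manifest path from the thread
f=$(mktemp)
printf 'apiVersion: monitoring.coreos.com/v1\nkind: Prometheus\n' > "$f"

# kubectl validation fails when apiVersion or kind is missing,
# so grep for both top-level keys before applying
if grep -q '^apiVersion:' "$f" && grep -q '^kind:' "$f"; then
  echo "header ok"
else
  echo "apiVersion/kind missing - the sed ranges cut too much" >&2
fi
```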
My bad, it works... I had changed the YAML to customize the storage class, and that works for Grafana but not for Prometheus. Thanks a lot for your help. I'll close this.
Hi,
just tried to deploy to my kubeadm-based bare-metal cluster, got the following errors, and couldn't figure out what happened.
Any idea where I should look?