Closed: dongjiang1989 closed this issue 1 year ago.
I don't know whether this is a problem with "go.uber.org/automaxprocs/maxprocs". Please provide your environment information, such as the operating system version, the container runtime, the Kubernetes version, etc.
Environment:
Kubernetes: v1.21.5, OS: CentOS 8.2, Kernel: 4.19.0-240.23.21.el8_2, containerd: v1.5.7
Loggie DaemonSet YAML:
```yaml
---
# Source: loggie/templates/loggie-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: loggie
  namespace: monitoring
---
# Source: loggie/templates/loggie-configmap.yaml
apiVersion: v1
data:
  loggie.yml: |-
    loggie:
      defaults:
        sink:
          type: loki
          url: http://loki.monitoring:3100/loki/api/v1/push
      discovery:
        enabled: true
        kubernetes:
          containerRuntime: containerd
          dynamicContainerLog: true
          parseStdout: false
          rootFsCollectionEnabled: true
          typeNodeFields:
            clusterlogconfig: ${_k8s.clusterlogconfig}
            nodename: ${_k8s.node.name}
            os: ${_k8s.node.nodeInfo.osImage}
          typePodFields:
            containername: ${_k8s.pod.container.name}
            logconfig: ${_k8s.logconfig}
            namespace: ${_k8s.pod.namespace}
            nodename: ${_k8s.node.name}
            podname: ${_k8s.pod.name}
      http:
        enabled: true
        port: 9196
      monitor:
        listeners:
          filesource:
            period: 10s
          filewatcher:
            period: 5m
          pipeline:
            period: 10s
          queue:
            period: 10s
          reload:
            period: 10s
          sink:
            period: 10s
        logger:
          enabled: true
          period: 30s
      reload:
        enabled: true
        period: 10s
kind: ConfigMap
metadata:
  name: loggie-config-loggie
  namespace: monitoring
---
# Source: loggie/templates/loggie-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: loggie-role-loggie
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/log
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - get
  - watch
  - list
  - update
  - create
  - patch
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - loggie.io
  resources:
  - logconfigs
  - logconfigs/status
  - clusterlogconfigs
  - clusterlogconfigs/status
  - sinks
  - interceptors
  verbs:
  - get
  - list
  - watch
  - update
  - patch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
  - get
  - list
  - update
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  verbs:
  - get
  - list
---
# Source: loggie/templates/loggie-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: loggie-rolebinding-loggie
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: loggie-role-loggie
subjects:
- kind: ServiceAccount
  name: loggie
  namespace: monitoring
---
# Source: loggie/templates/loggie-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: loggie
    instance: loggie
  name: loggie
  namespace: monitoring
spec:
  ports:
  - name: monitor
    port: 9196
    targetPort: 9196
  selector:
    app: loggie
    instance: loggie
  type: ClusterIP
---
# Source: loggie/templates/loggie-agent-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: loggie
    instance: loggie
  name: loggie
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: loggie
      instance: loggie
  template:
    metadata:
      labels:
        app: loggie
        instance: loggie
    spec:
      containers:
      - args:
        - -meta.nodeName=$(HOST_NAME)
        - -config.system=/opt/loggie/loggie.yml
        - -config.pipeline=/opt/loggie/pipeline/*.yml
        - -log.jsonFormat=true
        env:
        - name: HOST_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: TZ
          value: Asia/Shanghai
        image: loggieio/loggie:v1.4.0
        name: loggie
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - mountPath: /var/log/pods
          name: podlogs
        - mountPath: /var/lib/docker/containers
          name: dockercontainers
        - mountPath: /var/lib/kubelet/pods
          name: kubelet
        - mountPath: /opt/loggie/loggie.yml
          name: loggie-config
          subPath: loggie.yml
        - mountPath: /opt/loggie/pipeline
          name: pipeline
        - mountPath: /data/
          name: registry
        - mountPath: /run/containerd/containerd.sock
          name: containerdsocket
      serviceAccountName: loggie
      nodeSelector: {}
      affinity: {}
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Exists
      volumes:
      - hostPath:
          path: /var/log/pods
          type: DirectoryOrCreate
        name: podlogs
      - hostPath:
          path: /var/lib/docker/containers
          type: DirectoryOrCreate
        name: dockercontainers
      - hostPath:
          path: /var/lib/kubelet/pods
          type: DirectoryOrCreate
        name: kubelet
      - configMap:
          defaultMode: 384
          name: loggie-config-loggie
        name: loggie-config
      - hostPath:
          path: /data/loggie-loggie
          type: DirectoryOrCreate
        name: registry
      - emptyDir: {}
        name: pipeline
      - hostPath:
          path: /run/containerd/containerd.sock
          type: ""
        name: containerdsocket
      hostPID: true
  updateStrategy:
    type: RollingUpdate
---
# Source: loggie/templates/loggie-servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: loggie-config-loggie
  namespace: monitoring
  labels:
    app: loggie
    instance: loggie
    release: metrics
spec:
  jobLabel: jobLabel
  endpoints:
  - port: monitor
    interval: 30s
    honorLabels: true
  selector:
    matchLabels:
      app: loggie
      instance: loggie
```
I suspect cgroup v2 is causing this problem, because older versions of automaxprocs do not support it. Refer to: https://github.com/uber-go/automaxprocs/releases/tag/v1.5.0
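To confirm this theory, one way to check which cgroup version a node uses is to inspect the filesystem type mounted at `/sys/fs/cgroup` (a common check on systemd-based distributions; run it on the affected node):

```shell
# "cgroup2fs" means cgroup v2 (unified hierarchy); "tmpfs" means cgroup v1.
# automaxprocs < v1.5.0 can only read the CPU quota from cgroup v1.
stat -fc %T /sys/fs/cgroup/
```

On a cgroup v1 node this prints `tmpfs`, so older automaxprocs still works there; only v2 nodes are affected.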
@dongjiang1989 Could you please change the content of the PR to upgrade the automaxprocs version, and try again?
Updated automaxprocs to v1.5.1, which fixes the bug.
Fixes #488
What version of Loggie?
v1.4.0
Expected Behavior
Actual Behavior
The Loggie DaemonSet is deployed to k8s.
Steps to Reproduce the Problem