Closed miked98 closed 1 year ago
Can you provide more information? How do you install and run the exporter? What are the configurations? Please also provide some logs that you've been seeing after v1.13+.
We're running it via Kubernetes, using these values:

```yaml
artifactory:
  url: https://my-artifactory.com/artifactory
  existingSecret: false
  accessToken: <my-artifactory.com/token>
options:
  logLevel: info
  logFormat: logfmt
  telemetryPath: /metrics
  verifySSL: false
  timeout: 5s
  optionalMetrics:
    - replication_status
    - federation_status
```
We added this once and never made any changes. In our nightly jobs it gets updated, and then it started printing a lot of debug logs as well. Unfortunately we have deleted the logs, so I can't send examples.
We're running it through Helm, configured in kustomize:

```yaml
helmCharts:
  ...
  - name: prometheus-artifactory-exporter
    repo: https://peimanja.github.io/helm-charts
    version: 0.6.0
    namespace: artifactory
    releaseName: artsvc
    includeCRDs: false
    valuesInline:
      replicaCount: 2
      rbac:
        create: false
        pspEnabled: false
        pspUseAppArmor: false
      artifactory:
        url: http://myartifactory
        existingSecret: prometheus-artifactory-exporter
      options:
        logLevel: info
        logFormat: json
        verifySSL: false
        timeout: 10s
        optionalMetrics:
          - artifacts
          - replication_status
          - federation_status
      resources: {}
      serviceMonitor:
        enabled: false
        namespace: artifactory-
        interval: 60s
        timeout: 30s
```
The resulting deployment is the following:
```console
❯ k get deployment artsvc-prometheus-artifactory-exporter -o yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2023-05-16T21:36:16Z"
  generation: 1
  labels:
    app: prometheus-artifactory-exporter
    argocd.argoproj.io/instance: artifactory
    chart: prometheus-artifactory-exporter-0.6.0
    heritage: Helm
    release: artsvc
  name: artsvc-prometheus-artifactory-exporter
  namespace: artifactory
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: prometheus-artifactory-exporter
      release: artsvc
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: prometheus-artifactory-exporter
        release: artsvc
    spec:
      containers:
      - args:
        - --log.level=info
        - --log.format=json
        - --optional-metric=artifacts
        - --optional-metric=replication_status
        - --optional-metric=federation_status
        env:
        - name: WEB_LISTEN_ADDR
          value: :9531
        - name: WEB_TELEMETRY_PATH
          value: /metrics
        - name: ARTI_SCRAPE_URI
          value: http://myartifactory
        - name: ARTI_SSL_VERIFY
          value: "false"
        - name: ARTI_TIMEOUT
          value: 10s
        envFrom:
        - secretRef:
            name: prometheus-artifactory-exporter
        image: peimanja/artifactory_exporter:v1.13.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: http
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        name: prometheus-artifactory-exporter
        ports:
        - containerPort: 9531
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: http
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: artsvc-prometheus-artifactory-exporter
      serviceAccountName: artsvc-prometheus-artifactory-exporter
      terminationGracePeriodSeconds: 30
```
As you can see, the `--log.level=info` flag is passed correctly. However, when fetching the logs I see mostly logs on debug level, e.g.:

```console
❯ k logs artsvc-prometheus-artifactory-exporter-85744f9b59-r8qqx
{"caller":"log.go:124","level":"debug","msg":"Removing other characters to extract number from string","ts":"2023-06-05T09:55:48.391Z"}
{"caller":"log.go:124","level":"debug","msg":"Successfully converted string to number","number":0,"string":"0 bytes","ts":"2023-06-05T09:55:48.391Z"}
{"caller":"log.go:124","level":"debug","msg":"Successfully converted string to bytes","string":"0 bytes","ts":"2023-06-05T09:55:48.391Z","value":0}
{"caller":"log.go:124","level":"debug","msg":"Removing other characters to extract number from string","ts":"2023-06-05T09:55:48.391Z"}
{"caller":"log.go:124","level":"debug","msg":"Successfully converted string to number","number":0,"string":"0%","ts":"2023-06-05T09:55:48.391Z"}
{"caller":"log.go:124","level":"debug","metric":"repoFiles","msg":"Registering metric","package_type":"go","repo":"proxy-golang-go-virtual","ts":"2023-06-05T09:55:48.391Z","type":"virtual","value":0}
{"caller":"log.go:124","level":"debug","metric":"repoUsed","msg":"Registering metric","package_type":"go","repo":"proxy-golang-go-virtual","ts":"2023-06-05T09:55:48.391Z","type":"virtual","value":0}
{"caller":"log.go:124","level":"debug","metric":"repoFolders","msg":"Registering metric","package_type":"go","repo":"proxy-golang-go-virtual","ts":"2023-06-05T09:55:48.391Z","type":"virtual","value":0}
{"caller":"log.go:124","level":"debug","metric":"repoItems","msg":"Registering metric","package_type":"go","repo":"proxy-golang-go-virtual","ts":"2023-06-05T09:55:48.391Z","type":"virtual","value":0}
{"caller":"log.go:124","level":"debug","metric":"repoPercentage","msg":"Registering metric","package_type":"go","repo":"proxy-golang-go-virtual","ts":"2023-06-05T09:55:48.391Z","type":"virtual","value":0}
```
@miked98 can you try v1.13.2, or Helm chart prometheus-artifactory-exporter-0.6.1?
After updating to prometheus-artifactory-exporter-0.6.1 the issue is resolved for us. Thanks for the quick fix!
Hello, we're facing some issues with the logs since v1.13.0. Even when you set `log.level` to `info`, it prints a high amount of debug logs anyway.