docker-archive / infra-container_exporter

Prometheus exporter exposing container metrics

process_open_fds increases over time #15

Open bogue1979 opened 8 years ago

bogue1979 commented 8 years ago

I have observed that the container_exporter crashed because of too many open file handles. When you look at the metric "process_open_fds" you can see that it increases over time.

Can you help to find out what is wrong?

I have tested container_exporter within docker and outside of docker with the same result.
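
For reference, the growth is easy to watch from outside the exporter. Assuming the default port 9104 (the same one mapped in the docker run command further down), something like

watch -n 60 'curl -s localhost:9104/metrics | grep ^process_open_fds'

shows the gauge creeping upward between scrapes.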

nitram509 commented 8 years ago

+1 from my side. I also see a continuous increase of used file handles, roughly 200 per hour. Any idea why this happens?

xbglowx commented 8 years ago

Hi all, I can try to look into this problem, but I am very new at golang. The original creator/maintainer (@discordianfish) no longer works at docker.

What are your setups, i.e. Docker version, OS, and the commit SHA used to build container_exporter?

bogue1979 commented 8 years ago

Here is my local environment:

docker info
Containers: 2
Images: 61
Server Version: 1.9.1
Storage Driver: devicemapper
 Pool Name: docker-8:4-2494456-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: 
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 1.733 GB
 Data Space Total: 107.4 GB
 Data Space Available: 53.63 GB
 Metadata Space Used: 3.445 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.144 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /home/docker/devicemapper/devicemapper/data
 Metadata loop file: /home/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.110 (2015-10-30)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.2.5-1-ARCH
Operating System: Arch Linux (containerized)
CPUs: 4
Total Memory: 7.532 GiB

In any case, you can see this problem on CentOS 7.1 with Docker 1.8.2 too.

To build container_exporter I use the following:

docker run --rm --privileged -v /usr/local/bin:/usr/src/myapp -w /usr/src/myapp golang:latest /bin/bash -c "go get github.com/docker-infra/container_exporter && cd /go/src/github.com/docker-infra/container_exporter/ && go build -v -o /usr/src/myapp/container_exporter"

The problem occurs with the official image too.

docker run -p 9104:9104 -v /sys/fs/cgroup:/cgroup \
       -v /var/run/docker.sock:/var/run/docker.sock prom/container-exporter

discordianfish commented 8 years ago

I'm not using container-exporter at the moment, so I can't test this myself, but lsof should show which FDs are open. That should help track down where in the code an FD isn't being closed.
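
For example, assuming the exporter runs directly on the host and the binary is named container_exporter (adjust the PID lookup if it runs inside a container):

lsof -p $(pidof container_exporter)
ls /proc/$(pidof container_exporter)/fd | wc -l

The first command lists every open descriptor, the second just counts them, which makes it easy to correlate with process_open_fds.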

bogue1979 commented 8 years ago

There are a lot of sockets open, but lsof does not show which sockets they are. I suspect the connections to the Docker socket are not being closed.

Since you are not using container-exporter, what is your tool of choice for exposing Docker metrics to Prometheus?
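
To illustrate the suspicion, here is a minimal Go sketch (not the exporter's actual code) of a collector that polls /var/run/docker.sock via plain net/http; if the response body is never drained and closed, every poll leaks one socket FD, which would show up exactly as a steadily climbing process_open_fds.

package main

import (
    "context"
    "fmt"
    "io"
    "net"
    "net/http"
    "time"
)

func main() {
    // HTTP client that dials the Docker unix socket instead of TCP.
    client := &http.Client{
        Transport: &http.Transport{
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return net.Dial("unix", "/var/run/docker.sock")
            },
        },
    }

    for {
        resp, err := client.Get("http://unix/containers/json")
        if err != nil {
            fmt.Println("request failed:", err)
            time.Sleep(10 * time.Second)
            continue
        }
        // Without the next two lines, every poll leaves a socket FD behind.
        io.Copy(io.Discard, resp.Body) // drain so the keep-alive connection can be reused
        resp.Body.Close()

        time.Sleep(10 * time.Second)
    }
}

The usual fix is exactly those last two lines of the loop body: drain and close resp.Body after every request (or defer the Close right after the error check).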

bogue1979 commented 8 years ago

I found a solution. Unfortunately my skills are not sufficient to write tests :(

xbglowx commented 8 years ago

@bogue1979 Have you looked at cAdvisor yet? https://github.com/google/cadvisor works with Prometheus as well. My plan is to deprecate this repo in favor of cAdvisor this week.
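
For anyone switching over, a typical way to run it was something along these lines (based on cAdvisor's README of the era; exact flags may differ by version):

docker run -d --name=cadvisor -p 8080:8080 \
  -v /:/rootfs:ro -v /var/run:/var/run:ro \
  -v /sys:/sys:ro -v /var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor:latest

Prometheus can then scrape it on port 8080 at /metrics.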

ghost commented 8 years ago

I'm noticing a constant increase in memory usage, probably due to the open FDs. Is there a solution yet?

xbglowx commented 8 years ago

@VogonogoV https://github.com/docker-infra/container_exporter/pull/16 has been opened to fix this, but I haven't had time to test it before merging. I would also like to deprecate this repo in favor of cAdvisor, but I first need to make sure it does at least everything container-exporter does.