Open hurelhuyag opened 7 years ago
Hi! Thanks for reporting!
Please post the complete contents of `/proc/meminfo`.
Sorry, that container was moved; here is another container's info. `free -h` shows 16GB used, but htop shows 23.7GB.
The difference is because htop counts shared memory (the 7.2GB that shows as "shared" in `free` and the `Shmem` line in `/proc/meminfo`) as used memory.
Are you sure your LXD clients are returning correct values? For instance, is your swap really 100% in use, and is cached memory only a few kbytes? htop is only returning what it sees in `/proc`. Note that in the first post, `free -m` is also saying that shared memory is 16GB, above the 13GB total.
This may be an LXD bug.
Same for me (lxd):
```
host$ free -m
              total        used        free      shared  buff/cache   available
Mem:            800         331         468         390           0         468
Swap:          5719           0        5719

lxd$ free -m
              total        used        free      shared  buff/cache   available
Mem:          16046        6545        1894         390        7606        5277
Swap:          5719           0        5719
```
| /proc/meminfo reported in the LXD container | /proc/meminfo reported on the host |
|---|---|
MemTotal: 819200 kB | MemTotal: 16431724 kB |
MemFree: 479744 kB | MemFree: 1937924 kB |
MemAvailable: 479744 kB | MemAvailable: 5401776 kB |
Buffers: 0 kB | Buffers: 356776 kB |
Cached: 56 kB | Cached: 3257936 kB |
SwapCached: 0 kB | SwapCached: 324 kB |
Active: 616 kB | Active: 4482948 kB |
Inactive: 0 kB | Inactive: 837364 kB |
Active(anon): 576 kB | Active(anon): 1840380 kB |
Inactive(anon): 0 kB | Inactive(anon): 265136 kB |
Active(file): 40 kB | Active(file): 2642568 kB |
Inactive(file): 0 kB | Inactive(file): 572228 kB |
Unevictable: 0 kB | Unevictable: 0 kB |
Mlocked: 0 kB | Mlocked: 0 kB |
SwapTotal: 5857276 kB | SwapTotal: 5857276 kB |
SwapFree: 5856616 kB | SwapFree: 5856616 kB |
Dirty: 4 kB | Dirty: 108 kB |
Writeback: 0 kB | Writeback: 0 kB |
AnonPages: 1700256 kB | AnonPages: 1705408 kB |
Mapped: 437756 kB | Mapped: 439772 kB |
Shmem: 399872 kB | Shmem: 399928 kB |
Slab: 0 kB | Slab: 4174288 kB |
SReclaimable: 0 kB | SReclaimable: 586372 kB |
SUnreclaim: 0 kB | SUnreclaim: 3587916 kB |
KernelStack: 18464 kB | KernelStack: 18544 kB |
PageTables: 42500 kB | PageTables: 43208 kB |
NFS_Unstable: 0 kB | NFS_Unstable: 0 kB |
Bounce: 0 kB | Bounce: 0 kB |
WritebackTmp: 0 kB | WritebackTmp: 0 kB |
CommitLimit: 14073136 kB | CommitLimit: 14073136 kB |
Committed_AS: 5754188 kB | Committed_AS: 5754584 kB |
VmallocTotal: 34359738367 kB | VmallocTotal: 34359738367 kB |
VmallocUsed: 0 kB | VmallocUsed: 0 kB |
VmallocChunk: 0 kB | VmallocChunk: 0 kB |
HardwareCorrupted: 0 kB | HardwareCorrupted: 0 kB |
AnonHugePages: 0 kB | AnonHugePages: 0 kB |
CmaTotal: 0 kB | CmaTotal: 0 kB |
CmaFree: 0 kB | CmaFree: 0 kB |
HugePages_Total: 0 | HugePages_Total: 0 |
HugePages_Free: 0 | HugePages_Free: 0 |
HugePages_Rsvd: 0 | HugePages_Rsvd: 0 |
HugePages_Surp: 0 | HugePages_Surp: 0 |
Hugepagesize: 2048 kB | Hugepagesize: 2048 kB |
DirectMap4k: 4576928 kB | DirectMap4k: 4576928 kB |
DirectMap2M: 12199936 kB | DirectMap2M: 12199936 kB |
`lxc info`:

```
...
driver: lxc
driverversion: 2.0.7
kernel: Linux
kernelarchitecture: x86_64
kernelversion: 4.4.0-67-generic
server: lxd
serverpid: 4213
serverversion: 2.0.9
storage: zfs
storageversion: "5"
```
Hi. I have the same issue with a ZFS backend and LXD running Ubuntu 16.04 containers. My guess is that it's because of how disk caching is handled by ZFS and its ARC cache. Regards
I can confirm that once the ZFS ARC primarycache is set to 'metadata' (`zfs set primarycache=metadata POOLNAME`), all LXC containers are stopped, a `zpool export POOLNAME` and `zpool import POOLNAME` are done, and all LXC containers are started again, the memory usage shown in htop is consistent with top/`free -m`.
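Spelled out as commands, the workaround above looks roughly like this (run on the host; `POOLNAME` is a placeholder for your pool, and note that `primarycache=metadata` disables ARC data caching, which can noticeably reduce read performance):

```shell
# Cache only metadata in the ARC for this pool (trade-off: slower reads).
zfs set primarycache=metadata POOLNAME

# Stop all containers backed by the pool.
lxc stop --all

# Re-import the pool so the setting takes full effect.
zpool export POOLNAME
zpool import POOLNAME

# Start the containers again.
lxc start --all
```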
The issue seems to be that MemFree and MemAvailable report the same value in a container. Not sure why.
@stgraber do you have an idea, why?
Not sure about the logic there, can you file a bug at https://github.com/lxc/lxcfs/issues so someone more knowledgeable about that part of the code can look into it?
Same in lxc (htop 2.0.2):
```
# free -m
             total       used       free     shared    buffers     cached
Mem:          8192        119       8072       4401          0         77
-/+ buffers/cache:         42       8149
Swap:            0          0          0
```
```
# htop -v
htop 2.0.2 - (C) 2004-2016 Hisham Muhammad
Released under the GNU GPL.

# apt-cache policy htop
htop:
  Installed: 2.0.2-1~bpo8+1
  Candidate: 2.0.2-1~bpo8+1
  Version table:
     2.0.2-1 0
        250 http://httpredir.debian.org/debian/ testing/main amd64 Packages
 *** 2.0.2-1~bpo8+1 0
        500 http://httpredir.debian.org/debian/ jessie-backports/main amd64 Packages
        100 /var/lib/dpkg/status
     1.0.3-1 0
        500 http://ftp.debian.org/debian/ jessie/main amd64 Packages
```
Kernel 4.4.67-1-pve #1 SMP PVE 4.4.67-92 (Fri, 23 Jun 2017 08:22:06 +0200) x86_64 GNU/Linux, using ZFS.
I'm seeing the same behaviour inside linux-vserver containers. However, it only happens with 2.0. Example:
```
# free -m
              total        used        free      shared  buff/cache   available
Mem:           2048         151        1896           0        4696        1896
Swap:          1024           0        1024

htop v1: Mem[||||                               31/2048MB]
htop v2: Mem[|||||||||||||||||||||||||||||||||||16.0Z/2.00G]
```
Indeed, MemAvailable is higher than MemTotal inside the container, but that doesn't stop `free` from working. Kernel is 3.18.82-vs2.3.7.5.
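One plausible explanation for the nonsensical `16.0Z` reading (an assumption about htop 2.x internals, not verified against its source): if htop derives "used" by subtracting the cached/free figures from the total using unsigned 64-bit kilobyte counters, then a container reporting more buff/cache than MemTotal makes the subtraction wrap around to nearly 2^64 kB, which is about 16 ZiB:

```python
# Demonstrate unsigned 64-bit wrap-around when "buff/cache" exceeds "total".
# Figures come from the free -m output above (converted to kB); the exact
# subtraction order is an assumption about how htop 2.x computes "used".
total_kb = 2048 * 1024    # MemTotal:   2048 MB
free_kb = 1896 * 1024     # MemFree:    1896 MB
cached_kb = 4696 * 1024   # buff/cache: 4696 MB -- exceeds the total!

# In Python this would just go negative; "% 2**64" mimics a uint64 wrap.
used_kb = (total_kb - free_kb - cached_kb) % 2**64

zib_kb = 2**60            # 1 ZiB expressed in kB (2**70 bytes / 2**10)
print(f"{used_kb / zib_kb:.1f}Z")  # prints "16.0Z"
```

The tiny real deficit (a few GB) is negligible next to 2^64 kB, so the wrapped value rounds to exactly 16.0 ZiB, matching the meter.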
Same issue here.
Proxmox v5.1 htop 2.0.2 host+container on Debian 9 x64
Container memory information:
```
# free -m
              total        used        free      shared  buff/cache   available
Mem:            512         153         320         435          37         320
Swap:           512           0         512

# cat /proc/meminfo
MemTotal:         524288 kB
MemFree:          328552 kB
MemAvailable:     328552 kB
Buffers:               0 kB
Cached:            38316 kB
SwapCached:            0 kB
Active:            24804 kB
Inactive:          24844 kB
Active(anon):      11352 kB
Inactive(anon):    17236 kB
Active(file):      13452 kB
Inactive(file):     7608 kB
Unevictable:           0 kB
Mlocked:           19900 kB
SwapTotal:        524288 kB
SwapFree:         524288 kB
Dirty:               192 kB
Writeback:             0 kB
AnonPages:       1591472 kB
Mapped:           374204 kB
Shmem:            446140 kB
Slab:                  0 kB
SReclaimable:          0 kB
SUnreclaim:            0 kB
KernelStack:       14368 kB
PageTables:        26900 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    41321984 kB
Committed_AS:    6452076 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:      4096 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:     1311468 kB
DirectMap2M:    34242560 kB
DirectMap1G:    33554432 kB
```
Host memory information:
```
# free -m
              total        used        free      shared  buff/cache   available
Mem:          64323       20713       25705         432       17903       42515
Swap:          8191           0        8191

# cat /proc/meminfo
MemTotal:       65866760 kB
MemFree:        26320316 kB
MemAvailable:   43533644 kB
Buffers:          548280 kB
Cached:         16713408 kB
SwapCached:            0 kB
Active:          2824340 kB
Inactive:       16001576 kB
Active(anon):    1655140 kB
Inactive(anon):   358884 kB
Active(file):    1169200 kB
Inactive(file): 15642692 kB
Unevictable:       19900 kB
Mlocked:           19900 kB
SwapTotal:       8388604 kB
SwapFree:        8388604 kB
Dirty:               204 kB
Writeback:             0 kB
AnonPages:       1584276 kB
Mapped:           371116 kB
Shmem:            443020 kB
Slab:            3603388 kB
SReclaimable:    1071680 kB
SUnreclaim:      2531708 kB
KernelStack:       14352 kB
PageTables:        26936 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    41321984 kB
Committed_AS:    6458300 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:      4096 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:     1311468 kB
DirectMap2M:    34242560 kB
DirectMap1G:    33554432 kB
```
`lxc-top` output:
I'm available for testing purposes.
I limited memory usage to 13GB for one of my LXD guests; htop shows me 25GB memory usage.