vmware / vic

vSphere Integrated Containers Engine is a container runtime for vSphere.
http://vmware.github.io/vic

Tether might take up too much CPU in Ubuntu #2455

Open chengwang86 opened 8 years ago

chengwang86 commented 8 years ago

If I start a container from the vanilla Ubuntu image, the CPU utilization of the tether process can sometimes be quite high (see the example below). However, this does not happen with a container started from the busybox image.

Expected behavior: (a snippet from the output of the Linux "top" command on the busybox container)

PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
21 2 root SW 0 0.0 0 1.9 [kworker/0:1]
1 0 root S 340m 16.9 0 1.3 /.tether/tether
208 1 root S 1188 0.0 0 0.0 sh

Actual behavior: (a snippet from the output of the Linux "top" command on the Ubuntu container)

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4 root 20 0 0 0 0 R 71.7 0.0 1:09.17 kworker/0:0
1 root 20 0 349884 13688 4736 S 53.3 0.7 1:12.54 tether
244 root 20 0 0 0 0 S 0.3 0.0 0:00.08 kworker/1:1

mdubya66 commented 8 years ago

Needs to be triaged for GA. May be Iceboxed depending on the result

chengwang86 commented 8 years ago

This is not needed for GA IMO.

hickeng commented 8 years ago

This is alarmingly high! @chengwang86 do you know if the same happens when not attached, i.e. when the tether is not performing significant amounts of copying etc. to handle output?

@caglar10ur if you've got pprof running in the tether in a neat fashion with -debug then this could be a good use for it.
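
(For reference, a minimal sketch of what exposing pprof from a Go process such as the tether might look like, using the standard net/http/pprof package; the flag name and wiring here are illustrative, not the actual tether code:)

package main

import (
    "flag"
    "log"
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
    // hypothetical flag mirroring the tether's debug setting
    debug := flag.Int("debug", 0, "debug level")
    flag.Parse()

    if *debug > 1 {
        // serve the standard pprof endpoints on port 6060 in the background
        go func() {
            log.Println(http.ListenAndServe("0.0.0.0:6060", nil))
        }()
    }

    select {} // stand-in for the real tether main loop
}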

chengwang86 commented 8 years ago

@hickeng

I ran an ubuntu container (first figure below) and a busybox container (second figure) on my standalone ESXi host and monitored the CPU utilization of the containers from the ESXi host client.

There are clearly three stages in the CPU utilization traces (see the figures below).

It seems that as long as I run the Linux "top" command, the CPU utilization of the container goes high, regardless of whether tether is doing work or not. The output of "top" tells me that tether is the culprit.

(figures: CPU utilization traces of the ubuntu container and the busybox container)

hickeng commented 8 years ago

@chengwang86

#2621 adds a mechanism to launch a pprof server in containerVMs.

Configure VCH with debug>1 and an external container network: vic-machine --debug=2 --container-network=vm-network:external

Then create a container on that external network: docker run --net=external ubuntu

You can then access the pprof server at port 6060 on that container.
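
(With that in place, a CPU profile can usually be pulled with go tool pprof http://<container-ip>:6060/debug/pprof/profile and a goroutine dump with curl http://<container-ip>:6060/debug/pprof/goroutine?debug=2; these are the standard net/http/pprof endpoints, so substitute the container's address on vm-network.)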

vburenin commented 8 years ago

It seems like tether is not efficient enough at handling stderr/stdout and relaying it to the docker client. When there are a lot of screen updates is when I see significant CPU usage by tether. This can cause significant performance degradation for applications that write lots of logs.
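
(As a rough illustration of the kind of overhead being described, and not a description of the actual tether code: coalescing many small writes before they hit a slow transport is the usual mitigation. A minimal sketch in Go:)

package main

import (
    "bufio"
    "io"
    "os"
)

// relay copies container output to the client connection. Wrapping the
// writer in bufio coalesces many small line-sized writes into fewer large
// ones, which matters when every write has to cross a slow transport such
// as a virtual serial port.
func relay(containerOut io.Reader, client io.Writer) error {
    bw := bufio.NewWriterSize(client, 32*1024)
    defer bw.Flush()
    _, err := io.Copy(bw, containerOut)
    return err
}

func main() {
    // toy usage: relay this process's own stdin to stdout
    if err := relay(os.Stdin, os.Stdout); err != nil {
        panic(err)
    }
}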

vburenin commented 8 years ago

(screenshot attached)

It seems like I am on the right track.

vburenin commented 7 years ago

The CPU load is caused by intensive communication over the virtual COM port. Most of the load went to the logger, which was logging EVERYTHING. Switching to INFO level helped a little, but the root cause is not eliminated. I am afraid that applications which need to communicate with the external world via STDOUT will run slowly.
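
(For context on the logging-level point, a sketch of the kind of level gating described above, assuming a logrus-style logger; whether the tether is structured exactly like this is an assumption:)

package main

import (
    log "github.com/sirupsen/logrus"
)

func main() {
    // At DebugLevel every relayed chunk gets formatted and written to the
    // log sink; at InfoLevel the Debugf call below returns early without
    // formatting or writing, which removes most of that per-chunk cost.
    log.SetLevel(log.InfoLevel)

    chunk := make([]byte, 4096)
    // hot path: imagine this runs for every chunk of container output
    log.Debugf("copied %d bytes of stdout", len(chunk)) // suppressed at InfoLevel
    log.Info("session stream closed")                   // still emitted
}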

corrieb commented 7 years ago

I'm not sure why this is classified as low priority. If a customer wants to run a container that writes a significant amount of data to stdout, and this has a noticeable impact on application throughput, that's a significant production issue.

I'm raising this to medium.