jdiazbb / rfxcmd

Automatically exported from code.google.com/p/rfxcmd

High memory and CPU usage #26

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. top

What is the expected output? What do you see instead?
  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
 5070 rfxcmd    20   0  144m  12m 1948 S   0.7  0.1  64:33.56 rfxcmd.py

What version of the product are you using? On what operating system?
550

Please provide any additional information below.
I think rfxcmd.py is using a lot of memory and is almost always at the top of
the CPU utilization list. Is that normal?
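For what it's worth, one way to tell a leak from normal steady-state usage is to sample the process RSS over time and see whether it keeps growing. A minimal Linux-only sketch (not part of rfxcmd; the PID below is a placeholder):

```python
import os
import time

def rss_kib(pid):
    """Return the resident set size of a process in KiB, read from /proc."""
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # kernel reports the value in kB

# Sample a few times; a steadily rising RSS would suggest a leak.
pid = os.getpid()  # replace with the rfxcmd.py PID, e.g. 5070
for _ in range(3):
    print(rss_kib(pid), "KiB")
    time.sleep(1)
```

If the number levels off (as the 12m RES above appears to), the usage is likely just the interpreter plus imported libraries, not a leak.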

Original issue reported on code.google.com by magnus.a...@gmail.com on 23 Jul 2013 at 11:47

GoogleCodeExporter commented 8 years ago
On a Raspberry Pi running Debian Wheezy, I started:
./rfxcmd.py -d /dev/ttyUSB0

In another terminal, "mpstat" gives me:

Linux 3.6.11+ (raspberrypi)     25/07/13        _armv6l_        (1 CPU)

15:40:05     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
15:40:05     all    3.43    0.01    1.90    0.46    0.03    0.41    0.00    0.00   93.77

"iostat -xtc 5 10" gives me:

Linux 3.6.11+ (raspberrypi)     25/07/13        _armv6l_        (1 CPU)

25/07/13 15:41:42
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.43    0.01    2.31    0.44    0.00   93.81

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
mmcblk0           1.84     0.74    1.87    0.75    56.11    17.66    56.31     0.11   41.60    2.57  139.20   3.63   0.95

25/07/13 15:41:47
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.63    0.00    4.38    0.00    0.00   93.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
mmcblk0           0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

25/07/13 15:41:52
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           6.89    0.00    0.67    0.00    0.00   92.44

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
mmcblk0           0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

25/07/13 15:41:57
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.52    0.00    2.42    0.22    0.00   93.85

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
mmcblk0           0.00     0.44    0.00    1.10     0.00     6.15    11.20     0.01    8.00    0.00    8.00   4.00   0.44

25/07/13 15:42:02
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.67    0.00    3.11    0.00    0.00   94.22

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
mmcblk0           0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

25/07/13 15:42:07
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.09    0.00    2.65    0.00    0.00   94.26

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
mmcblk0           0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

25/07/13 15:42:12
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.78    0.00    0.89    0.00    0.00   95.33

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
mmcblk0           0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

25/07/13 15:42:17
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.10    0.00    2.88    0.00    0.00   94.03

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
mmcblk0           0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

25/07/13 15:42:22
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.10    0.00    1.77    0.00    0.00   95.12

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
mmcblk0           0.00     0.00    0.00    0.44     0.00     1.77     8.00     0.00    5.00    0.00    5.00   5.00   0.22

25/07/13 15:42:27
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.65    0.00    2.65    0.00    0.00   94.70

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
mmcblk0           0.00     1.55    0.00    0.44     0.00     7.95    36.00     0.00    5.00    0.00    5.00   5.00   0.22

So, in conclusion: the average idle percentage is around 95%. Seems good to me.

Agree?

Original comment by embe...@gmail.com on 25 Jul 2013 at 1:43

GoogleCodeExporter commented 8 years ago
 PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
5070 rfxcmd    20   0  144m  12m 1948 S   0.7  0.1  85:56.72 rfxcmd.py

Well, the memory hasn't increased, so I guess that is normal. And I guess maybe I
shouldn't worry about it being at the top in top.

("mpstat" isnt working on my Debian Wheezy i7-4765T. Showing same values no 
matter how much I increase the load.
07:12:22 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  
%guest   %idle
07:12:22 PM  all    0.30    0.00    0.07    0.00    0.00    0.00    0.00    
0.00   99.63)
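As an aside, mpstat run with no interval argument reports averages since boot, which is why the numbers barely move under load; "mpstat 1 5" samples live values instead. The same interval-based calculation mpstat performs can be sketched by diffing /proc/stat (Linux-only, rough illustration, not part of rfxcmd):

```python
import time

def cpu_times():
    """Read the aggregate CPU counters (in jiffies) from /proc/stat."""
    with open("/proc/stat") as f:
        fields = f.readline().split()  # first line: 'cpu  user nice system idle ...'
    return [int(x) for x in fields[1:]]

# Utilization is meaningful only as a delta between two samples.
a = cpu_times()
time.sleep(1)
b = cpu_times()
deltas = [y - x for x, y in zip(a, b)]
total = sum(deltas)
idle = deltas[3]  # fourth counter is idle time
if total:
    print("idle %.1f%%" % (100.0 * idle / total))
```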

Original comment by magnus.a...@gmail.com on 25 Jul 2013 at 5:13

GoogleCodeExporter commented 8 years ago
I have not noticed the same kind of CPU usage, but then again it depends heavily
on which functions you have activated. A lot of sensors and a big trigger list
could cause it, especially combined with database inserts at the same time. If
you can send me the config.xml (with any passwords and IP addresses removed), I
would be interested to see it.
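As a general note on serial daemons and CPU use: a common cause of a process sitting at the top of top is a tight polling loop on the serial port. Whether rfxcmd's read loop works this way isn't shown in this thread, but the usual remedy is to block in select() until the device actually has data. A self-contained sketch, using a pseudo-terminal pair as a stand-in for /dev/ttyUSB0:

```python
import os
import select
import tty

# Open a pty pair as a stand-in for the real serial port.
master, slave = os.openpty()
tty.setraw(slave)  # raw mode: bytes are readable as soon as they arrive
os.write(master, b"\x08\x01\x02")  # pretend the RFXtrx sent some bytes

# Block (with a timeout) until the fd is readable instead of busy-polling.
readable, _, _ = select.select([slave], [], [], 5.0)
if readable:
    data = os.read(slave, 256)
    print(data.hex())
os.close(master)
os.close(slave)
```

With this pattern the process consumes essentially no CPU while the line is idle, since the kernel wakes it only when data arrives.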

Original comment by sebastia...@gmail.com on 25 Jul 2013 at 7:50

GoogleCodeExporter commented 8 years ago
Pasted them below: config and trigger, with passwords etc. removed.
-----------------------------------------------
<config>
        <version>1</version>

        <!-- Serial -->
        <serial_device>/dev/rfxtrx433</serial_device>
        <serial_rate>38400</serial_rate>
        <serial_timeout>9</serial_timeout>

        <!-- Process -->
        <process_rfxmsg>yes</process_rfxmsg>

        <!-- Daemon -->
        <daemon_active>yes</daemon_active>
        <daemon_pidfile>/tmp/rfxcmd.pid</daemon_pidfile>

        <!-- MySQL -->
        <mysql_active>no</mysql_active>
        <mysql_server>localhost</mysql_server>
        <mysql_database>xxx</mysql_database>
        <mysql_username>xxx</mysql_username>
        <mysql_password>xxx</mysql_password>

        <!-- Trigger -->
        <trigger_active>yes</trigger_active>
        <trigger_onematch>no</trigger_onematch>
        <trigger_file>trigger.xml</trigger_file>
        <trigger_timeout>10</trigger_timeout>

        <!-- Sqlite -->
        <sqlite_active>no</sqlite_active>
        <sqlite_database>sqlite.db</sqlite_database>
        <sqlite_table>rfxcmd</sqlite_table>

        <!-- Logging -->
        <loglevel>error</loglevel>
        <logfile>rfxcmd.log</logfile>

        <!-- Graphite -->
        <graphite_active>no</graphite_active>
        <graphite_server>127.0.0.1</graphite_server>
        <graphite_port>2003</graphite_port>

        <!-- xPL -->
        <xpl_active>no</xpl_active>
        <xpl_host>127.0.0.1</xpl_host>

        <!-- Socket server -->
        <socketserver>yes</socketserver>
        <sockethost>localhost</sockethost>
        <socketport>55000</socketport>

        <!-- Whitelist -->
        <whitelist_active>no</whitelist_active>
        <whitelist_file>whitelist.xml</whitelist_file>

</config>

-----------------------------------------------

<xml>
<trigger>
        <message>08xxxx..xxxxxx....</message>
        <action>~/firealert.sh</action>
</trigger>
<trigger>
        <message>0Bxxxx....xxxxxxxx....</message>
        <action>~/rfxsend.py -r 0Bxxxx</action>
</trigger>
<trigger>
        <message>0Bxxxx....xxxx....</message>
        <action>~/rfxsend.py -r 0Bxxxx0</action>
</trigger>
<trigger>
        <message>0Bxxxx....xxxxxxxxxx....</message>
        <action>~/rfxsend.py -r 0Bxxxx</action>
</trigger>
<trigger>
        <message>0Bxxxx....xxxxxxxx00....</message>
        <action>~/rfxsend.py -r 0Bxxxx</action>
</trigger>
<trigger>
        <message>08xxxx..xxxx......</message>
        <action>echo "$temperature$" >/tmp/tempout</action>
</trigger>
<trigger>
        <message>08xxxx..xxxx......</message>
        <action>echo "$temperature$" >/tmp/tempin</action>
</trigger>
</xml>
----------------------------------------------------------------
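The trigger &lt;message&gt; patterns above look like per-character masks over the packet's hex string, with "." as a single-character wildcard (the "x" characters are the poster's redactions). Assuming regex-style matching against the start of the packet, the idea can be sketched as follows (the function name and example packet are hypothetical, not from rfxcmd's source):

```python
import re

def trigger_matches(pattern, packet_hex):
    """Match a trigger mask against a packet's hex string.
    Assumes '.' is a single-character wildcard, as in a regex,
    anchored at the start of the packet."""
    return re.match(pattern, packet_hex, re.IGNORECASE) is not None

# Hypothetical example: a mask with wildcards over two hex-digit pairs.
print(trigger_matches("0852..11....", "085201110007304F"))  # True
```

Seven such patterns, as in the trigger list above, are cheap to evaluate per packet, so the trigger list itself is an unlikely source of sustained CPU load.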

Original comment by magnus.a...@gmail.com on 25 Jul 2013 at 8:43

GoogleCodeExporter commented 8 years ago
Actually, I see a higher value on my RPi:

  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
17075 root      20   0 28200 8300 1812 S   4.2  1.7   0:04.18 rfxcmd.py

But I don't find these numbers alarming yet.

Yours is showing 0.7% CPU usage, which is very low; even if it is at the top of
the list in top, I wouldn't worry too much about that. If it is the only
application that is constantly running, I would say it is normal.

The memory usage is not that bad either, and it could probably be reduced with
fewer library imports. But still, it is not that high.

I guess it could be even higher if MySQL or another database is in use.

Original comment by sebastia...@gmail.com on 3 Aug 2013 at 10:29