gschlossnagle / yappi

Automatically exported from code.google.com/p/yappi
MIT License

Negative stats in yappi 0.62 #37

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. import yappi in my code
2. start my code, let it settle
3. yappi.start()
4. yappi.stop()
5. yappi.get_stats()

What is the expected output? What do you see instead?

expected only positive time used stats
got negative time used stats for some functions

What version of the product are you using? On what operating system?

python-2.7.2; Linux 2.6.37; arch armv5tejl

Please provide any additional information below.

Here's the top of my stats:

    "func_stats": [
        {
            "0": "/root/My/Code/lib/monotonic.py.time:17", 
            "1": 4575, 
            "2": -66.924316882, 
            "3": 0.0, 
            "4": -0.014628265985136613
        }, 
        {
            "0": "/usr/lib/python2.7/threading.py.isSet:380", 
            "1": 3828, 
            "2": -14.824437831000001, 
            "3": 0.0, 
            "4": -0.0038726326622257057
        }, 
        {
            "0": "/usr/lib/python2.7/threading.py.isAlive:697", 
            "1": 3828, 
            "2": -85.961018507, 
            "3": 0.0, 
            "4": -0.022455856454284225
        }, 
        {
            "0": "./__main__.py.<lambda>:435", 
            "1": 3022, 
            "2": 0.785272108, 
            "3": 0.594216168, 
            "4": 0.0002598517895433488
        },
    ...
    ]
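For the record, the affected rows can be flagged mechanically by scanning the exported stats for negative totals. A minimal sketch in Python; the column layout ("0" name, "1" ncall, "2" ttot, "3" tsub, "4" tavg) is inferred from the dump above, and `negative_rows` is a hypothetical helper, not part of yappi:

```python
# Scan an exported yappi func_stats dump for rows with negative timings.
# Sample rows copied from the report above.
func_stats = [
    {"0": "/root/My/Code/lib/monotonic.py.time:17", "1": 4575,
     "2": -66.924316882, "3": 0.0, "4": -0.014628265985136613},
    {"0": "./__main__.py.<lambda>:435", "1": 3022,
     "2": 0.785272108, "3": 0.594216168, "4": 0.0002598517895433488},
]

def negative_rows(stats):
    """Return rows where any timing column ("2" ttot, "3" tsub, "4" tavg) is negative."""
    return [row for row in stats
            if any(row[key] < 0 for key in ("2", "3", "4"))]

for row in negative_rows(func_stats):
    print(row["0"], row["2"])
```

Note that tavg is consistent with ttot / ncall in each row (e.g. -66.924 / 4575 ≈ -0.01463), so only the accumulated total is going negative, not the arithmetic on top of it.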

Original issue reported on code.google.com by dim...@gmail.com on 8 Aug 2012 at 1:02

GoogleCodeExporter commented 8 years ago
negative stats still present in hg head 2012-08-08:
    "func_stats": [
        {
            "0": "/root/My/Code/lib/monotonic.py.time:17", 
            "1": 799, 
            "2": -7.923997691, 
            "3": 0.0, 
            "4": 15, 
            "5": [
                16, 
                26, 
                33, 
                35, 
                14
            ], 
            "6": -0.009917393856070088
        }, 
        {
            "0": "./__main__.py.<lambda>:435", 
            "1": 537, 
            "2": 0.137466405, 
            "3": 0.10580927100000001, 
            "4": 13, 
            "5": [
                0
            ], 
            "6": 0.0002559895810055866
        }, 
        ...
    ]

Original comment by dim...@gmail.com on 8 Aug 2012 at 2:06

GoogleCodeExporter commented 8 years ago
This is related to the OS+CPU combination, I think. We need more tests on ARM 
machines. Is this a 64-bit system, BTW?

Original comment by sum...@gmail.com on 13 Dec 2012 at 10:09

GoogleCodeExporter commented 8 years ago
Dude, you must be from the future, 64-bit ARM is promised by 2014...

I think I ran these tests under user-mode QEMU, so the full stack is like this:
64-bit Linux kernel / 64-bit static QEMU executable / 32-bit ARM shared libs 
and Python.

So Python and yappi run in 32-bit mode.
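As an aside, the bitness the interpreter itself sees can be confirmed quickly, which is useful for ruling out confusion in a mixed 64/32-bit QEMU stack like this one. A generic sketch, not specific to this setup:

```python
import struct
import platform

# Size of a C pointer as seen by this interpreter: 4 bytes means a
# 32-bit build, 8 bytes means 64-bit. This reflects the Python build
# itself, not the host kernel underneath QEMU.
bits = struct.calcsize("P") * 8
print("interpreter bitness:", bits)
print("machine:", platform.machine())
```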

In my experience so far, running stuff under QEMU is almost indistinguishable 
from running on real ARM.

There could be differences if you, say, accessed /proc/self/maps; I think you 
might see 64-bit pointers there (?)

Original comment by dim...@gmail.com on 13 Dec 2012 at 10:39

GoogleCodeExporter commented 8 years ago
> Dude, you must be from the future, 64-bit ARM is promised by 2014...

LOL. I did not know that. :)

Hmm. I may try to see if we get any negative numbers from the OS-provided clock 
by writing a simple patch on the timing.c file. I will share the test file here 
in the following days (I cannot find time today or tomorrow). It would be great 
if you could test it in your setup.
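In the meantime, the OS clock itself can be sanity-checked from Python without patching timing.c. A minimal sketch, shown with Python 3's time.perf_counter as a stand-in for whatever clock timing.c ends up using; on the reporter's Python 2.7 a similar loop over os.times() or time.time() would apply:

```python
import time

def check_clock_monotonic(samples=100000):
    """Sample the clock back-to-back and count negative deltas.

    On a healthy monotonic clock this returns 0; any positive count
    would reproduce the kind of backwards step that could explain
    negative accumulated times in the profiler.
    """
    negatives = 0
    prev = time.perf_counter()
    for _ in range(samples):
        now = time.perf_counter()
        if now < prev:
            negatives += 1
        prev = now
    return negatives

print("negative deltas:", check_clock_monotonic())
```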

Thanks!

Original comment by sum...@gmail.com on 16 Dec 2012 at 12:29

GoogleCodeExporter commented 8 years ago
I can't promise to test; let's see if I find the exact setup.

In the end, I guess you only need to test 32- and 64-bit systems; perhaps a 
32-bit system in a chroot jail, or a 32-bit binary on a 64-bit system, should 
do the trick.

If you want to replicate my setup, download a static build of qemu-user (a 
Debian package, I think) and some ARM distro, e.g. Arch Linux ARM. Place 
qemu-arm-static into the chroot root and issue these commands:

    mount -t binfmt_misc none /proc/sys/fs/binfmt_misc
    echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/qemu-arm-static:' > /proc/sys/fs/binfmt_misc/register
    export QEMU_CPU=arm926

After that you can chroot 'your arm distro root' /bin/bash.

Adjust your settings; you may also want to mount --bind /sys arm-root/sys, and 
do the same for /proc and /dev if some program expects to see those.

Original comment by dim...@gmail.com on 16 Dec 2012 at 12:38

GoogleCodeExporter commented 8 years ago
Ok thanks for the tips. let me see what I can do.

Original comment by sum...@gmail.com on 17 Dec 2012 at 8:14

GoogleCodeExporter commented 8 years ago
There has not been a single complaint about this kind of problem from any other 
person in about a year, and I personally cannot reproduce it in any of my 
setups. I suspect there is something specific about the hardware, but I cannot 
pinpoint what it is. I will close this issue for now, as I cannot move forward 
with debugging given the current information.

Original comment by sum...@gmail.com on 25 Jun 2014 at 9:01