
Convoy effect with I/O bound threads and New GIL #52194

Closed 0d6a79a7-4e74-41c2-8eed-8d04ea475e57 closed 3 years ago

0d6a79a7-4e74-41c2-8eed-8d04ea475e57 commented 14 years ago
BPO 7946
Nosy @gvanrossum, @loewis, @arigo, @gpshead, @jcea, @mdickinson, @ncoghlan, @pitrou, @scoder, @larryhastings, @ericvsmith, @tarekziade, @carljm, @coderanger, @phsilva, @merwok, @alex, @jab, @briancurtin, @florentx, @cool-RR, @dimaqq, @thedrow, @corona10, @Sophist-UK
Files
  • udp-ioclient.py
  • udp-iotest.py
  • gilprio2.patch
  • prioritygil.tar.gz: Priority GIL implementation from PyCON
  • linux-7946.patch: Linux (libc) patch for py32
  • gilinter2.patch
  • writes.py
  • schedtest.py: Scheduling test
  • dabeaz_gil.patch: Dave's GIL patch
  • nir-ccbench-xp32.log
  • nir-ccbench-linux.log
  • ccbench-osx.log
  • bfs.patch: Updated implementation of the BFS scheduler.
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    ```python
    assignee = None
    closed_at =
    created_at =
    labels = ['interpreter-core', '3.10', 'performance']
    title = 'Convoy effect with I/O bound threads and New GIL'
    updated_at =
    user = 'https://bugs.python.org/dabeaz'
    ```

    bugs.python.org fields:

    ```python
    activity =
    actor = 'Sophist'
    assignee = 'none'
    closed = True
    closed_date =
    closer = 'dabeaz'
    components = ['Interpreter Core']
    creation =
    creator = 'dabeaz'
    dependencies = []
    files = ['16315', '16316', '16323', '16324', '16570', '16676', '16968', '17095', '17106', '17194', '17370', '17393', '17504']
    hgrepos = []
    issue_num = 7946
    keywords = ['patch']
    message_count = 100.0
    messages = ['99438', '99439', '99459', '99461', '99477', '99814', '99815', '99838', '99840', '99846', '99875', '100094', '101116', '101118', '101120', '101125', '101127', '101141', '101142', '101144', '101146', '101148', '101192', '101197', '101612', '101613', '101697', '101700', '101707', '101711', '101714', '101715', '101724', '101737', '101772', '101773', '101821', '101825', '101854', '102043', '102649', '102890', '103154', '103186', '103320', '103414', '103460', '104194', '104195', '104197', '104198', '104203', '104219', '104233', '104234', '104288', '104293', '104303', '104309', '104312', '104317', '104319', '104320', '104324', '104325', '104330', '104335', '104369', '104452', '104453', '104473', '104478', '104613', '104899', '105687', '105835', '105874', '106030', '106780', '106782', '156883', '223109', '223110', '346399', '346495', '347621', '347640', '347641', '377825', '377865', '378187', '378200', '378236', '385088', '385109', '415693', '415698', '415703', '415708', '415715']
    nosy_count = 49.0
    nosy_names = ['gvanrossum', 'loewis', 'jhylton', 'arigo', 'gregory.p.smith', 'jcea', 'mark.dickinson', 'ncoghlan', 'pitrou', 'scoder', 'movement', 'larry', 'eric.smith', 'kevinwatters', 'tarek', 'karld', 'carljm', 'coderanger', 'phsilva', 'durin42', 'eric.araujo', 'nirai', 'alex', 'stuaxo', 'andrix', 'konryd', 'jab', 'brian.curtin', 'hozn', 'victorpoluceno', 'flox', 'DazWorrall', 'cool-RR', 'rh0dium', 'rcohen', 'dabeaz', 'mahmoudimus', 'portante', 'aconrad', 'ysj.ray', 'thouis', 'donaldjeo', 'Michele', 'jmehnle', 'Dima.Tisnek', 'Omer.Katz', 'corona10', 'Sophist', 'maartenbreddels']
    pr_nums = []
    priority = 'normal'
    resolution = 'wont fix'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'performance'
    url = 'https://bugs.python.org/issue7946'
    versions = ['Python 3.10']
    ```

    0d6a79a7-4e74-41c2-8eed-8d04ea475e57 commented 14 years ago

    Background
    ----------

    In order to multitask with threads, a critical part of the Python interpreter implementation concerns the behavior of I/O operations such as read, write, send, and receive. Specifically, whenever an I/O operation is carried out, the interpreter releases the GIL so that other threads can run while the program waits for the I/O operation to complete.

    Observed Behavior of I/O Operations
    -----------------------------------

    The release of the GIL for I/O is a critical aspect of making sure the interpreter doesn't make all threads hang while waiting. However, the release of the GIL also assumes a worst-case scenario. In practice, a large number of I/O requests actually complete immediately with no actual blocking. For example, if a program is sending on a socket, send() operations will typically complete immediately if buffer space is available to accept the data. Likewise, read() and recv() operations may return immediately if data is already available in the operating system.

    For system calls that complete immediately, a thread quickly releases and then almost immediately reacquires the GIL to continue running. However, given that the I/O operation didn't block, the release of the GIL was technically unnecessary in this case.

    Behavior of the new GIL
    -----------------------

    A feature of the new Python GIL implementation is that the interpreter no longer periodically signals waiting threads (e.g., the check interval). Instead, thread switching is based on a timed wait on a condition variable. Should a timeout occur, a thread will indicate that it wants the GIL and the currently running thread will be forced to give it up.
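
    As a rough illustration, this timed-wait handoff can be modeled with a condition variable in pure Python (a toy sketch with invented names, not the interpreter's actual C implementation):

    import threading

    class ToyGIL:
        # Toy model of the new GIL's timed-wait switching.
        SWITCH_INTERVAL = 0.005   # 5 ms, the default in the new GIL

        def __init__(self):
            self.cond = threading.Condition()
            self.locked = False
            self.drop_request = False   # polled by the running thread's eval loop

        def acquire(self):
            with self.cond:
                while self.locked:
                    # Wait for a handoff; on timeout, ask the holder to drop.
                    if not self.cond.wait(timeout=self.SWITCH_INTERVAL):
                        self.drop_request = True
                self.locked = True
                self.drop_request = False

        def release(self):
            # Called by the running thread, either voluntarily (e.g. for I/O)
            # or after noticing drop_request in its eval loop.
            with self.cond:
                self.locked = False
                self.cond.notify()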

    Although this scheme solves the problem of CPU-bound threads thrashing, it introduces a highly pronounced "convoy effect" when CPU-bound threads and I/O-bound threads get mixed together. A major part of the problem is caused by the behavior of I/O as described above. Specifically, when an I/O bound thread executes an I/O call, it always releases the GIL. Since the GIL is released, a CPU bound thread is now free to acquire the GIL and run. However, if the I/O call completes immediately (which is common), the I/O bound thread immediately stalls upon return from the system call. To get the GIL back, it now has to go through the timeout process to force the CPU-bound thread to release the GIL again.

    It should be noted that the behavior described also occurs in Python 2, but due to the small check interval, an I/O bound thread that wants the GIL back will tend to get it without having to wait very long.
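
    For reference, the Python 2 knob in question is exposed as sys.setcheckinterval(), which counts interpreter "ticks" rather than time:

    import sys

    # Python 2.x: a thread switch is considered every N bytecode "ticks".
    # The default is 100; this API was superseded by setswitchinterval() in 3.2.
    sys.setcheckinterval(100)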

    Example
    -------

    Here is a very simple example that illustrates the problem. In this example, there is one CPU-bound thread that hammers the CPU and there is an I/O bound thread that handles some network communication (an echo server):

    # iotest.py
    import time
    import threading
    from socket import *
    
    # CPU-bound thread (just hammers the CPU)
    def spin():
        while True:
            pass
    
    # I/O-bound thread (an echo TCP server)
    def echo_server():
        s = socket(AF_INET, SOCK_STREAM)
        s.setsockopt(SOL_SOCKET, SO_REUSEADDR,1)
        s.bind(("",15000))
        s.listen(1)
        while True:
            c,a = s.accept()
            while True:
                data = c.recv(8192)
                if not data:
                    break
                c.sendall(data)
            c.close()
        s.close()
    
    # Launch the CPU-bound thread
    t1 = threading.Thread(target=spin)
    t1.daemon=True
    t1.start()
    
    # Run the I/O server
    echo_server()

    Here is a benchmark program that runs as a client for the echo_server() thread defined above. It sends a sequence of messages and reads the response back. It then reports some timings at the end.

    # echoclient.py
    from socket import *
    import time
    
    CHUNKSIZE = 16384
    NUMMESSAGES = 640     # Total of 10MB
    
    # Dummy message
    msg = b"x"*CHUNKSIZE
    
    # Connect and send messages
    s = socket(AF_INET,SOCK_STREAM)
    s.connect(("",15000))
    start = time.time()
    for n in range(NUMMESSAGES):
        s.sendall(msg)
        bytes_recv = len(msg)
        # Get the response back
        while bytes_recv > 0:
            data = s.recv(bytes_recv)
            bytes_recv -= len(data)
    s.close()
    end = time.time()
    print("%0.3f seconds (%0.3f bytes/sec)" % (end-start, (CHUNKSIZE*NUMMESSAGES)/(end-start)))

    Performance Results
    -------------------

    These results are from running the above programs on a dual-core MacBook running OS-X Snow Leopard. I also get similar behavior on a quad-core desktop machine.

    If you run the iotest.py program using Python 2.6.4 and execute the client, you get this result:

    bash % python echoclient.py
    1.064 seconds (9854148.739 bytes/sec)

    If you switch the iotest.py to Python 3.2 and rerun, you get this result:

    bash % python echoclient.py
    12.340 seconds (849726.150 bytes/sec)

    Notice that there is roughly a factor-of-12 performance difference.

    Modify the iotest.py program so that there are 2 CPU-bound threads spinning. Just add this extra code:

        t2 = threading.Thread(target=spin)
        t2.daemon=True
        t2.start()

    Now, repeat the above tests. For Python 2.6.4, you get this:

    bash-3.2$ python echoclient.py
    0.358 seconds (29319821.410 bytes/sec)

    (Yes, the performance actually improves! That's left as an exercise for the reader to figure out why.)

    Now, switch the iotest.py server to Python 3.2 and retry:

    bash-3.2$ python echoclient.py
    59.545 seconds (176098.609 bytes/sec)    

    Notice how the addition of one CPU-bound thread made the time go up by more than a factor of 4!

    Now, disable all but one of the CPU cores and try the test again in Python 3.2:

    bash-3.2$ python echoclient.py
    0.120 seconds (87246036.201 bytes/sec)

    Here, you see that it runs about 500 times faster than with two cores (yikes!)

    What's causing this behavior?
    -----------------------------

    In the iotest.py program, there is an inner loop that looks like this:

            while True:
                data = c.recv(8192)
                if not data:
                    break
                c.sendall(data)

    The I/O operations recv() and sendall() always release the GIL when they execute. However, when this happens, CPU bound threads jump in and start running again. The problem gets worse as the number of CPU-bound threads increases--CPU bound threads might cycle round-robin before the I/O bound thread runs again. The problem is more pronounced on multiple CPU cores because when the GIL is released, one of the cores will typically go handle the system call while the other core wakes up the waiting CPU-bound thread (which then almost immediately steals the GIL).

    Is it worth fixing?
    -------------------

    I claim yes. There are many applications, such as those carried out with the multiprocessing library, that will operate by trying to overlap computation and I/O in some manner (for example, receiving the next chunk of data to work on while carrying out calculations on the currently received data).

    In heavily loaded I/O bound applications such as servers with hundreds of simultaneously connected clients, the release of the GIL on short I/O operations may cause a lot of unintended thrashing as threads cycle amongst themselves. This would most likely manifest itself as an increased turnaround time for requests.

    How to fix?
    -----------

    Modify all I/O operations in the interpreter to not release the GIL if they won't block. Either that or maybe there's some sort of really sneaky easy solution (unknown).

    The effect can be minimized by setting the switch interval to a really small value using sys.setswitchinterval(). However, doing this greatly increases the amount of thread context-switching--something that's also undesirable.
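
    For example, the mitigation (and its trade-off) looks like this:

    import sys

    print(sys.getswitchinterval())   # 0.005 (5 ms) by default

    # A much smaller interval lets I/O threads grab the GIL back sooner,
    # at the cost of far more frequent context switches.
    sys.setswitchinterval(0.0005)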

    alex commented 14 years ago

    Would the idea of priority-GIL requests that Antoine had in his original patch solve this issue?

    pitrou commented 14 years ago

    Just a quick test under Linux (on a dual quad core machine):

    As already said, the "spinning endlessly" loop is a best case for thread switching latency in 2.x, because the opcodes are very short. If each opcode in the loop has an average duration of 20 ns, and with the default check interval of 100, the GIL gets speculatively released every 2 us (yes, microseconds). That's why I suggested trying more "realistic" workloads, as in ccbench.

    Also, as I told you, there might also be interactions with the various timing heuristics the TCP stack of the kernel applies. It would be nice to test with UDP.

    That said, the observations are interesting.

    0d6a79a7-4e74-41c2-8eed-8d04ea475e57 commented 14 years ago

    The comment on the CPU-bound workload is valid--it is definitely true that Python 2.6 results will degrade as the workload of each tick is increased. Maybe a better way to interpret those results is as a baseline of what kind of I/O performance is possible if there is a quick I/O response time.

    However, ignoring that and the comparison between Python 2.6 and 3.2, there is still a serious performance issue with I/O in 3.2: for example, the dramatic decrease in I/O performance as more CPU-bound threads compete, and the huge performance gain when all but one CPU core is disabled.

    I tried the test using UDP packets and get virtually the exact same behavior described. For instance, echoing 10MB (sent in 8k UDP packets) takes about 0.6s in Python 2.6 and 12.0s in Python 3.2. The time shoots up to more than 40s if there are two CPU-bound threads.

    The problem being described really doesn't have anything to do with TCP vs. UDP or any part of the network stack. It has everything to do with how the operating system buffers I/O requests and how I/O operations such as sends and receives complete immediately without blocking depending on system buffer characteristics (e.g., if there is space in the send buffer, a send will return immediately without blocking). The fact that the GIL is released when it's not necessary in these cases is really the source of the problem.

    pitrou commented 14 years ago

    We could try not to release the GIL when socket methods are called on a non-blocking socket.

    Regardless, I've re-run the tests under the Linux machine, with two spinning threads:

    (and as someone mentioned, the "priority requests" mechanism which was in the original new GIL patch might improve things. It's not an ideal time for me to test, right now :-))

    pitrou commented 14 years ago

    I'm attaching Dave's new UDP-based benchmarks, which eliminate the dependency on the TCP stack's behaviour.

    pitrou commented 14 years ago

    And here is an experimental patch which enables the "priority requests" mechanism which was in the original new GIL patch. "Experimental" because it only enables them on a couple of socket methods (enough to affect the benchmarks).

    Here are the UDP benchmark results (with 2 background threads):

    And here's patched py3k with 8 background threads: 3.058 seconds (3429136.193 bytes/sec)

    (benchmarks run on a 8-core Linux machine)

    pitrou commented 14 years ago

    See also bpo-7993 for a patch adding a similar bandwidth benchmark to ccbench.

    pitrou commented 14 years ago

    Here is an improved version of the priority requests patch. With this patch I get the following result (2 background threads): 0.255 seconds (41109347.194 bytes/sec)

    0d6a79a7-4e74-41c2-8eed-8d04ea475e57 commented 14 years ago

    I posted some details about the priority GIL modifications I showed during my PyCON open-space session here:

    http://www.dabeaz.com/blog/2010/02/revisiting-thread-priorities-and-new.html

    I am attaching the .tar.gz file with modifications if anyone wants to look at them. Note: I would not consider this to be solid enough to be any kind of official patch. People should only look at it for the purposes of experimentation and for coming up with something better.

    pitrou commented 14 years ago

    Here is another patch based on a slightly different approach. Instead of being explicitly triggered in I/O methods, priority requests are decided based on the computed "interactiveness" of a thread. Interactiveness itself is a simple saturated counter (incremented when the GIL is dropped without request, decremented when the GIL is dropped after a request).
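
    A minimal sketch of such a saturated counter (the values and names here are illustrative, not the patch's actual code):

    INTERACTIVENESS_MAX = 10   # saturation bound (illustrative)

    def update_interactiveness(score, drop_was_requested):
        # Incremented when the thread dropped the GIL voluntarily (e.g. for
        # I/O); decremented when it dropped it because another thread asked.
        if drop_was_requested:
            return max(score - 1, 0)
        return min(score + 1, INTERACTIVENESS_MAX)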

    Benchmark numbers are basically the same as with gilprio2.patch.

    briancurtin commented 14 years ago

    The UDP benchmarks look even better on Windows. These are results (times in seconds) of the attached scripts when running on an Intel Xeon (quad-core) 2.33GHz running Windows Server 2003 x64.

    vanilla py3k 32-bit: 26.323 || 64-bit: 25.29

    gilprio2 patched py3k 32-bit: 0.932 || 64-bit: 0.682

    gilinter patched py3k 32-bit: 0.958 || 64-bit: 0.703

    0d6a79a7-4e74-41c2-8eed-8d04ea475e57 commented 14 years ago

    Here's a short benchmark for everyone who thinks that my original benchmark was somehow related to TCP behavior. This one doesn't even involve sockets:

    from threading import Thread
    import time
    
    def writenums(f,n):
        start = time.time()
        for x in range(n):
            f.write("%d\n" % x)
        end = time.time()
        print(end-start)
    
    def spin():
        while True:
            pass

    # Uncomment to add a thread
    #t1 = Thread(target=spin)
    #t1.daemon=True
    #t1.start()

    writenums(open("/tmp/nums","w"),1000000)

    If I run this on my Macbook with no threads, it takes about 1.05 seconds. If the one spinning thread is turned on, the time jumps to about 4.5 seconds. What you're seeing is that the spinning thread unfairly hogs the CPU.

    If I use my own patched version (new GIL with priorities), the threaded version drops back down to about 1.10 seconds. I have not tried it with Antoine's latest patch, but would expect similar results as he is also using priorities.

    Just to be clear, this issue is not specific to sockets or TCP.

    eda57068-96ad-4b33-8431-9c528f59a6a6 commented 14 years ago

    On some platforms the difference is not so important. I ran it in Debian Lenny AMD64 "Core2 Duo P9600 @2.66GHz".

    # Python 3.2a0 (py3k:78982M, Mar 15 2010, 15:40:42)
    # [GCC 4.3.4] on linux2

    0.67s without thread
    0.84s with spinning thread

    eda57068-96ad-4b33-8431-9c528f59a6a6 commented 14 years ago

    With line buffering, I see the issue.

    # Modified version of the test case, with bufsize=1

    from threading import Thread
    import time
    
    def writenums(f, n):
        start = time.time()
        for x in range(n):
            f.write("%d\n" % x)
        end = time.time()
        print(end-start)
    
    def spin():
        while True:
            pass
    
    t1 = Thread(target=spin)
    t1.daemon=True
    # Uncomment to add a thread
    #t1.start()

    # With line buffering
    writenums(open("./nums", "w", 1), 1000000)

    0d6a79a7-4e74-41c2-8eed-8d04ea475e57 commented 14 years ago

    Whoa, that's pretty diabolically evil with bufsize=1. On my machine, doing that just absolutely kills the performance (13 seconds without the spinning thread versus 557 seconds with the thread!). Or, put another way, the writing performance drops from about 0.5 Mbyte/sec down to 12 Kbytes/sec with the thread. With my priority GIL, the time is about 29 seconds with the thread (consistent with your experiment using the gilinter patch).

    One thing to think about with this example is the proper priority of I/O handling generally. What if, instead of a file, this example code was writing on a pipe to another process? For that, you would probably want that I/O thread to be able to blast its data to the receiver as fast as it reasonably can so that it can be done with it and get back to other work.

    In terms of practical relevance, this test again represents a simple situation where computation is overlapped with I/O processing. Perhaps the program has just computed a big result which is now being streamed somewhere else by a background thread. In the meantime, the program is now working on computing the next result (the spinning thread). Think queues, actors, or any number of similar things---there are programs that try to operate like this.

    0d6a79a7-4e74-41c2-8eed-8d04ea475e57 commented 14 years ago

    Almost forgot--if I turn off one of the CPU cores, the time drops from 557 seconds to 32 seconds. Gotta love it!

    pitrou commented 14 years ago

    One thing to think about with this example is the proper priority of I/O handling generally. What if, instead of a file, this example code was writing on a pipe to another process? For that, you would probably want that I/O thread to be able to blast its data to the receiver as fast as it reasonably can so that it can be done with it and get back to other work.

    We should be careful with statements such as "you would probably want [...]". There may be situations where you want such a thing; others where you don't really want (or care about) it.

    While IO responsiveness and throughput can be an important measure of performance, it is not the only one and depending on the situation it actually may not matter at all.

    0d6a79a7-4e74-41c2-8eed-8d04ea475e57 commented 14 years ago

    Oh the situation definitely matters. Although, in the big picture, most programmers would probably prefer to have fast I/O performance over slow I/O performance :-).

    pitrou commented 14 years ago

    Oh the situation definitely matters. Although, in the big picture, most programmers would probably prefer to have fast I/O performance over slow I/O performance :-).

    Yes, of course. But that's not the point. We could try to improve GIL behaviour in every thinkable situation, at the expense of code complexity. But we don't want to make the code too complex and delicate to maintain, and that's why I've asked for advice on python-dev.

    0d6a79a7-4e74-41c2-8eed-8d04ea475e57 commented 14 years ago

    I absolutely agree 100% that it is not worth trying to fix the GIL for every conceivable situation (although if you could, I wouldn't complain).

    To me, there are really only two scenarios worth worrying about:

    1. Get rid of all of that multicore lock thrashing that happens if there are multiple CPU-bound threads competing. Python programmers should know that intentionally using threads like this won't offer a performance boost. However, it would still be nice to avoid all of that needless overhead when it happens by design or by accident (the new GIL already addresses this).

    2. Make sure that you can get reasonable I/O performance when there is *one* CPU-bound thread competing with some background I/O processing. This covers the use case of overlapped I/O and computation as well as almost every practical problem related to communicating processes, multiprocessing, queuing, etc.

    As for everything else, it's probably not worth worrying about so much. If someone is only doing I/O (e.g., a web server), their code is going to behave about the same as before (although maybe slightly better under heavy load if there's less GIL contention). Situations where someone intentionally tries to set up multiple long-running CPU-bound threads seems pretty unlikely---the very presence of the GIL wouldn't make that kind of programming model attractive in the first place, so why would they do it?

    0d6a79a7-4e74-41c2-8eed-8d04ea475e57 commented 14 years ago

    You know, I almost wonder whether this whole issue could be fixed by just adding a user-callable function to optionally set a thread priority number. For example:

        sys.setpriority(n)

    Modify the new GIL code so that it checks the priority of the currently running thread against the priority of the thread that wants the GIL. If the running thread has lower priority, it immediately drops the GIL.

    Other than having this added preemption, do nothing else---just throw it all back to the user to come up with the proper "priorities."

    If there was something like this, it would completely fix the overlapped compute and I/O problem I mentioned. I'd just set a higher priority on the background I/O threads and be done with it. Problem solved.
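
    A sketch of the rule being proposed (sys.setpriority() is hypothetical here; it does not exist in CPython, and these names are invented):

    class ToyThreadState:
        # Hypothetical: 'priority' would be set via the proposed sys.setpriority(n).
        def __init__(self, priority=0):
            self.priority = priority

    def must_drop_gil(running, waiting):
        # The running thread gives up the GIL immediately whenever a
        # higher-priority thread is waiting for it.
        return waiting is not None and waiting.priority > running.priority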

    Ok, it's only a thought.

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    I tried Florent's modification to the write test and did not see the effect on my machine with an updated revision of Python32.

    I am running Ubuntu Karmic 64 bit.

    7s - no background threads.
    20s - one background thread.

    According to the following documentation, the libc condition variable uses the scheduling policy when waking a thread rather than FIFO order, which suggests that wakeup ordering on Linux is not FIFO: http://www.opengroup.org/onlinepubs/000095399/functions/pthread_cond_timedwait.html#tag_03_518_08_06

    I upload a quick and dirty patch (linux-7946.patch) to the new GIL just to reflect this by avoiding the timed waits.

    On my machine it behaves reasonably both with the TCP server and with the write test, but so does unpatched Python 3.2.

    I noticed high context switching rate with Dave's priority GIL - with both tests it goes above 40K/s context switches.

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    I updated the patch with a small fix and increased the ticks countdown-to-release considerably. This seems to help the OS classify CPU bound threads as such and actually improves IO performance.

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    I upload bfs.patch

    To apply the patch use the following commands on updated python 3.2:

    $ patch -fp1 < bfs.patch
    $ ./configure

    The patch replaces the GIL with a scheduler. The scheduler is a simplified implementation of the recent kernel Brain F**k Scheduler by the Linux hacker Con Kolivas:

    http://ck.kolivas.org/patches/bfs/sched-BFS.txt

    "The goal of the Brain Fuck Scheduler, referred to as BFS from here on, is to completely do away with the complex designs of the past for the cpu process scheduler and instead implement one that is very simple in basic design. The main focus of BFS is to achieve excellent desktop interactivity and responsiveness without heuristics and tuning knobs that are difficult to understand, impossible to model and predict the effect of, and when tuned to one workload cause massive detriment to another."

    Con Kolivas is the hacker whose work inspired the current CFS scheduler of the Linux Kernel.

    On my core 2 duo laptop it performs as follows compared to the other patches:

    1) Florent's writenums() test: ~same
    2) UDP test: x6 faster
    3) cpued test: works as expected, while the other patches starve the pure python threads.

    cpued test spins 3 threads, 2 of them pure python and the 3rd does time.sleep(0) every ~1ms:

    import threading
    import time
    
    def foo(n):
        while n > 0:
            'y' in 'x' * n
            n -= 1
    
    def bar(sleep, name):
        for i in range(100):
            print (name, i, sleep)
            for j in range(300):
                foo(1500)
                if sleep:
                    time.sleep(0)
    
    t0 = threading.Thread(target=bar, args=(False, 't0'))
    t1 = threading.Thread(target=bar, args=(False, 't1'))
    t2 = threading.Thread(target=bar, args=(True, 't2-interactive'))
    
    list(map(threading.Thread.start, [t0, t1, t2]))
    list(map(threading.Thread.join, [t0, t1, t2]))

    The patch is still work in progress. In particular:

    1) I still need to add support for Windows.
    2) It currently requires Posix clock_gettime() and assumes good timer resolution.
    3) I only verified it builds on Ubuntu Karmic 64bit.
    4) I still need to optimize it and address cleanup.

    The scheduler is very simple, straight forward and flexible, and it addresses the tuning problems discussed recently.

    I think it can be a good replacement to the GIL, since Python really needs a scheduler, not a lock.

    pitrou commented 14 years ago

    I upload bfs.patch

    Interesting patch, but:

    If this gets accepted there will be cosmetic issues to watch out for (and the patch should be cross-platform).

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    I upload an updated bfs.patch. Apply to updated py32 and ignore the error with:

    $ patch -fp1 < bfs.patch
    $ ./configure

    Please give understandable benchmark numbers, including an explicit comparison with baseline 3.2, and patched 3.2 (e.g. gilinter.patch)

    Below.

    Please also measure single-thread performance, because it looks like you are adding significant work inside the core eval loop

    Removed most of it now. The last bit will be removed soon.

    Do you need a hi-res clock? gettimeofday() already gives you microseconds. It looks like a bit of imprecision shouldn't be detrimental.

    I use clock_gettime() to get the thread running time to calculate slice depletion. Wall clock cannot help with that.

    The magic number DEADLINE_FACTOR looks gratuitous (why 1.1^20 ?)

    To my understanding it controls the CPU load (~6) beyond which threads tend to expire. Since expired threads are handled in FIFO order, IO threads do not preempt them (IO threads are chronically expired). So beyond that load IO threads become less responsive.
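
    For what it's worth, the arithmetic matches that figure:

    >>> round(1.1 ** 20, 4)   # DEADLINE_FACTOR
    6.7275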

    By the way, I would put COND_SIGNAL inside the LOCK_MUTEX / UNLOCK_MUTEX pair in bfs_yield().

    Done.

    Here are benchmark results of the UDP test as timed with ipython, where client.work() is a single run of the client:

    System: Core 2 Duo (locked at 2.4 GHz) with Ubuntu Karmic 64 bit.

    Vanilla Python 3.2:

    In [28]: %timeit -n3 client.work()
    1.290 seconds (8127084.435 bytes/sec)
    1.488 seconds (7045285.926 bytes/sec)
    2.449 seconds (4281485.217 bytes/sec)
    1.874 seconds (5594303.222 bytes/sec)
    1.853 seconds (5659626.496 bytes/sec)
    0.872 seconds (12023425.779 bytes/sec)
    4.951 seconds (2117942.079 bytes/sec)
    0.728 seconds (14409157.126 bytes/sec)
    1.743 seconds (6016999.707 bytes/sec)
    3 loops, best of 3: 1.53 s per loop

    gilinter.patch:

    In [31]: %timeit -n3 client.work()
    5.192 seconds (2019676.396 bytes/sec)
    1.613 seconds (6500071.475 bytes/sec)
    3.057 seconds (3429689.199 bytes/sec)
    3.486 seconds (3007596.468 bytes/sec)
    4.324 seconds (2424791.868 bytes/sec)
    0.964 seconds (10872708.606 bytes/sec)
    3.510 seconds (2987722.960 bytes/sec)
    1.362 seconds (7698999.458 bytes/sec)
    1.013 seconds (10353913.920 bytes/sec)
    3 loops, best of 3: 1.96 s per loop

    PyCON patch:

    In [32]: %timeit -n3 client.work()
    2.483 seconds (4223256.889 bytes/sec)
    1.330 seconds (7882880.263 bytes/sec)
    1.737 seconds (6036251.315 bytes/sec)
    1.348 seconds (7778296.679 bytes/sec)
    0.983 seconds (10670811.638 bytes/sec)
    1.419 seconds (7387226.333 bytes/sec)
    1.057 seconds (9919412.977 bytes/sec)
    2.483 seconds (4223205.791 bytes/sec)
    2.121 seconds (4944231.292 bytes/sec)
    3 loops, best of 3: 1.25 s per loop

    bfs.patch:

    In [33]: %timeit -n3 client.work()
    0.289 seconds (36341875.356 bytes/sec)
    0.271 seconds (38677439.991 bytes/sec)
    0.476 seconds (22033958.947 bytes/sec)
    0.329 seconds (31872974.070 bytes/sec)
    0.478 seconds (21925125.894 bytes/sec)
    0.242 seconds (43386204.271 bytes/sec)
    0.213 seconds (49195701.418 bytes/sec)
    0.309 seconds (33967467.196 bytes/sec)
    0.256 seconds (41008076.688 bytes/sec)
    3 loops, best of 3: 259 ms per loop

    Output of cpued.py test:

    Vanilla Python 3.2, gilinter.patch and PyCON patch all starve the pure Python threads and output the following:

    $ ~/build/python/python32/python cpued.py 
    t0 0 False
    t1 0 False
    t2-interactive 0 True
    t2-interactive 1 True
    t2-interactive 2 True
    t2-interactive 3 True
    t2-interactive 4 True
    t2-interactive 5 True
    t2-interactive 6 True
    t2-interactive 7 True
    .
    .
    .

    Output from bfs.patch run:

    $ ~/build/python/bfs/python cpued.py 
    t0 0 False
    t1 0 False
    t2-interactive 0 True
    t0 1 False
    t1 1 False
    t2-interactive 1 True
    t0 2 False
    t1 2 False
    t2-interactive 2 True
    t0 3 False
    t1 3 False
    t2-interactive 3 True
    .
    .
    .

    Note: I have not tested on other Posix systems, and expect to have some complications on Windows, since its thread timers are low resolution (10ms+), and there are issues with its high-precision wall clock. ...will soon know better.

    pitrou commented 14 years ago

    I use clock_gettime() to get the thread running time to calculate slice depletion.

    Ouch. CLOCK_THREAD_CPUTIME_ID is not a required part of the standard. Only CLOCK_REALTIME is guaranteed to exist.

    By the way, it's not obvious cpued tests anything meaningful. I understand the bias you are trying to avoid but constructing artificial test cases is not very useful, because we are playing with heuristics and it's always possible to defeat some expectations. That's why benchmarks should try to model/represent real-world situations.

    I've tried ccbench with your patch and there's a clear regression in latency numbers:

    Background CPU task: Pi calculation (Python)

    CPU threads=0: 0 ms. (std dev: 0 ms.)
    CPU threads=1: 1 ms. (std dev: 0 ms.)
    CPU threads=2: 0 ms. (std dev: 2 ms.)
    CPU threads=3: 0 ms. (std dev: 2 ms.)
    CPU threads=4: 2 ms. (std dev: 8 ms.)

    Background CPU task: regular expression (C)

    CPU threads=0: 0 ms. (std dev: 0 ms.)
    CPU threads=1: 969 ms. (std dev: 577 ms.)
    CPU threads=2: 1050 ms. (std dev: 577 ms.)
    CPU threads=3: 1217 ms. (std dev: 577 ms.)
    CPU threads=4: 1416 ms. (std dev: 577 ms.)

    Vanilla py3k and py3k+gilinter both don't have this problem (on the regular expression workload).

    IO bandwidth numbers are similar between the BFS patch:

    Background CPU task: Pi calculation (Python)

    CPU threads=0: 6952.7 packets/s.
    CPU threads=1: 4350.3 ( 62 %)
    CPU threads=2: 3332.2 ( 47 %)
    CPU threads=3: 2632.0 ( 37 %)
    CPU threads=4: 2052.0 ( 29 %)

    and py3k + gilinter:

    Background CPU task: Pi calculation (Python)

    CPU threads=0: 7023.4 packets/s.
    CPU threads=1: 5019.4 ( 71 %)
    CPU threads=2: 4474.6 ( 63 %)
    CPU threads=3: 2728.5 ( 38 %)
    CPU threads=4: 2244.9 ( 31 %)

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    Ouch. CLOCK_THREAD_CPUTIME_ID is not a required part of the standard. Only CLOCK_REALTIME is guaranteed to exist.

    Right, however the man page at kernel.org says the following on CLOCK_THREAD_CPUTIME_ID: "Sufficiently recent versions of glibc and the Linux kernel support the following clocks" http://www.kernel.org/doc/man-pages/online/pages/man2/clock_getres.2.html

    The same statement shows up as early as 2003: http://www.tin.org/bin/man.cgi?section=3&topic=clock_gettime

    However, if this is indeed a problem on some (non-Linux?) systems, then a fallback could be attempted for them.

    There could also be a problem on systems where the counter exists but has low resolution (10ms+).

    What platforms do you think this could be a problem on?
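
    (In later Python versions, 3.3 and up, this clock is exposed at the Python level, which makes the availability question easy to probe on a given platform:)

    import time

    try:
        t = time.clock_gettime(time.CLOCK_THREAD_CPUTIME_ID)
        res = time.clock_getres(time.CLOCK_THREAD_CPUTIME_ID)
        print("per-thread CPU clock available: %.9f (res %.9f)" % (t, res))
    except (AttributeError, OSError):
        print("CLOCK_THREAD_CPUTIME_ID not supported on this platform")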

    By the way, it's not obvious cpued tests anything meaningful. I understand the bias you are trying to avoid but constructing artificial test cases is not very useful, because we are playing with heuristics and it's always possible to defeat some expectations. That's why benchmarks should try to model/represent real-world situations.

    I came up with cpued.py after reading the patches in an attempt to understand how they behave. In this case one thread is pure Python while the other occasionally releases the GIL, both CPU bound. I don't claim this is a real-world situation. However, it is a case in which bfs.patch behaves as expected.

    I've tried ccbench with your patch and there's a clear regression in latency numbers.

    Please specify system and test details so I can try to look into it. On my system ccbench behaves as expected:

    $ ~/build/python/bfs/python ccbench.py
    == CPython 3.2a0.0 (py3k) ==
    == x86_64 Linux on '' ==

    --- Throughput ---

    Pi calculation (Python)

    threads=1: 1252 iterations/s.
    threads=2: 1199 ( 95 %)
    threads=3: 1178 ( 94 %)
    threads=4: 1173 ( 93 %)

    regular expression (C)

    threads=1: 491 iterations/s.
    threads=2: 478 ( 97 %)
    threads=3: 472 ( 96 %)
    threads=4: 477 ( 97 %)

    SHA1 hashing (C)

    threads=1: 2239 iterations/s.
    threads=2: 3719 ( 166 %)
    threads=3: 3772 ( 168 %)
    threads=4: 3464 ( 154 %)

    --- Latency ---

    Background CPU task: Pi calculation (Python)

    CPU threads=0: 0 ms. (std dev: 1 ms.)
    CPU threads=1: 0 ms. (std dev: 1 ms.)
    CPU threads=2: 0 ms. (std dev: 1 ms.)
    CPU threads=3: 0 ms. (std dev: 1 ms.)
    CPU threads=4: 0 ms. (std dev: 1 ms.)

    Background CPU task: regular expression (C)

    CPU threads=0: 0 ms. (std dev: 0 ms.)
    CPU threads=1: 6 ms. (std dev: 0 ms.)
    CPU threads=2: 2 ms. (std dev: 2 ms.)
    CPU threads=3: 1 ms. (std dev: 0 ms.)
    CPU threads=4: 5 ms. (std dev: 7 ms.)

    Background CPU task: SHA1 hashing (C)

    CPU threads=0: 0 ms. (std dev: 1 ms.)
    CPU threads=1: 0 ms. (std dev: 1 ms.)
    CPU threads=2: 0 ms. (std dev: 1 ms.)
    CPU threads=3: 1 ms. (std dev: 1 ms.)
    CPU threads=4: 1 ms. (std dev: 0 ms.)

    pitrou commented 14 years ago

    Please specify system and test details so I can try to look into it.

    It's a dual-core Linux x86-64 system. But, looking at the patch again, the reason is obvious:

    #define CHECK_SLICE_DEPLETION(tstate) (bfs_check_depleted || (tstate->tick_counter % 1000 == 0))

    tstate->tick_counter % 1000 is replicating the behaviour of the old GIL, which based its speculative operation on the number of elapsed opcodes (and which also gave bad latency numbers on the regex workload).

    By the way, I configure --with-computed-gotos.

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    It's a dual-core Linux x86-64 system. But, looking at the patch again, the reason is obvious:

    #define CHECK_SLICE_DEPLETION(tstate) (bfs_check_depleted || (tstate->tick_counter % 1000 == 0))

    tstate->tick_counter % 1000 is replicating the behaviour of the old GIL, which based its speculative operation on the number of elapsed opcodes (and which also gave bad latency numbers on the regex workload).

    The flag_check_depleted is there to work around this problem. It is raised by waiters which time out.

    What distribution and version of GNU/Linux are you using?

    As for the CLOCK_THREAD_CPUTIME_ID clock, support was added to FreeBSD recently in version 7.1, which I guess is not good enough: http://www.freebsd.org/releases/7.1R/relnotes.html

    I did not find yet anything on Solaris. Do you know of an alternative way to measure thread running time on Posix?

    pitrou commented 14 years ago

    What distribution and version of GNU/Linux are you using?

    $ cat /etc/mandriva-release 
    Mandriva Linux release 2010.0 (Official) for x86_64
    
    $ rpm -qv glibc
    glibc-2.10.1-6.2mnb2
    
    $ uname -a
    Linux localhost 2.6.31.5-desktop-1mnb #1 SMP Fri Oct 23 00:05:22 EDT
    2009 x86_64 AMD Athlon(tm) 64 X2 Dual Core Processor 3600+ GNU/Linux

    I'll try on an Ubuntu system some other day, if I find the time.

    I did not find yet anything on Solaris. Do you know of an alternative way to measure thread running time on Posix?

    No, but I'm not an expert.

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    Well, on initial check the scheduler seems to work well with regular gettimeofday() wall clock instead of clock_gettime().

    :)

    /* Return thread running time in seconds (with nsec precision). */
    static inline long double get_thread_timestamp(void) {
        return get_timestamp(); // wall clock via gettimeofday()
        /*
        struct timespec ts;
        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
        return (long double) ts.tv_sec + ts.tv_nsec * 0.000000001;
        */
    }

    Does it make things better on your system?

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    Uploaded an updated bfs.patch

    The latency problem was related to the --with-computed-gotos flag. I fixed it and it seems to work fine now.

    I also switched to gettimeofday() so it should work now on all Posix with high resolution timer.

    pitrou commented 14 years ago

    I also switched to gettimeofday() so it should work now on all Posix with high resolution timer

    But on a busy system, won't measuring wall clock time rather than CPU time give bogus results?

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    But on a busy system, won't measuring wall clock time rather than CPU time give bogus results?

    This was the motivation for using clock_gettime(). I tried the wall clock version under load (including on single core system) and it seems to behave. Now it remains to rationalize it :)

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    gilinter.patch has good IO latency in UDP test on my system when built with --with-computed-gotos:

    In [34]: %timeit -n3 client.work()
    0.320 seconds (32782026.509 bytes/sec)
    0.343 seconds (30561727.443 bytes/sec)
    0.496 seconds (21154075.417 bytes/sec)
    0.326 seconds (32171215.998 bytes/sec)
    0.462 seconds (22701809.421 bytes/sec)
    0.378 seconds (27722146.793 bytes/sec)
    0.391 seconds (26826713.409 bytes/sec)
    0.315 seconds (33335858.720 bytes/sec)
    0.281 seconds (37349508.136 bytes/sec)
    3 loops, best of 3: 329 ms per loop

    pitrou commented 14 years ago

    Hmm, the gilinter patch shouldn't be sensitive to whether computed gotos are enabled or not. Here is an updated patch, though, the previous one didn't apply cleanly anymore. I've also made the priority condition a bit stricter.

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    I updated bfs.patch. It now builds on Windows (and Posix).

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    I upload a new update to bfs.patch which improves scheduling and reduces overhead.

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    Uploaded an update.

    79528080-9d85-4d18-8a2a-8b1f07640dd7 commented 14 years ago
    A couple remarks on BFS-based patch:
    - nothing guarantees that you'll get a msec resolution
    - gettimeofday returns you wall clock time: if a process that modifies time is running, e.g. ntpd, you're likely to run into trouble. The value returned is _not_ monotonic, but clock_gettime(CLOCK_MONOTONIC) is (see the sketch after this list)
    - inline functions are used, but it's not ANSI
    - static inline long double get_timestamp(void) {
        struct timeval tv;
        GETTIMEOFDAY(&tv);
        return (long double) tv.tv_sec + tv.tv_usec * 0.000001;
    }
    the product is computed as double, and then promoted to (long double).
    - the code uses a lot of floating point calculation, which is slower than integer
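
    On the monotonic-clock point, the difference is visible at the Python level with time.monotonic() (available from Python 3.3), which is immune to the clock adjustments that affect gettimeofday()-style wall time:

    import time

    wall0 = time.time()        # wall clock: ntpd or an admin can move it
    mono0 = time.monotonic()   # monotonic: never goes backwards
    time.sleep(0.1)
    print(time.time() - wall0)       # may be distorted by clock adjustments
    print(time.monotonic() - mono0)  # a reliable elapsed-time measurement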

    Otherwise: "- You know, I almost wonder whether this whole issue could be fixed by just adding a user-callable function to optionally set a thread priority number. For example:

        sys.setpriority(n)

    Modify the new GIL code so that it checks the priority of the currently running thread against the priority of the thread that wants the GIL. If the running thread has lower priority, it immediately drops the GIL."

    The problem with this type of fixed-priority is starvation. And it shouldn't be up to the user to set the priorities. And some threads can mix I/O and CPU intensive tasks.

    It's a dual-core Linux x86-64 system. But, looking at the patch again, the reason is obvious:

    #define CHECK_SLICE_DEPLETION(tstate) (bfs_check_depleted || (tstate->tick_counter % 1000 == 0))

    tstate->tick_counter % 1000 is replicating the behaviour of the old GIL, which based its speculative operation on the number of elapsed opcodes (and which also gave bad latency numbers on the regex workload).

    I find this suspicious too. I haven't looked at the patch in detail, but what does the number of elapsed opcodes offer you over the timeslice expiration approach?

    79528080-9d85-4d18-8a2a-8b1f07640dd7 commented 14 years ago

    Some more remarks:

    [...] It is thus recommended that a condition wait be enclosed in the equivalent of a "while loop" that checks the predicate.

    79528080-9d85-4d18-8a2a-8b1f07640dd7 commented 14 years ago

    Please disregard my remark on COND_TIMED_WAIT not updating timeout_result, it's wrong (it's really a macro, not a function...)

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    I uploaded an update to bfs.patch which improves behavior in particular on non-Linux multi-core (4+) machines.

    Hi Charles-Francois, Thanks for taking the time to review this patch!

    • nothing guarantees that you'll get a msec resolution

    Right, the code should behave well with low precision clocks as long as short (sub-tick) tasks are not synchronized with the slice interval. There is a related discussion of this problem in schedulers in the section on sub-tick accounting in: http://ck.kolivas.org/patches/bfs/sched-BFS.txt

    On which target systems can we expect not to have high precision clock?

    • gettimeofday returns you wall clock time: if a process that modifies time is running, e.g. ntpd, you're likely to run into trouble. The value returned is _not_ monotonic, but clock_gettime(CLOCK_MONOTONIC) is
    • inline functions are used, but it's not ANSI
    • static inline long double get_timestamp(void) { struct timeval tv; GETTIMEOFDAY(&tv); return (long double) tv.tv_sec + tv.tv_usec * 0.000001; }

    I added timestamp capping to the code. The timestamp is used for waiting, and therefore I think the source should be either CLOCK_REALTIME or gettimeofday().

    tstate->tick_counter % 1000 is replicating the behaviour of the old GIL, which based its speculative operation on the number of elapsed opcodes (and which also gave bad latency numbers on the regex workload).

    I find this suspicious too. I haven't looked at the patch in detail, but what does the number of elapsed opcodes offer you over the timeslice expiration approach?

    More accurate yielding. It is possible a better mechanism can be thought of and/or maybe it is indeed redundant.

    [...] It is thus recommended that a condition wait be enclosed in the equivalent of a "while loop" that checks the predicate.

    Done.

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    the scheduling function bfs_find_task returns the first task that has an expired deadline. since an expired deadline probably means that the scheduler hasn't run for a while, it might be worth it to look for the thread with the oldest deadline and serve it first, instead of stopping at the first one

    This is by design of BFS as I understand it. Next thread to run is either first expired or oldest deadline:

    http://ck.kolivas.org/patches/bfs/sched-BFS.txt

    "Once a task is descheduled, it is put back on the queue, and an O(n) lookup of all queued-but-not-running tasks is done to determine which has the earliest deadline and that task is chosen to receive CPU next. The one caveat to this is that if a deadline has already passed (jiffies is greater than the deadline), the tasks are chosen in FIFO (first in first out) order as the deadlines are old and their absolute value becomes decreasingly relevant apart from being a flag that they have been asleep and deserve CPU time ahead of all later deadlines."
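
    A minimal Python sketch of that selection rule (illustrative only, not the patch's C code):

    from collections import namedtuple

    Task = namedtuple("Task", "name deadline")

    def bfs_find_task(queue, now):
        # First queued task whose deadline has already passed wins (FIFO
        # among expired tasks); otherwise pick the earliest deadline.
        best = None
        for task in queue:               # queue is in FIFO arrival order
            if task.deadline <= now:
                return task
            if best is None or task.deadline < best.deadline:
                best = task
        return best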

    7b124e7b-b61d-49f6-9762-fd36b627e237 commented 14 years ago

    Yet another update to bfs.patch.

    I upload a variation on Florent's write test which prints progress of background CPU bound threads as: thread-name timestamp progress

    Here are some numbers from Windows XP 32bit with Intel q9400 (4 cores). Builds produced with VS express (alas - no optimizations, so official builds may behave differently).

    BFS - 33% CPU, 37,000 context switches per second

    z:\bfs\PCbuild\python.exe y:\writes.py
    t1 2.34400010109 0
    t2 2.42199993134 0
    t1 4.67199993134 1
    t2 4.79699993134 1
    t1 7.01600003242 2
    t2 7.20300006866 2
    t1 9.375 3
    t2 9.625 3
    t1 11.7039999962 4
    t2 12.0309998989 4
    t1 14.0469999313 5
    t2 14.4219999313 5
    t1 16.4070000648 6
    t2 16.7809998989 6
    t1 18.7820000648 7
    t2 19.125 7
    t1 21.1570000648 8
    t2 21.4839999676 8
    t1 23.5 9
    t2 23.8589999676 9
    t1 25.8599998951 10
    t2 26.2339999676 10
    t1 28.2349998951 11
    28.2189998627

    gilinter - starves both bg threads and high rate of context switches. 45% CPU, 203,000 context switches per second

    z:\gilinter\PCbuild\python.exe y:\writes.py
    t1 13.0939998627 0
    t1 26.4219999313 1
    t1 39.8129999638 2
    t1 53.1559998989 3
    57.5470001698

    PyCON - starves one bg thread and slow IO thread.
    Py32 - starves IO thread as expected.

    Note, PyCON, gilinter and py32 starve the bg thread with Dave's original buffered write test as well - http://bugs.python.org/issue7946#msg101116

    0d6a79a7-4e74-41c2-8eed-8d04ea475e57 commented 14 years ago

    The attached patch makes two simple refinements to the new GIL implemented in Python 3.2. Each is briefly described below.

    1. Changed mechanism for thread time expiration

    In the current implementation, threads perform a timed-wait on a condition variable. If time expires and no thread switches have occurred, the currently running thread is forced to drop the GIL.

    In the patch, timeouts are now performed by a special "GIL monitor" thread. This thread runs independently of Python and simply handles time expiration. Basically, it records the number of thread switches, sleeps for a specified interval (5ms), and then looks at the number of thread switches again. If no switches occurred, it forces the currently running thread to drop the GIL.

    With this monitor thread, it is no longer necessary to perform any timed condition variable waits. This approach has a few subtle benefits. First, threads no longer sit in a wait/timeout cycle when trying to get the GIL (so, there is less overhead). Second, you get FIFO scheduling of threads. When time expires, the thread that has been waiting the longest on the condition variable runs next. Generally, you want this.
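
    A Python sketch of the monitor idea (illustrative; the patch itself does this in C inside the GIL machinery, and these names are invented):

    import threading
    import time

    class GILState:
        def __init__(self):
            self.switch_count = 0          # bumped on every GIL handoff
            self.gil_drop_request = False  # polled by the running thread

    def gil_monitor(state, interval=0.005):
        # Sleep for the interval; if no thread switch happened in the
        # meantime, force the current GIL holder to give it up.
        while True:
            seen = state.switch_count
            time.sleep(interval)
            if state.switch_count == seen:
                state.gil_drop_request = True

    # The monitor runs independently of all Python threads, e.g.:
    # threading.Thread(target=gil_monitor, args=(GILState(),)).start()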

    2. A very simple two-level priority mechanism

    A new attribute 'cpu_bound' is added to the PyThreadState structure. If a thread is ever forced to drop the GIL, this attribute is simply set True (1). If a thread gives up the GIL voluntarily, it is set back to False (0). This attribute is used to set up simple scheduling (described next).

    There are now two separate condition variables (gil_cpu_cond) and (gil_io_cond) that separate waiting threads according to their cpu_bound attribute setting. CPU-bound threads wait on gil_cpu_cond whereas I/O-bound threads wait on gil_io_cond.

    Using the two condition variables, scheduling is arranged so that waiting I/O-bound threads are signaled before waiting CPU-bound threads when the GIL is released:
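
    A minimal sketch of that release path (hypothetical structure and names, assuming the strict I/O-first preference described here, not the patch's actual code):

    import threading

    class TwoLevelGIL:
        def __init__(self):
            self.mutex = threading.Lock()
            self.gil_io_cond = threading.Condition(self.mutex)
            self.gil_cpu_cond = threading.Condition(self.mutex)
            self.io_waiters = 0
            self.cpu_waiters = 0
            self.locked = False

        def release(self):
            with self.mutex:
                self.locked = False
                if self.io_waiters:        # I/O-bound waiters always go first
                    self.gil_io_cond.notify()
                elif self.cpu_waiters:     # CPU-bound threads run only otherwise
                    self.gil_cpu_cond.notify()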

    Results
    -------

    This patch gives excellent results for both the ccbench test and all of my previous I/O bound tests. Here is the output:

    == CPython 3.2a0.0 (py3k:80470:80497M) ==
    == i386 Darwin on 'i386' ==

    --- Throughput ---

    Pi calculation (Python)

    threads=1: 871 iterations/s.
    threads=2: 844 ( 96 %)
    threads=3: 838 ( 96 %)
    threads=4: 826 ( 94 %)

    regular expression (C)

    threads=1: 367 iterations/s.
    threads=2: 345 ( 94 %)
    threads=3: 339 ( 92 %)
    threads=4: 327 ( 89 %)

    bz2 compression (C)

    threads=1: 384 iterations/s.
    threads=2: 728 ( 189 %)
    threads=3: 695 ( 180 %)
    threads=4: 707 ( 184 %)

    --- Latency ---

    Background CPU task: Pi calculation (Python)

    CPU threads=0: 0 ms. (std dev: 0 ms.)
    CPU threads=1: 0 ms. (std dev: 0 ms.)
    CPU threads=2: 1 ms. (std dev: 2 ms.)
    CPU threads=3: 0 ms. (std dev: 1 ms.)
    CPU threads=4: 0 ms. (std dev: 1 ms.)

    Background CPU task: regular expression (C)

    CPU threads=0: 0 ms. (std dev: 0 ms.)
    CPU threads=1: 2 ms. (std dev: 1 ms.)
    CPU threads=2: 1 ms. (std dev: 1 ms.)
    CPU threads=3: 1 ms. (std dev: 1 ms.)
    CPU threads=4: 2 ms. (std dev: 1 ms.)

    Background CPU task: bz2 compression (C)

    CPU threads=0: 0 ms. (std dev: 0 ms.)
    CPU threads=1: 0 ms. (std dev: 2 ms.)
    CPU threads=2: 2 ms. (std dev: 3 ms.)
    CPU threads=3: 0 ms. (std dev: 1 ms.)
    CPU threads=4: 0 ms. (std dev: 1 ms.)

    --- I/O bandwidth ---

    Background CPU task: Pi calculation (Python)

    CPU threads=0: 5850.9 packets/s.
    CPU threads=1: 5246.8 ( 89 %)
    CPU threads=2: 4228.9 ( 72 %)
    CPU threads=3: 4222.8 ( 72 %)
    CPU threads=4: 2959.5 ( 50 %)

    Particular attention should be given to the tests involving I/O performance. For comparison, here are the results of the I/O bandwidth test using the unmodified GIL:

    --- I/O bandwidth ---

    Background CPU task: Pi calculation (Python)

    CPU threads=0: 6007.1 packets/s.
    CPU threads=1: 189.0 ( 3 %)
    CPU threads=2: 19.7 ( 0 %)
    CPU threads=3: 19.7 ( 0 %)
    CPU threads=4: 5.1 ( 0 %)

    Other Benefits
    --------------

    This patch does not involve any complicated libraries, platform specific functionality, low-level lock twiddling, or mathematically complex priority scheduling algorithms. To emphasize: the code is simple.

    Negative Aspects
    ----------------

    This modification might introduce a starvation effect where CPU-bound threads never get to run if there is an extremely heavy load of I/O-bound threads competing for the GIL.

    Comparison to BFS
    -----------------

    Still need to test. Would be curious.

    0d6a79a7-4e74-41c2-8eed-8d04ea475e57 commented 14 years ago

    One comment on that patch I just submitted. Basically, it's an attempt to make an extremely simple tweak to the GIL that fixes most of the problems discussed here in an extremely simple manner. I don't have any special religious attachment to it though. Would love to see a BFS comparison.

    0d6a79a7-4e74-41c2-8eed-8d04ea475e57 commented 14 years ago

    Here is the result of running the writes.py test with the patch I submitted. This is on OS-X.

    bash-3.2$ ./python.exe writes.py
    t1 2.83990693092 0
    t2 3.27937912941 0
    t1 5.54346394539 1
    t2 6.68237304688 1
    t1 8.9648039341 2
    t2 9.60041999817 2
    t1 12.1856160164 3
    t2 12.5866689682 3
    t1 15.3869640827 4
    t2 15.7042851448 4
    t1 18.4115200043 5
    t2 18.5771169662 5
    t2 21.4922711849 6
    t1 21.6835460663 6
    t2 24.6117911339 7
    t1 24.9126679897 7
    t1 27.1683580875 8
    t2 28.2728791237 8
    t1 29.4513950348 9
    t1 32.2438161373 10
    t2 32.5283250809 9
    t1 34.8905010223 11
    t2 36.0952250957 10
    t1 38.109760046 12
    t2 39.3465380669 11
    t1 41.5758800507 13
    t2 42.587772131 12
    t1 45.1536290646 14
    t2 45.8339021206 13
    t1 48.6495029926 15
    t2 49.1581180096 14
    t1 51.5414950848 16
    t2 52.6768190861 15
    t1 54.818582058 17
    t2 56.1163961887 16
    t1 58.1549630165 18
    t2 59.6944830418 17
    t1 61.4515309334 19
    t2 62.7685520649 18
    t1 64.3223180771 20
    t2 65.8158640862 19
    65.8578810692