Attached patch implements a handler for the SIGSEGV signal. It uses its own stack so that it can still allocate memory on the stack (e.g. call a function), even on stack overflow.
The patch requires the sigaction() and sigaltstack() functions, but I didn't patch the configure.in script. These functions are available on Linux and should be available on other UNIX OSes.
The segfault() signal handler assumes that the thread state is consistent (the interp->frame linked list). It indirectly calls PyUnicode_EncodeUTF8(), which in turn calls PyBytes_FromStringAndSize(), which allocates memory on the heap. It clears the PyUnicode "defenc" attribute (the result of PyUnicode_EncodeUTF8()) to free that memory right away.
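For readers unfamiliar with the mechanism, a minimal sketch of installing a handler on an alternate stack (illustrative names, not the patch's actual code):

```c
#include <signal.h>
#include <stdlib.h>

/* Hypothetical handler; the patch's real handler dumps the frames. */
static void fault_handler(int signum, siginfo_t *info, void *context);

static stack_t stack;

static int install_fault_handler(void)
{
    struct sigaction action;

    /* A dedicated stack: the normal stack may be exhausted, since
       stack overflow is a common cause of SIGSEGV. */
    stack.ss_flags = 0;
    stack.ss_size = SIGSTKSZ;
    stack.ss_sp = malloc(stack.ss_size);
    if (stack.ss_sp == NULL || sigaltstack(&stack, NULL) != 0)
        return -1;

    action.sa_sigaction = fault_handler;
    sigemptyset(&action.sa_mask);
    action.sa_flags = SA_SIGINFO | SA_ONSTACK;  /* run on the alternate stack */
    return sigaction(SIGSEGV, &action, NULL);
}
```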
To test it, try some scripts in Lib/test/crashers/.
One example: --------------------
$ ./python Lib/test/crashers/recursive_call.py
Fatal Python error: segmentation fault
Traceback (most recent call first):
File "Lib/test/crashers/recursive_call.py", line 12, depth 15715
File "Lib/test/crashers/recursive_call.py", line 12, depth 15714
File "Lib/test/crashers/recursive_call.py", line 12, depth 15713
...
File "Lib/test/crashers/recursive_call.py", line 12, depth 3
File "Lib/test/crashers/recursive_call.py", line 12, depth 2
File "Lib/test/crashers/recursive_call.py", line 9, depth 1
Segmentation fault
See also issue bpo-3999: a similar patch that raised an exception on segfault. That patch was rejected because Python's internal state may be corrupted, so we cannot guarantee that the next instructions will be executed correctly.
This patch is safer because it only tries to display the backtrace and then exits.
New version of the patch:
TODO: Patch configure to only enable the segfault handler if sigaltstack() is available. The alternate stack may not be needed for the SIGFPE handler.
dmalcolm asked if it would be possible to display the Python backtrace on Py_FatalError(). I don't know: Py_FatalError() is usually called when Python internals are broken. But then, segfaults also usually occur when something is broken :-)
New version of the patch:
I only tested the patch on Linux with a narrow build. It should be tested at least on a wide build and on Windows.
I tested my patch on all Lib/test/crashers/*.py: all segfaults are replaced by nice Python tracebacks.
If you debug a Python program in gdb, gdb stops at the first SIGSEGV (before calling the signal handler).
I didn't test how the signal handler behaves if it raises a new fault (SIGFPE or SIGSEGV) itself. It is supposed to stop immediately.
In the worst case, my patch introduces an infinite loop and the program eats all CPU time (and maybe consumes a lot of memory?) instead of exiting immediately. In particular, segfault_handler() doesn't ensure that there is no loop in the frame linked list. A possible solution is to set a limit on the maximum depth (e.g. only display the first 100 frames and finish with "...").
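A sketch of that depth limit (dump_frame() and dump_str() are hypothetical helpers standing in for the patch's printing code):

```c
#include <Python.h>
#include <frameobject.h>

#define MAX_FRAME_DEPTH 100  /* illustrative limit */

static void dump_frame(PyFrameObject *frame, int fd);  /* hypothetical */
static void dump_str(int fd, const char *s);            /* hypothetical */

static void dump_backtrace(PyFrameObject *frame, int fd)
{
    unsigned int depth = 0;

    /* Bounded walk: even if the f_back chain contains a cycle,
       the loop terminates after MAX_FRAME_DEPTH iterations. */
    while (frame != NULL && depth < MAX_FRAME_DEPTH) {
        dump_frame(frame, fd);
        frame = frame->f_back;
        depth++;
    }
    if (frame != NULL)
        dump_str(fd, "  ...\n");  /* backtrace truncated */
}
```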
Updated example: ----------------------------------------------
$ ./python Lib/test/crashers/recursive_call.py
Fatal Python error: segmentation fault
Traceback (most recent call first):
File "Lib/test/crashers/recursive_call.py", line 12 in <lambda>
File "Lib/test/crashers/recursive_call.py", line 12 in <lambda>
File "Lib/test/crashers/recursive_call.py", line 12 in <lambda>
...
File "Lib/test/crashers/recursive_call.py", line 12 in <lambda>
File "Lib/test/crashers/recursive_call.py", line 12 in <lambda>
File "Lib/test/crashers/recursive_call.py", line 15 in <module>
Segmentation fault
SIGSEGV caught in gdb: ----------------------------------------------
$ gdb -args ./python Lib/test/crashers/recursive_call.py
...
(gdb) run
Starting program: /home/SHARE/SVN/py3k/python Lib/test/crashers/recursive_call.py
Program received signal SIGSEGV, Segmentation fault.
0x080614e1 in _PyObject_DebugMalloc (nbytes=24) at Objects/obmalloc.c:1398
1398        return _PyObject_DebugMallocApi(_PYMALLOC_OBJ_ID, nbytes);
(gdb)
----------------------------------------------
It should be tested at least on a wide build and ...
Done: it works correctly for non-BMP characters in narrow and wide builds.
E.g. in a wide build with a U+10FFFF character in the path: ---------------------
$ ./python bla.py
Fatal Python error: segmentation fault
Traceback (most recent call first):
File "/home/SHARE/SVN/py3k\U0010ffff/Lib/ctypes/__init__.py", line 486 in string_at
File "bla.py", line 2 in <module>
Segmentation fault
Aborted
Patch version 4:
This version works on Windows.
dmalcolm asked if it would be possible to display the Python backtrace on Py_FatalError()
It works :-) I fixed a bug in ceval.c (r85411) which was not directly related.
Patch version 5:
I posted the patch on Rietveld for a review (as asked by Antoine): http://codereview.appspot.com/2477041
It looks like this doesn't yet have any test cases.
You probably should invoke a child python process that crashes and examine the output (perhaps running some/all of the examples in Lib/test/crashers ?); you may want to "steal" some of the wrapper code from Lib/test/test_gdb.py to do this.
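A rough POSIX sketch of the child-process idea (the real test would use Python-level helpers; the crasher path and the "Fatal Python error" banner are taken from the examples above):

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    pid_t pid;
    char buf[4096];
    ssize_t n;
    int status;

    if (pipe(fds) != 0)
        return 1;
    pid = fork();
    if (pid == 0) {
        /* Child: send stderr into the pipe and run a crasher. */
        dup2(fds[1], 2);
        close(fds[0]);
        execl("./python", "python",
              "Lib/test/crashers/recursive_call.py", (char *)NULL);
        _exit(127);
    }
    close(fds[1]);
    n = read(fds[0], buf, sizeof(buf) - 1);  /* single read: sketch only */
    buf[n > 0 ? n : 0] = '\0';
    waitpid(pid, &status, 0);
    puts(strstr(buf, "Fatal Python error") != NULL
         ? "fault handler ran" : "no handler output");
    return 0;
}
```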
Test ideas:
Also, please test the interaction of this with the debugger (with gdb, at any rate): as I see it, this ought to gracefully get out of the way if you're running python under a debugger. See Lib/test/test_gdb.py for more examples of how to detect gdb, and invoke it in batch mode from a test suite.
One other concern: many OSes (e.g. Linux distributions) implement some kind of system-wide crash-catching utility; for example in Fedora we have ABRT ( https://fedorahosted.org/abrt/wiki ).
I'm not sure yet exactly how these are implemented, but we'd want to avoid breaking them: segfaults of a system-provided /usr/bin/python (or /usr/bin/python3 ) should continue to be detected by such tools.
By the way, don't you want to handle SIGILL and SIGBUS too?
You would be wise to avoid using heap storage once you're in the crash handler. From a security standpoint, if something has managed to damage the heap (which is not uncommon in a crash), you should not attempt to allocate or free heap memory. On modern glibc systems, this isn't much of a concern as there are various memory protection mechanisms that make heap exploitation very very hard (you're just going to end up crashing the crash handler). I'm not sure about other operating systems that python supports though.
You would be wise to avoid using heap storage once you're in the crash handler. From a security standpoint, if something has managed to damage the heap (which is not uncommon in a crash), you should not attempt to allocate or free heap memory.
As far as I can tell, the signal handler in the patch doesn't call malloc() or free(), neither directly nor indirectly.
I am then confused by this in the initial comment:
It calls indirectly PyUnicode_EncodeUTF8() and so call PyBytes_FromStringAndSize() which allocates memory on the heap.
I've not studied the patch though, so this may have changed.
The patch certainly has changed, yes. In the latest patch, printing of unicode message is done one code point at a time without allocating any intermediary area.
Version 6:
I was too lazy to reimplement functions to convert an integer to a string in bases 10 and 16, so I used snprintf() on a small buffer allocated on the stack.
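For reference, such a conversion is only a few lines to hand-roll, and unlike snprintf() it is trivially async-signal-safe; a sketch:

```c
/* Write VALUE in the given base into a caller-supplied buffer,
   building the digits backwards from END; returns a pointer to the
   first digit.  No heap, no locale, no library calls. */
static char *format_uint(char *end, unsigned long value, unsigned int base)
{
    static const char digits[] = "0123456789abcdef";
    char *p = end;

    *p = '\0';
    do {
        *--p = digits[value % base];
        value /= base;
    } while (value != 0);
    return p;
}

/* usage: char buf[32]; char *s = format_uint(buf + sizeof(buf) - 1, n, 10); */
```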
TODO (maybe): Call the original signal handler to make tools like apport or ABRT able to catch segmentation faults.
By the way, don't you want to handle SIGILL and SIGBUS too?
Maybe. SIGILL is a very rare exception. To test it, should I send a SIGILL signal to the process? Or should I try to execute arbitrary CPU instructions?
About SIGBUS: I don't know this signal. Is it used on Linux? If not, on which OS is it used?
> About SIGBUS: I don't know this signal. Is it used on Linux? If not, on which OS is it used?

Yes. IIRC, you'll typically only see it on RISC CPUs that require instructions to be word-aligned; you typically see it if the program counter jumps to a broken address; the SIGBUS happens when the CPU tries to fetch the code.
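For testing, both signals can be provoked deliberately. An illustrative sketch, assuming GCC and Linux mmap() semantics:

```c
#include <fcntl.h>
#include <sys/mman.h>

static void trigger_sigill(void)
{
    /* On x86, GCC's __builtin_trap() emits "ud2", an invalid
       opcode, which the kernel reports as SIGILL. */
    __builtin_trap();
}

static void trigger_sigbus(void)
{
    /* Map a page of an empty file, then touch it: accessing a
       mapping with no backing file data raises SIGBUS on Linux
       and most Unices.  The file name is illustrative. */
    int fd = open("/tmp/empty", O_RDWR | O_CREAT | O_TRUNC, 0600);
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p != MAP_FAILED)
        p[0] = 1;  /* fault happens here */
}
```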
Patch version 7:
Patch version 8:
One other concern: many OSes (e.g. Linux distributions) implement some kind of system-wide crash-catching utility; ...
Even if it were possible to call the original signal handler, I think it would be better to just disable this feature if a (better?) program is also watching for segfaults.
We should provide different ways to enable and/or disable this feature:
I don't think that a command line option or an environment variable is practical for an OS distributor.
With a sys function, the OS distributor can call it in the site module.
I don't think that a command line option or an environment variable is practical for an OS distributor.
Environment variables are probably the most practical for OS vendors, since they can simply set them in /etc/profile.d (Mandriva does that with PYTHONDONTWRITEBYTECODE :-/).
If it's an env variable, though, it should be clear that it's CPython-specific, so I'm not sure how to call it. CPYTHONNOSEGFAULTHANDLER?
If it's an env variable, though, it should be clear that it's CPython-specific, so I'm not sure how to call it.
Why do you think that only this variable is CPython-specific? CPython has an option called PYTHONDONTWRITEBYTECODE, but PyPy doesn't create .pyc files (and I suppose that neither IronPython nor Jython creates such files).
I propose calling the variable PYTHONNOFAULTHANDLER. I removed "SEG" because the handler catches other faults than segmentation faults.
Version 9 of my patch:
I don't know whether 100 frames is a good limit or not. Is it enough?
Summary of the patch:
The fault handler no longer has the infinite loop issue (the backtrace is truncated if the frame list loops).
Oh, I'm tired...
Summary of the patch:
- ...abort the process (call the debugger on Windows)
(the debugger is only called if Python is compiled in debug mode)
- Add PYTHONNOHANDLER environment variable ...
Oops, the correct name is PYTHONNOFAULTHANDLER (no fault handler).
Why was sys.setsegfaultenabled() omitted? It may be useful to disable the handler from a script.
Why was sys.setsegfaultenabled() omitted?
Just because I forgot your message, sorry.
Version 10 of my patch:
With this patch, the original signal handler is called, so the Python fault handler is compatible with OS fault handlers like grsecurity and Apport.
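One way to achieve that compatibility: save the previous sigaction at install time, then restore it and re-raise from inside the handler so the OS-level catcher still sees the fault. A sketch with a hypothetical dump_backtrace(), not the patch's exact code:

```c
#include <signal.h>

static struct sigaction previous_action;

static void dump_backtrace(void);  /* hypothetical helper */

static void fault_handler(int signum)
{
    dump_backtrace();
    /* Hand the fault back: restore whatever was installed before
       (Apport, grsecurity, or SIG_DFL) and re-raise the signal. */
    sigaction(signum, &previous_action, NULL);
    raise(signum);
}

static int install_fault_handler(void)
{
    struct sigaction action;
    action.sa_handler = fault_handler;
    sigemptyset(&action.sa_mask);
    action.sa_flags = 0;
    /* The third argument saves the original handler for chaining. */
    return sigaction(SIGSEGV, &action, &previous_action);
}
```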
FYI, in v10,

+#define NFAULT_SIGNALS (sizeof(fault_signals) / sizeof(fault_signals[0]))
+static fault_handler_t fault_handlers[4];

should use "NFAULT_SIGNALS" instead of "4".
However, this bit of code bothers me a lot:
+    const int fd = 2; /* should be fileno(stderr) */
To assume that fd=2 is the right place to be writing bytes is assuming quite a bit about the state of the application. It is not unusual at all to close 0, 1, 2 when writing daemons, which frees them up to be assigned to *anything*. For all you know, fd=2 currently is a network socket that you will be throwing gibberish at, or worse, it could be a block device that you are writing gibberish on.
The closest discussion I could find on this subject was on the libstdc++ mailing-list with regard to their verbose termination code:
http://gcc.gnu.org/ml/gcc-patches/2004-02/msg02388.html
AFAICT, their conclusion was that the only reasonable solution was to write to the stderr FILE since it was the only thing that is guaranteed to make sense always (even though it may fail). Their situation is different in that they are handling a C++ exception, so they don't have to stick to async-safe functions, but absent that extra difficulty, I believe the reasoning is the same.
The analogous situation exists in Python, in that sys.stderr(*) represents where the application programmer expects "stderr" writing to go. I think you need to arrange to know what the fd number for that object is, or this patch invites failures of the "wrote garbage to my hard drive and destroyed my data" sort.
I'm not sure there is a safe way to know what the fileno for "sys.stderr" is because it can be anything, including an object whose fileno changes over time. However, I think it would be fair to support only built-in io types that are obviously safe, since you could cache the fileno() value at assignment to use in your fault handler.
(*) Or perhaps __stderr__ if stderr is None or an unsupported type?
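One possible shape for that fileno-caching idea (hypothetical names; it assumes the interpreter calls the setter whenever sys.stderr is rebound to a supported built-in file type):

```c
#include <unistd.h>

/* -1 means "no usable descriptor: stay silent". */
static volatile int fault_fd = 2;

void fault_set_fd(int fd)  /* called when sys.stderr is reassigned */
{
    fault_fd = fd;
}

static void fault_write(const char *msg, size_t len)
{
    if (fault_fd >= 0)
        write(fault_fd, msg, len);  /* write() is async-signal-safe */
}
```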
On Monday, December 20, 2010 at 07:55:08, you wrote:
+#define NFAULT_SIGNALS (sizeof(fault_signals) / sizeof(fault_signals[0]))
+static fault_handler_t fault_handlers[4];

should use "NFAULT_SIGNALS" instead of "4".
Ah yes, NFAULT_SIGNALS is a better choice than the maximum of possible signals (4).
However, this bit of code bothers me a lot:
+    const int fd = 2; /* should be fileno(stderr) */

To assume that fd=2 is the right place to be writing bytes is assuming quite a bit about the state of the application. It is not unusual at all to close 0, 1, 2 when writing daemons, which frees them up to be assigned to *anything*.
Writing into a closed file descriptor just does nothing. Closed file descriptors are not a problem.
For all you know, fd=2 currently is a network socket that you will be throwing gibberish at, or worse it could be a block device that you are writing gibberish on.
The GNU libc also has a fault handler (source code: debug/segfault.c). It uses file descriptor 2, except if the SEGFAULT_OUTPUT_NAME environment variable is set (value stored in the "fname" variable): it then opens the specified file.
/* This is the name of the file we are writing to.  If none is given
   or we cannot write to this file, write to stderr.  */
fd = 2;
if (fname != NULL)
  {
    fd = open (fname, O_TRUNC | O_WRONLY | O_CREAT, 0666);
    if (fd == -1)
      fd = 2;
  }
The GNU libc installs handlers for the SIGSEGV, SIGILL, SIGBUS, SIGSTKFLT, SIGABRT and SIGFPE signals. The handler restores the default handler and re-raises the signal:
/* Pass on the signal (so that a core file is produced).  */
sa.sa_handler = SIG_DFL;
sigemptyset (&sa.sa_mask);
sa.sa_flags = 0;
sigaction (signal, &sa, NULL);
raise (signal);
Where "raise(signal);" is something like kill(getpid(), signal).
It only uses an alternate stack if SEGFAULT_USE_ALTSTACK environment variable is set.
The handler can display registers, the backtrace and the memory mappings, depending on the compilation options and the operating system.
The closest discussion I could find on this subject was on the libstdc++ mailing-list with regard to their verbose termination code:
http://gcc.gnu.org/ml/gcc-patches/2004-02/msg02388.html
AFAICT, their conclusion was that the only reasonable solution was to write to the stderr FILE since it was the only thing that is guaranteed to make sense always (even though it may fail). Their situation is different in that they are handling a C++ exception, so they don't have to stick to async-safe functions, but absent that extra difficulty, I believe the reasoning is the same.
The FILE* type cannot be used because fprintf(), fputs(), ... are not signal-safe.
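So the handler has to stay at the raw file-descriptor level, e.g. with something like the following (a sketch; strlen() is not on POSIX's async-signal-safe list, though it is a pure function in practice):

```c
#include <string.h>
#include <unistd.h>

/* fprintf(stderr, ...) is off-limits inside the handler; a raw
   write() on the descriptor is on POSIX's async-signal-safe list. */
#define PUTS(fd, str) write((fd), (str), strlen(str))
```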
I'm not sure there is a safe way to know what the fileno for "sys.stderr" is because it can be anything, including an object whose fileno changes over time
The signal handler cannot access the Python object. E.g. if sys.stderr is a StringIO() (which has no file descriptor), it cannot be used.
However, I think it would be fair to support only built-in io types that are obviously safe, since you could cache the fileno() value at assignment to use in your fault handler.
The problem is to detect that the stderr file descriptor changes (e.g. closed, duplicated, reopened, etc.). I don't think that it's possible to detect such changes (with a portable function).
The fault handler is unable to retrieve the thread state if the GIL is released. I will try to fix that.
On 12/20/2010 8:30 AM, STINNER Victor wrote:
Writing into a closed file descriptor just does nothing. Closed file descriptors are not a problem.
My issue is not with a closed file descriptor, it is with an open file descriptor that is not what you think it is.
> For all you know, fd=2 currently is a network socket that you will be throwing gibberish at, or worse it could be a block device that you are writing gibberish on.
The GNU libc has also a fault handler (source code: debug/segfault.c). It uses the file descriptor 2, except if SEGFAULT_OUTPUT_NAME environment variable is set (value stored in "fname" variable): open the specified file.
The GNU libc segfault handler is *opt-in* via the SEGFAULT_SIGNALS environment variable. I do not know of a system where SEGFAULT_SIGNALS is a part of the default environment. I suspect the only time anyone uses that variable and code is if they are using the "catchsegv" tool, which comes with glibc. In any case, that developer should be aware of the implication of closing fd 2.
The FILE* type cannot be used because fprintf(), fputs(), ... are not signal- safe.
My point was that in C++, they have an object that an application developer more readily associates with "stderr" than fd 2. That writing to "stderr" leads to a write to fd 2 is incidental, because it's possible that "stderr" does *not* lead to a write to fd 2 and that writing to fd 2 would be a bad thing to do blindly. For instance, you can call freopen(path, mode, stderr) and fd 2 will end up closed and another fd will be the target of stderr. I don't believe POSIX guarantees that open() will not re-use fd 2.
The problem is to detect that stderr file descriptor changes (eg. closed, duplicated, reopened, etc.). I don't think that it's possible to detect such changes (with a portable function).
When I said that, I hadn't fully investigated the intricacies of the io types. I was unaware that you could assign to "sys.stderr.buffer.raw" and change out the target fd. I assumed a BufferedWriter could not have the target stream changed out from beneath it. So, I don't have a solution to the problem of knowing the correct (if any) file descriptor to write to.
If the argument is made that fd 2 is the right place for most Python applications, then there needs to be a programmatic way to disable this feature and/or tell it where to write, so that programs that daemonize can do the right thing.
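The daemonization concern is easy to demonstrate, since POSIX open() returns the lowest free descriptor (file names are illustrative):

```c
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Daemon-style setup: release the standard descriptors. */
    close(0);
    close(1);
    close(2);

    /* The next opens take the lowest free slots: 0, 1, then 2.
       Here the third open, fd 2, is a data file, and a fault
       handler that blindly write()s to fd 2 would corrupt it. */
    open("/dev/null", O_RDONLY);                   /* becomes fd 0 */
    open("/dev/null", O_WRONLY);                   /* becomes fd 1 */
    open("/tmp/data.bin", O_RDWR | O_CREAT, 0600); /* becomes fd 2 */
    return 0;
}
```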
> > The problem is to detect that stderr file descriptor changes (eg. closed, duplicated, reopened, etc.). I don't think that it's possible to detect such changes (with a portable function).
>
> When I said that, I hadn't fully investigated the intricacies of the io types. I was unaware that you could assign to "sys.stderr.buffer.raw" and change out the target fd. I assumed a BufferedWriter could not have the target stream changed out from beneath it.
AFAICT, this is not deliberate (not documented, and not tested for). It should probably be fixed, actually, because there's no code that I know of that ensures it does something meaningful.
Version 11 of my patch:
Disabling the fault handler by default solves many of the issues reported on the python-dev mailing list.
Amaury asked for a sys.setsegfaultenabled() option: I think that the command line option and the environment variable are enough. The question is now how to enable the feature for a single run (to reproduce a crash and try to get more information), not how to enable/disable it system-wide (because most developers agree that it should be disabled by default).
Should it be backported to Python 2.7 and 3.1?
Stephen,
I wonder if you would have comments on this. As far as I know, emacs installs SEGV handlers similar to the ones proposed here. How well do they work? Does it really help users to produce meaningful bug reports?
On 12/22/2010 8:52 PM, STINNER Victor wrote:
Amaury asked for a sys.setsegfaultenabled() option: I think that the command line option and the environment variable are enough.
I really think you should think of it as a choice the developer of an application makes instead of a choice an application user makes. A setsegfaultenabled() could just be another step of initializing the application akin to setting up the logging module, for instance. In that situation, a function call has a much lower barrier for use than a CLI option or environment variable where you'd have to create a wrapper script.
On Wed, Dec 22, 2010 at 9:27 PM, Scott Dial <report@bugs.python.org> wrote:

Scott Dial <scott@scottdial.com> added the comment:
On 12/22/2010 8:52 PM, STINNER Victor wrote:
> Amaury asked for a sys.setsegfaultenabled() option: I think that the command line option and the environment variable are enough.
I really think you should think of it as a choice the developer of an application makes instead of a choice an application user makes. A setsegfaultenabled() could just be another step of initializing the application akin to setting up the logging module, for instance. In that situation, a function call has a much lower barrier for use than a CLI option or environment variable where you'd have to create a wrapper script.
+1
I would actually prefer just sys.setsegfaultenabled() without a controlling environment variable. If necessary, the environment variable can be checked in site.py and sys.setsegfaultenabled() called.
As I suggested on python-dev, I also think this belongs to a separate module rather than core or sys. The relevant code is already segregated in a file, so turning it into a module should not be difficult. The only function that probably must stay in core is _Py_DumpBacktrace(). With, say, a "segvhandler" module, site.py could include something like this:
if os.environ.get('PYTHONSEGVHANDLER'):
    import segvhandler
    segvhandler.enable()
Does the latest patch address the GIL/multithreading issues?
Does the latest patch address the GIL/multithreading issues?
Yes.
On Thursday, December 23, 2010 at 02:27 +0000, Scott Dial wrote:

Scott Dial <scott@scottdial.com> added the comment:

On 12/22/2010 8:52 PM, STINNER Victor wrote:
> Amaury asked for a sys.setsegfaultenabled() option: I think that the command line option and the environment variable are enough.

I really think you should think of it as a choice the developer of an application makes instead of a choice an application user makes.

Why do you think so? Can you give me a use case of sys.setsegfaultenabled()?
Extract of my email on python-dev:
Use case: when a program crashes, the user reruns the application with the fault handler enabled and tries to reproduce the crash. He/she can send the Python backtrace to the developer, or use it directly (if he/she understands it).
After the discussion on python-dev, I don't think that the fault handler should be enabled by default, but only for a single run.
On Thursday, December 23, 2010 at 02:45 +0000, Alexander Belopolsky wrote:
As I suggested on python-dev, I also think this belongs to a separate module rather than core or sys.
Why do you want to move it outside the Python core? It is very dependent on Python internals (GIL/threads, frames, etc.), so I think that it's better to keep it in the Python core.
On 12/22/2010 10:35 PM, STINNER Victor wrote:
Why do you think so? Can you give me a use case of sys.setsegfaultenabled()?
To feed back your own argument on python-dev:
How do you know that your application will crash? The idea is to give information to the user when an application crashes: the user can use the backtrace or send it to the developer. Segmentation faults are usually not (easily) reproducible :-( So even if you enable the fault handler, you may not be able to replay the crash. Or even worse, the fault may not occur at all when you enable the fault handler... (Heisenbugs!)
After the discussion on python-dev, I don't think that the fault handler should be enabled by default, but only for a single run.
I agree that it should be disabled by default because of the potential to do bad things if the application was not written with it in mind. But an application developer would be in a much better position to decide what the default should be for their application if they believe they will be able to get more useful bug reports from their users by enabling it.
I thought that was your position, but if you no longer believe that, then I will not push for it.
Re: msg124528
Yes, XEmacs installs a signal handler on what are normally fatal errors. (I don't know about GNU Emacs but they probably do too.)
The handler has two functions: to display a Lisp backtrace and to output a message explaining how to report bugs (even including a brief introduction to the "bt" command in gdb. ;-)
I personally have never found the Lisp backtrace useful, except that it can be used as a bug signature of sorts ("oh, I think I've seen this one before..."). However, I suspect this is mostly because in Emacs Lisp very often you don't have the name of the function in the backtrace, only a compiled code object. So in many cases it's almost no help in localizing the fault. Victor's patch does a lot better on this than XEmacs can, I suspect.
The bug reporting message, OTOH, has been useful to us for the reasons people give for wanting the handler installed by default. We get more and better bug reports, often including C backtraces, from people who have never participated directly in XEmacs development before. (It also once served the function of inhibiting people from sending us core files. Fortunately, I don't think that happens much any more. :-) Occasionally a user will be all proud of themselves because "I never used gdb before," so I'm pretty sure that message is effective.
Quite frequently we see the handler itself crash if there was memory corruption, but certainly it gives useful output well over half the time. So I want to back up Victor on those aspects.
Finally, although our experience has been very positive, note that XEmacs is not an embeddable library, nor is there provision in the mainline versions for embedding other interpreters in XEmacs. So we've never had to worry about the issues that come with that.
For more technical details, you could ask Ben Wing <ben@xemacs.org>, who put a lot of effort into the signal handling implementation, or Hrvoje Niksic (sorry, no address offhand), who posts on python-dev occasionally. (I don't know if Hrvoje ever worked on the signal handlers, and he hasn't worked on XEmacs for years, but he was more familiar with internals than me then, and might very well still remember more than I ever knew. :-) I don't think either will disagree with my general statements above, though.
[Alexander]
if os.environ.get('PYTHONSEGVHANDLER'):
    import segvhandler
    segvhandler.enable()
+1
If this doesn't find support, I'd rename sys.setsegfaultenabled() to sys.setsegvhandlerenabled() or sys.enable_segvhandler().
Note: to avoid the signal-safety requirement, another solution is to use sigsetjmp()+siglongjmp().
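A self-contained sketch of that idea (caveat: jumping out of a SIGSEGV handler is formally undefined behaviour, so treat this as an illustration only):

```c
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf fault_env;

static void fault_handler(int signum)
{
    siglongjmp(fault_env, signum);  /* jump back to sigsetjmp() */
}

int main(void)
{
    signal(SIGSEGV, fault_handler);

    /* The second argument (1) saves the signal mask so that the
       jump also unblocks SIGSEGV, which the handler had blocked. */
    int sig = sigsetjmp(fault_env, 1);
    if (sig == 0) {
        *(volatile char *)0 = 0;  /* trigger the fault */
    }
    else {
        /* Back on the normal path: non-signal-safe code such as
           printf() is usable again. */
        printf("caught signal %d\n", sig);
    }
    return 0;
}
```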
I tested the patch version 11 on Windows: all tests pass. But #include <unistd.h> should be skipped on Windows (Python/fault.c): I will add #ifdef MS_WINDOWS.
I tested the patch version 11 on Windows: all tests pass.
Oh, and I forgot to say that the Windows fault handler does catch the fault too (Windows opens a popup with a question like "Should the error be reported to Microsoft?").
Tested on FreeBSD 8: all tests pass (all of the 4 signals are supported) and FreeBSD dumps a core file.
Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.
GitHub fields:
```python
assignee = None
closed_at =
created_at =
labels = ['interpreter-core', 'type-feature']
title = 'Display Python backtrace on SIGSEGV, SIGFPE and fatal error'
updated_at =
user = 'https://github.com/vstinner'
```
bugs.python.org fields:
```python
activity =
actor = 'eric.araujo'
assignee = 'none'
closed = True
closed_date =
closer = 'vstinner'
components = ['Interpreter Core']
creation =
creator = 'vstinner'
dependencies = []
files = ['20144']
hgrepos = []
issue_num = 8863
keywords = ['patch']
message_count = 54.0
messages = ['106801', '106803', '108156', '114289', '118486', '118487', '118489', '118491', '118511', '118526', '118527', '118528', '118529', '118538', '118539', '118541', '118543', '118586', '118587', '118588', '119174', '119178', '119179', '119193', '123037', '124266', '124267', '124268', '124364', '124373', '124381', '124385', '124388', '124397', '124399', '124527', '124528', '124529', '124531', '124533', '124534', '124535', '124536', '124537', '124541', '124544', '124545', '124548', '124549', '124550', '124551', '124552', '124583', '130260']
nosy_count = 11.0
nosy_names = ['amaury.forgeotdarc', 'davidfraser', 'belopolsky', 'scott.dial', 'pitrou', 'vstinner', 'eric.araujo', 'sjt', 'skrah', 'dmalcolm', 'joshbressers']
pr_nums = []
priority = 'normal'
resolution = 'rejected'
stage = 'patch review'
status = 'closed'
superseder = None
type = 'enhancement'
url = 'https://bugs.python.org/issue8863'
versions = ['Python 3.2', 'Python 3.3']
```