hochl opened this issue 2 years ago
Here's an example of a `Py_CLEAR()` expression:
We also have:
```c
static int
template_clear(TemplateObject *self)
{
    Py_CLEAR(self->literal);
    for (Py_ssize_t i = 0, n = Py_SIZE(self); i < n; i++) {
        Py_CLEAR(self->items[i].literal);
    }
    return 0;
}
```
And:

```c
for (Py_ssize_t i = 0; i < keys->dk_nentries; i++) {
    Py_CLEAR(values->values[i]);
}
```
All cases are element access, but these are the most complex examples I was able to find.
I am not sure that I made the point totally clear. Here is an example program for clarification, using a dummy type for `PyObject`.
```c
#include <stdlib.h>

#define _PyObject_CAST(op) ((PyObject*)(op))
#define Py_CLEAR(op) \
    do { \
        PyObject *_py_tmp = _PyObject_CAST(op); \
        if (_py_tmp != NULL) { \
            (op) = NULL; \
            /* Py_DECREF(_py_tmp); */ \
            free((void*)_py_tmp); \
        } \
    } while (0)

typedef struct {
} PyObject;

int main()
{
    PyObject* obj[16];
    PyObject** p = obj;
    size_t i;

    for (i = 0; i < 16; i++) {
        obj[i] = malloc(sizeof(PyObject));
    }
#if 1
    for (int i = 0; i < 16; i++) {
        Py_CLEAR(*p++);
    }
#else
    for (int i = 0; i < 16; i++, p++) {
        Py_CLEAR(*p);
    }
#endif
}
```
This code gives a buffer overflow. Changing the last loop to `for (int i = 0; i < 16; i++, p++) Py_CLEAR(*p);` works without problems. The side effect is not triggered by Python library code, and most probably not in any sane C project, but it is not inconceivable that this macro shoots someone out there in the foot. I suggest, if you have the choice, somehow making all macros evaluate their arguments just once.
I'm not sure we've ever promised that macros won't evaluate their args more than once. @vstinner ?
"Duplication of side effects" is one catch of macros that PEP 670 tries to avoid: https://peps.python.org/pep-0670/#rationale
Sadly, Py_CLEAR() cannot be converted to a function (as part of PEP 670) since it magically gets a reference to a pointer thanks to preprocessor magic. A hypothetical Py_Clear() function would take a pointer to a Python object, so a `PyObject**` argument, like: `Py_Clear(&variable);`.
IMO using `&argument` in the macro implementation is an acceptable fix to prevent the duplication of side effects. I think the compiler will optimize out the additional temporary variable in most cases.
I agree with you. Moreover, correctness matters more than performance for Py_CLEAR() API.
cc @serhiy-storchaka @erlend-aasland
Maybe a `Py_ClearRef(PyObject **)` function should also be added?
Somehow related discussions:
I am against extending the ABI without need. Py_CLEAR is part of the API, but not of the ABI. Incref/decref is enough; Py_CLEAR is just API sugar.
Calling `Py_Clear(&variable)` can prevent some compiler optimizations. The compiler can no longer keep `variable` in a register, it can no longer assume `variable` is now NULL, and it cannot guarantee that the value of the local `variable` will not change outside of the local code. I faced a similar situation recently, and the problem was not only that the compiler generated less efficient code, but that it started issuing warnings about correct code that it could no longer fully analyze.
How about this macro?
```c
#define Py_CLEAR(op) \
    do { \
        PyObject **_py_tmp = (PyObject**)&(op); \
        if (*_py_tmp != NULL) { \
            PyObject* _py_tmp2 = *_py_tmp; \
            *_py_tmp = NULL; \
            /* Py_DECREF(_py_tmp2); */ \
            free((void*)_py_tmp2); \
        } \
    } while (0)
```
Replace the free line with the commented-out line for real Python; this version is for my earlier example program.
> The macro `Py_CLEAR(op)` references the argument `op` two times. If the macro is called with an expression it will be evaluated two times, for example `Py_CLEAR(p++)`.
This issue looks like a hypothetical bug: I'm not convinced that it's possible to write an expression which has a side effect and which makes sense to set to NULL. For example, in your example, what's the point of writing `p++ = NULL;` (through the macro)?
To be clear, GCC fails to build the following C code:
```c
int main()
{
    void *ptr = 0;
    ptr++ = 0;
    return 0;
}
```
I created PR #99100 to fix this issue. I'm not convinced that we should fix it, but a full PR might help to make a decision.
In the past, I saw a surprising bug like https://bugs.python.org/issue43181 about passing a C++ expression to a C macro. So, maybe in case of doubt, it's better to fix the issue to be extra safe. Py_CLEAR() pretends to be safer than using the Py_DECREF() macro directly ;-)
Ah wait, I read again https://github.com/python/cpython/issues/98724#issuecomment-1292591490 and now I got the issue :-) I updated the unit test in my PR #99100.
Perfect! The main problem I have with macros that pretend to be functions is that they might evaluate their arguments several times, and this is totally unexpected. I hope the improved version works for all use-cases :)
I am not sure that it is not just a documentation issue. We could just document that the argument of Py_CLEAR (and the first argument of Py_SETREF) should not have side effects.
We can make it work even for arguments with a side effect, but should it be considered a bug fix or a new feature?
> We can make it working even for arguments with a side effect, but should it be considered a bug fix or a new feature?
My PR fixes the macro so it behaves correctly with arguments that have side effects. For me it's a bugfix and should be backported to stable branches. If you are scared by the new implementation (using a pointer to a pointer), I'm fine with only changing the macro in Python 3.12 for now, and only backporting if there is strong pressure from many users to backport the fix.
I am fine with the new implementation. I think all modern compilers can handle it without adding overhead if it is in a macro.
I'm afraid that users will start to rely on this feature and then their code will work incorrectly when compiled with older Python, depending on the bugfix number.
I think this should be extended to all macros, so that they only reference their arguments once. If that can be done, it could be made public that from now on macros are safe to use even with arguments that have side effects.
I completed my PR to also fix the Py_SETREF() and Py_XSETREF() macros.
I created follow-up issues:
Py_SETREF(), Py_XSETREF() and Py_CLEAR() have been fixed in Python 3.12. By the way, Py_SETREF() and Py_XSETREF() are now documented: https://docs.python.org/dev/c-api/refcounting.html#c.Py_SETREF
It was decided to not fix the bug in Python 3.10 and 3.11: read the discussion on the PR #99100 for the rationale.
Thanks @hochl for your bug report.
Looks like it caused https://github.com/python/cpython/issues/99701
> Looks like it caused https://github.com/python/cpython/issues/99701
Oh wow, I didn't notice that my commit c03e05c2e72f3ea5e797389e7d1042eef85ad37a can introduce a type punning issue and miscompile Python when strict aliasing is in effect (which is the default behavior of C compilers). Miscompiling Python is really bad, so I reverted my change to fix #99701.
Articles about these problems:
I wrote PR #99739 to fix the type punning by using `__typeof__()` in the macro. Problem: MSVC doesn't implement this C compiler extension, only GCC and clang do. MSVC implements decltype(), but it is only usable in C++, not in C, and the Python C API must be usable in C.

I don't like the idea of only fixing the macro if the used C compiler provides `__typeof__()` (or a variant like `typeof()`), and "miscompiling" the code if it doesn't (ex: MSVC). Python has a long tradition of providing portable behavior on all platforms (and all C compilers).
So I think that the best we can do here is to document the issue and explain how to work around it: the macro does duplicate side effects, so just avoid expressions with side effects. For example, replace `Py_CLEAR(*ptr++);` with `Py_CLEAR(*ptr); ptr++;`. By the way, `Py_CLEAR(*ptr++);` doesn't make sense to me: the macro assigns NULL to the expression, and `*ptr++ = NULL;` is hard to read for me. I really prefer `*ptr = NULL; ptr++;`.
I am not entirely sure if reverting the change is the way to go. As I see it, there is a more fundamental type punning problem here, and this macro update only triggered a different bug that is still present. No idea how to fix that; I usually use `memcpy` to work around strict-aliasing problems, but that would inflate the macros tremendously, right? No wonder Linus wrote about how braindead the C compiler is in that regard (refer to this post for your own amusement :) ).
I fixed the Py_CLEAR(), Py_SETREF() and Py_XSETREF() macros again with commit b11a384dc7471ffc16de4b86e8f5fdeef151f348. This time, the fix should avoid the type punning issue which caused the miscompilation of Python. The fix uses `__typeof__()` if available (GCC and clang), or `memcpy()` otherwise (ex: MSVC on Windows).
> No idea how to fix that; I usually use memcpy to work around strict-aliasing problems, but that would inflate the macros tremendously, right?
I expect that memcpy() is never called as a function: C compilers usually use their own "built-in" flavor which is more efficient. I expect that memcpy() is implemented with a single MOV instruction on x86-64.
Example with _testcapi.test_py_setref():
```c
static PyObject*
test_py_setref(PyObject *self, PyObject *Py_UNUSED(ignored))
{
    // Py_SETREF() simple case with a variable
    PyObject *obj = PyList_New(0);
    if (obj == NULL) {
        return NULL;
    }
    Py_SETREF(obj, NULL);
    assert(obj == NULL);
    ...
}
```
x86-64 assembly compiled by GCC 12.2.1 with -O3, implementation using `__typeof__()`:
```
Dump of assembler code for function test_py_setref:
0x00007fffea2bd160 <+0>: sub rsp,0x8
// obj = PyList_New(0);
0x00007fffea2bd164 <+4>: xor edi,edi
0x00007fffea2bd166 <+6>: call 0x7fffea2bb080 <PyList_New@plt>
// if (obj == NULL) ...
0x00007fffea2bd16b <+11>: test rax,rax
0x00007fffea2bd16e <+14>: je 0x7fffea2bd1f0 <test_py_setref+144>
// Py_SETREF(obj, NULL): Py_DECREF(obj)
0x00007fffea2bd174 <+20>: sub QWORD PTR [rax],0x1 // --ob_refcnt
0x00007fffea2bd178 <+24>: mov rdi,rax
0x00007fffea2bd17b <+27>: je 0x7fffea2bd1e0 <test_py_setref+128> // if (--ob_refcnt == 0) ...
(...)
```
The obj variable is not even allocated on the stack: it only exists in the rax register. Py_SETREF() doesn't introduce inefficient memory copies. The `obj = NULL;` assignment in `Py_SETREF(obj, NULL)` is simply optimized away (removed): since the rax register is reused later, setting rax to 0 would be useless, because it's a register, not memory on the stack or on the heap.
If I disable `__typeof__()` by removing `# define _Py_TYPEOF(expr) __typeof__(expr)` in Include/pyport.h, I get this x86-64 assembly code:
```
Dump of assembler code for function test_py_setref:
0x00007fffea2bd160 <+0>: sub rsp,0x8
// obj = PyList_New(0)
0x00007fffea2bd164 <+4>: xor edi,edi
0x00007fffea2bd166 <+6>: call 0x7fffea2bb080 <PyList_New@plt>
// if (obj == NULL) ...
0x00007fffea2bd16b <+11>: test rax,rax
0x00007fffea2bd16e <+14>: je 0x7fffea2bd1f0 <test_py_setref+144>
// Py_SETREF(obj, NULL): Py_DECREF(obj)
0x00007fffea2bd174 <+20>: sub QWORD PTR [rax],0x1 // --ob_refcnt
0x00007fffea2bd178 <+24>: mov rdi,rax
0x00007fffea2bd17b <+27>: je 0x7fffea2bd1e0 <test_py_setref+128> // if (--ob_refcnt == 0) ...
```
GCC emits the same machine code for both implementations, `__typeof__()` and `memcpy()`. Neither Py_SETREF() implementation calls memcpy() or adds any inefficient memory copies.

GCC is smart and works as expected ;-) `memcpy()` is just a way to tell the compiler about type erasure, to prevent it from optimizing too far, which would miscompile the code.
I am sad to say this, but it looks like this change broke one more thing :( Since https://github.com/python/cpython/commit/b11a384dc7471ffc16de4b86e8f5fdeef151f348 we have this failure on the ARM64 Windows 3.x buildbot (3114). @vstinner tried to fix it in https://github.com/python/cpython/commit/cd67c1bb30eccd0c6fd1386405df225aed4c91a9 but it did not work, and buildbots are still failing with this problem:
```
test_parse_in_error (test.test_ast.ASTHelpers_Test.test_parse_in_error) ... ok
Windows fatal exception: stack overflow
Current thread 0x00003664 (most recent call first):
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\test_ast.py", line 1252 in test_recursion_direct
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\unittest\case.py", line 579 in _callTestMethod
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\unittest\case.py", line 623 in run
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\unittest\case.py", line 678 in __call__
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\unittest\suite.py", line 122 in run
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\unittest\suite.py", line 84 in __call__
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\unittest\suite.py", line 122 in run
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\unittest\suite.py", line 84 in __call__
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\unittest\suite.py", line 122 in run
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\unittest\suite.py", line 84 in __call__
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\unittest\runner.py", line 208 in run
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\support\__init__.py", line 1100 in _run_suite
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\support\__init__.py", line 1226 in run_unittest
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\libregrtest\runtest.py", line 281 in _test_module
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\libregrtest\runtest.py", line 317 in _runtest_inner2
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\libregrtest\runtest.py", line 360 in _runtest_inner
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\libregrtest\runtest.py", line 235 in _runtest
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\libregrtest\runtest.py", line 265 in runtest
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\libregrtest\main.py", line 352 in rerun_failed_tests
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\libregrtest\main.py", line 754 in _main
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\libregrtest\main.py", line 709 in main
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\libregrtest\main.py", line 773 in main
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\__main__.py", line 2 in <module>
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\runpy.py", line 88 in _run_code
File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\runpy.py", line 198 in _run_module_as_main
test_recursion_direct (test.test_ast.ASTHelpers_Test.test_recursion_direct) ...
```
Is it really related? Or just a coincidence? 🤔
> File "C:\Workspace\buildarea\3.x.linaro-win-arm64\build\Lib\test\test_ast.py", line 1252 in test_recursion_direct
I fixed the issue with commit cd67c1bb30eccd0c6fd1386405df225aed4c91a9.
> Is it really related? Or just a coincidence? 🤔
Tests on recursive functions are fragile unless they use `support.infinite_recursion()`. Depending on how the Python binary is built, the stack memory used by each function call can vary a lot: compiler flags, functions inlined or not, etc.
On Windows, MSVC doesn't inline "static inline functions" in debug mode:
Capturing a few comments I made on Discord: I think it's a mistake to change the behaviour of Py_CLEAR to fix this issue. It's surprising behaviour, but changing it does not improve the situation, and it may break existing code that relies on this behaviour. The common use of Py_CLEAR is with simple names or simple dereferences, which aren't affected by the surprising behaviour. Fixing the surprising corner case is not worth the change in semantics (and the added complexity of the implementation).
I reopen the issue. The limited C API of Python 3.12 is still broken on compilers which don't provide `typeof()`: Py_CLEAR() now requires `memcpy()`, which is not provided by `<Python.h>`: the Python.h header doesn't include the `<string.h>` header on purpose.

Either the change is reverted to keep the bug on purpose, or the limited C API must be fixed.
Minimal extension module code reproducing the build error:
```c
#define Py_LIMITED_API 0x030b0000
#include "Python.h"

static int
xx_modexec(PyObject *m)
{
    PyObject *o = PyLong_FromLong(1);
    Py_CLEAR(o);
    return 0;
}

static PyModuleDef_Slot xx_slots[] = {
    {Py_mod_exec, xx_modexec},
    {0, NULL}
};

static struct PyModuleDef xxmodule = {
    PyModuleDef_HEAD_INIT,
    .m_name = "xxlimited",
    .m_size = 0,
    .m_slots = xx_slots,
};

PyMODINIT_FUNC
PyInit_xxlimited(void)
{
    return PyModuleDef_Init(&xxmodule);
}
```
Python is built with `-Werror=implicit-function-declaration`, and so building the extension fails with:
```
gcc -fno-strict-overflow -Wno-unused-result -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -g -Og -Wall -O0 -std=c11 -Wno-unused-result -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I. -I./Include -fPIC -c ./Modules/xxlimited.c -o Modules/xxlimited.o
In file included from ./Include/Python.h:44,
from ./Modules/xxlimited.c:3:
./Modules/xxlimited.c: In function 'xx_modexec':
./Include/object.h:651:13: error: implicit declaration of function 'memcpy' [-Werror=implicit-function-declaration]
651 | memcpy(_tmp_op_ptr, &_null_ptr, sizeof(PyObject*)); \
| ^~~~~~
./Modules/xxlimited.c:9:5: note: in expansion of macro 'Py_CLEAR'
9 | Py_CLEAR(o);
| ^~~~~~~~
(...)
cc1: some warnings being treated as errors
```
By the way, in LLVM 16, clang now treats this warning as an error by default:
The problem of implicit function declaration is also becoming a major problem to migrate the C ecosystem to C99 and newer, see: https://fedoraproject.org/wiki/Changes/PortingToModernC
I created a thread on discuss about this issue: https://discuss.python.org/t/c-api-design-goal-hide-implementation-details-convert-macros-to-regular-functions-avoid-memset-or-errno-in-header-files-etc/25137
Maybe I missed it, but has anyone yet proposed mixing a macro (that takes the address of the argument and passes it to an inline function) with an inline function (that receives the pointer and does all the rest)?
I proposed multiple times to add functions rather than macros. Examples:
@scoder's idea is a little bit different: the added function doesn't have to be part of the public C API.
@scoder: Do you think that such a function should be public? Do you see advantages of a function over a macro? I have my own rationale, but as you saw, I failed to convince other people :-)
The accepted PEP 670 suggests converting macros to functions so they can be used in programs which cannot use macros.
I'm very late to this unfortunately, but the fallback version of Py_CLEAR using `memcpy` seems risky at the least.

There's no guarantee that `foo *q; memcpy(&q, (foo **)&p, sizeof(q))` will give you the same result as `foo *q = (foo *)p`. Draft C11 does say that "All pointers to structure types shall have the same representation [...] as each other" (6.2.5 ¶ 28), which may (I'm not sure) make this code valid when the argument to Py_CLEAR is a structure pointer, but there's no such guarantee for `void *` or `char *`. In C++, casts between pointers to related struct/class types can add an offset, and memcpying won't add that offset.
It's probably not good practice to call Py_CLEAR on a `void *`, and on most implementations it would probably work anyway, and it may not be possible to multiple-inherit from `PyObject` for other reasons, but it all seems risky. It's technically undefined and it could bite you.
I think it would be good to drop the guarantee of single evaluation from the documentation, because it can't be implemented portably. (Consider also that `memcpy` may make for noticeably slower debug builds, and if people start writing code that depends on single evaluation then that regression would become unfixable.)
You could use a template function when compiling as C++ (or `decltype` or `auto` if you don't care about pre-C++11; modern MSVC supports both), else `__typeof__` if available, else give up and do it the old way.
I disagree with converting Py_CLEAR to a regular function in the first place. It was designed as a macro and has worked as a macro all these years. Your cure is worse than the (imaginary) disease.
I also think that we're trying to fix a non-problem here. `Py_CLEAR()` is literally just a convenience macro. If the usage is not convenient for users, they don't need to (and probably should not) use it.
> I disagree with converting Py_CLEAR to regular function at first place.
I'm not sure you mean my suggestion of using a template function, but the template function would be defined in the header and always inlined when optimizing, so it shouldn't affect other optimizations or warnings (which was your concern in an earlier comment). And of course Py_CLEAR could still be a macro that calls it.
But thinking about it again, if single evaluation can't be guaranteed anyway in the spec (and I think it shouldn't be) then it would probably be best to just revert to the old implementation.