Closed Weissnix4711 closed 2 months ago
Hi @Weissnix4711
This is a very peculiar error indeed. I have no idea why it's occurring. Digging into this, I realized it must have to do with initializing Python type objects, specifically the head. Normally, the type of a Python object in C would be declared like this:
static PyTypeObject glmArrayType = {
    PyObject_HEAD_INIT(NULL)
    "glm.array",       // tp_name
    sizeof(glmArray),  // tp_basicsize
    0,                 // tp_itemsize
    // [...]
};
PyObject_HEAD_INIT is a macro, essentially a placeholder, which gets replaced by { 1, NULL }, that is { 1, 0 } or { 1, nullptr }, depending on how NULL is defined. So the actual definition would look like this:
static PyTypeObject glmArrayType = {
    { 1, nullptr },
    "glm.array",       // tp_name
    sizeof(glmArray),  // tp_basicsize
    0,                 // tp_itemsize
    // [...]
};
This initializes the so-called head, which is a C structure. I don't want to go into details, but as you can clearly see in the build logs, what the compiler tries to do is this:
static PyTypeObject glmArrayType = {
    { { 1 }, nullptr },
    "glm.array",       // tp_name
    sizeof(glmArray),  // tp_basicsize
    0,                 // tp_itemsize
    // [...]
};
Notice the difference in the head? It may not look like much, but it actually does something completely different.
Again, I'll spare you the details, but it does not compile like this.
The macro responsible for this (PyObject_HEAD_INIT) is defined by Python.h, which comes from the Python installation itself. It makes absolutely no sense to me why or how the macro could be defined incorrectly.
If that is the case though, you would be unable to compile any C / C++ Python extension on the system in question.
It doesn't seem to be an issue with the python version, as I can compile PyGLM without issues on my system.
There are automatically built aarch64 wheels for PyGLM too, so the architecture can't be the source of the problem.
I really have no idea how to help you in debugging this, as I'm puzzled why it happens in the first place.
The only thing I can imagine is that the runner is bad. Perhaps re-running the build fixes the issue.
Cheers --Zuzu_Typ--
That macro (PyObject_HEAD_INIT) actually changed between 3.11.10 and 3.12.0, specifically in this commit.
But that raises the question: why does it compile on your system, and seemingly everyone else's, but not on our runner?
Also, yeah, it's the same error for all archs built, so that's not the issue.
That's interesting. Well, the thing is, in C there is a standard macro called NULL. Most compilers define NULL as 0. In your build it was defined as nullptr. If NULL is used correctly, both definitions achieve the same thing. However, the way the macro is defined in 3.12.0, it is not used correctly (at least not if the macro is called as shown: PyObject_HEAD_INIT(NULL)).
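To make that concrete, here is a minimal standalone sketch; it is not taken from PyGLM, and Slot is a made-up stand-in for an integer field like Py_ssize_t:
#include <cstddef>      // for NULL
typedef long Slot;      // made-up stand-in for an integer field like Py_ssize_t
const void *p = NULL;   // pointer context: works no matter how NULL is defined
Slot a = 0;             // integer context with a literal 0: always fine
Slot b = NULL;          // integer context with NULL: fine when NULL is 0, but fails
                        // with the conversion error from the log when NULL is nullptr
Whether the last line compiles depends entirely on how the toolchain defines NULL.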
What used to be the case:
The macro result ({ 1, NULL }) is used to initialize the following struct:
typedef struct {
    PyObject ob_base;
    Py_ssize_t ob_size; /* Number of items in variable part */
} PyVarObject;
Unintuitively, since there is only one initializer list, both values are used to initialize the PyObject ob_base (the braces for the nested struct are simply omitted), which in turn looks like this:
typedef struct _object {
    _PyObject_HEAD_EXTRA   // This macro evaluates to nothing by default, so we'll ignore it.
    Py_ssize_t ob_refcnt;  // This tracks how many references to this Python object there are.
    PyTypeObject *ob_type; // This is a reference to the type of this object.
} PyObject;
So previously, ob_refcnt was initialized with 1 and ob_type was initialized with NULL or 0 (effectively a null pointer). This works regardless of whether NULL is defined as 0 or nullptr.
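If it helps, the same behaviour can be reproduced without Python.h. Here is a self-contained sketch in which MockObject and MockVarObject are simplified stand-ins for PyObject and PyVarObject, not the real definitions:
#include <cstddef>

typedef struct {
    long        ob_refcnt;  // stand-in for Py_ssize_t ob_refcnt
    const void *ob_type;    // stand-in for PyTypeObject *ob_type
} MockObject;

typedef struct {
    MockObject ob_base;
    long       ob_size;     // stand-in for Py_ssize_t ob_size
} MockVarObject;

// Old-style expansion { 1, NULL }: with only one level of braces, 1 fills
// ob_base.ob_refcnt and NULL fills ob_base.ob_type (a pointer), so 0 and
// nullptr both work; ob_size is left zero-initialized.
static MockVarObject old_style = { 1, NULL };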
What happens now though (after the change):
The macro result ({ { 1 }, NULL }) is used to initialize the following struct:
typedef struct {
    PyObject ob_base;
    Py_ssize_t ob_size; /* Number of items in variable part */
} PyVarObject;
As opposed to before, now { 1 } is used to initialize PyObject ob_base, which sets ob_refcnt to 1. I believe ob_type is then implicitly set to NULL (members without an explicit initializer are zero-initialized). The second part of the macro result (NULL) is now used to initialize ob_size. Obviously this is not supposed to happen: ob_size is an integer, not a pointer. If NULL is defined as 0, as is most often the case, this still works, but if NULL is defined as nullptr, it suddenly breaks. That's where the error cannot convert 'std::nullptr_t' to 'Py_ssize_t' {aka 'long int'} in initialization comes from.
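Again as a self-contained sketch with the same simplified stand-ins (MockObject and MockVarObject are not the real CPython structs):
typedef struct { long ob_refcnt; const void *ob_type; } MockObject;   // stand-in for PyObject
typedef struct { MockObject ob_base; long ob_size; } MockVarObject;   // stand-in for PyVarObject

// New-style expansion { { 1 }, NULL }: the inner braces consume ob_base entirely,
// so the second value lands in ob_size, which is an integer.
static MockVarObject new_ok = { { 1 }, 0 };          // fine: ob_size = 0
//static MockVarObject new_bad = { { 1 }, nullptr }; // error: cannot convert
//                                                   // 'std::nullptr_t' to 'long'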
Although I may well have made an error in my deduction somewhere, it seems as though the error is on Python core's side.
Were you able to follow my thoughts?
By the way, I quickly tested this on my machine by simply compiling
static PyTypeObject glmArrayType = {
    {{1}, NULL},
    "glm.array",
    sizeof(glmArray),
    // [...]
And
static PyTypeObject glmArrayType = {
    {{1}, nullptr},
    "glm.array",
    sizeof(glmArray),
    // [...]
to see what happens. The first one compiles successfully, and seems to work without any issues. The second one does not compile with the following error:
glmArray.h(113,8): error C2440: "Initialisierung": "nullptr" kann nicht in "Py_ssize_t" konvertiert werden
(translated:)
glmArray.h(113,8): error C2440: "Initialization": "nullptr" can't be converted to "Py_ssize_t"
-- Edit: In case you were wondering, this does compile (as expected):
static PyTypeObject glmArrayType = {
    {1, nullptr},
    "glm.array",
    sizeof(glmArray),
    // [...]
Alright, it turns out this was a peculiar issue that had been sitting there silently for years, but by chance didn't make a difference until Python 3.12.
Should be fixed now. Please retry building with PyGLM 2.7.2 (which, as of writing, is still in the process of being released).
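For anyone reading along: the usual idiom from the CPython docs for writing a static type head avoids putting anything pointer-like into the ob_size slot in the first place. Here is a rough sketch of that idiom; I can't say whether this is exactly the change made in PyGLM 2.7.2, and the glmArray struct below is reduced to a stub for illustration:
#include <Python.h>

// Reduced stub; the real glmArray in PyGLM has more members.
typedef struct {
    PyObject_HEAD
    // [...]
} glmArray;

static PyTypeObject glmArrayType = {
    PyVarObject_HEAD_INIT(NULL, 0)  // initializes ob_base and ob_size together,
                                    // so NULL ends up in the ob_type (pointer) slot
    "glm.array",                    // tp_name
    sizeof(glmArray),               // tp_basicsize
    0,                              // tp_itemsize
    // [...]
};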
Not going to pretend to understand C well enough to figure this out. Building 2.7.1 against python3.12.5....
Full build log here from gitlab.alpinelinux.org/alpine/aports !64271