Closed: MakiWolf closed this issue 6 months ago
Please provide your clang version. It looks like some changes were made to the STL headers.
$ clang --version
Ubuntu clang version 16.0.6 (15)
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
Using the same version as OP.
I see the usage was switched from `uint64_t` a couple of months ago, and `std::uintptr_t` usage seems nearly non-existent elsewhere:
https://github.com/fmtlib/fmt/issues/1197
https://youtrack.jetbrains.com/issue/RSCPP-34814
I don't know if we could get away with just changing those references to `uintptr_t` or `uint64_t`.
Finding references to `uintptr_t` in headers that contain `namespace std`:
$ find /usr/include/c++/13.2.1/ -type f -exec sh -c 'if (grep -q "namespace std" $0); then if (grep "uintptr_t" $0); then echo $0; fi; fi' {} \;
/// atomic_uintptr_t
typedef atomic<uintptr_t> atomic_uintptr_t;
/usr/include/c++/13.2.1/atomic
#include <stdint.h> // uintptr_t
const auto __intptr = reinterpret_cast<uintptr_t>(__ptr);
_GLIBCXX_DEBUG_ASSERT((uintptr_t)__ptr % _Align == 0);
/usr/include/c++/13.2.1/bits/align.h
{ __glibcxx_assert(((uintptr_t)_M_ptr % required_alignment) == 0); }
{ __glibcxx_assert(((uintptr_t)_M_ptr % required_alignment) == 0); }
{ __glibcxx_assert(((uintptr_t)_M_ptr % required_alignment) == 0); }
{ __glibcxx_assert(((uintptr_t)_M_ptr % required_alignment) == 0); }
/usr/include/c++/13.2.1/bits/atomic_base.h
constexpr uintptr_t __ct = 16;
auto __key = (uintptr_t(__addr) >> 2) % __ct;
/usr/include/c++/13.2.1/bits/atomic_wait.h
{ return _Type(reinterpret_cast<uintptr_t>(_M_impl.get()) & 0x3); }
/usr/include/c++/13.2.1/bits/fs_path.h
: _M_val(reinterpret_cast<uintptr_t>(__c._M_pi))
auto __x = reinterpret_cast<uintptr_t>(__c._M_pi);
mutable __atomic_base<uintptr_t> _M_val{0};
static constexpr uintptr_t _S_lock_bit{1};
/usr/include/c++/13.2.1/bits/shared_ptr_atomic.h
using ::uintptr_t;
using uintptr_t = __UINTPTR_TYPE__;
/usr/include/c++/13.2.1/cstdint
return reinterpret_cast<_Tp*>(reinterpret_cast<uintptr_t>(this)
_M_diff = reinterpret_cast<uintptr_t>(__arg)
- reinterpret_cast<uintptr_t>(this);
{ return (reinterpret_cast<uintptr_t>(this->get())
< reinterpret_cast<uintptr_t>(__rarg.get())); }
{ return (reinterpret_cast<uintptr_t>(this->get())
== reinterpret_cast<uintptr_t>(__rarg.get())); }
typedef __UINTPTR_TYPE__ uintptr_t;
uintptr_t _M_diff;
(reinterpret_cast<uintptr_t>(this) + _M_diff);
_M_diff = reinterpret_cast<uintptr_t>(__arg)
- reinterpret_cast<uintptr_t>(this);
{ return (reinterpret_cast<uintptr_t>(this->get())
< reinterpret_cast<uintptr_t>(__rarg.get())); }
{ return (reinterpret_cast<uintptr_t>(this->get())
== reinterpret_cast<uintptr_t>(__rarg.get())); }
typedef __UINTPTR_TYPE__ uintptr_t;
uintptr_t _M_diff;
/usr/include/c++/13.2.1/ext/pointer.h
using uintptr_t = __UINTPTR_TYPE__;
using native_handle_type = uintptr_t;
auto __cb = [](void* __data, uintptr_t, const char* __filename,
auto __cb2 = [](void* __data, uintptr_t, const char* __symname,
uintptr_t, uintptr_t) {
using uintptr_t = __UINTPTR_TYPE__;
-> int (*) (void*, uintptr_t)
auto __cb = +[](void* __data, uintptr_t __pc) {
__cb = [](void* __data, uintptr_t __pc) {
/usr/include/c++/13.2.1/stacktrace
using ::uintptr_t;
/usr/include/c++/13.2.1/tr1/cstdint
It would seem I could build it just fine using `uintptr_t` alone (no `std::` namespace):
$ <pathTo>/bin/netcoredbg --buildinfo
.NET Core debugger 3.0.0-7 (3.0.0-7)
Build info:
Build type: Release
Build date: Nov 18 2023 17:31:21
Target OS: Linux
Target arch: x64
Hostname: computerName
NetcoreDBG VCS info: aafa6f3
CoreCLR VCS info: 6a4e500
Now to see if it borks on me from normal usage.
Looks like `std::uintptr_t` was never part of the standard and `uintptr_t` should be used instead. I will test this at work with clang 16 and the CI tests.
If that's the case, I'm wondering how this was ever committed to the remote repository, since it wouldn't have built locally if `std::uintptr_t` was never part of the standard.
https://en.cppreference.com/w/cpp/header/cstdint lists `uintptr_t`, but it does mention `std::uintptr_t` in the entry for `UINTPTR_MAX` ("maximum value of std::uintptr_t").
Looks like defining `std::uintptr_t` was a decision by the clang developers, and now it has been silently removed. I didn't find any mention of this in the clang release notes.
Hmm... looks like I was wrong: `uintptr_t` has been part of the standard library since C99/C++11, and we compile our C++ code with the "-std=c++11" option. I have no idea why clang 16 no longer sees `std::uintptr_t` and proposes `uintptr_t` instead; probably a `cstdint` include was removed from the STL headers we are using, so at this point the compiler doesn't see the `std::uintptr_t` declaration...
If I am correct, this issue could be fixed by adding an `#include <cstdint>` line to src/interfaces/types.h.
I did notice `#ifdef _GLIBCXX_USE_C99_STDINT_TR1` in /usr/include/c++/13.2.1/cstdint, which does lead to `using ::uintptr_t;`, but `$ clang -dM -E -x c /dev/null | grep USE_C` shows no results.
I'm currently working through getting the latest vscode-csharp to output the .vsix, so I'll try to remember to test the `#include` later. I had to update my build process for each main/master element I manually bring into csharp.
Yup, adding `cstdint` did it. Everything's back up and running again.
Fixed in upstream.
Can't be built on Ubuntu 23.10 with cmake 3.27.4 and dotnet 8.