freemint / m68k-atari-mint-gcc

Fork of GNU's gcc with support for the m68k-atari-mint target
https://github.com/freemint/m68k-atari-mint-gcc/wiki

Possible problems with new mintelf toolchain #18

Closed: th-otto closed this issue 11 months ago

th-otto commented 1 year ago

In your configuration, you use m68kelf.h as the target-machine-specific file. Among other things, this redefines M68K_STRUCT_VALUE_REGNUM to a0, which introduces an incompatibility (it is not redefined again in mint.h). In particular, it is incompatible with libgcc.a, which assumes a1. The same goes for STATIC_CHAIN_REGNUM.

There might be other traps. E.g. m68kelf.h defines ASM_OUTPUT_BEFORE_CASE_LABEL using the swbeg directive, which is only valid for SVR4.
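
For illustration, a minimal sketch (not the actual mint.h contents) of the kind of override that would restore the convention libgcc.a expects; A1_REG is assumed to be the backend's constant for register a1, and STATIC_CHAIN_REGNUM would need a similar override:

/* Sketch only: undo the m68kelf.h redefinition so that the struct-value
   pointer is passed in a1 again, as the prebuilt libgcc.a assumes. */
#undef  M68K_STRUCT_VALUE_REGNUM
#define M68K_STRUCT_VALUE_REGNUM A1_REG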

vinriviere commented 1 year ago

Thanks Thorsten for spotting that. The m68k-atari-mintelf toolchain is supposed to have the same ABI as the m68k-atari-mint one. Those differences weren't intentional on my part. I didn't notice that m68kelf.h introduced ABI changes.

I tried a trivial fix (not yet pushed). Unfortunately, using #define DEFAULT_PCC_STRUCT_RETURN 0 isn't enough. Even if I compile a test program with -freg-struct-return, a small struct isn't returned in d0; it is returned through (a1) instead, while this worked fine with GCC 4.6.4. I added some traces, and the cause seems to be m68k_return_in_memory(): as mode == BLKmode, return in d0 isn't honored. Of course I will use #define DEFAULT_PCC_STRUCT_RETURN 0, but that setting isn't honored by GCC 13. Apart from filing a new bug report, I can't see what else to do.

th-otto commented 1 year ago

What did your test program look like? There seems to be some strangeness in what gcc treats as "small structs".

For example:

struct s {
    int a;
};

struct s test(void)
{
    struct s a;

    a.a = 0;
    return a;
}

Produces

_test:
        moveq #0,%d0
        rts

Return value is passed in d0.

But:

struct s {
    char a[3];
};

struct s test(void)
{
    struct s a;

    a.a[0] = 0;
    return a;
}

produces

_test:
        clr.b (%a1)
        move.l %a1,%d0
        rts

The return value is passed through (a1), both with gcc-4.6.4 and gcc-13.

Even stranger:

struct s {
    char a[4];
};

struct s test(void)
{
    struct s a;

    a.a[0] = 0;
    return a;
}

Returns the value in d0 with gcc-4.6.4, but through a1 with gcc-13, where gcc-13 applies both to your new toolchain and to my own toolchain. I think this is a consequence of not defining STRUCTURE_SIZE_BOUNDARY anymore. Proof of that is that all my gcc versions (4, 8, 9, 10, 11, 12) return the value in d0; only 7 & 13 don't (I have changed gcc-7 in the meantime to also use PCC_BITFIELD_TYPE_MATTERS). Even if the comments in m68emb.h might be helpful to understand the problem, we should IMHO not adopt these settings, since our target is closer to the SVR4 aka Linux ABI than to an embedded target.

I think we should really revert that change and define STRUCTURE_SIZE_BOUNDARY to 16 again, even if that means we have to fix gdb to be able to compile. There is further proof that this change in structure layout causes other unforeseen incompatibilities. I've incorporated your prg/mintelf patches into my patches for binutils. With that, I could use all my gcc mintelf toolchains without any change, because they were already prepared for ELF. Then I changed gcc-7 and gcc-13 to define PCC_BITFIELD_TYPE_MATTERS instead of STRUCTURE_SIZE_BOUNDARY, in order to be able to compile gdb. With that, gdb compiled, but it crashed on startup. The reason was that I had not recompiled & installed mintlib with the new compiler. After I did that, it worked again.
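
To make the effect concrete (my own illustration, not code from the thread): STRUCTURE_SIZE_BOUNDARY rounds every structure and union size up to a multiple of that many bits, so the two settings produce different layouts for anything containing odd-sized structs:

struct one  { char c; };               /* sizeof: 2 with STRUCTURE_SIZE_BOUNDARY 16, 1 with the default of 8 */
struct pair { char c; struct one o; }; /* sizeof: 4 with 16, 2 with 8 */

/* Any such struct shared between code built with the old and the new setting
   (e.g. mintlib structures passed to the kernel) changes its layout. */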

@mikrosk : how did you solve your problem with std::array padding when you encountered it?

th-otto commented 1 year ago

There is another difference regarding support for thread-local variables (this is also present in all my own mintelf toolchains): for ELF, the assembler is detected as supporting thread-local storage. That causes gcc to emit library calls to __m68k_read_tp, which is not present in libgcc, and later tests in libstdc++ fail. That's why a workaround for the use of thread_local in gdb is needed. OTOH, for the old toolchains, the target assembler is not detected as supporting thread-local storage. That causes libgcc to be compiled with -DUSE_EMUTLS, and gcc to emit library calls to ___emutls_get_address instead. That also works in later tests, and libstdc++ defines _GLIBCXX_HAVE_TLS.

Note that all these tests are done despite the fact that gcc later reports "Thread model: single".
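
For reference, a minimal sketch of my own (not code from gdb) of what triggers those calls: any access to a thread-local variable is lowered either to a __m68k_read_tp call (when the assembler is detected as TLS-capable) or to an ___emutls_get_address call (when libgcc is built with -DUSE_EMUTLS):

/* Illustration only: a thread-local counter. */
thread_local int counter;

int bump(void)
{
    /* Compiled either via __m68k_read_tp (native TLS, missing from our
       libgcc) or via ___emutls_get_address (emutls). */
    return ++counter;
}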

vinriviere commented 1 year ago

What did your test program look like?

I tested with a struct containing 4 bytes, and I got the same results as you. Thanks for your extended tests.

th-otto commented 1 year ago

BTW, even if getting a setup to be able to run the full testsuite is difficult, you can at least run

maintenance selftest

inside gdb. That runs at least a subset of the testsuite. Some of them fail currently, and then they seem to hang while running "scoped_ignore_sigpipe".

vinriviere commented 1 year ago

Anyway: my goal with the m68k-atari-mintelf toolchain was to get rid of the a.out stuff and use ELF instead. This was intended as a radical change in .o and executable formats, but not as an internal ABI change, to ease the transition from m68k-atari-mint. I used the m68k-elf "embedded" target as a reference, because PRG/ELF is just ELF with an additional header. But I wasn't aware there were so many ABI differences between m68k targets: svr4, linux, embedded. My wish is that the PRG/ELF format becomes widely accepted, as a clean and modern solution to a.out obsolescence. I don't have strong feelings about the internal C ABI.

On the other hand, as m68k-atari-mintelf and m68k-atari-mint are, by design, binary incompatible, we have a unique opportunity to improve the ABI. The question is what is or isn't an improvement. Even ignoring existing programs with a mix of C/asm sources, I have no idea which ABI should theoretically be considered the best for the TOS/FreeMiNT platform.

So let's consider each point, and make a choice. I don't really mind.

However, having a single-byte struct padded to 2 bytes is really ugly. I wonder if there is really a rational argument for that. But if m68k compilers used to do that, then go for it. That wouldn't be the first oddity of the C language.

th-otto commented 1 year ago

I strongly vote to keep the old layout. I haven't investigated why exactly the crash occurred (the last function reported was ioctl, but since mintlib is not compiled with debug information, I could not backtrace). I think it happened when calling functions like tcgetattr, since the crash also occurred with a small test program that only called rl_reset_terminal(). Also, some of the structures declared in the mintlib headers are passed as-is to the mint kernel, which expects the same layout there. If that is not the case, that will certainly cause disaster. And the needed changes to gdb are minimal (attached below).

gdb-packed.patch.txt

However, having a single-byte struct padded to 2 bytes is really ugly.

Yes, I agree. But structure packing has always been implementation dependent. At least we can use aligned(16) and similar without getting compile errors (even if it is not honored at runtime).
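
To illustrate what I mean (my own sketch): the attribute below is accepted by the compiler, but nothing in the a.out/PRG program load guarantees the requested alignment at run time:

struct io_buffer {
    unsigned char data[64];
} __attribute__((aligned(16)));   /* compiles fine, 16-byte alignment not guaranteed at load time */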

mikrosk commented 1 year ago

I favour as small a patch as possible, i.e. adopt as much from the standard m68kelf as possible. I'd even go as far as reverting back to fp0 return registers.

As Vincent said, it's a new target; we can do things in a cleaner and more maintainable way.

I don't mind alignment oddities; see my report of the std::array size in a struct, which basically made m68k Atari gcc incompatible with modern code.

mikrosk commented 1 year ago

Maybe it's even time to get rid of the m68020-60 multilib target, which is REALLY a nonstandard one, and go with #1.

th-otto commented 1 year ago

I favour as small a patch as possible, i.e. adopt as much from the standard m68kelf as possible. I'd even go as far as reverting back to fp0 return registers.

Sorry, but that's totally out of the question. Just to save a few lines in the patch is no argument to create incompatibilities.

I'm also tired of starting a discussion about the FPU again. That has already been done several times, and every few months you start a new discussion about it. It has nothing to do with ELF at all. If you really want to support FPU-less Falcons, then start writing an emulator package that handles all instructions and all addressing modes. Good luck.

vinriviere commented 1 year ago

I have no idea which choice is the best. But if we are stuck with an old ABI, even with a brand new toolchain, just because someone made that choice 25 years ago, maybe by chance, then that's a bit sad. That would mean we are doomed forever.

th-otto commented 1 year ago

Of course it would be possible to change the ABI using a completely new toolchain (well, completely new isn't quite right; I don't know if anyone is using my mintelf toolchains). But then it should only be done for a good reason, not because it saves 3 lines in the patch. And as already mentioned above, I currently can't foresee the consequences that might have for mintlib and its interface to the kernel. But there must be differences, otherwise I could not explain the crashes.

vinriviere commented 1 year ago

Regardless of what choices are made here, it would be a good thing to precisely document the places where the MiNTLib or FreeMiNT interface is so fragile. At least we would know why we configure our tools in some specific way.

th-otto commented 1 year ago

Yes, that would be a good thing. Unfortunately, the crashes I encountered are something that is now hard to reproduce. It took me half a day trying to find out what went wrong, without luck (gdb worked when compiled with your compiler, and it also worked before I backported the STRUCTURE_SIZE_BOUNDARY change). Then I finally realized that I still had old libraries in place, and started to recompile them all. So to reproduce it now, you would have to make the same mistake that I did: compile a toolchain using the old ABI, compile all libraries with it (at least mintlib/fdlibm/readline/ncurses), install them, then configure and compile a new toolchain using the new ABI, and use that to compile gdb. And if that causes crashes as in my case, you still won't know the actual cause of the problem.

But currently I'm busy trying a new version of ncurses, and finding out what's going wrong in the scoped_ignore_sigpipe test. Maybe it has to do with the warning I mentioned:

In file included from ../../binutils-2.41/gdb/../gdbsupport/scoped_ignore_sigttou.h:23,
                 from ../../binutils-2.41/gdb/inflow.c:37:
In destructor 'scoped_ignore_signal<Sig, ConsumePending>::~scoped_ignore_signal() [with int Sig = 22; bool ConsumePending = false]',
    inlined from 'void lazy_init<T>::reset() [with T = scoped_ignore_signal<22, false>]' at ../../binutils-2.41/gdb/../gdbsupport/scoped_ignore_sigttou.h:42:16,
    inlined from 'scoped_ignore_sigttou::~scoped_ignore_sigttou()' at ../../binutils-2.41/gdb/../gdbsupport/scoped_ignore_sigttou.h:72:29,
    inlined from 'void new_tty()' at ../../binutils-2.41/gdb/inflow.c:794:5:
../../binutils-2.41/gdb/../gdbsupport/scoped_ignore_signal.h:58:10: warning: '*(unsigned char*)((char*)&ignore_sigttou + offsetof(scoped_ignore_sigttou, scoped_ignore_sigttou::m_ignore_signal.lazy_init<scoped_ignore_signal<22, false> >::m_u))' may be used uninitialized [-Wmaybe-uninitialized]
   58 |     if (!m_was_blocked)
      |          ^~~~~~~~~~~~~
../../binutils-2.41/gdb/inflow.c: In function 'void new_tty()':
../../binutils-2.41/gdb/inflow.c:790:29: note: '*(unsigned char*)((char*)&ignore_sigttou + offsetof(scoped_ignore_sigttou, scoped_ignore_sigttou::m_ignore_signal.lazy_init<scoped_ignore_signal<22, false> >::m_u))' was declared here
  790 |       scoped_ignore_sigttou ignore_sigttou;
      |                             ^~~~~~~~~~~~~~

But I don't really understand what is going on there. A similar warning is also emitted when compiling gdb/ser-unix.c.

th-otto commented 1 year ago

Some of them fail currently

Well, they don't if you don't make the mistake of changing the output radix in ~/.gdbinit, like I did ;)

So a safer way to run them is

$ gdb --nx -batch -ex "maintenance selftest"

The --nx switch prevents any .gdbinit from being processed.

mikrosk commented 1 year ago

Sorry, but that's totally out of the question. Just to save a few lines in the patch is no argument to create incompatibilities.

This is not about the patch size but about maintainability. I'll use my example with std::array again (which I didn't solve at all, btw; I had to manually change several places in the code to work around it): if I had reported such a problem against m68k-elf, there would have been a chance to get it fixed or at least investigated. Instead, I was stuck with some m68kmint decision nobody knows the reason for.
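
To make the nature of that problem concrete, a reconstruction of my own (not the actual dxx-rebirth code): with structure sizes rounded up to 16 bits, a compile-time size check like the one below fails even though nothing in the source asks for padding (assuming a typical libstdc++ std::array):

#include <array>
#include <cstdint>

struct rgb {
    std::array<std::uint8_t, 3> value;   /* 3 bytes of payload */
};

/* Fails when structure sizes are rounded up to a 16-bit boundary,
   because sizeof(rgb) then becomes 4 instead of 3. */
static_assert(sizeof(rgb) == 3, "unexpected padding in rgb");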

If you really want to support FPU-less Falcons, then start writing an emulator package that handles all instructions and all addressing modes. Good luck.

I want to support the people who ask the same question every X months: why gcc -m68030 ends up with binaries requiring an FPU. Or, even more concretely, why we can't provide TosWin, Hypview, COPS, Qed etc. for a Falcon without an FPU if they don't use any FP instructions.

And also as mentioned, the -m68020-60 multilib simply doesn't exist anymore. It took me quite some time to implement it again and it's just a matter of time before it breaks again.

Anyway, none of this matters for the ongoing effort to get m68k-mintelf up and running reliably. I just wanted to be a voice of progress. I've had some experience with pissing people off with new things. ;)

th-otto commented 1 year ago

This is not about the patch size but about maintainability.

It's not less maintainable if you put it in a separate commit (and take the opportunity to write a comment there about why it is needed). And btw, the patch I posted works with both the old and the new ABI.

And also as mentioned, the -m68020-60 multilib simply doesn't exist anymore. It took me quite some time to implement it again and it's just a matter of time before it breaks again.

We already use a multilib setting that is quite different from the default; I cannot see why this should break. Using the default settings, you end up with a dozen or so libraries compiled for different ColdFire variants. And I'm not in the mood to change that configuration, because a) people should be used to it by now and b) too many other projects depend on it, namely a lot of Makefiles in freemint, and ~170 build scripts that I've written in the meantime.

I've had some experience with pissing people off with new things

And you think requiring people to throw away their existing installations and start all over won't piss them off? Anyway, this too has already been discussed a lot of times. If you think it's worth changing, then go ahead. But then I'm out.

mikrosk commented 1 year ago

Out of curiosity I tried to reproduce the problem with std::array: https://github.com/dxx-rebirth/dxx-rebirth/issues/695.

And indeed, the problem is gone. However, another interesting difference has appeared: https://github.com/dxx-rebirth/dxx-rebirth/issues/724. This one looks like a more expected problem, as the struct is not packed in any way and the assert just expects a certain size (I guess assuming 32-bit alignment rules while ours are still 16-bit?).

Just mentioning this to be sure that for this one we know what's causing it.

Anyway, as demonstrated with the first bug, keeping the 'weird' (mint-gcc4/gcc7-incompatible) change makes that C++-super-intensive project (nearly) compilable without any hacks on my side.

vinriviere commented 1 year ago

BTW, I've just reintroduced #define STRUCTURE_SIZE_BOUNDARY 16 for a test, and I can't reproduce the GCC 4.6.4 behaviour when returning a struct containing 2 chars. With GCC 13, it always defaults to pcc (use of an address register), whatever I do. There is an ABI difference. Or maybe I missed another parameter.

th-otto commented 1 year ago

There is an ABI difference. Or maybe I missed another parameter.

Hm, I've changed my gcc-7 and gcc-13 back to use #define STRUCTURE_SIZE_BOUNDARY 16, and all gcc versions now behave the same, returning the structure in a register. Only for a size of 3 bytes is this not the case, but again the behaviour is the same as with gcc-4.

th-otto commented 1 year ago

BTW, for the .note.GNU-stack problem, I already added a fix when I updated my toolchains to binutils-2.40 some weeks ago: https://github.com/th-otto/m68k-atari-mint-gcc/commit/f12fdc5ca9f0cb40dfe54743793bf39bf04d87a0

But it looks like I forgot that in the mint/gcc-13 branch.

th-otto commented 1 year ago

The select problem might be caused by aranym: https://github.com/aranym/aranym/blob/949f0f8ca6d5fd00d13159ba4141a1c8ead65657/src/natfeat/hostfs.cpp#L601-L605

That is the function that is finally called by the mint kernel via f->dev->select. However, it just returns E_OK (0), which makes the implementation in mintlib think that no bytes are available, thus not setting any flags.

However, I think that my workaround (disabling the call to interruptible_select in gdb/top.c) isn't that bad. Plain files are always ready for reading, and if the comment there is correct (that fgetc is not interruptible), it just means you won't be able to interrupt .gdbinit file processing.

mikrosk commented 1 year ago

BTW, for the .note.GNU-stack problem, I already added a fix when I updated my toolchains to binutils-2.40 some weeks ago: https://github.com/th-otto/m68k-atari-mint-gcc/commit/f12fdc5ca9f0cb40dfe54743793bf39bf04d87a0

Oh, nice. Please push it to the binutils branch then, definitely useful.

mikrosk commented 1 year ago

However, I think that my workaround (disabling the call to interruptible_select in gdb/top.c) isn't that bad.

Hmm, nice dilemma. If the decision is to leave it there, at least document that it is there because of ARAnyM and is a deliberate decision to ease Atari development. But perhaps it's a good idea to test without hostfs first.

th-otto commented 1 year ago

Oh, nice. Please push it to the binutils branch then, definitely useful.

For the regular binutils branch here, it is not needed, because it is needed for ELF only. As for Vincent's branch, I don't want to interfere there, so I'll leave it to Vincent to decide whether to cherry-pick it ;)

mikrosk commented 1 year ago

Yes, sorry, I meant the elf branch. @vinriviere up to you then. :)

th-otto commented 1 year ago

I've compiled a new version of ncurses now (6.4). It will also use the newer ABI 6 of ncurses by default. Alas, it did not seem to help, but it did not seem to break anything either. The same problem remains: you basically have to press ^L every time gdb returns to the prompt.

Next thing that should be checked is whether

I've uploaded the new library to http://tho-otto.de/snapshots/gdb/

A terminfo database is also there, in a separate archive. Remember that this has to be installed on your target system, not along with the cross-compiler.

tw100.term.txt

vinriviere commented 1 year ago

BTW, for the .note.GNU-stack problem, I already added a fix when I updated my toolchains to binutils-2.40 some weeks ago:

Ah, good find!

I finally found the explanation of that feature: https://www.airs.com/blog/archives/518

Long story short: gcc supports a nonstandard and obscure feature named "nested functions". To operate properly, instructions are dynamically generated on the stack (a questionable practice), so the stack must be executable (some processors can disallow code execution in some areas). But an executable stack is bad, as it can lead to buffer-overflow attacks. On the other hand, the "nested functions" feature is very rarely used. So concretely, it would be a bad solution to always make the stack executable, as that would be risky and useless most of the time.

GNU's solution is to mark all .o files with a .note.GNU-stack section, including those built from manually written assembler sources. The section itself is empty; what matters is the section flags. Basically, virtually all .o files don't need an executable stack, so the .S files must be marked with .section .note.GNU-stack,"",@progbits. What if that mark is missing? Well, currently, it is interpreted by default as "stack must be executable".

This is the purpose of the warning. The default behaviour may be reversed in the future, so one shouldn't rely on it. So all the .s files must contain the .section .note.GNU-stack statement. If at least one .o file has that section, but not all, that warning is issued.

Looking at Thorsten's patch, I understand that libgcc/config/m68k/lb1sf68.S lacks that .note.GNU-stack section. So if a program multiplies 2 longs, it links with libgcc.a and implicitly pulls some code from lb1sf68.S. Then if that program is linked with a random .o which explicitly contains a .section .note.GNU-stack, that will trigger the ld warning.

Thorsten's patch adds generation of the .note.GNU-stack section where it is currently missing. This is good, and should be proposed upstream. But the change is a bit big for our expected minimalist mintelf patch. And that won't prevent the warning if someone, willingly or not, links with a file lacking the stack section. The expected stack flags will be ignored, anyway.

On the other hand, on TOS/MiNT at the OS level, there is no way to mark the stack as non-executable. So I would be in favor of just patching the linker to avoid the warning. Any .o file will be free to ask for an executable or non-executable stack. Manually written assembly won't have to worry about that. And finally, the linker and the OS will just ignore those stack requests. I will have a look at the ELF linker to see if that could easily be done that way, without adverse effects.

BTW: I wonder if that auto-generated trampoline code on the stack for nested functions behaves well with the 68030/ColdFire cache. Probably not. That vaguely reminds me of something; maybe there was something about that in the old GCC 2.x patches from Frank. This should be checked.

th-otto commented 1 year ago

If at least one .o file has that section, but not all, that warning is issued.

That might explain why I get the warning, but you didn't until now. Usually, I compile all libraries using gcc-7.5.0, which was already fixed to emit such a section. But gcc-13 was missing that, so the linker complained. I still consider it a bug in the linker, since that is a Linux-specific feature, not an ELF-specific feature.

So I would be in favor of just patching the linker to avoid the warning.

Yes, I did that already. It would still be needed when linking objects from assembler sources that are missing that statement, like lb1sf68.S.

But the change is a bit big for our expected minimalist mintelf patch

You can simplify it. The change in m68k.cc can be reduced to #define TARGET_ASM_FILE_END file_end_indicate_exec_stack for ELF-only toolchains, like it was originally done in linux.h. The local function m68k_file_end is then not needed at all. crti.S and crtn.S are only needed for shared libs, and are currently not referenced by our linker.

BTW: I wonder if that auto-generated trampoline code on the stack for nested functions behaves well with the 68030/ColdFire cache

Probably not: if you compile something like

int somefunc(int (*f)(int));

int f(void)
{
    int i = 2;
    int g(int j) { return i + j; }
    return somefunc(g);
}

there isn't any attempt to flush the cache. The cross-compiler for m68k-linux on my system emits a call to __clear_cache (even if you compile for -m68000).

I guess we have to look at

#undef FINALIZE_TRAMPOLINE
#define FINALIZE_TRAMPOLINE(TRAMP)                                     \
  maybe_emit_call_builtin___clear_cache ((TRAMP),                      \
                                         plus_constant (Pmode,         \
                                                        (TRAMP),       \
                                                        TRAMPOLINE_SIZE))

from gcc/config/m68k/linux.h. But that will need a __clear_cache function in libgcc.a. On Linux, that calls a kernel function.
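
For reference, a small sketch of my own (the helper name is hypothetical) of what the builtin looks like at the source level; on m68k-linux the call below is what ends up in the __clear_cache libgcc/kernel helper that we currently lack:

#include <stddef.h>

/* Hypothetical helper: after writing instructions into buf, ask GCC to emit
   whatever cache flush the target needs. */
static void flush_generated_code(void *buf, size_t len)
{
    __builtin___clear_cache((char *)buf, (char *)buf + len);
}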

Anyway, good find with the blog. That gave a good explanation.

th-otto commented 1 year ago

And here is a proposed fix for the TLS problem. The result is that the compiler will emit library calls to emutls_get_address instead of m68k_read_tp. Theoretically, that should make the patch in gdb for the thread_local keyword obsolete, but it could as well be left in place to avoid some unneeded overhead of calling that function in single-threaded mode.

0001-Make-mintelf-toolchain-use-emutls.patch.txt

th-otto commented 1 year ago

And another proposed patch to emit a call to __clear_cache

0001-Emit-library-call-to-__clear_cache-when-finalizing-t.patch.txt

th-otto commented 1 year ago

BTW, --tui mode does not really work perfectly on Linux either. Output from the program is written to the lower panel, and causes the upper panel (where the source is displayed) to scroll up. When you hit ^L, the output line from the program is lost.

th-otto commented 1 year ago

One problem for ncurses is a bogus entry in the tw100 terminfo entry:

    knp=\Eb, kpp=\E\Ea, kund=\EK, ll=\E[24H, nel=\EE,

ll is the entry for "last line, first column" (if no cup), but here it is hardcoded to line 24. That entry should be removed, in which case cup (position cursor) is used, with the actual number of rows of the terminal. That can be done for example with

$ infocmp tw100 > tw100.term
$ sed -i 's/ll=\\E\[24H, //' tw100.term
$ tic tw100.term

After that, window looks much better:

Screenshot_20230825_084433

I will also add a patch to ncurses.

The question is whether we should fix toswin2 to specially handle that case. But I guess that would break other cases where you really want to position the cursor in line 24.

mikrosk commented 1 year ago

The question is whether we should fix toswin2 to specially handle that case. But I guess that would break other cases where you really want to position the cursor in line 24.

I'm a bit confused: what should be handled on the TosWin2 side? Isn't the actual number of rows of the terminal always a good answer?

th-otto commented 1 year ago

I first thought about a hack like "if the cursor is moved to line 24, column 0, use the actual number of rows instead". But that's crap. The problem (with the original entry) is that gdb uses the "ll" entry when it is about to display a prompt. That will always move the cursor to line 24, regardless of how large the window is.

mikrosk commented 1 year ago

That will always move the cursor to line 24, regardless of how large the window is

Yes, that is clear. What I'm not sure about is what changes TosWin2 would require if we want to use the generic approach (without hardcoding the 24)?

th-otto commented 1 year ago

None. Without that ll entry, gdb will do the right thing, using cup with the actual number of rows.

vinriviere commented 1 year ago

One problem for ncurses is a bogus entry in the tw100 terminfo entry:

Excellent find @th-otto 👏 I understand that the tw100 terminfo definition must be patched inside the ncurses package (remove the ll entry), and properly updated in MiNT setups.

But I'm also completely confused by your statements. What else needs to be done?

vinriviere commented 1 year ago

On the other hand, on TOS/MiNT at the OS level, there is no way to mark the stack as non-executable. So I would be in favor of just patching the linker to avoid the warning.

I've pushed a fix for that: https://github.com/freemint/m68k-atari-mint-binutils-gdb/commit/67ac81f56dbe82a458d401d1d3e4acc98b515997 And I rebuilt my Ubuntu binutils package.

I noticed that ld had a --no-warn-execstack option. I just enabled it by default in the configure script, as is done for hppa. One-line fix, no more warning. We don't care about this for the mintelf target, as that stack segment isn't used anyway. And if someone still wants to see the warning (because the .note.GNU-stack section is indeed missing from some libgcc.a files, and from virtually all user-written .s sources), it is still possible to link with -Wl,--warn-execstack.

th-otto commented 1 year ago

and properly updated in MiNT setups.

There is no MiNT setup for ncurses. The only ones are from my site, but those are not suitable for your gcc because of the mismatch in underscores. The new ones that I compiled a few days ago for gdb contain the libraries only, because those are only meant for the cross-compiler, while the terminfo database is used on the target.

But I'm also completely confused by your statements. What else needs to be done?

For now, you have to fix the entry locally on the system that you are using, using the commands above.

Apropos executable stack: if you really need an executable stack (as in the example with the nested local function), that may actually not work when using memory protection in MiNT. But I haven't tried yet.

mikrosk commented 1 year ago

For now, you have to fix the entry locally on the system that you are using, using the commands above.

Ah, I think I get it now. You meant that TosWin2 could be hacked in such a way that if the user does not change anything and runs gdb (or some other ncurses app), TosWin2 could in theory detect this situation and ignore the forced line 24.

I don't think this is necessary; as Vincent said, we just provide a proper setup (gdb and ncurses, if I got it right).

th-otto commented 1 year ago

Yes, forget about hacking toswin2, that was just a stupid idea.

The question is how to provide a setup. Maybe we could add a few entries to the bootable builds (tw100, and some other Atari-related entries, like vt52). However, they would be stored there inside the mint//sys-root folder, not on some existing ext2 partition or whatever.

vinriviere commented 1 year ago

There is no MiNT setup for ncurses.

People need to install the terminfo database from somewhere. From SpareMiNT, or whatever.

For my part, I provide the terminfo database with the native ncurses binary package for MiNT. I plan to update it when all that compiler stuff is over.

mikrosk commented 1 year ago

The correct way is of course a full ncurses installation, but I see no problem with providing the basic/Atari files separately. We do that with a few system files (like /etc/passwd or /var/run/utmp) anyway.

vinriviere commented 1 year ago

The question is how to provide a setup. Maybe we could add a few entries to the bootable builds (tw100, and some other Atari-related entries, like vt52).

Yes, something like that. A small excerpt of the terminfo database should be shipped with FreeMiNT bootable builds in sys-root/share/terminfo. There is already a lot of stuff in sys-root/bin. As terminfo is not currently installed, I guess that none of that software requires it. That could be an opportunity to start.

th-otto commented 1 year ago

The one from SpareMiNT has the same broken entry. I've updated my ncurses archives already, and the terminfo database can just be used from any Atari package; they all go to the same folder and are identical.

If you plan to update your own package, good luck ;) Building has become even more complicated with newer versions, because there are now 4 ABIs (old ABI 5, new ABI 6, and both in wide variants). If you want to have some fun, look at https://github.com/th-otto/rpmint/blob/master/ncurses-build.sh It now takes about 6 min to build ncurses on my machine, almost as long as compiling gcc.

vinriviere commented 1 year ago

Er, 6 minutes to build GCC? You have a nice machine. On mine, it takes 3 hours to build GCC on Cygwin. This is why I switched to Ubuntu for serious work. IIRC, building the full GCC still takes about 30 minutes. I tried "make -j" for a parallel build, but in that case the kernel automatically kills the processes after a few minutes due to resource hogging.

th-otto commented 1 year ago

I just did some tests regarding support for float registers. Seems we have to fix a few things there. Currently, the code at https://github.com/freemint/m68k-atari-mint-binutils-gdb/blob/368ff0c03fb7cbd072349ccc261b47f3bc9c2e6a/gdb/m68k-tdep.c#L1230 ends up with has_fp = 1, flavour = m68k_coldfire_flavour, and float_return = 0. I think m68k_coldfire_flavour is selected because has_fp is initialized to 1, and somehow the call to info.bfd_arch_info->compatible succeeds (it matches m68k:68000 & m68k:isa-a:no-div, maybe because of that hack for crt0.o?)

We should also check where that Tag_GNU_M68K_ABI_FP comes from. Seems to be needed so that we can tell gdb that our target returns float values in d0, not float registers.

th-otto commented 1 year ago

Er, 6 minutes to build GCC? You have a nice machine

Nothing special, an Intel i7-7700, 3.6 GHz. It takes 7:30 when using 8 parallel jobs on Linux, for the C & C++ backends. Compiling gcc-4 takes about 4 min. On Cygwin it is different of course; there it is more than 2 hours (with 2 jobs only, because it runs in VirtualBox).

Cygwin mainly has the problem that fork() is extremely slow. And the filesystem is another thing.

vinriviere commented 1 year ago

Seems we have to fix a few things there.

Oh. This is very bad. I fear that you're right, and that it's a result of my bfd_m68k_compatible() hack for the linker. This has to be checked, but it's highly probable. If so, we could always use a different patch for binutils and gdb, but that would be very bad. Anyway, that hack will have to be removed.

An option would be to build a different crt0.o for the different multilibs. The contents would be the same, but the meta-information would be different. Also, we could use conditional compilation there, if needed. I have been considering doing that for a long time. That would require a change in the MiNTLib's makefiles and install scripts.

Another idea, not completely tested. Add this at the top of crt0.S:

.cpu 68k

The result can be checked like this:

$ m68k-atari-mintelf-objdump -f crt0.o

crt0.o:     file format elf32-m68k
architecture: m68k, flags 0x00000010:
HAS_SYMS
start address 0x00000000

Note the architecture: it's m68k only (versus m68k:68000 normally used when assembling with -m68000). If that's enough to mark that object file as compatible with any other architecture, then bingo. To be checked carefully, after having removed the bfd_m68k_compatible() hack.

NB: .cpu 68k doesn't seem to hurt the old a.out gas, so we could add it to crt0.S unconditionally.