dirkwhoffmann / vAmiga

vAmiga is a user-friendly Amiga 500, 1000, 2000 emulator for macOS
https://dirkwhoffmann.github.io/vAmiga

Custom CPU implementation #251

Closed dirkwhoffmann closed 4 years ago

dirkwhoffmann commented 4 years ago

Many of the recently reported bugs seem to be related to bus and interrupt timing. To improve the situation, I favour the idea of integrating a custom CPU implementation into vAmiga. To get this project done in a decent time frame, I will take a reference implementation approach based on two already existing cores: Musashi and portable68000. These cores are going to serve as my functional reference and temporal reference, respectively.

This is my roadmap:

Task 4 will require some smart recording logic, because I cannot simply run both cores in a row (the first CPU will alter memory and cause side effects). To cope with that, the second core must run in a fake environment that intercepts all memory calls and compares them to what the first CPU did.

These are my corresponding milestones:

Once all four milestones have been reached, the new core can take over and will hopefully bring vAmiga to the next level.

Milestones reached so far: None 🤭

mithrendal commented 4 years ago

I am just reading about the m68k emulator which is written in Rust, https://github.com/marhel/r68k, and about its testing strategy.

And then I read this 🤔 ...

In effect, each instruction is compared thoroughly (with random values) to Musashi, using all combinations possible of the allowed source and destination addressing modes and registers. The number of clock cycles consumed is also reported by Musashi after execution, and is also compared to r68k.

Did you read it too? The last sentence says that Musashi reports the consumed clock cycles. 😳 But I thought it doesn't?! 😳 Until now I thought you wanted to have your own CPU implementation because Musashi is not counting cycles ... And I understood that we would get that part from portable68000...

mithrendal commented 4 years ago

This might also be useful for our bookshelf

http://cache.freescale.com/files/32bit/doc/ref_manual/MC68000UM.pdf

Sections 7 and 8 list all instruction execution times in clock cycles.

Whereas https://www.nxp.com/docs/en/reference-manual/M68000PRM.pdf describes all possible opcodes...

dirkwhoffmann commented 4 years ago

Until now I thought you wanted to have your own CPU implementation because Musashi is not counting cycles ...

Musashi reports the number of elapsed cycles after each executed instruction, but doesn't report the intermediate cycle counts when memory is accessed. Let's say we execute a command that consumes 12 cycles and performs 4 memory accesses. In this case, we need something like this:

Event          Cycle
Mem access 1       2
Mem access 2       4
Mem access 3       8
Mem access 4      10
End               12

Portable68000 and its successor Denise will provide us with that information.
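The kind of per-instruction timing information needed here could be modelled like this. A minimal sketch with invented names (not the actual vAmiga, Musashi, or portable68000 API), just to make the shape of the data concrete:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical record of one memory access within an instruction,
// tagged with its cycle offset from the instruction start.
struct MemAccess {
    int cycle;      // cycle within the instruction at which the bus is used
    uint32_t addr;  // the accessed address
};

// A cycle-exact core would report something like this for the
// 12-cycle / 4-access example from the table above.
std::vector<MemAccess> exampleTrace() {
    return { {2, 0x0}, {4, 0x2}, {8, 0x4}, {10, 0x6} };
}
```

An instruction-level core only reports the final value (12 cycles); the per-access offsets in the middle column are exactly what is missing.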

mithrendal commented 4 years ago

Ah yes, I understand. That means, for example, when the PC is in chip ram at a

  mulu <ea>, Dn command  -> 70 cycles (1 read / 0 write)

and the CPU is blocked by bitplane, Copper, or Blitter DMA access, then we plan to stop the CPU only for that one read cycle when it is accessing the bus, is that right?

Luckily, the complex CPU is the last component with the lowest priority in the chain of bus consumers...

dirkwhoffmann commented 4 years ago

we plan to stop the CPU only for that one read cycle when it is accessing the bus, is that right?

Yes, exactly. When the CPU tries to acquire the bus, I need to check if the bus is available. To do this, I need to know the exact cycle when the read happens (e.g., cycle 12 after instruction start). If the bus is in use, the CPU is halted until it is free again.

mithrendal commented 4 years ago

To understand the problem more deeply, I am trying to learn what vAmiga's Agnus does in the current implementation. I spotted the code partly in agnus.cpp and memory.cpp, but I cannot see the behaviour easily. Therefore, I made the following little quiz 😎

In chip ram, when BLTPRI is set, is it

a) executing the CPU at full speed, like fast ram, without caring about the blitter-nasty flag, or
b) executing the CPU with some delay, or
c) stopping the CPU entirely?

In chip ram, when BLTPRI is cleared, is it

d) executing the CPU at full speed like fast ram, or
e) executing the CPU for 1 bus cycle if the CPU has already requested 3 consecutive memory cycles which were all denied, or
f) periodically stopping the CPU for some bus cycles (random staccato 🥳)?

It should be c) and e) if I understand the docs correctly.

But then again, since e) is not possible because Musashi does not care about bus cycles, what does vAmiga currently do when BLTPRI is not set?

dirkwhoffmann commented 4 years ago

The crucial function w.r.t. bus timing is Agnus::executeUntilBusIsFree(), which is executed whenever the CPU accesses Chip or Slow Ram. I tried a lot of variants, none of which really worked (every attempt is a hack, because Musashi doesn't provide the exact cycle information). The current implementation looks like this (it is more primitive than the previous ones, but works best at the moment):

void
Agnus::executeUntilBusIsFree()
{
    int16_t oldpos;

    // Quick-exit if CPU runs at full speed during blit operations
    if (blitter.getAccuracy() == 0) return;

    // Tell the Blitter that the CPU wants the bus
    cpuRequestsBus = true;

    oldpos = pos.h > 0 ? pos.h - 1 : HPOS_MAX;

    // Wait until the bus is free
    while (busOwner[oldpos] != BUS_NONE) {

        // Add a wait state
        cpu.addWaitStates(DMA_CYCLES(1));

        // Emulate another Agnus cycle
        oldpos = pos.h;
        execute();
    }

    cpuRequestsBus = false;
    cpuDenials = 0;
}

The code checks if the bus is free by reading array busOwner[] at the preceding hpos. This array contains, e.g., BUS_COPPER if the Copper used it (it is the same array that is read by the DMA debugger for displaying bus usage). The BLTPRI flag is checked inside this function (the Blitter and the Copper call it to acquire the bus):

template <BusOwner owner> bool
Agnus::allocateBus()
{
    // Deny if the bus has been allocated already
    if (busOwner[pos.h] != BUS_NONE) return false;

    switch (owner) {

        case BUS_COPPER:

            // Assign bus to the Copper
            busOwner[pos.h] = BUS_COPPER;
            return true;

        case BUS_BLITTER:

            // Check if the CPU has precedence
            if (!bltpri() && cpuRequestsBus) {

                if (cpuDenials >= 3) {

                    // debug("Blitter leaves bus to the CPU\n");
                    return false;

                } else {

                    // debug("Blitter ignores the cpu request\n");

                    // The Blitter gets the bus
                    cpuDenials++;
                }
            }

            // Assign the bus to the Blitter
            busOwner[pos.h] = BUS_BLITTER;
            return true;
    }

    assert(false);
    return false;
}

If BLTPRI is true, the Blitter takes the bus whenever it can. If BLTPRI is false and the CPU wants the bus (indicated by cpuRequestsBus being true), the Blitter denies the CPU's request at most three times in a row and then leaves the bus to the CPU (tracked by counter cpuDenials).

Please feel free to ask more about the code. I'm really happy if somebody looks at it (although this part of the code is probably the ugliest).

mithrendal commented 4 years ago

Ok, I'll try to answer my own question in order to prove my understanding of the code above 🙋🏻‍♀️... If I understand the current code correctly, then with BLTPRI = 0 it adds a CPU wait state (probably 1 CPU cycle long???) every 4th DMA cycle ... when there is a pending CPU bus request.

In the case of lots of move.l (ax), (an) instructions there will be more wait states than, for example, for lots of mulus. So yes, I can imagine the current implementation approximates the real world 🤗 "in theory".

Question 1: why do we add a wait state? Is it not better to block the Musashi CPU when it requests a memory word from the bus?

Question 2: when the Musashi CPU requests memory via the bus, why can't we count these requests and treat them as bus cycles? Or in other words, what is the advantage of portable68000's intermediate cycle count?

dirkwhoffmann commented 4 years ago

with BLTPRI = 0 it adds a CPU wait state (probably 1 CPU cycle long???) every 4th DMA cycle ... when there is a pending CPU bus request

If the bus is in use, the CPU gets delayed by 1 DMA cycle which is 2 CPU cycles:

cpu.addWaitStates(DMA_CYCLES(1));

DMA_CYCLES is a macro converting DMA cycles to master clock cycles (the master clock runs at 28 MHz). There are macros for the CPU clock and the CIA clock as well:

#define CPU_CYCLES(cycles) ((cycles) << 2)
#define CIA_CYCLES(cycles) ((cycles) * 40)
#define DMA_CYCLES(cycles) ((cycles) << 3)
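As a self-contained sanity check on those conversion factors (assuming the 28 MHz master clock with the CPU running at a quarter and DMA at an eighth of it):

```cpp
// The macros from above, repeated here so the check compiles on its own.
#define CPU_CYCLES(cycles) ((cycles) << 2)  // 1 CPU cycle = 4 master cycles
#define CIA_CYCLES(cycles) ((cycles) * 40)  // 1 CIA cycle = 40 master cycles
#define DMA_CYCLES(cycles) ((cycles) << 3)  // 1 DMA cycle = 8 master cycles

// One DMA cycle equals two CPU cycles in master clock units, which is why
// a one-DMA-cycle bus delay costs the CPU two of its own cycles.
static_assert(DMA_CYCLES(1) == CPU_CYCLES(2), "1 DMA cycle == 2 CPU cycles");
```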

Question 1: why do we add a wait state? Is it not better to block the Musashi CPU when it requests a memory word from the bus?

The code does exactly this. If the CPU wants to access memory and the bus is blocked, Agnus is emulated until the bus is free (which is the same as blocking the CPU). Of course, blocking the CPU has the effect that the currently executed instruction takes longer than usual. This is taken care of by adding the wait states.

Function addWaitStates is very simple (and might be inlined in the future):

void
CPU::addWaitStates(Cycle number)
{
    waitStates += number;
}

The wait states are added in function CPU::executeInstruction():

Cycle
CPU::executeInstruction()
{
    ...
    advance(m68k_execute(1));

    if (waitStates) debug(CPU_DEBUG, "Adding %d wait states\n", waitStates);
    clock += waitStates;
    waitStates = 0;

    return clock; 
}

what is the advantage of portable68000's intermediate cycle count?

Portable68000 gives us the complete memory access pattern. E.g., if a 10-cycle instruction with 4 memory accesses is executed, this pattern could look like this:

C-C-C---C- 

Now, assume that bitplane DMA is going on, with the following memory access pattern:

B-B-B-B-B-B-B-B-B-B-B-B-B

This would result in the following bus usage:

BCBCBCB-BCB-B-B-B-B-B

Because CPU instructions usually use every other bus cycle for memory access, the CPU runs at full speed in this example. However, if the number of bitplanes is increased, the bitplane DMA pattern could look like this:

B-BBB-B-BBB-B-BBB-B-BBB

Now, the CPU would be slowed down:

BCBBBCBCBBBCB-BBB-B-BBB

Bottom line: Simply counting the number of memory accesses doesn’t help. Whether the CPU is slowed down depends on the actual memory access pattern which is not provided by Musashi (unfortunately).
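The effect described above can be played through with a toy bus arbiter. This is an illustrative model (not vAmiga code): 'C' marks a CPU bus access, '-' an internal cycle, 'B' a DMA-owned slot, and the CPU simply stalls while its access collides with DMA:

```cpp
#include <string>

// Toy bus arbiter: returns how many bus slots the CPU pattern needs
// when it competes with a repeating DMA pattern for the bus.
// 'C' = CPU bus access, '-' = internal cycle, 'B' = DMA-owned slot.
int slotsNeeded(const std::string& cpu, const std::string& dma) {
    int t = 0;                        // current bus slot
    for (size_t i = 0; i < cpu.size(); ) {
        bool busFree = dma[t % dma.size()] != 'B';
        if (cpu[i] == 'C' && !busFree) {
            t++;                      // access denied: the CPU stalls one slot
        } else {
            t++; i++;                 // internal cycle, or access granted
        }
    }
    return t;
}
```

With the 10-cycle pattern from above, the instruction finishes almost at full speed against alternating DMA ("B-"), but takes measurably longer against the denser pattern ("B-BBB-"), even though the number of CPU accesses is the same. This is exactly why counting accesses is not enough.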

mithrendal commented 4 years ago

I still don't get it. 😌 May I ask more? 🙋

Bottom line: Simply counting the number of memory accesses doesn’t help. Whether the CPU is slowed down depends on the actual memory access pattern which is not provided by Musashi (unfortunately).

In vAmiga, when the CPU reads or writes to the bus, this is dispatched, for example, via activeAmiga->mem.peek16(addr); and activeAmiga->mem.poke16(addr, value);

assuming the CPU is being stepped forward one by one in terms of CPU cycles via m68k_execute(1)

Then I don't get why we cannot record the pattern in these bus dispatch methods. The CPU certainly calls these in a pattern, e.g. read first, some cycles of computation, then write, no? Why is this not the pattern we need then?

dirkwhoffmann commented 4 years ago

May I ask more? 🙋

Definitely 👨🏻‍🏫.

The CPU certainly calls these in a pattern, e.g. read first, some cycles of computation, then write, no? Why is this not the pattern we need then?

If I understand correctly, you would build up the pattern step by step. Most likely like this:

Mem access 1: C
Mem access 2: C-C
Mem access 3: C-C-C

But if, e.g., MULU is executed, the pattern would look very different. It would be similar to this:

C- ..... -C- (with many cycles in-between where the multiplication happens).

The memory pattern can differ considerably between instructions and counting memory accesses would only approximate the real behaviour.

mithrendal commented 4 years ago

If I understand correctly, you would build up the pattern step by step.

yes.

But if, e.g., MULU is executed, the pattern would look very different. It would be similar to this:

C- ..... -C- (with many cycles in-between where the multiplication happens).

if that is the real pattern of MULU, yes that would be the expected behaviour.

The memory pattern can differ considerably between instructions and counting memory accesses would only approximate the real behaviour.

Now I get the problem you're thinking of. You think that although we might step the CPU forward cycle by cycle, the memory requests of the emulated CPU would not happen at the same cycle step as on a real physical CPU. Mainly because you assume that the developer of Musashi did not measure the bus accesses, for example with a logic analyser, as the developer of portable68000 did. And therefore it will only be a guess or an approximation. But what proves this assumption?

dirkwhoffmann commented 4 years ago

You think that although we might step the CPU forward cycle by cycle, the memory requests of the emulated CPU would not happen at the same cycle step as on a real physical CPU.

No, the problem is that we cannot step the Musashi CPU cycle by cycle (we wouldn't have any problem if we could). We can only step the CPU instruction by instruction, which is the problem. When we call Musashi::m68k_execute(1), Musashi executes a single instruction (as a chunk) and returns the number of elapsed cycles. While executing m68k_execute, Musashi calls vAmiga::peek() and vAmiga::poke() a couple of times, but doesn't advance an internal clock between those calls. Hence, viewed from the vAmiga side, Musashi executes all memory accesses at the same time. The only thing we could do (besides improving Musashi or implementing our own CPU) is to pretend that a certain number of cycles (usually 2) elapsed between two memory accesses. This is what I have in mind when I call it an approximation. It would be correct for many instructions, but totally wrong for instructions such as MUL or DIV.

mithrendal commented 4 years ago

Ok, now I completely understand the problem and why it is not possible to do some hack in the peek and poke methods in order to fine-tune the CPU bus access. Thank you!!

The simplest approach from the vAmiga side would be to use a CPU implementation which vAmiga can step forward cycle-wise... and which would then call vAmiga's memory interface (e.g. peek & poke) at the correct cycle of the CPU instruction that is currently being processed.

dirkwhoffmann commented 4 years ago

The simplest approach from the vAmiga side would be to use a CPU implementation which vAmiga can step forward cycle-wise...

This is how it is done in VirtualC64. That approach would be very slow, though. Fortunately, we can do better by letting the CPU drive the whole thing. This means that the run loop will be something like this:

while (1) {
   cpu.executeInstruction();
   agnus.executeToCpuClock();
}

In other words: The CPU is giving pace and Agnus follows.

Function cpu.executeSingleInstruction will be structured similarly to this:

CPU::executeSingleInstruction(...)
{
    clock += 4;
    value = mem->peek();

    clock += 2;
    value = mem->peek();
    …
}

And the peek handler will look like this:

Mem::peek() {
    agnus.executeToCpuClock();

    if (source == CHIP_RAM || source == SLOW_RAM) {
        int blockedCycles = agnus.executeUntilBusIsFree();
        cpu.clock += blockedCycles;
    }
    ...
}

This means that whenever a memory access occurs, Agnus is executed up to the cycle where the CPU already is. If Agnus used the bus in the last cycle, the CPU cannot have it immediately. In this case, Agnus continues to execute until the bus is free. The number of blocked cycles is added to the CPU clock, and then the memory access is performed.
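This lazy-sync scheme can be condensed into a toy model. All names and cycle counts below are illustrative, not the actual vAmiga classes; the point is only that the CPU runs ahead and Agnus catches up on every memory access:

```cpp
// Toy model of the CPU-driven sync scheme: the CPU advances its own clock
// freely, and Agnus only catches up when a memory access forces a sync.
struct ToyAgnus {
    long clock = 0;
    void executeTo(long cpuClock) {    // emulate Agnus up to the CPU clock
        while (clock < cpuClock) clock++;
    }
};

struct ToyCPU {
    long clock = 0;
    ToyAgnus& agnus;
    explicit ToyCPU(ToyAgnus& a) : agnus(a) {}

    void peek(bool busBlocked) {
        agnus.executeTo(clock);        // let Agnus catch up first
        if (busBlocked) clock += 2;    // pretend the bus was busy for 2 cycles
    }

    void executeInstruction() {        // e.g. a 12-cycle instruction, 2 accesses
        clock += 4; peek(false);       // first access: bus free
        clock += 4; peek(true);        // second access: bus occupied
        clock += 4;                    // remaining internal cycles
    }
};
```

After one such instruction, the CPU clock ends up two cycles past its nominal duration (the bus penalty), while Agnus has only been emulated up to the last access.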

mithrendal commented 4 years ago

True 🤤

mithrendal commented 4 years ago

seems "theirs" is all about cycles 😬...

for example cpu.js from SAE

function runNormal() { //m68k_run_2()
        var exit = false;

        while (!exit) {
            try {
                while (!exit) {
                    regs.instruction_pc = getPC();
                    //regs.opcode = getInst16_default(0);
                    regs.opcode = nextInst16_default();
                    SAER.events.do_cycles(cpu_cycles);
                    var orw_cycles = iTab[regs.opcode].f(iTab[regs.opcode].p);
                    cpu_cycles = orw_cycles[0] * cpucycleunit;
                    //cpu_cycles = adjust_cycles(orw_cycles[0] * cpucycleunit);
                    SAEV_CPU_cycles = cpu_cycles;

                    if (SAEV_spcflags) {
                        if (SAER.m68k.do_specialties(cpu_cycles))
                            exit = true;
                    }
                }
dirkwhoffmann commented 4 years ago

var orw_cycles = iTab[regs.opcode].f(iTab[regs.opcode].p);
cpu_cycles = orw_cycles[0] * cpucycleunit;
SAEV_CPU_cycles = cpu_cycles;

Sorry, I just don't get what they do 🙈. I can't get all those different cycles into my head 🤓.

Here is something simpler:

[Screenshot: Bildschirmfoto 2019-12-11 um 15 02 26]

... which leads to E = m c^2 with a few more derivations 😍.

I love simple relationships (which is the reason why UAE code has to stay out of vAmiga 😉).

mithrendal commented 4 years ago

This is so smart, letting the CPU drive the thing. Honestly, at first it sounded a little paradoxical, because the CPU should be the one with the lowest priority in chip ram. The way you intercept the CPU bus access in the peek & poke memory interface reminds me a little of the fairy tale of the rabbit and the hedgehog. The speedy rabbit 🐇 with the big advantage always lost the race; the hedgehog 🦔 always replied to the rabbit, "I am already here". Again, thank you for sharing the concepts; it is very interesting how such a complicated computer like the Amiga is being emulated. I also never really fully understood UAE, only small parts of it. 😌 The code and concepts in vAmiga are much clearer and pretty cool to study and learn from.

dirkwhoffmann commented 4 years ago

Milestone 1 reached 🥳.

I'm sure that my "CPU" is still full of bugs though. Although the portable68000 unit test suite is pretty good, it can only check a tiny subset of all possible instruction / mode / argument combinations.

Next milestone is fixing the disassembler output. For this purpose, I am using a faked vAmiga app (on the dasm branch) that disassembles each executed instruction internally. It then compares the output of Musashi with my own disassembler and crashes the app once a mismatch has been found. Right now, it crashes almost immediately:

Disassembled instruction 262168 differs:
Musashi: dbra    D1, $fc0142
 vAmiga: dbf       D1, $0

Assertion failed: (false), function executeInstruction

Let's see how long it takes until I can see the hand & disk logo in this faked app 😬.

mithrendal commented 4 years ago

Wow, that is a big Christmas present for mankind, a new child is born 🧚🏻‍♀️ ...

How to pronounce Moira ?

"Moi" like the French moi ... ? "ra" like the "ra" in supra-molecular ?

Emphasis on the first syllable or the last syllable?

is it male 👶🏽 or female 👧🏻 or it 👶?

anyway, happy birthday 🎊🎂🎈.

EDIT: https://en.wikipedia.org/wiki/Moira_(given_name) https://en.wikipedia.org/wiki/Moirai https://www.babycenter.com/baby-names-moira-3266.htm

dirkwhoffmann commented 4 years ago

In ancient Greek religion and mythology, the Moirai, often known in English as the Fates (Latin: Fata), Moirae or Mœræ (obsolete), were the white-robed incarnations of destiny

I'm not really an expert in ancient Greek mythology and Wikipedia is kind of technical about it. But what I understood is that the Moirae were essentially three cool girls with superpowers which I found very cool 😎.

What makes me a little suspicious is that they are not on the official super-power list 🤨:

https://marvel.fandom.com/wiki/Category:Powers

And their relationship to Zeus is also unclear:

Both gods and men had to submit to them, although Zeus's relationship with them is a matter of debate: some sources say he can command them (as Zeus Moiragetes "leader of the Fates"), while others suggest he was also bound to the Moirai's dictates.

How to pronounce Moira ?

I have no clue 🤭.

BTW, I have switched over to a formal approach to test the disassembler. I simply iterate over all opcodes and call the disassembler for each of them.

The following mismatch is very strange. What is Musashi trying to tell me with the *4? I've never seen this syntax... 🤔

Mismatch found: 30 0 7456

       Musashi: ori.b   #$0, ($56,A0,D7.w*4)
         Moira: ori.b   #$0, ($56,A0,D7.w)

mithrendal commented 4 years ago

Now I see it ... the "*4" is the scale. I cannot remember ever using that scale thing.

ori.b #$0, ($56,A0,D7.w*4)

bd=$56 An=A0 Xn=D7 scale=4

effectively this [image]

where scale is [image]

dirkwhoffmann commented 4 years ago

Oops, never heard about a scaling factor in this context 🙄.

Here is my current implementation of this addressing mode:

       case 6: // (d,An,Xi)
        {
            i8 d = (i8)irc;
            i32 xi = readR((irc >> 12) & 0b1111);
            ea = readA(n) + d + ((irc & 0x800) ? xi : (i16)xi);
            result = read<S>(ea);
            readExtensionWord();
            break;
        }

The bit format of this addressing mode is:

iiii sxxx dddd dddd

i = index register
s = size indicator
d = displacement

I bet the (unused) bits marked xxx contain the scaling factor 🤓.

dirkwhoffmann commented 4 years ago

Let's cheat and peek into the Musashi sources:

static char* get_ea_mode_str(uint instruction, uint size)
...
if(EXT_INDEX_SCALE(extension))
    sprintf(mode+strlen(mode), "*%d", 1 << EXT_INDEX_SCALE(extension));
...
}

Here we go:

#define EXT_INDEX_SCALE(A)                (((A)>>9)&3)

This means we have two scaling bits right here:

xxxx xSSx xxxx xxxx

There is a bit remaining at position (1 << 8). Because I discovered it first, I have the right to name it. I call it the "mystery bit" M.

xxxx xxxM xxxx xxxx

What could be the purpose of M? 🤔
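Putting the pieces together, a brief-extension-word decoder along the lines of the bit layout above might look like this. This is a sketch with field names of my own choosing (not Moira's actual code); the scale extraction matches Musashi's EXT_INDEX_SCALE, and bit 8 is the "mystery" bit:

```cpp
#include <cstdint>

// Sketch of a brief extension word decoder (iiii sxxx dddd dddd, with the
// xxx bits split into two scale bits and the "mystery" bit 8).
struct BriefExt {
    bool isAddrReg;  // bit 15: 0 = data register, 1 = address register
    int  reg;        // bits 14-12: index register number
    bool isLong;     // bit 11: 0 = word-sized index, 1 = long-sized
    int  scale;      // bits 10-9: scale factor 1, 2, 4 or 8
    int8_t disp;     // bits 7-0: signed 8-bit displacement
};

BriefExt decodeBrief(uint16_t ext) {
    return {
        (ext & 0x8000) != 0,
        (ext >> 12) & 0b111,
        (ext & 0x0800) != 0,
        1 << ((ext >> 9) & 3),    // same as Musashi's EXT_INDEX_SCALE
        (int8_t)(ext & 0xFF),
    };
}
```

Decoding the extension word $7456 from the mismatch above yields index register D7, word-sized, scale 4, displacement $56, i.e. ($56,A0,D7.w*4).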

mithrendal commented 4 years ago

Oh no 🙈, I searched for the newly discovered M bit 🕵 and now I found this

[image]

Look at bits 10 and 9 of picture (a) ... there is no scale for the 68000!? Maybe the instruction you have stumbled upon is only there to test the CPU, maybe in kickstart ? When the CPU does scaling, the kickstart code knows that it is a newer CPU?

If that should turn out to be true, then you should ignore the scaling to let Moira identify itself as a 68000 CPU ....

Also, there is no description of the M bit. It is always zero, look! 👀

dirkwhoffmann commented 4 years ago

I see, it's a 68020+ feature 😄. Actually, this explains everything, including why there isn't a single portable68000 unit test that applies a scaling factor.

I'll integrate the scaling thing into my disassembler to achieve compatibility with Musashi (this is key for rapid testing).

maybe the instruction you have stumbled upon is only there to test the CPU, maybe in kickstart ?

No, it's much simpler. My new code iterates over all possible bit patterns and calls both disassemblers on them. It's an artificially generated instruction that doesn't appear anywhere in Kickstart.

Also no description about the M Bit. It is always zero look!

So disappointing. It felt like I was close to a big discovery 😟.

dirkwhoffmann commented 4 years ago

In (d,An) addressing mode, Musashi switches between signed and unsigned format.

E.g., Musashi translates $28 $0 $8000 to:

ori.b   #$0, (-$8000,A0)

Contrarily, Musashi translates $108 $8000 $0 to:

movep.w ($8000,A0), D0

So mean 😖
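The two styles can be mimicked with a small formatter. This is a hypothetical helper, just to pin down the difference: in signed mode, the 16-bit value $8000 prints as -$8000; in unsigned mode it stays $8000:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Hypothetical displacement formatter showing both styles: in signed
// mode, $8000 is printed as -$8000; in unsigned mode it stays $8000.
std::string formatDisp(uint16_t d, bool asSigned) {
    char buf[16];
    int16_t s = (int16_t)d;
    if (asSigned && s < 0)
        snprintf(buf, sizeof(buf), "-$%x", (unsigned)(uint16_t)-s);
    else
        snprintf(buf, sizeof(buf), "$%x", (unsigned)d);
    return buf;
}
```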

mithrendal commented 4 years ago

Go ahead and just mock that Musashi behaviour in Moira for now ...

When complete equality of the disassembler output with Musashi is reached, we can let Moira consistently produce signed or unsigned format. Maybe we could invent 😏 a "SignedFormatOutputEnabled" switch for that in Moira...

dirkwhoffmann commented 4 years ago

The mystery bit enables the "Full Extension Word Format" 🤯 (68020+).

[Screenshot: Bildschirmfoto 2019-12-15 um 08 59 21]

mithrendal commented 4 years ago

The 68000 interprets this as [M]eaningless? Only the 68020+ knows about full mystery extension words? 👀 I am reading the specs...

dirkwhoffmann commented 4 years ago

Only the 68020+ knows about full mystery extension words?

Yes. The 68000/68010 ignores the [M]eaningless/[M]ystery bit as well as the scale bits. Those CPUs only support the brief extension word format.

dirkwhoffmann commented 4 years ago

Mismatch found: 13a 0 0

       Musashi: btst    D0, ($0,PC); ($10002)
         Moira: btst    D0, ($0,PC)

My compatibility counter has reached 0x13A out of 0xFFFF. This means that 0.48% of all disassembled strings already match. Just 99.52% to go. Piece of cake.

dirkwhoffmann commented 4 years ago

At opcode 0x13C, it's getting messy 😕.

According to the specs, there is no immediate addressing mode for BTST Dn,<ea>.

[Screenshot: Bildschirmfoto 2019-12-15 um 12 49 13]

Accordingly, Moira treats the corresponding bit pattern as illegal. Musashi, however, disassembles it:

Mismatch found: 13c 0 1

       Musashi: btst    D0, #$0
         Moira: btst    D0, #$1

Maybe immediate addressing for BTST is a 68020+ feature? 🤔

dirkwhoffmann commented 4 years ago

According to

http://www.easy68k.com/paulrsm/doc/trick68k.htm

the addressing mode is supported:

Checking for membership in a small set. If you want to see if a number is in a set of several numbers, you can create a bit mask corresponding to the set. For instance, if the set is {0,1,3,5}, the mask has those bits set and the bit map is 00101011 (2B hexadecimal). You can test for membership in this set with

BTST D0,#$2B    ;Is D0 in {0,1,3,5}?

If your set is composed of more than eight elements you have to move the mask into a data register first.
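The trick translates directly into C++ (the mask $2B encodes the set {0,1,3,5}):

```cpp
// The BTST membership trick in C++: testing bit n of the mask $2B
// (binary 00101011) answers "is n in {0,1,3,5}?" in one bit test.
bool inSmallSet(unsigned n) {
    const unsigned mask = 0x2B;         // bits 0, 1, 3 and 5 set
    return n < 8 && ((mask >> n) & 1);  // the BTST D0,#$2B idea
}
```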

mithrendal commented 4 years ago

[image]

Should be valid...

mithrendal commented 4 years ago

No wait 🙊... this one is the correct one

[image]

dirkwhoffmann commented 4 years ago

The #<data> 111 / 100 mode is the mode in question. It's definitely supported then.

dirkwhoffmann commented 4 years ago

New high score reached 😎:

Mismatch found: 4180 0 0

       Musashi: chk.w   D0, D0
         Moira: chk     D0, D0
[Screenshot: Bildschirmfoto 2019-12-15 um 14 15 34]

dirkwhoffmann commented 4 years ago

Hmm, is this a bug in Musashi? 🤔

Mismatch found: 41bc 0 0

       Musashi: chk.w   #$0, D0
         Moira: dc.w $41bc; ILLEGAL

Immediate addressing should not be allowed:

[Screenshot: Bildschirmfoto 2019-12-15 um 14 21 23]

mithrendal commented 4 years ago

[image]

My documents are different 🧐

What does the value 100 in the register column mean?

dirkwhoffmann commented 4 years ago

Seems like you have the better docs 🤓. Which document did you use?

mithrendal commented 4 years ago

Look at the third entry of this issue. 😎

This might also be useful for our bookshelf

http://cache.freescale.com/files/32bit/doc/ref_manual/MC68000UM.pdf

Sections 7 and 8 list all instruction execution times in clock cycles.

Whereas https://www.nxp.com/docs/en/reference-manual/M68000PRM.pdf describes all possible opcodes...

dirkwhoffmann commented 4 years ago

Look at the third entry of this issue.

Oh, I see. Yes, it's all there 🤓.

mithrendal commented 4 years ago

I thought for this expedition into the stone age we would need some proper and excellent equipment, well prepared for the mysteries and obstacles that await us there .... 👨🏻‍🚀

What does the value 100 in the register column of the 111 addressing mode of the chk operation mean? Why is it called the register column? Does it have a meaning, or is it just the combination code for chk with immediate addressing...

dirkwhoffmann commented 4 years ago

Has it a meaning

Yes. There is a general coding scheme:

The first seven addressing modes among

[Screenshot: Bildschirmfoto 2019-12-15 um 15 10 42]

need a register as a parameter. They are coded in the form MMM RRR, where MMM is the binary representation of the mode number and RRR is the register number. The last five modes don't require a register as a parameter. Because of that, the register field is used to store additional mode bits. I.e.,

111 000 for mode 7
111 001 for mode 8
etc.
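That coding scheme can be written down as a tiny decoder. A sketch, with the numbering of the extra modes past 7 chosen by me for illustration:

```cpp
#include <cstdint>

// Decodes the 6-bit MMM RRR effective address field. For MMM != 111 the
// result is the mode number and RRR names a register; for MMM == 111
// the register field selects one of the extra modes (7 = abs.w,
// 8 = abs.l, 9 = (d,PC), 10 = (d,PC,Xi), 11 = immediate).
int eaMode(uint8_t ea) {
    int mmm = (ea >> 3) & 0b111;
    int rrr = ea & 0b111;
    return (mmm != 0b111) ? mmm : 7 + rrr;
}
```

In this numbering, the register-column value 100 combined with MMM = 111 selects the immediate mode discussed above.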

dirkwhoffmann commented 4 years ago

Milestone 2: Match Musashi’s disassembler output.

Reached 😎. Moira's disassembler output matches Musashi for all opcodes now.

I did test

I cannot test all possible combinations, because of combinatorial explosion, but I am pretty confident that the disassembler is fine now. This also means that Moira's jump table is correct which is a big step forward.

For Christmas, Moira had wished for a clock. I told her she might be too young for such a device, but she wouldn't listen 🤨. Anyway, enough for today...

dirkwhoffmann commented 4 years ago

Before giving Moira a clock, I decided to give her a sandbox. It works as follows: When a portable68 unit test is executed, the sandbox intercepts all memory accesses and records them. When Moira runs the same test afterwards, her memory accesses are also intercepted and compared to the results on record. This enables automatic verification of all memory access patterns.

Using this brand new cutting edge VMAS(TM) technology (Virtual Memory Access Sandboxing, patent pending), the first mismatch can be found in no time 😎:

Instruction: add.l   D2, (A2)+

ACCESS 8 DOESN'T MATCH:
i:  8  Type: Poke16  Addr: 2000  Cycle: 0  

ACCESS RECORD:
i:  0  Type: Peek16  Addr:    0  Cycle: 0  
i:  1  Type: Peek16  Addr:    2  Cycle: 0  
i:  2  Type: Peek16  Addr:    4  Cycle: 0  
i:  3  Type: Peek16  Addr:    6  Cycle: 0  
i:  4  Type: Peek16  Addr:    8  Cycle: 0  
i:  5  Type: Peek16  Addr:    a  Cycle: 0  
i:  6  Type: Peek16  Addr: 2000  Cycle: 0  
i:  7  Type: Peek16  Addr: 2002  Cycle: 0  
i:  8  Type: Peek16  Addr:    c  Cycle: 0  
i:  9  Type: Poke16  Addr: 2000  Cycle: 0  
i: 10  Type: Poke16  Addr: 2002  Cycle: 0  

The output shows that Moira fails to read a word from memory before writing the result. How dare she 🤨.

dirkwhoffmann commented 4 years ago

Just profiled the disassemblers of Musashi and Moira (65536 x 48 instructions):

Musashi: 44.8 sec
  Moira:  6.5 sec

Actually, it was easy to outperform Musashi, because it calls sprintf to assemble the strings, whereas Moira utilises a template-based string writer. The picture will be different once Moira is mature enough to compare emulation speed (which is the important metric). I expect it to be rather impossible to outperform Musashi there, so the question is how much slower Moira will be 😬.

mithrendal commented 4 years ago

Moira is born not only to emulate the states before and after a CPU instruction but also to emulate the intermediate temporal states of the m68k, e.g. caring about bus access times. With all those extra states and probably extra syncing, Moira is a much bigger beast from a state machine perspective. Due to its higher degree of complexity, we should expect it to be slower but more accurate 😎👍🏻.