mist-devel / mist-binaries

Firmware and core binaries for the MIST board

ZX Spectrum performance issue #3

Closed sorgelig closed 8 years ago

sorgelig commented 8 years ago

As we discussed before, the ZX Spectrum core has a performance issue that is clearly visible, for example, in scene demos such as the Lyra II Megademo (the part with the dancing girl is slow) and the Shock Megademo (the first part is slow). It is probably caused by insufficient memory bandwidth for the CPU.

harbaum commented 8 years ago

Can you give numbers? Something like "X percent" more?

sorgelig commented 8 years ago

If I look at Shock, I would say 100% more. If I look at Lyra II, maybe 50% more.

ghost commented 8 years ago

The best source on Spectrum timings is the Harlequin clone project.

http://www.zxdesign.info/harlequin.shtml

Cheers S

sorgelig commented 8 years ago

What if the core simulated video memory inside the FPGA? Only two pages of around 8 KB each are used for video, so they should fit inside the FPGA. That would make it possible to decouple video output from other memory accesses and thus give more bandwidth to the CPU.
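As a quick back-of-envelope check that both screens fit in FPGA block RAM (assuming the standard Spectrum layout: a 6144-byte bitmap plus a 768-byte attribute area per screen, with the 128K models keeping a second screen in bank 7):

```python
# Back-of-envelope: do both ZX Spectrum screens fit in FPGA block RAM?
BITMAP_BYTES = 6144      # 256x192 pixels, 1 bit per pixel
ATTR_BYTES = 768         # 32x24 colour attribute cells
SCREEN_BYTES = BITMAP_BYTES + ATTR_BYTES   # 6912 bytes per screen

NUM_SCREENS = 2          # 128K models: normal screen (bank 5) + shadow screen (bank 7)

total = NUM_SCREENS * SCREEN_BYTES
print(total)             # 13824 bytes, i.e. under 14 KB of block RAM
```

So even mirroring both screens costs well under the "two pages of around 8 KB" estimate above.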

ghost commented 8 years ago

My advice for solving this issue is to bring everything into Verilator with a simulator similar to the BBC one I wrote, and then see how the fusetests behave...

http://www.shadowmagic.org.uk/spectrum/fusetest.zip

S.

ghost commented 8 years ago

Have a look at this...

I converted all the VHDL code to Verilog in here.

https://www.dropbox.com/s/ifryp98mzsiwhx6/bench.tgz?dl=0

Then I set up a ULA test which runs under Verilator. It doesn't run the whole machine yet, but it simulates the ULA's video output.

S.

harbaum commented 8 years ago

Sounds good. I'll definitely have a look at it. Thanks.

harbaum commented 8 years ago

Runs here. I'll play a little with this.

In my current version the CPU runs at 3.5 MHz but only gets RAM access every second cycle (1.75 MHz) to avoid collisions with video RAM access. IMHO this should not cause any slowdown, as the Z80 will never access the bus at the full 3.5 MHz.
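A rough sanity check of that claim, assuming standard Z80 timing (the shortest memory cycle, a plain read or write, takes 3 T-states; an opcode fetch takes 4), suggests the CPU can never demand more accesses than the every-other-cycle scheme offers — though this is only an average-rate argument and ignores slot alignment, which could still insert the occasional wait state:

```python
# Rough check: can the Z80 saturate a RAM slot every second clock?
# Assumes standard Z80 timing: the shortest memory cycle (a plain
# read or write) takes 3 T-states; an opcode fetch (M1) takes 4.
CPU_HZ = 3_500_000

max_access_rate = CPU_HZ / 3   # worst case: back-to-back 3-T-state memory cycles
slot_rate = CPU_HZ / 2         # one RAM slot offered every second clock

print(int(max_access_rate))    # ~1.17 M accesses/s demanded, at most
print(int(slot_rate))          # 1.75 M slots/s offered
```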

renaudhelias commented 8 years ago

To implement VRAM inside the FPGA, you can use it as mirror RAM (a duplicate of the external RAM, with only write access from the Z80 and read access from the VGA side). That avoids conflicts, and you can still dump the full external RAM for debugging purposes...

Regards,

Renaud
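The mirror scheme can be sketched as a toy behavioral model (all names and the bank numbering are illustrative, not the core's actual signals): every CPU write lands in external RAM as before, writes that hit a screen bank are duplicated into an internal mirror, and the video side reads only the mirror.

```python
# Toy model of the mirror-VRAM scheme: every CPU write goes to external
# RAM; writes that hit a screen bank are duplicated into an internal
# mirror that only the video side ever reads. Names are illustrative.

SCREEN_BYTES = 6912                    # bitmap + attributes per screen
SCREEN_BANKS = {5: 0, 7: 1}            # 128K banks holding a screen -> mirror index

ext_ram = {}                           # stands in for the external SDRAM
mirror = [bytearray(SCREEN_BYTES) for _ in SCREEN_BANKS]

def cpu_write(bank, offset, value):
    ext_ram[(bank, offset)] = value            # normal external write, always
    idx = SCREEN_BANKS.get(bank)
    if idx is not None and offset < SCREEN_BYTES:
        mirror[idx][offset] = value            # shadow copy for the video side

def video_read(screen, offset):
    return mirror[screen][offset]              # never touches external RAM

cpu_write(5, 0, 0xFF)       # pixel byte in the normal screen: mirrored
cpu_write(0, 100, 0xAA)     # ordinary RAM write: external only
print(video_read(0, 0))     # 255
```

The design point Renaud raises falls out directly: since the video side never reads external RAM, CPU and video accesses cannot collide, yet the external copy stays complete for debugging dumps.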

harbaum commented 8 years ago

I like the idea of not using internal RAM for the two video areas. But it may be an easy way to tell whether giving the CPU full 3.5 MHz RAM access would actually make a difference ... which I actually doubt.

ghost commented 8 years ago

Is single-byte latency the issue? If so, a simple cache should help.
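A minimal sketch of what such a cache could look like — a single-entry cache in front of a slow RAM read, purely to illustrate the idea of paying the external latency only once per address (the class and function names are hypothetical, not anything in the core):

```python
# Minimal single-entry cache in front of a slow RAM read, sketching the
# "simple cache" suggestion. Illustrative only.
class OneByteCache:
    def __init__(self, backend_read):
        self.backend_read = backend_read   # the slow (e.g. SDRAM) read function
        self.addr = None                   # address of the cached byte
        self.data = None

    def read(self, addr):
        if addr != self.addr:              # miss: pay the latency once
            self.addr = addr
            self.data = self.backend_read(addr)
        return self.data                   # hit: no external access needed

ram = {0x4000: 0x3E, 0x4001: 0x06}
calls = []                                 # record every external access

def slow_read(addr):
    calls.append(addr)
    return ram[addr]

c = OneByteCache(slow_read)
c.read(0x4000)
c.read(0x4000)
print(len(calls))    # 1: the second read was served from the cache
```

Whether this helps in practice depends on the access pattern; a real Z80 rarely re-reads the same byte back to back, so a wider cache line would be needed to catch sequential fetches.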

harbaum commented 8 years ago

I actually don't know what the issue is. The original core simply gave the T80 a 3.5 MHz clock. I have restricted this so that only every second 3.5 MHz cycle is available to the CPU for a memory cycle.

ghost commented 8 years ago

That should be fine then. You are correct: basically the Z80 only does something every other clock cycle because of the refresh.

I don't think even the ALU does work during the refresh cycles. So basically we have double the time to get the data back from RAM compared to a normal Z80 with DRAM. The SDRAM latency isn't great, but it's not that bad.

S.
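Putting rough numbers on the "double the time" argument (assuming standard Z80 timing: an M1 opcode-fetch cycle is 4 T-states, with the opcode sampled during T1-T2 and the refresh address driven during T3-T4, leaving the external bus free):

```python
# Rough numbers for the refresh-overlap argument. Assumes standard Z80
# timing: an M1 (opcode fetch) cycle is 4 T-states; the opcode is sampled
# during T1-T2 and the bus carries only the refresh address in T3-T4.
CPU_HZ = 3_500_000
T_NS = 1e9 / CPU_HZ             # one T-state at 3.5 MHz, in nanoseconds

fetch_window_ns = 2 * T_NS      # T1-T2: how long the RAM has to return the opcode
refresh_slack_ns = 2 * T_NS     # T3-T4: external bus otherwise idle

print(round(T_NS))              # ~286 ns per T-state
print(round(fetch_window_ns))   # ~571 ns to complete an external access
```

Even a conservative few-hundred-nanosecond worst case for an SDRAM access fits inside that window, which supports the conclusion that raw SDRAM latency alone should not be the bottleneck here.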
