riscv-software-src / riscv-perf-model

Example RISC-V Out-of-Order/Superscalar Processor Performance Core and MSS Model
Apache License 2.0

Add decoupled frontend (+ L1 Instruction Cache) #143

Open danbone opened 5 months ago

danbone commented 5 months ago

As the branch prediction API is coming to a close, I'd like to propose adding a decoupled frontend with an L1 instruction cache.

I did cover some of this with @arupc on Monday.

[Diagram: OlympiaDecoupledFrontend]

What I'd like to have is a separate BranchPredictionUnit (BPU) with its own pipeline that streams requests into the fetch unit. Each request essentially contains an address, the number of bytes to fetch, and the termination type (taken branch, return, none, etc.). Requests could be one per cacheline (i.e. if the predictor predicts across a cacheline boundary, it splits the prediction into multiple requests). The fetch unit queues these requests, and flow control on this interface is managed with credits.
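To make the request format concrete, here is a minimal sketch of what the BPU-to-fetch payload and the per-cacheline split could look like. All names (`FetchRequest`, `TerminationType`, `splitByCacheline`) are illustrative assumptions, not actual Olympia classes:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical termination types for a BPU fetch request.
enum class TerminationType { NONE, TAKEN_BRANCH, RETURN, CALL };

// Hypothetical BPU -> Fetch request payload.
struct FetchRequest {
    uint64_t        addr;       // start address of the fetch block
    uint32_t        num_bytes;  // bytes to fetch (never crosses a cacheline)
    TerminationType term;       // why the BPU ended this request
};

// Split a predicted fetch region into one request per cacheline, so a
// prediction that spans a line boundary becomes multiple requests.
std::vector<FetchRequest> splitByCacheline(uint64_t addr, uint32_t bytes,
                                           TerminationType term,
                                           uint32_t line_bytes = 64)
{
    std::vector<FetchRequest> reqs;
    while (bytes > 0) {
        const uint64_t line_end = (addr / line_bytes + 1) * line_bytes;
        const uint32_t chunk = static_cast<uint32_t>(
            std::min<uint64_t>(bytes, line_end - addr));
        // Only the final chunk carries the real termination type; the
        // earlier chunks fall through to the next line.
        reqs.push_back({addr, chunk,
                        (chunk == bytes) ? term : TerminationType::NONE});
        addr  += chunk;
        bytes -= chunk;
    }
    return reqs;
}
```

For example, a 16-byte predicted region starting 4 bytes before a 64-byte line boundary would come out as two requests, with the termination type attached only to the second.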

Once the fetch unit has a request queued and enough credits downstream, it forwards the address to the instruction cache.
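The credit handshake on this interface can be sketched as follows. This is an assumed, simplified model (Olympia's actual Sparta port/credit plumbing differs); the point is just that the BPU may only send while it holds a credit, and each consumed request returns one:

```cpp
#include <cstdint>
#include <deque>

// Minimal sketch of credit-based flow control on the BPU -> Fetch
// interface. All names are illustrative.
class CreditedQueue {
public:
    explicit CreditedQueue(uint32_t depth) : credits_(depth) {}

    // BPU side: may only send when it holds a credit.
    bool trySend(uint64_t req) {
        if (credits_ == 0) return false;  // no room downstream
        --credits_;
        queue_.push_back(req);
        return true;
    }

    // Fetch side: consuming a request returns a credit to the sender.
    bool tryConsume(uint64_t & req) {
        if (queue_.empty()) return false;
        req = queue_.front();
        queue_.pop_front();
        ++credits_;  // credit flows back to the BPU
        return true;
    }

private:
    uint32_t             credits_;
    std::deque<uint64_t> queue_;
};
```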

Hit responses from the cache go back to the fetch unit, where they're broken into instructions and paired with the prediction metadata given in the original BPU request. Some BTB miss detection happens here; misses or misfetches redirect the BPU.

I'm not completely sure how to handle mapping the instructions from the trace onto the data read, handling alignment, unexpected changes of flow, malformed traces, etc. I've tried to illustrate an idea of how it could be done, but I'm open to other suggestions.

[Diagram: OlympiaDecoupledFrontendTraceDriven]

Some further details on the ICache:

[Diagram: OlympiaDecoupledFrontendFetchCacheBlockDiagram]

What I propose is that we change the existing MemoryAccessInfo class to be generic enough that it can be used as a general memory transaction.
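As a rough illustration of "generic enough to be a memory transaction", a shared payload might carry something like the fields below. This is a hedged sketch only; the field names are assumptions and not MemoryAccessInfo's actual members:

```cpp
#include <cstdint>

// Illustrative operation types a shared transaction could cover.
enum class MemOp { IFETCH, LOAD, STORE, PREFETCH };

// Hypothetical generic memory transaction usable by both the data
// side and the instruction cache.
struct MemoryTransaction {
    uint64_t vaddr = 0;           // virtual address
    uint64_t paddr = 0;           // physical address (after translation)
    uint32_t size  = 0;           // access size in bytes
    MemOp    op    = MemOp::IFETCH;
    bool     hit   = false;       // filled in by the cache on response
};
```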

There are a few technical details that still need to be hashed out:

I've identified a few changes needed in the current codebase before the ICache work can start.

This might also help the prefetch work.

danbone commented 5 months ago

On the instruction cache side, I'd like to add functionality for:

This feature should enable the fetch unit to send read requests that cross either a subword boundary or a cacheline boundary.

My plan for implementing this is to introduce a new parameter in fetch that controls the number of read ports on the instruction cache, then loop over the fetch instruction code to make multiple requests per cycle.
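The per-cycle loop could look something like this. It's a sketch under assumed names (`issueFetchRequests`, `sendToICache` are hypothetical); the real fetch code would go through Sparta ports rather than a callback:

```cpp
#include <cstdint>
#include <deque>
#include <functional>

// Issue up to `num_read_ports` ICache requests in one cycle, stopping
// early if the cache refuses a request (e.g. a structural conflict).
// Returns the number of requests actually sent.
uint32_t issueFetchRequests(std::deque<uint64_t> & pending,
                            uint32_t num_read_ports,
                            const std::function<bool(uint64_t)> & sendToICache)
{
    uint32_t sent = 0;
    while (sent < num_read_ports && !pending.empty()) {
        if (!sendToICache(pending.front())) {
            break;  // leave the request queued for the next cycle
        }
        pending.pop_front();
        ++sent;
    }
    return sent;
}
```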

Within the instruction cache, new parameters will be added that enable bank interleaving and set interleaving. Lookup requests will be picked from the request queue such that multiple requests can be handled per clock if they target different banks/sets.
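A conflict-aware pick over the request queue might be sketched as below, assuming simple modulo interleaving on the line address. The function name and parameters are illustrative, not the actual ICache arbitration:

```cpp
#include <cstdint>
#include <set>
#include <vector>

// Pick up to `max_per_cycle` requests whose bank indices don't collide;
// conflicting requests stay queued for a later cycle.
std::vector<uint64_t> pickRequests(const std::vector<uint64_t> & queue,
                                   uint32_t num_banks,
                                   uint32_t line_bytes,
                                   uint32_t max_per_cycle)
{
    std::vector<uint64_t> picked;
    std::set<uint32_t>    banks_in_use;
    for (uint64_t addr : queue) {
        if (picked.size() >= max_per_cycle) break;
        // Bank index from the line address (simple modulo interleaving).
        const uint32_t bank =
            static_cast<uint32_t>((addr / line_bytes) % num_banks);
        if (banks_in_use.count(bank)) continue;  // bank conflict this cycle
        banks_in_use.insert(bank);
        picked.push_back(addr);
    }
    return picked;
}
```

With two banks and 64-byte lines, consecutive lines land in alternating banks, so two back-to-back line addresses can be serviced in the same clock while a third to the same bank waits.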

The fetch unit won't know about the interleaving configuration and will just wait for the responses to come back.