openhwgroup / cva6

The CORE-V CVA6 is an Application class 6-stage RISC-V CPU capable of booting Linux
https://docs.openhwgroup.org/projects/cva6-user-manual/

Draft extended hpdcache #2288

Open takeshiho0531 opened 1 week ago

takeshiho0531 commented 1 week ago

Just a draft for now... I'd like to get a functioning extended HPDcache first.

github-actions[bot] commented 6 days ago

:x: failed run, report available here.

yanicasa commented 4 days ago

@takeshiho0531, I have a question about the phase in which the virtual address is transmitted.

In my modifications to create an OBI interconnect, the virtual address is transmitted to the caches (I or D), but the physical request through the OBI can potentially be routed to a TCRAM instead of the cache (I or D).

So in this case, from the cache's point of view, the virtual request will not be followed by an OBI physical request, and a new virtual request will be sent. Does the HPDcache support a new virtual request without first receiving a kill_req?

takeshiho0531 commented 4 days ago

@yanicasa Sorry, I don't understand what TCRAM refers to, and I'm not sure about the situation where the physical address is not passed to the cache (I or D)... 😢

github-actions[bot] commented 4 days ago

:x: failed run, report available here.

yanicasa commented 4 days ago

[Picture1: diagram of the OBI interconnect, showing address decoding routing physical accesses either to the icache or to a TCRAM]

@takeshiho0531, sorry that I was not clear: TCRAM (tightly-coupled RAM) is a low-latency memory which does not benefit from caching, so it is sometimes preferable to connect it in front of the cache. We also call it a "scratchpad".

The aim of this branch is to add an Xbar (crossbar) so that this type of memory, or even a peripheral bus, can be connected, depending on the configuration.

As you can see in the image, since some OBI (physical) accesses are not driven to the icache (depending on address decoding), some virtual requests via fetch_req_t are not followed by a fetch_obi_req_t. My question is: can a new fetch_req_t be sent without first sending a kill_req?
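A minimal sketch of the address decoding described above, with hypothetical module, port, and parameter names (this is not the actual CVA6 or Xbar RTL): a physical OBI access is routed either to the icache or to the TCRAM, which is why a virtual fetch_req_t may never be followed by a fetch_obi_req_t at the icache.

```systemverilog
// Hypothetical address decoder inside the OBI Xbar (illustrative only)
module obi_addr_decode #(
  parameter logic [31:0] TCRAM_BASE = 32'h0001_0000,
  parameter logic [31:0] TCRAM_MASK = 32'hFFFF_0000
) (
  input  logic        obi_req_valid_i, // physical OBI request valid
  input  logic [31:0] obi_req_addr_i,  // physical address of the request
  output logic        icache_req_o,    // request forwarded to the icache
  output logic        tcram_req_o      // request forwarded to the TCRAM
);
  logic sel_tcram;
  // Address decoding: does the physical address fall in the TCRAM region?
  assign sel_tcram    = ((obi_req_addr_i & TCRAM_MASK) == TCRAM_BASE);
  assign tcram_req_o  = obi_req_valid_i &  sel_tcram;
  // The icache never sees accesses decoded toward the TCRAM
  assign icache_req_o = obi_req_valid_i & ~sel_tcram;
endmodule
```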

takeshiho0531 commented 4 days ago

@yanicasa Thank you very much for the detailed explanation and for including the diagram! Thanks to it, I now clearly understand the purpose of the OBI interconnect. When I asked @cfuguet, he mentioned that for requests like the second one in the diagram, it is necessary to assert the abort signal to the HPDcache, and that the adapter can manage this. My current implementation of the adapter does not have that functionality, so I will modify it.
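A minimal sketch of the adapter-side abort being discussed, under the assumption that the HPDcache exposes a second-cycle abort input for its speculative (virtually-indexed) request; all module and signal names here are hypothetical:

```systemverilog
// Hypothetical adapter fragment: abort the HPDcache request when the virtual
// request of cycle 0 has no matching physical OBI request in cycle 1
// (i.e. the Xbar routed the access to the TCRAM instead of the cache)
module hpdcache_obi_adapter_abort (
  input  logic clk_i,
  input  logic rst_ni,
  input  logic virt_req_sent_i,  // virtual request accepted by the HPDcache (cycle 0)
  input  logic obi_req_valid_i,  // matching physical OBI request (cycle 1)
  output logic core_req_abort_o  // abort the in-flight HPDcache request
);
  logic virt_pending_q;
  // Remember that a virtual request was issued in the previous cycle
  always_ff @(posedge clk_i or negedge rst_ni) begin
    if (!rst_ni) virt_pending_q <= 1'b0;
    else         virt_pending_q <= virt_req_sent_i;
  end
  // No physical follow-up: the access went elsewhere, so abort it in the cache
  assign core_req_abort_o = virt_pending_q & ~obi_req_valid_i;
endmodule
```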

yanicasa commented 4 days ago

@takeshiho0531 @cfuguet Another solution could be a kill signal generated by the interconnect: all unselected slaves would receive a kill_req.

I don't know which solution is cleaner :)

cfuguet commented 4 days ago

@yanicasa I think it is better to do it in the adapter. Otherwise, the interconnect generates a kill signal that can affect unrelated requests. This could be the case if you have two different initiators on the same interconnect: a previous request from one initiator could be killed by the interconnect even though the kill was meant for a request from the other initiator...

yanicasa commented 3 days ago

The modification in the adapter is fine for me. Let's go for it!

takeshiho0531 commented 2 days ago

@yanicasa I don't quite understand data_valid_obi in the icache... In this line, why is data_valid_obi set to 1 when dreq_i.kill_req is 1...?

yanicasa commented 2 days ago

@takeshiho0531 The OBI protocol doesn't include the possibility of cancelling a transaction, so in order to keep the ability (for performance) to cancel accesses, I've made the master (frontend) and the slave (icache) agree that the OBI transaction is valid, while knowing that the returned OBI data is not. It's the frontend that discards the OBI data if it comes from a killed access.
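A minimal sketch of this agreement, assuming a single outstanding transaction for simplicity; module and port names are hypothetical (only data_valid_obi and dreq_i.kill_req appear in the actual code under discussion):

```systemverilog
// Hypothetical master/slave agreement: OBI cannot cancel an in-flight
// transaction, so the icache completes the handshake even on a kill,
// and the frontend discards the data of killed accesses
module obi_kill_handshake (
  input  logic clk_i,
  input  logic rst_ni,
  input  logic obi_rsp_valid_i,  // icache has a real OBI response
  input  logic kill_req_i,       // dreq_i.kill_req: frontend cancels the access
  output logic data_valid_obi_o, // OBI response handshake toward the frontend
  output logic fetch_data_en_o   // frontend actually consumes the returned data
);
  // Sticky flag: the (single) outstanding access was killed
  logic killed_q;
  always_ff @(posedge clk_i or negedge rst_ni) begin
    if (!rst_ni)              killed_q <= 1'b0;
    else if (kill_req_i)      killed_q <= 1'b1;
    else if (obi_rsp_valid_i) killed_q <= 1'b0; // transaction retired, clear
  end
  // icache side: complete the handshake even when the access is killed,
  // because OBI itself provides no way to cancel the transaction
  assign data_valid_obi_o = obi_rsp_valid_i | kill_req_i;
  // frontend side: never consume data belonging to a killed access
  assign fetch_data_en_o  = obi_rsp_valid_i & ~(killed_q | kill_req_i);
endmodule
```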

takeshiho0531 commented 2 days ago

@yanicasa Thank you :) Then why isn't data_valid_obi set to 1 when |cl_hit && cache_en_q && !inv_q...?

yanicasa commented 2 days ago

@takeshiho0531 I think you just found something to fix :)

I think this should not cause trouble, because the kill is managed by the OBI R FSM, and in this case the OBI data valid seems not to be used.

In any case, in the current state of my modifications, the synchronization between the two icache FSMs is, in my opinion, not done well enough. It will have to be reviewed to make sure that no desynchronization is possible.