niedzwiecki-dawid opened this issue 1 year ago
@tbursztyka @teburd I've seen that you are the maintainers of SPI, so I added you to this RFC. Do you know if there is anybody who would be interested in discussing a change to the SPI API to support the described case?
On the slave side, there is always the issue of being out of sync in such a context, whatever solution we follow.
A simple approach would be, on the slave side (no changes required anywhere): receive the fixed-size header with one transfer, check the data length in it, and then program a second transfer for the announced number of data bytes.
Another one (requiring changes in the controller driver only, not in the API): have the driver check the header and reconfigure the RX buffer/DMA within the same ISR, while the transaction is still ongoing.
In both solutions (or any other solution, afaik), everything has to happen really fast, right after the header has been received (a rough sketch of the first approach follows below).
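For illustration, a minimal sketch of what the first approach could look like at the application level, assuming it boils down to two back-to-back transfers (header first, then payload). The header layout, buffer handling and function name are made up for the example; only the SPI calls are the existing Zephyr API:

```c
#include <errno.h>
#include <zephyr/drivers/spi.h>

/* Hypothetical protocol header, only for illustration; in the real use case
 * this would be the 8-byte ec_host_cmd request header with its data_len. */
struct proto_header {
	uint8_t version;
	uint8_t flags;
	uint16_t data_len;	/* number of payload bytes that follow */
} __packed;

static const struct spi_config slave_cfg = {
	.operation = SPI_OP_MODE_SLAVE | SPI_WORD_SET(8) | SPI_TRANSFER_MSB,
};

/* First approach: two back-to-back transfers within the same CS assertion.
 * The gap between the last header byte and the first payload byte is only
 * about one byte time on the bus, which is why this is so timing critical. */
static int receive_variable_frame(const struct device *spi_dev,
				  uint8_t *payload, size_t max_len)
{
	struct proto_header hdr;
	struct spi_buf buf = { .buf = &hdr, .len = sizeof(hdr) };
	struct spi_buf_set rx = { .buffers = &buf, .count = 1 };
	int ret;

	/* 1. Receive the fixed-size header. */
	ret = spi_read(spi_dev, &slave_cfg, &rx);
	if (ret < 0) {
		return ret;
	}

	if (hdr.data_len > max_len) {
		return -EMSGSIZE;
	}

	/* 2. Immediately reprogram the transfer for the announced payload. */
	buf.buf = payload;
	buf.len = hdr.data_len;

	return spi_read(spi_dev, &slave_cfg, &rx);
}
```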
Or, if the EC is the only use case here, is there any room for changing the protocol (at least on SPI) in order to send the header first, then the data, with a CS switch in between?
Hi :-) thanks for the response.
I’ve considered the first option and you are right, it is really, really time critical. We’re running SPI at 3 Mbps, so there is not a lot of time (the time of receiving one byte) to check the header (version and data length) and reload the DMA (we have to use DMA at this SPI speed). Preemption by another interrupt is also a problem here. So I decided not to go this way.
The second option makes more sense to me, because checking the header and reloading the DMA/reconfiguring the buffer happens within one ISR. But it is still very vulnerable to even the smallest delay, e.g. if another ISR is running when the SPI interrupt occurs. And again, checking the header and reloading the DMA takes some time.
Unfortunately, the CS switch between the header and data can’t be added on the host side. The EC is in the middle of a transition to Zephyr RTOS, and both versions of the EC firmware have to work with the same host software. The original EC firmware assumes the whole packet is transferred during one CS assertion.
IMO, the only reliable way of handling such a communication flow at high speed is to program an SPI transaction for the max command size and somehow be informed about the end of the transaction.
> IMO, the only reliable way of handling such a communication flow at high speed is to program an SPI transaction for the max command size and somehow be informed about the end of the transaction.
Indeed, if you can allocate 64K, it will be the easiest way. Now it's only a matter of calling the cb with a relevant result code. We cannot assume 0 to be a proper one. So, what about calling the cb when CS goes out (end of transaction) with a result holding how many bytes were read? (Of course, if all requested bytes were read, it'll be 0 as usual.) This behavior would be valid for slave mode only. It's just a matter of changing the documentation then (and the relevant drivers already supporting slave mode).
That sounds really reasonable to me. By max command size I meant the max command size of the protocol using SPI, not of SPI itself :-) it is ~500 bytes in my case.
Regarding result code - I believe it already should work this way for async mode:
* @retval frames Positive number of frames received in slave mode.
* @retval 0 If successful in master mode.
* @retval -errno Negative errno code on failure.
Only the part about the end of the transaction (CS goes out) is missing.
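For what it's worth, a sketch of how the slave side could use that behavior with the existing async call: program the transfer for the maximum command size and take the number of received frames from the async result once the transaction completes (with the change discussed above, on CS deassertion). spi_dev, cfg and MAX_CMD_SIZE are placeholders:

```c
#include <zephyr/drivers/spi.h>
#include <zephyr/kernel.h>

#define MAX_CMD_SIZE 512	/* placeholder: max command size of the protocol */

static uint8_t rx_data[MAX_CMD_SIZE];

/* Program a slave transfer for the maximum command size and read the number
 * of received frames from the async result once the transfer completes. */
static int slave_receive(const struct device *spi_dev, const struct spi_config *cfg)
{
	struct spi_buf buf = { .buf = rx_data, .len = sizeof(rx_data) };
	struct spi_buf_set rx = { .buffers = &buf, .count = 1 };
	struct k_poll_signal sig = K_POLL_SIGNAL_INITIALIZER(sig);
	struct k_poll_event evt = K_POLL_EVENT_INITIALIZER(K_POLL_TYPE_SIGNAL,
							   K_POLL_MODE_NOTIFY_ONLY,
							   &sig);
	int ret;

	ret = spi_transceive_async(spi_dev, cfg, NULL, &rx, &sig);
	if (ret < 0) {
		return ret;
	}

	/* Wait for the driver to raise the signal at the end of the transaction;
	 * the result then holds the number of received frames or -errno. */
	k_poll(&evt, 1, K_FOREVER);

	return sig.result;
}
```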
I think it is a really good way to go and it will extend the possibilities of slave mode. Unfortunately, I think it won't help in my case, because the host expects a response during one CS assertion, as you can see in the first post. But I believe my case is not a good reference. I will see what I can do to change that.
I'm not sure about the hardware possibilities, i.e. whether any SPI module supports this, but we can think about the definition of the end of transaction. It can be CS deassertion, as you said, but maybe it can also be the end of clocking, with CS still asserted. I'm not familiar with the capabilities of the different SPI modules, but could "the end of transaction" be marked as hardware dependent? What do you think?
> it is ~500 bytes in my case.
> uint16_t data_len

Ok, I wrongly assumed that data_len could be a maxed-out uint16_t.
> the host expects a response during one CS assertion, as you can see in the first post.
I missed that. What's the time frame? The idea would be that, if the time frame of the assertion is good enough, it could still be possible to make the slave switch from one transaction that ended successfully to another (but we are talking of 100s of ms here). So not much change to the current API.
I don't think it is always possible to detect the clocking, not to mention get an interrupt about it. Perhaps some hw does it (triggering a busy state or something like that). I don't know if that can be reliable, but if it is, we could also modify the API to allow the CB to be called during the transaction. Perhaps it would be better to also improve the cb signature so it would tell what type of event it's dealing with (just thinking out loud; a rough sketch of that idea is below).
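Just to make the "thinking out loud" part concrete, a hypothetical shape for such an event-aware callback; none of these names exist in the current SPI API:

```c
#include <zephyr/device.h>

/* Hypothetical extension, only to illustrate the "event type in the callback"
 * idea; none of these identifiers exist today. */
enum spi_slave_event {
	SPI_SLAVE_EVT_CS_DEASSERTED,	/* transaction ended by CS going inactive */
	SPI_SLAVE_EVT_CLOCKING_DONE,	/* requested frames clocked, CS still asserted */
	SPI_SLAVE_EVT_ERROR,
};

typedef void (*spi_slave_callback_t)(const struct device *dev,
				     enum spi_slave_event evt,
				     int frames_received,
				     void *userdata);
```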
> I missed that. What's the time frame? The idea would be that, if the time frame of the assertion is good enough, it could still be possible to make the slave switch from one transaction that ended successfully to another (but we are talking of 100s of ms here). So not much change to the current API.
Sorry, but I don't get the question. The time of the frame depends on the number of sent bytes. The polling frame consists of 32 bytes and there is a short fixed delay in between.
I was asking about the short delay (wrong wording on my side). How short is it?
The code on the host side doesn't even have a delay statement, so the "short delay" equals the time the CPU needs to check the received status bytes. It is very short, but I'm not able to give you the exact number.
Introduction
The SPI API isn't well suited for SPI slaves that don't know how much data will be sent by the SPI master.
Problem description
Some SPI communication protocols don't rely on fixed-size frames, e.g. Host Commands (ec_host_cmd). The header, which is sent at the beginning of a frame, contains information about how many bytes will be sent. Host Command flow: the host sends the request (header + data) and expects the response from the EC within the same CS assertion.
Every SPI transfer expects a specified amount of data to be read/sent. Every SPI transfer enables and configures the SPI module, potentially configures DMA, and disables the modules afterwards. So receiving 8 bytes (header) -> checking the number of data bytes (in addition to the header) -> enabling and reprogramming SPI + DMA in the middle of a transaction doesn’t look like a good option.
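For reference, the header in question is the 8-byte ec_host_cmd request header, which announces the payload size in data_len. The layout below is approximate (field names quoted from memory, so check the ec_host_cmd headers for the exact definition):

```c
/* Approximate layout of the 8-byte ec_host_cmd request header; exact field
 * names may differ from the real ec_host_cmd definition. */
struct ec_host_cmd_request_header {
	uint8_t prtcl_ver;	/* protocol version */
	uint8_t checksum;	/* checksum of the request header and data */
	uint16_t cmd_id;	/* command identifier */
	uint8_t cmd_ver;	/* version of the command */
	uint8_t reserved;
	uint16_t data_len;	/* number of data bytes following the header */
} __packed;
```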
Proposed change
I haven't been able to find a perfect solution for such a problem. One possible option I see is extending the SPI API with 2 additional functions:
spi_transceive_async_state (returning the number of received bytes since the spi_transceive_async call) and spi_transceive_async_stop (stopping the spi_transceive_async transaction).
Detailed RFC
With these 2 additional SPI functions, the flow of such a protocol could look like: call spi_transceive_async to receive a command with the maximum size -> after CS assertion, call spi_transceive_async_state until the header is received -> check the number of data bytes in the received header -> call spi_transceive_async_state until the data bytes are received -> call spi_transceive_async_stop -> the full command is received (as sketched below). The spi_transceive_async_stop would call the callback with the actual number of received frames as result.
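A sketch of the proposed flow follows. Both spi_transceive_async_state and spi_transceive_async_stop are proposed functions that do not exist yet, and MAX_CMD_SIZE, HDR_SIZE and parse_data_len() are illustrative placeholders:

```c
#include <zephyr/drivers/spi.h>
#include <zephyr/kernel.h>

/* Everything below is a proposal sketch: spi_transceive_async_state() and
 * spi_transceive_async_stop() do NOT exist in the current API, and
 * MAX_CMD_SIZE, HDR_SIZE and parse_data_len() are illustrative placeholders. */

#define MAX_CMD_SIZE 512
#define HDR_SIZE 8

/* Proposed: number of bytes received so far by the ongoing async transfer. */
int spi_transceive_async_state(const struct device *dev,
			       const struct spi_config *config);

/* Proposed: stop the ongoing async transfer; the async callback/signal is
 * completed with the actual number of received frames as result. */
int spi_transceive_async_stop(const struct device *dev,
			      const struct spi_config *config);

size_t parse_data_len(const uint8_t *hdr);	/* extract data_len from the header */

void proposed_flow(const struct device *spi_dev, const struct spi_config *cfg)
{
	static uint8_t rx_data[MAX_CMD_SIZE];
	struct spi_buf buf = { .buf = rx_data, .len = sizeof(rx_data) };
	struct spi_buf_set rx = { .buffers = &buf, .count = 1 };
	struct k_poll_signal sig = K_POLL_SIGNAL_INITIALIZER(sig);
	int received = 0;

	/* Program the transfer for the maximum possible command size. */
	spi_transceive_async(spi_dev, cfg, NULL, &rx, &sig);

	/* After CS assertion, poll until the full header has arrived. */
	while (received >= 0 && received < HDR_SIZE) {
		received = spi_transceive_async_state(spi_dev, cfg);
	}

	/* Poll until the number of data bytes announced in the header arrived. */
	size_t expected = HDR_SIZE + parse_data_len(rx_data);

	while (received >= 0 && (size_t)received < expected) {
		received = spi_transceive_async_state(spi_dev, cfg);
	}

	/* Terminate the transfer; the callback gets the actual frame count. */
	spi_transceive_async_stop(spi_dev, cfg);
}
```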
Proposed change (Detailed)
N/A
Dependencies
These two additional SPI functions would be optional, so adding them wouldn't require editing any existing SPI driver.
Concerns and Unresolved Questions
I haven't fully confirmed that this solution would resolve problems with such protocols. I would love to hear your opinion on such a case.
Alternatives
Change how transceive_async works for an SPI slave: pass the rx buffer size (instead of a specified number of bytes to receive) and call the callback when the transfer ends (end of clocking). This is how the UART ASYNC API works (see the sketch below). AFAIK, the SPI modules (at least STM32) don't have an interrupt source at the end of the transfer, so it is impossible to reliably detect the end of clocking.
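For comparison, the UART ASYNC pattern referred to above, where the receive buffer is sized for the largest expected message and the driver reports how much actually arrived; the buffer size and timeout value below are arbitrary:

```c
#include <zephyr/drivers/uart.h>

static uint8_t uart_rx_buf[512];	/* sized for the largest expected message */

static void uart_cb(const struct device *dev, struct uart_event *evt, void *user_data)
{
	if (evt->type == UART_RX_RDY) {
		/* evt->data.rx.len bytes were received, regardless of how many
		 * were originally requested. */
	}
}

static void setup_uart_rx(const struct device *uart_dev)
{
	uart_callback_set(uart_dev, uart_cb, NULL);
	/* Receive into a max-size buffer; UART_RX_RDY is reported when data
	 * stops arriving (inactivity timeout, arbitrary value here), not after
	 * a fixed byte count. */
	uart_rx_enable(uart_dev, uart_rx_buf, sizeof(uart_rx_buf), 100);
}
```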