Laczen opened 1 month ago
@tomi-font, @Damian-Nordic
@Laczen Before I dive into code review, I would like to better understand your motivation behind introducing a new API instead of extending the existing `flash_area` API. Let me comment on your reasons, hoping that you can elaborate more.
> a. It is limited to flash, which resulted in not being able to use eeproms as a means for (settings, data, ...) storage,

Is this only because `flash_area` has "flash" in its name, or is it missing some properties to unleash the full potential of eeprom? Asking because it seems pretty common to work around the limitations that you listed in "a." and "b." by introducing a virtual flash device, such as `flash_simulator`, and I wonder if there are any hard limitations of this approach besides its questionable prettiness ;).
> e. The API only allows writing data directly, which leads to high stack usage when a combination of data needs to be written to storage (an example of this can be found in secure storage, where a combination of data properties, data and a validity check need to be written).

This is a valid point, though `flash_area` could be extended with such functions as well, right?
Despite these concerns, what I like about `storage_area` is that it is uncoupled from the flash map and `fixed-partitions` DTS nodes. It allows for more straightforward usage of storage areas that already have their representation in DTS. For example, I've seen that `flash_simulator` has been given an option to define the target RAM region (https://github.com/zephyrproject-rtos/zephyr/pull/57380/files), while `storage_area` would allow referring to the RAM region (as well as retained memory) directly. Much easier to conceptualize for a Sunday DTS user :).
@Damian-Nordic the main reason to introduce a new API is that it is completely different from the `flash_area` approach. A `flash_area` inherits its properties from the flash device, while the `storage_area` API declares how it is going to use the underlying device. This allows definitions that differ from the device properties without having to check them (a validation on first use, e.g. in a debug build, is sufficient). Achieving these properties on `flash_area` would require a subsystem on top of `flash_area`. Instead I decided to create it directly on top of the flash API, and also on top of the eeprom API, the disk API and ram. A layer over a virtual flash interface is not needed.
> Introduction
>
> For years zephyr has been using the `flash_area` API to work with images and storage solutions. This API has limited the development of zephyr because:
>
> a. It is limited to flash, which resulted in not being able to use eeproms as a means for (settings, data, ...) storage,

Yes, but that is due to a lack of recognition that flash devices are just eeprom devices with additional functions, and there is a plan to slowly change that, though it is not blocking for this issue: https://github.com/zephyrproject-rtos/zephyr/issues/71270
> b. It inherits and uses the flash erase-block-size, which limits support for building bootloaders that use a combination of internal and external flash that do not have equivalent erase-block-sizes,

That is actually not true; it is a current limitation of MCUboot, which is also not capable of working with devices without erase due to the lack of any mechanism that allows recognizing the validity of data without assuming an erase happened. The problem is also far more complicated, and this is a very superficial description of it, verging into snake-oil advertisement territory.
> c. It forces users to create (flash) devices for more enhanced areas that would consist of a combination of flash and ram,

That is actually true.
> d. It forces users to create (flash) devices to support zephyr images that are stored on disk,

That is true, and it was indeed attempted for MCUboot; there is still a PR (Find TODO). The problem, though, was that MCUboot, which is prepared to work with devices with a small write-block, was rewriting the last sectors of the image many times. A bootloader is supposed to be small and have as few layers as possible, and this caused tens of rewrites per boot cycle. Also, with this solution the path to the SD card would be storage->disk_access->SD, where disk access is an FTL equivalent with data caching. Such a long path is needless for a bootloader and brings a lot of additional code and RAM usage that the bootloader should normally have available for its own use (for example decompression support).
> e. The API only allows writing data directly, which leads to high stack usage when a combination of data needs to be written to storage (an example of this can be found in secure storage, where a combination of data properties, data and a validity check need to be written).

That is partially true, and it should be fixed the same way SPI transactions work.
> > b. It inherits and uses the flash erase-block-size, which limits support for building bootloaders that use a combination of internal and external flash that do not have equivalent erase-block-sizes,
>
> That is actually not true; it is a current limitation of MCUboot, which is also not capable of working with devices without erase due to the lack of any mechanism that allows recognizing the validity of data without assuming an erase happened. The problem is also far more complicated, and this is a very superficial description of it, verging into snake-oil advertisement territory.

This limitation is in no way related to devices with and without erase: at the moment there are `st` devices with an erase-block-size of 2kB and external flash that has an erase-block-size of at least 4kB. These devices can only be supported in mcuboot swap-mode by disabling the use of the flash API. The same will be valid for `nxp` devices with an erase-block-size of 512B.
For devices with a small erase-block-size, like the `nxp` devices, there is even an extra problem because they need a large area for the status storage. This is easily avoided in this proposal by defining a storage-area erase-size of 4kB or even larger.
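As a rough illustration of the arithmetic for the 512B case (the macro names are made up for this example):

```c
/* Hypothetical numbers for a part with 512B erase blocks: declaring a
 * 4kB storage-area erase-size bundles 8 device blocks into one logical
 * sector, so status bookkeeping scales with 4kB sectors instead of
 * 512B ones. Erasing one logical sector then simply issues 8 device
 * block erases. */
#define DEV_ERASE_BLOCK_SIZE  512U
#define AREA_ERASE_SIZE       4096U /* user-declared, not inherited */
#define DEV_BLOCKS_PER_SECTOR (AREA_ERASE_SIZE / DEV_ERASE_BLOCK_SIZE) /* 8 */
```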
> > e. The API only allows writing data directly, which leads to high stack usage when a combination of data needs to be written to storage (an example of this can be found in secure storage, where a combination of data properties, data and a validity check need to be written).
>
> That is partially true, and it should be fixed the same way SPI transactions work.

While it might be fixable with spi transactions (this takes more or less the same approach), the solution proposed here does not involve any changes to the user-space (flash) API and directly enables use on other backends like eeprom, ram, disk, ...
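To illustrate what such a chunked write could look like on the user side, here is a minimal C sketch; `storage_area_chunk`, `storage_area_progv` and `write_record` are hypothetical names invented for this example (loosely modelled on how SPI buffer sets work), not the API from the PR:

```c
#include <stddef.h>
#include <stdint.h>

struct storage_area; /* opaque, as in the earlier sketch */

/* One element of a vectored ("chunked") write. */
struct storage_area_chunk {
	const void *data;
	size_t len;
};

/* Hypothetical vectored program call: the backend walks the chunks and
 * writes them back-to-back at the given offset. */
int storage_area_progv(const struct storage_area *area, size_t off,
		       const struct storage_area_chunk *ch, size_t cnt);

/* A secure-storage style record (properties + data + validity check)
 * written in one call. */
int write_record(const struct storage_area *area, size_t off,
		 const void *props, size_t props_len,
		 const void *data, size_t data_len, uint32_t crc)
{
	const struct storage_area_chunk ch[3] = {
		{ .data = props, .len = props_len },
		{ .data = data, .len = data_len },
		{ .data = &crc, .len = sizeof(crc) },
	};

	return storage_area_progv(area, off, ch, 3U);
}
```

The point of the sketch is that the properties, payload and crc never need to be copied into a single contiguous stack buffer before being written.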
> This limitation is in no way related to devices with and without erase: at the moment there are `st` devices with an erase-block-size of 2kB and external flash that has an erase-block-size of at least 4kB. These devices can only be supported in mcuboot swap-mode by disabling the use of the flash API. The same will be valid for `nxp` devices with an erase-block-size of 512B.

The problem of bundling erase pages together is solvable for MCUboot, specifically if the erase-block sizes of the devices differ by some multiple of N (and it actually should have been approached a long time ago, but was not). The problem with the ST devices is that the layout is completely non-uniform; that could also be fixed by bundling pages on the uniform external device to reflect the layout of the stm part. Still, a move will then require moving by the largest block in the range, which basically means that mcuboot may need an option to just move/swap by a configurable highest possible page size, set at compile time. The NXP devices have a write-block-size problem in the context of mcuboot, not an erase-block-size one.
> For devices with a small erase-block-size, like the `nxp` devices, there is even an extra problem because they need a large area for the status storage. This is easily avoided in this proposal by defining a storage-area erase-size of 4kB or even larger.

The NXP devices have a write-block-size problem in the context of mcuboot, not an erase-block-size one. The problem is that tracking a swap would mean multiple rewrites of the blocks that store the swap log, which will wear out the last sectors quicker than the others, or require more of them, taking away image space. We are weighing a solution to the problem, because the issue is annoying for possibly every device with a write-block-size of, let's say, more than 32 bytes.
## Introduction

For years zephyr has been using the `flash_area` API to work with images and storage solutions. This API has limited the development of zephyr because:

a. It is limited to flash, which resulted in not being able to use eeproms as a means for (settings, data, ...) storage,
b. It inherits and uses the flash erase-block-size, which limits support for building bootloaders that use a combination of internal and external flash that do not have equivalent erase-block-sizes,
c. It forces users to create (flash) devices for more enhanced areas that would consist of a combination of flash and ram,
d. It forces users to create (flash) devices to support zephyr images that are stored on disk,
e. The API only allows writing data directly, which leads to high stack usage when a combination of data needs to be written to storage (an example of this can be found in secure storage, where a combination of data properties, data and a validity check need to be written).
All of this means that an alternative to the `flash_area` API is needed.
## Problem description

To solve the issues from the introduction, a new subsystem for working with storage is required. This new subsystem should solve the above problems and also be easily extendable for storage devices that are not yet supported. The subsystem should allow us to work with data on flash, eeprom, rram, mram, ram, disk, (file ?), ...
## Proposed change

A new subsystem, `storage_area`, is proposed. A `storage_area` is no more than a definition of how a storage backend will be used, and it provides the required API to read/write/erase the backend. The `storage_area` API also provides a means to read and write chunks of data. It is a simple construct that allows simple addition of future extensions.

From the start, the new subsystem supports `storage_area` on flash, eeprom, ram and disk.

Together with the `storage_area`, a `storage_area_store` is proposed as a basic building block for storage solutions. In a `storage_area_store` the storage area is divided into sectors, and data is stored as simple records that are validated by a crc32. Each sector starts with a sector `cookie` that allows differentiating between several `storage_area_stores` and allows a version and/or data format description to be added by the storage solution. The `storage_area_store` supports solutions with and without persistence requirements: it can be used as a direct replacement of `fcb`, and it is an ideal base for id-value (`nvs`) and/or key-value storage solutions. Present limitations imposed by e.g. `nvs` regarding write-block-size or erase-block-size are removed. The ability to write and read in chunks also makes it an ideal base for the `secure_storage` subsystem.
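To make the sector and record layout easier to picture, here is a minimal C sketch; all names and field sizes are illustrative assumptions rather than the actual definitions from the PR:

```c
#include <stdint.h>

/*
 * Illustrative on-media layout:
 *
 *   sector: | cookie | record | record | ... |
 *   record: | header | payload ... | crc32 |
 */

/* Written at the start of each sector. */
struct sas_cookie {
	uint32_t magic;   /* differentiates between several stores */
	uint16_t version; /* lets the storage solution describe its data format */
	uint16_t _pad;
};

/* Precedes each record; a crc32 over header + payload follows the
 * payload and validates the record when it is read back. */
struct sas_record_hdr {
	uint16_t len; /* payload length */
};
```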
## Dependencies
The new subsystem has no dependencies.
@nashif, @butok, @andrisk-dev, @dleach02, @erwango, @frastm, @brix, @de-nordic