Closed mlepage-google closed 1 year ago
I expect this is src/include/platform/KvsPersistentStorageDelegate.h, which calls into KeyValueStoreManager::Get, which eventually lands in src/platform/Linux/KeyValueStoreManagerImpl.cpp, where we have, in _Get:
if ((err != CHIP_NO_ERROR) && (err != CHIP_ERROR_BUFFER_TOO_SMALL))
{
    return err;
}
and otherwise we copy however much data fits and return CHIP_NO_ERROR.
That is definitely a violation of the KeyValueStoreManager API documentation.
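For illustration, here is a minimal self-contained sketch of what a contract-compliant Get would look like (this is not the actual CHIP source; the in-memory store and the error-code enum are stand-ins): copy what fits, report how much was copied, and return CHIP_ERROR_BUFFER_TOO_SMALL instead of silently reporting success.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-ins for the real CHIP error codes.
enum ChipError
{
    CHIP_NO_ERROR                                = 0,
    CHIP_ERROR_BUFFER_TOO_SMALL                  = 1,
    CHIP_ERROR_PERSISTED_STORAGE_VALUE_NOT_FOUND = 2,
};

static std::map<std::string, std::vector<uint8_t>> gStore;

// Sketch of a compliant Get: copy what fits, report the copied size,
// and surface CHIP_ERROR_BUFFER_TOO_SMALL when the value was truncated.
ChipError Get(const std::string & key, void * value, size_t value_size, size_t * read_bytes_size)
{
    auto it = gStore.find(key);
    if (it == gStore.end())
        return CHIP_ERROR_PERSISTED_STORAGE_VALUE_NOT_FOUND;

    const std::vector<uint8_t> & stored = it->second;
    size_t to_copy                      = std::min(value_size, stored.size());
    std::memcpy(value, stored.data(), to_copy);
    if (read_bytes_size != nullptr)
        *read_bytes_size = to_copy;

    // This is the check the Linux implementation is missing: do not report
    // success when the caller's buffer could not hold the whole value.
    return (stored.size() > value_size) ? CHIP_ERROR_BUFFER_TOO_SMALL : CHIP_NO_ERROR;
}
```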
This varies by platform as follows:
@mlepage-google Maybe we need separate issues here for the different buggy platforms?
I suspect we need to fix this to behave correctly, because various other code will not work right if this is wrong and we can end up with serious security problems.
@bzbarsky-apple I would say it should be fine for the Zephyr/nRFConnect platform. The _Get implementation requests a value read from the Zephyr subsystem, and the destination buffer size is verified in the Zephyr callback: https://github.com/project-chip/connectedhomeip/blob/master/src/platform/Zephyr/KeyValueStoreManagerImpl.cpp#L72. The result assigned in the callback is then returned by the _Get method: https://github.com/project-chip/connectedhomeip/blob/master/src/platform/Zephyr/KeyValueStoreManagerImpl.cpp#L125
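The callback pattern described above can be modeled abstractly like this (a simplified stand-in, not the real Zephyr settings API): the storage layer hands the read callback the entry's full length, so the callback can decide the error before copying rather than discovering the truncation afterwards.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <functional>
#include <vector>

// Hypothetical stand-ins for the real CHIP error codes.
enum ChipError
{
    CHIP_NO_ERROR               = 0,
    CHIP_ERROR_BUFFER_TOO_SMALL = 1,
};

// Simplified stand-in for a settings subsystem that streams a stored entry
// to a callback, telling the callback the entry's full length up front.
static ChipError LoadEntry(const std::vector<uint8_t> & stored,
                           const std::function<ChipError(const uint8_t *, size_t)> & cb)
{
    return cb(stored.data(), stored.size());
}

// The callback checks the destination buffer size against the stored
// length before copying, so the result is decided at read time.
static ChipError ReadIntoBuffer(const std::vector<uint8_t> & stored, void * dest, size_t dest_size,
                                size_t & read_size)
{
    return LoadEntry(stored, [&](const uint8_t * data, size_t len) {
        read_size = std::min(len, dest_size);
        std::memcpy(dest, data, read_size);
        return (len > dest_size) ? CHIP_ERROR_BUFFER_TOO_SMALL : CHIP_NO_ERROR;
    });
}
```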
Finding based on survey results in general
Based on the responses to the survey, these are the conclusions we have made for Matter 1.0:
- … PersistentStorageDelegate::kKeyLengthMax from 32 to a smaller size.
- … SyncGetKeyValue as defined currently. Some platforms might need to perform additional work, which can be quite costly if the stored value is very large, but fundamentally there is nothing preventing them from doing so.

Considerations to explore for PersistentStorageDelegate post 1.0:
- Because implementing SyncGetKeyValue as defined currently is costly on some platforms (they need to allocate a buffer the size of the stored value, perform a full read, and then partially copy into the smaller provided buffer), we might want to consider fine-tuning the API contract. For example, if the size of the value in storage is under a certain threshold, implementations of SyncGetKeyValue would be required to do what the contract describes today; if the size is beyond the threshold, an implementation may choose to do the partial read, but if the operation is too costly it could return an error such as CHIP_ERROR_PERSISTED_STORAGE_FAILED. The threshold would be a value such that platforms could use stack space instead of being forced to allocate a costly large buffer just to be compliant with the API contract.
- … Credentials::kMaxCHIPCertLength * 2, which is why we are deferring a decision on this until after 1.0.
- … f/a1/* to remove all keys associated with a fabric index 0xa1); we need to formalize the requirement and possibly assist in a solution for those that have already gone down the path of hashing keys.

Uses of PersistentStorageDelegate in the SDK to explore post 1.0:
- Users of PersistentStorageDelegate should avoid using raw packed structs. Raw packed structs are really not future-proof and don't age well; they should instead try using Matter TLV structures.
- The ability for platforms to quickly audit their PersistentStorageDelegate has been created. Based on https://github.com/project-chip/connectedhomeip/projects/86#card-83751232, many platforms have passed the audit. At this point it is on the platforms themselves to pass the audit. If a platform is not adhering to the definitions and fails the audit, a separate issue can be created for that platform specifically.
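The post-1.0 threshold idea from the survey findings could be sketched like this (kPartialReadThreshold and the surrounding types are hypothetical, purely to illustrate the proposed contract, not anything in the SDK):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical stand-ins for the real CHIP error codes.
enum ChipError
{
    CHIP_NO_ERROR                       = 0,
    CHIP_ERROR_BUFFER_TOO_SMALL         = 1,
    CHIP_ERROR_PERSISTED_STORAGE_FAILED = 2,
};

// Hypothetical threshold: small enough that an implementation can stage the
// value in stack space rather than heap-allocating the full stored size.
constexpr size_t kPartialReadThreshold = 256;

// Decide how an undersized destination buffer must be handled under the
// proposed contract. canAffordPartialRead models whether this platform is
// willing to pay for the full read + partial copy on a large value.
ChipError ClassifyUndersizedRead(size_t stored_size, size_t buffer_size, bool canAffordPartialRead)
{
    if (stored_size <= buffer_size)
        return CHIP_NO_ERROR; // Buffer fits; a normal read.
    if (stored_size <= kPartialReadThreshold)
        return CHIP_ERROR_BUFFER_TOO_SMALL; // Small value: partial read is mandatory.
    // Large value: partial read is optional; a platform may bail out instead.
    return canAffordPartialRead ? CHIP_ERROR_BUFFER_TOO_SMALL : CHIP_ERROR_PERSISTED_STORAGE_FAILED;
}
```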
Problem
Using chip::Server::GetPersistentStorage on Linux (e.g. in chip-all-clusters-app), whatever storage implementation is being used under the hood, it isn't returning a CHIP_ERROR_BUFFER_TOO_SMALL error (as documented) when calling SyncGetKeyValue with a buffer that is too small; it just silently truncates the data.
E.g. I wrote 4 bytes and passed in a buffer of size 2, and got back no error and size 2 (but because the stored value was actually 4 bytes, I should have received an error).
Proposed Solution
Audit code to ensure implementation is following the published API.
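One shape such an audit could take (a sketch against a hypothetical minimal delegate interface, not the actual SDK test suite): write a known value, then read it back through a deliberately undersized buffer and verify the documented error is returned instead of a silently truncated success.

```cpp
#include <cstdint>
#include <cstring>
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-ins for the real CHIP error codes.
enum ChipError
{
    CHIP_NO_ERROR               = 0,
    CHIP_ERROR_BUFFER_TOO_SMALL = 1,
    CHIP_ERROR_NOT_FOUND        = 2,
};

// Hypothetical minimal delegate interface, just for this sketch.
struct Delegate
{
    virtual ~Delegate()                                                                           = default;
    virtual ChipError SyncSetKeyValue(const std::string & key, const void * value, uint16_t size) = 0;
    virtual ChipError SyncGetKeyValue(const std::string & key, void * buffer, uint16_t & size)    = 0;
};

// Audit check: a 4-byte value read through a 2-byte buffer must yield
// CHIP_ERROR_BUFFER_TOO_SMALL, not a silently truncated CHIP_NO_ERROR.
bool PassesBufferTooSmallAudit(Delegate & d)
{
    const uint8_t value[4] = { 1, 2, 3, 4 };
    if (d.SyncSetKeyValue("audit-key", value, sizeof(value)) != CHIP_NO_ERROR)
        return false;
    uint8_t buffer[2];
    uint16_t size = sizeof(buffer);
    return d.SyncGetKeyValue("audit-key", buffer, size) == CHIP_ERROR_BUFFER_TOO_SMALL;
}
```

Running this against each platform's delegate would quickly separate compliant implementations from ones with the truncation bug described above.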