axone-protocol / contracts

πŸ“œ Smart contracts for the Axone protocol (powered by CosmWasm)
https://axone.xyz
BSD 3-Clause "New" or "Revised" License

πŸ—œοΈ Inconsistency in Object Storage with Different Compression Status #583

Open bdeneux opened 2 weeks ago

bdeneux commented 2 weeks ago

Description

While implementing issue #556, an inconsistency was observed in a specific scenario related to object storage. This inconsistency arises when an attempt is made to store an object that already exists in the storage but with a different compression status.

Currently, if an object is stored without any compression and later an attempt is made to store the same object with compression, the contract operation completes successfully. However, the compression is not applied to the object. This results in a discrepancy between the expected state of the object (compressed) and its actual state in the storage (uncompressed).

This inconsistency can lead to confusion, as the state of the object in the storage does not reflect the parameters used in the storage request. It is expected that if an object is stored with compression, the object in the storage should be compressed, regardless of its previous state.
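The behavior described above can be sketched in plain Rust. This is a minimal, hypothetical model (the names `store_object`, `StoredObject`, and the `Compression` variants are illustrative, not the contract's actual types): the early return for an existing object silently drops the requested compression parameter.

```rust
use std::collections::HashMap;

// Hypothetical compression status; illustrative names only.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Compression {
    None,
    Snappy,
}

#[derive(Clone, Debug, PartialEq, Eq)]
struct StoredObject {
    data: Vec<u8>,
    compression: Compression,
}

// Mimics the current behavior described in the issue: if the object id
// already exists, the call succeeds but the stored entry is left
// untouched, so the requested compression is silently ignored.
fn store_object(
    storage: &mut HashMap<String, StoredObject>,
    id: &str,
    data: Vec<u8>,
    compression: Compression,
) -> Result<(), String> {
    if storage.contains_key(id) {
        // Existing entry wins: the new compression parameter is dropped.
        return Ok(());
    }
    storage.insert(id.to_string(), StoredObject { data, compression });
    Ok(())
}

fn main() {
    let mut storage = HashMap::new();
    store_object(&mut storage, "obj1", b"hello".to_vec(), Compression::None).unwrap();
    // The second store requests compression; the call still succeeds...
    store_object(&mut storage, "obj1", b"hello".to_vec(), Compression::Snappy).unwrap();
    // ...but the stored object remains uncompressed: the inconsistency.
    assert_eq!(storage["obj1"].compression, Compression::None);
    println!("stored compression: {:?}", storage["obj1"].compression);
}
```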

Proposed Solutions

  1. Apply Compression and Overwrite Existing Object: This solution involves applying the requested compression to the object and overwriting the existing object in storage. This approach ensures that the object's state in storage aligns with the parameters of the most recent storage request.

  2. Return an Error: This solution involves returning an error, such as ObjectAlreadyExists, when an attempt is made to store an object that already exists with a different compression status. This approach prevents the overwriting of existing objects and maintains the immutability of stored objects. It also provides clear feedback to the user about the unsuccessful operation.

Given that object data is immutable, returning an error when an attempt is made to store an object that already exists with a different compression status seems to be the most appropriate solution. This approach respects the immutability of stored objects and provides clear and immediate feedback to the user, thereby preventing potential confusion and errors.
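Solution 2 could look something like the following sketch (again with hypothetical names; the real contract would define its own error variants): re-storing with identical parameters stays an idempotent no-op, while a differing compression status fails with `ObjectAlreadyExists`.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Compression {
    None,
    Snappy,
}

#[derive(Clone, Debug)]
struct StoredObject {
    data: Vec<u8>,
    compression: Compression,
}

// Hypothetical error type for illustration.
#[derive(Debug, PartialEq, Eq)]
enum StoreError {
    ObjectAlreadyExists,
}

// Proposed behavior (solution 2): storing an existing object with a
// different compression status returns an error instead of silently
// succeeding, preserving immutability of how the object is stored.
fn store_object(
    storage: &mut HashMap<String, StoredObject>,
    id: &str,
    data: Vec<u8>,
    compression: Compression,
) -> Result<(), StoreError> {
    if let Some(existing) = storage.get(id) {
        if existing.compression != compression {
            return Err(StoreError::ObjectAlreadyExists);
        }
        // Same parameters: idempotent no-op, as today.
        return Ok(());
    }
    storage.insert(id.to_string(), StoredObject { data, compression });
    Ok(())
}

fn main() {
    let mut storage = HashMap::new();
    assert!(store_object(&mut storage, "obj1", b"hi".to_vec(), Compression::None).is_ok());
    // Same compression: still fine.
    assert!(store_object(&mut storage, "obj1", b"hi".to_vec(), Compression::None).is_ok());
    // Different compression: the user now gets immediate feedback.
    assert_eq!(
        store_object(&mut storage, "obj1", b"hi".to_vec(), Compression::Snappy),
        Err(StoreError::ObjectAlreadyExists)
    );
    println!("ok");
}
```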

Let me know what you think @amimart, @ccamel

ccamel commented 2 weeks ago

Good point! I'd agree with you @bdeneux, i.e. don't rewrite an object that's already been stored, even if the compression parameter changes, for reasons of immutability. By immutability, I don't just mean the content of the file, but also the way it's stored on the chain.

However, thinking about it, I believe that setting the compression parameter at the file level was a mistake, well, let's just say it raises questions... It'd have been more appropriate to set it at the bucket level. This approach ensures a consistent compression mechanism within a single bucket, addressing the issues of immutability and predictability we encountered: one file corresponds to one checksum and a single compression method.
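The bucket-level idea could be shaped roughly like this (a sketch under assumed names, not the contract's actual state layout): compression is fixed once at bucket instantiation, the store message no longer carries a per-object flag, and one file corresponds to one checksum and one compression method by construction.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Compression {
    None,
    Snappy,
}

// Hypothetical shape: the compression algorithm is a bucket-level
// parameter set at instantiation, shared by every stored object.
struct Bucket {
    compression: Compression,
    objects: HashMap<String, Vec<u8>>,
}

impl Bucket {
    fn new(compression: Compression) -> Self {
        Bucket {
            compression,
            objects: HashMap::new(),
        }
    }

    // No per-object compression argument: the inconsistency from the
    // issue cannot arise, because there is nothing to diverge from.
    fn store_object(&mut self, id: &str, data: Vec<u8>) -> Result<(), String> {
        if self.objects.contains_key(id) {
            return Err("object already exists".to_string());
        }
        // Actual encoding elided; self.compression decides how `data`
        // would be compressed before insertion.
        self.objects.insert(id.to_string(), data);
        Ok(())
    }
}

fn main() {
    let mut bucket = Bucket::new(Compression::Snappy);
    assert!(bucket.store_object("obj1", b"hello".to_vec()).is_ok());
    // Re-storing is rejected; compression can no longer differ per call.
    assert!(bucket.store_object("obj1", b"hello".to_vec()).is_err());
    println!("bucket compression: {:?}", bucket.compression);
}
```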

bdeneux commented 1 week ago

Moving the compression parameter to the bucket level does indeed seem to offer a more consistent and user-friendly solution. It effectively addresses the issue of immutability we've been facing.

It implies a refactoring with breaking changes, and it offers less flexibility than the current design, but it would be easier to use and more consistent.

I propose we open a separate issue to discuss this refactoring in more detail. If we decide to proceed with the refactoring, it would inherently resolve the current issue, allowing us to close it. Alternatively, if we decide not to refactor at this time, we could implement a temporary workaround to prevent inconsistency. This could involve returning an error in the specific case where the compression parameter changes for an existing object, until such time as we might refactor the bucket compression system.