I think a good approach would be to create a filestore at the application level (similar to the jsonstore already created for application data) that can store binary data in any backend storage.
For now, an easy (and not so bad) implementation would be to store the files in a DB table (a second table in the same syndesis db) with three fields:
id
filename
blob
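A minimal sketch of that three-field table, using an in-memory SQLite database purely for illustration (the real table would live in the syndesis db; the table and column names here are assumptions):

```python
import sqlite3
import uuid

# Illustrative version of the proposed table; names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE filestore ("
    " id       TEXT PRIMARY KEY,"   # the random file uid
    " filename TEXT NOT NULL,"
    " blob     BLOB NOT NULL)"
)

# Store one file and read it back by its uid.
file_id = str(uuid.uuid4())
conn.execute(
    "INSERT INTO filestore (id, filename, blob) VALUES (?, ?, ?)",
    (file_id, "extension.jar", b"\x50\x4b\x03\x04"),  # e.g. a zip/jar header
)
row = conn.execute(
    "SELECT filename, blob FROM filestore WHERE id = ?", (file_id,)
).fetchone()
```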
The storage pattern would be:
create a (random) file uid
store the data
use the file uid in other parts of the schema
Main access patterns: GET/POST/DELETE
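The storage pattern and the GET/POST/DELETE access patterns above could be sketched as a small store class like this (again using SQLite as a stand-in backend; the class and method names are invented for illustration):

```python
import sqlite3
import uuid

class FileStore:
    """Hypothetical application-level filestore backed by a DB table."""

    def __init__(self):
        self._db = sqlite3.connect(":memory:")
        self._db.execute(
            "CREATE TABLE filestore (id TEXT PRIMARY KEY,"
            " filename TEXT NOT NULL, blob BLOB NOT NULL)"
        )

    def create(self, filename, data):          # POST
        file_id = str(uuid.uuid4())            # 1. create a random file uid
        self._db.execute(                      # 2. store the data
            "INSERT INTO filestore VALUES (?, ?, ?)",
            (file_id, filename, data),
        )
        return file_id                         # 3. uid referenced elsewhere

    def read(self, file_id):                   # GET
        row = self._db.execute(
            "SELECT blob FROM filestore WHERE id = ?", (file_id,)
        ).fetchone()
        return row[0] if row else None

    def delete(self, file_id):                 # DELETE
        self._db.execute("DELETE FROM filestore WHERE id = ?", (file_id,))

store = FileStore()
uid = store.create("my-extension.jar", b"payload")
```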
Tooling may issue a single multipart request to upload the extension or two separate requests (first upload the file, then use the id in the schema).
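The two-request variant might look like this from the tooling side: upload first, then embed the returned id in the schema. Endpoint behavior, the `fileId` field name, and the helper are all hypothetical:

```python
import json
import uuid

def upload_file(data: bytes) -> str:
    """Stands in for a POST to the filestore endpoint, which would
    persist `data` and return the newly created file uid."""
    return str(uuid.uuid4())

# First request: upload the binary, get back a uid.
file_id = upload_file(b"...extension binary...")

# Second request: submit the extension schema referencing that uid.
extension_schema = json.dumps({"name": "my-extension", "fileId": file_id})
```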
This would allow us to later move the file storage to an external S3-like system, which can be useful for remotely synchronized environments.
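Keeping the backend behind a small interface is what makes that move cheap later. A sketch of the abstraction, with all names invented and the S3 variant left as a stub rather than a real client:

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Hypothetical backend-agnostic interface for the filestore."""

    @abstractmethod
    def write(self, file_id: str, data: bytes) -> None: ...

    @abstractmethod
    def read(self, file_id: str) -> bytes: ...

class DbBlobStore(BlobStore):
    """A dict stands in for the DB table in this sketch."""

    def __init__(self):
        self._rows = {}

    def write(self, file_id, data):
        self._rows[file_id] = data

    def read(self, file_id):
        return self._rows[file_id]

class S3BlobStore(BlobStore):
    """Would delegate to an S3 client; deliberately left unimplemented."""

    def write(self, file_id, data):
        raise NotImplementedError("wire up an S3-compatible client here")

    def read(self, file_id):
        raise NotImplementedError

store: BlobStore = DbBlobStore()  # could later become S3BlobStore()
store.write("abc", b"payload")
```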
Opinions?