Closed by @lmignon 3 weeks ago
@lmignon
Are these new fs_storage modules ready and safe for production on v16? I'm looking forward to testing this with S3 buckets (Wasabi S3 provider and MinIO provider) for some projects. I have a dev/staging environment ready for testing, but if it's not ready yet, there's no point for me to move forward.
I see basically 3 modules: fs_storage, fs_file and fs_attachment. Are all of them required together? I'm working in a container environment over multiple servers, so I suppose fs_attachment is definitely required. fs_storage is the "base", I suppose? The only confusion is fs_file, which is also still showing alpha status.
Also, are there any docs on how I can explicitly exclude/ignore certain images like thumbnails, etc.? I want to avoid small images also being loaded from an external S3 bucket; I would like those to load from the database instead of the default filestore on the server.
Any pointers and feedback please?
There hasn't been any activity on this issue in the past 6 months, so it has been marked as stale and it will be closed automatically if no further activity occurs in the next 30 days. If you want this issue to never become stale, please ask a PSC member to apply the "no stale" label.
@lmignon Shouldn't migration scripts be created in those modules (for those people that come from v15)?
@MiquelRForgeFlow I've developed my own migration script to migrate from 10 to 16. It's not trivial, but here is the SQL...
Migration of product images and product media:
# Copyright 2023 ACSONE SA/NV
# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl).
import logging
_logger = logging.getLogger(__name__)
def migrate_product_media(cr):
# create temporary columns in fs_product_media and fs_media to store the old storage file id
cr.execute(
"""
alter table fs_product_media add column x_old_storage_file_id integer;
alter table fs_media add column x_old_storage_file_id integer;
"""
)
_logger.info("Create fs_product_media records from storage_file")
cr.execute(
"""
INSERT INTO fs_product_media (
product_tmpl_id,
x_old_storage_file_id,
lang,
sequence,
write_uid,
write_date,
create_uid,
create_date,
name,
mimetype
)
SELECT
r.product_tmpl_id,
f.id,
f.lang,
r.sequence,
f.write_uid,
f.write_date,
f.create_uid,
f.create_date,
f.name,
f.mimetype
FROM
product_media_relation as r,
storage_media as s,
storage_file as f
WHERE
r.media_id = s.id
AND s.file_id = f.id;
"""
)
_logger.info("%s fs_product_media records created", cr.rowcount)
# create fs_media records for media that are linked to more than one product_media_relation
cr.execute(
"""
INSERT INTO fs_media (
x_old_storage_file_id,
lang,
write_uid,
write_date,
create_uid,
create_date,
name,
mimetype
)
SELECT
f.id,
f.lang,
f.write_uid,
f.write_date,
f.create_uid,
f.create_date,
f.name,
f.mimetype
FROM
storage_media as s,
storage_file as f
WHERE
s.file_id = f.id
AND s.id IN (
SELECT
media_id
FROM
product_media_relation
GROUP BY
media_id
HAVING
count(*) > 1
);
"""
)
_logger.info("%s fs_media records created", cr.rowcount)
_logger.info("Link fs_product_media to fs_media")
cr.execute(
"""
UPDATE
fs_product_media as pm
SET
media_id = fs_media.id,
link_existing = True
FROM
fs_media
WHERE
pm.x_old_storage_file_id = fs_media.x_old_storage_file_id;
"""
)
cr.execute("select id from fs_storage where code = 'fsprd_eshop'")
fs_storage_id = cr.fetchone()[0]
_logger.info(
"Create ir_attachment records from storage_file linked to one product_media_relation"
)
cr.execute(
"""
INSERT INTO ir_attachment (
name,
type,
res_model,
res_id,
res_field,
create_uid,
create_date,
write_uid,
write_date,
store_fname,
mimetype,
file_size,
checksum,
fs_storage_id,
fs_url,
fs_storage_code,
fs_filename
)
SELECT
f.name,
'binary',
'fs.product.media',
fs_product_media.id,
'specific_file',
f.create_uid,
f.create_date,
f.write_uid,
f.write_date,
concat('fsprd_eshop://', f.slug),
f.mimetype,
f.file_size,
f.checksum,
%s,
f.url,
'fsprd_eshop',
f.slug
FROM
product_media_relation as r,
storage_media as s,
storage_file as f,
fs_product_media
WHERE
fs_product_media.x_old_storage_file_id = f.id
AND fs_product_media.media_id is null
AND r.media_id = s.id
AND s.file_id = f.id
""",
(fs_storage_id,),
)
_logger.info("%s ir_attachment records created", cr.rowcount)
_logger.info(
"Create ir_attachment records from storage_file linked to one fs_media"
)
cr.execute(
"""
INSERT INTO ir_attachment (
name,
type,
res_model,
res_id,
res_field,
create_uid,
create_date,
write_uid,
write_date,
store_fname,
mimetype,
file_size,
checksum,
fs_storage_id,
fs_url,
fs_storage_code,
fs_filename
)
SELECT
f.name,
'binary',
'fs.media',
fs_media.id,
'file',
f.create_uid,
f.create_date,
f.write_uid,
f.write_date,
concat('fsprd_eshop://', f.slug),
f.mimetype,
f.file_size,
f.checksum,
%s,
f.url,
'fsprd_eshop',
f.slug
FROM
storage_file as f,
fs_media
WHERE
fs_media.x_old_storage_file_id = f.id
""",
(fs_storage_id,),
)
_logger.info("%s ir_attachment records created", cr.rowcount)
def migrate_product_image(cr):
cr.execute(
"""
alter table fs_product_image add column x_old_storage_file_id integer;
alter table fs_image add column x_old_storage_file_id integer;
"""
)
_logger.info("Create fs_product_image records from storage_file")
cr.execute(
"""
INSERT INTO fs_product_image (
product_tmpl_id,
x_old_storage_file_id,
sequence,
write_uid,
write_date,
create_uid,
create_date,
name,
mimetype
)
SELECT
r.product_tmpl_id,
f.id,
r.sequence,
f.write_uid,
f.write_date,
f.create_uid,
f.create_date,
f.name,
f.mimetype
FROM
product_image_relation as r,
storage_image as s,
storage_file as f
WHERE
r.image_id = s.id
AND s.file_id = f.id;
"""
)
_logger.info("%s fs_product_image records created", cr.rowcount)
# create fs_image records for images that are linked to more than one product_image_relation
cr.execute(
"""
INSERT INTO fs_image (
x_old_storage_file_id,
write_uid,
write_date,
create_uid,
create_date,
name,
mimetype
)
SELECT
f.id,
f.write_uid,
f.write_date,
f.create_uid,
f.create_date,
f.name,
f.mimetype
FROM
storage_image as s,
storage_file as f
WHERE
s.file_id = f.id
AND s.id IN (
SELECT
image_id
FROM
product_image_relation
GROUP BY
image_id
HAVING
count(*) > 1
);
"""
)
_logger.info("%s fs_image records created", cr.rowcount)
_logger.info("Link fs_product_image to fs_image")
cr.execute(
"""
UPDATE
fs_product_image as pm
SET
image_id = fs_image.id,
link_existing = True
FROM
fs_image
WHERE
pm.x_old_storage_file_id = fs_image.x_old_storage_file_id;
"""
)
cr.execute("select id from fs_storage where code = 'fsprd_eshop'")
fs_storage_id = cr.fetchone()[0]
_logger.info(
"Create ir_attachment records from storage_file linked to one product_image_relation"
)
cr.execute(
"""
INSERT INTO ir_attachment (
name,
type,
res_model,
res_id,
res_field,
create_uid,
create_date,
write_uid,
write_date,
store_fname,
mimetype,
file_size,
checksum,
fs_storage_id,
fs_url,
fs_storage_code,
fs_filename
)
SELECT
f.name,
'binary',
'fs.product.image',
fs_product_image.id,
'specific_image',
f.create_uid,
f.create_date,
f.write_uid,
f.write_date,
concat('fsprd_eshop://', f.slug),
f.mimetype,
f.file_size,
f.checksum,
%s,
f.url,
'fsprd_eshop',
f.slug
FROM
product_image_relation as r,
storage_image as s,
storage_file as f,
fs_product_image
WHERE
fs_product_image.x_old_storage_file_id = f.id
AND fs_product_image.image_id is null
AND r.image_id = s.id
AND s.file_id = f.id
""",
(fs_storage_id,),
)
_logger.info("%s ir_attachment records created", cr.rowcount)
_logger.info(
"Create ir_attachment records from storage_file linked to one fs_image"
)
cr.execute(
"""
INSERT INTO ir_attachment (
name,
type,
res_model,
res_id,
res_field,
create_uid,
create_date,
write_uid,
write_date,
store_fname,
mimetype,
file_size,
checksum,
fs_storage_id,
fs_url,
fs_storage_code,
fs_filename
)
SELECT
f.name,
'binary',
'fs.image',
fs_image.id,
'image',
f.create_uid,
f.create_date,
f.write_uid,
f.write_date,
concat('fsprd_eshop://', f.slug),
f.mimetype,
f.file_size,
f.checksum,
%s,
f.url,
'fsprd_eshop',
f.slug
FROM
storage_file as f,
fs_image
WHERE
fs_image.x_old_storage_file_id = f.id
""",
(fs_storage_id,),
)
_logger.info("%s ir_attachment records created", cr.rowcount)
_logger.info("Link main image on product_template")
cr.execute(
"""
UPDATE
product_template
SET
main_image_id = sub.id
FROM (
SELECT
fs.id,
pt.id as product_tmpl_id
FROM
fs_product_image fs,
product_template pt,
storage_image si
WHERE
fs.x_old_storage_file_id = si.file_id
AND si.id = pt.x_main_image_id
) AS sub
WHERE
sub.product_tmpl_id = product_template.id;
"""
)
_logger.info("%s main image linked", cr.rowcount)
_logger.info("Link main image on product_product")
cr.execute(
"""
UPDATE
product_product
SET
main_image_id = tmpl.main_image_id
FROM product_template tmpl
WHERE
tmpl.id = product_product.product_tmpl_id;
"""
)
_logger.info("%s main image linked", cr.rowcount)
_logger.info("Link variant image on product_product")
cr.execute(
"""
INSERT INTO fs_product_image_product_product_rel (
product_product_id,
fs_product_image_id
)
SELECT
pp.id,
fs.id
FROM
product_product pp,
product_template pt,
fs_product_image fs
WHERE
pp.product_tmpl_id = pt.id
AND fs.product_tmpl_id = pt.id;
"""
)
_logger.info("%s variant image linked", cr.rowcount)
_logger.info("Migrate image_medium for fs_product_image")
cr.execute(
"""
INSERT INTO ir_attachment (
name,
type,
res_model,
res_id,
res_field,
create_uid,
create_date,
write_uid,
write_date,
store_fname,
mimetype,
file_size,
checksum,
fs_storage_id,
fs_url,
fs_storage_code,
fs_filename
)
SELECT
f.name,
'binary',
'fs.product.image',
fsi.id,
'specific_image_medium',
f.create_uid,
f.create_date,
f.write_uid,
f.write_date,
concat('fsprd_eshop://', f.slug),
f.mimetype,
f.file_size,
f.checksum,
%s,
f.url,
'fsprd_eshop',
f.slug
FROM
storage_file f,
fs_product_image fsi,
storage_image si,
storage_thumbnail st
WHERE
f.id = si.file_id
AND si.file_id = fsi.x_old_storage_file_id
AND st.res_model = 'storage.image'
AND si.id = st.res_id
AND st.size_x = 128;
""",
(fs_storage_id,),
)
_logger.info("%s ir_attachment records created", cr.rowcount)
_logger.info("Migrate image_medium for fs_image")
cr.execute(
"""
INSERT INTO ir_attachment (
name,
type,
res_model,
res_id,
res_field,
create_uid,
create_date,
write_uid,
write_date,
store_fname,
mimetype,
file_size,
checksum,
fs_storage_id,
fs_url,
fs_storage_code,
fs_filename
)
SELECT
f.name,
'binary',
'fs.image',
fsi.id,
'image_medium',
f.create_uid,
f.create_date,
f.write_uid,
f.write_date,
concat('fsprd_eshop://', f.slug),
f.mimetype,
f.file_size,
f.checksum,
%s,
f.url,
'fsprd_eshop',
f.slug
FROM
storage_file f,
fs_image fsi,
storage_image si,
storage_thumbnail st
WHERE
f.id = si.file_id
AND si.file_id = fsi.x_old_storage_file_id
AND st.res_model = 'storage.image'
AND si.id = st.res_id
AND st.size_x = 128;
""",
(fs_storage_id,),
)
_logger.info("%s ir_attachment records created", cr.rowcount)
def migrate(cr, version):
migrate_product_media(cr)
migrate_product_image(cr)
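Both migration functions above add temporary `x_old_storage_file_id` columns that are never dropped. A cleanup step, run once the migration has been verified, could look like this (a hypothetical helper, not part of the original script; the table names come from the migration above):

```python
def drop_temporary_columns(cr):
    """Drop the temporary x_old_storage_file_id mapping columns.

    Hypothetical cleanup helper, to be run only after the migration
    result has been verified. IF EXISTS keeps it safe to re-run.
    """
    for table in ("fs_product_media", "fs_media", "fs_product_image", "fs_image"):
        cr.execute(
            "ALTER TABLE %s DROP COLUMN IF EXISTS x_old_storage_file_id" % table
        )
```

Running it as a separate, later migration step avoids losing the old-to-new mapping while the result is still being checked.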
Migration of search engine (se) thumbnails:
# Copyright 2023 ACSONE SA/NV
# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl).
import logging
_logger = logging.getLogger(__name__)
def migrate_thumbnail(cr):
_logger.info("Create Search Engine Thumbnails")
# create a temporary column in se_thumbnail to store the old storage file id
cr.execute("ALTER TABLE se_thumbnail ADD COLUMN x_old_storage_file_id integer")
# create se_thumbnail records for storage_thumbnail with size_x in (550, 300, 60)
# The sizes are the ones used for the website only
# create thumb for fs_image
cr.execute(
"""
WITH attachment_id_file_id AS (
SELECT
att.id,
fs.x_old_storage_file_id
FROM
ir_attachment AS att
JOIN
fs_image fs
ON
fs.id = att.res_id
AND att.res_model='fs.image'
AND att.res_field='image'
)
INSERT INTO se_thumbnail (
size_x,
size_y,
mimetype,
name,
base_name,
attachment_id,
create_uid,
create_date,
write_date,
x_old_storage_file_id
)
SELECT
size_x,
size_y,
file_thumbnail.mimetype,
file_thumbnail.name,
th_info.url_key AS base_name,
att.id AS attachment_id,
th_info.create_uid,
th_info.create_date,
th_info.write_date,
file_thumbnail.id AS x_old_storage_file_id
FROM
storage_thumbnail th_info
JOIN storage_image image_origin ON th_info.res_id = image_origin.id
JOIN storage_file file_origin ON image_origin.file_id = file_origin.id
JOIN storage_file file_thumbnail ON th_info.file_id = file_thumbnail.id
JOIN fs_image as fs_image_origin ON fs_image_origin.x_old_storage_file_id = file_origin.id
JOIN attachment_id_file_id AS att ON att.x_old_storage_file_id = file_origin.id
WHERE size_x IN (550, 300, 60)
"""
)
_logger.info("%s se_thumbnail records created", cr.rowcount)
# create thumb for fs_product_image
cr.execute(
"""
WITH attachment_id_file_id AS (
SELECT
att.id,
fs.x_old_storage_file_id
FROM
ir_attachment AS att
JOIN
fs_product_image fs
ON
fs.id = att.res_id
AND att.res_model='fs.product.image'
AND att.res_field='specific_image'
)
INSERT INTO se_thumbnail (
size_x,
size_y,
mimetype,
name,
base_name,
attachment_id,
create_uid,
create_date,
write_date,
x_old_storage_file_id
)
SELECT
size_x,
size_y,
file_thumbnail.mimetype,
file_thumbnail.name,
th_info.url_key AS base_name,
att.id AS attachment_id,
th_info.create_uid,
th_info.create_date,
th_info.write_date,
file_thumbnail.id AS x_old_storage_file_id
FROM
storage_thumbnail th_info
JOIN storage_image image_origin ON th_info.res_id = image_origin.id
JOIN storage_file file_origin ON image_origin.file_id = file_origin.id
JOIN storage_file file_thumbnail ON th_info.file_id = file_thumbnail.id
JOIN fs_product_image AS fs_product_image_origin ON fs_product_image_origin.x_old_storage_file_id = file_origin.id
JOIN attachment_id_file_id AS att ON att.x_old_storage_file_id = file_origin.id
WHERE size_x IN (550, 300, 60)
"""
)
_logger.info("%s se_thumbnail records created", cr.rowcount)
cr.execute("select id from fs_storage where code = 'fsprd_eshop'")
fs_storage_id = cr.fetchone()[0]
# create ir_attachment records for storage_thumbnail with size_x in (550, 300, 60)
cr.execute(
"""
INSERT INTO ir_attachment (
name,
type,
res_model,
res_id,
res_field,
create_uid,
create_date,
write_uid,
write_date,
store_fname,
mimetype,
file_size,
checksum,
fs_storage_id,
fs_url,
fs_storage_code,
fs_filename
)
SELECT
f.name,
'binary',
'se.thumbnail',
se_thumbnail.id,
'image',
f.create_uid,
f.create_date,
f.write_uid,
f.write_date,
concat('fsprd_eshop://', f.slug),
f.mimetype,
f.file_size,
f.checksum,
%s,
f.url,
'fsprd_eshop',
f.slug
FROM
storage_file as f,
se_thumbnail
WHERE
se_thumbnail.x_old_storage_file_id = f.id
""",
(fs_storage_id,),
)
_logger.info("%s ir_attachment records created", cr.rowcount)
def migrate(cr, version):
migrate_thumbnail(cr)
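After both scripts have run, a quick sanity check can confirm that every old `storage_file` row is now backed by an `ir_attachment`. This is a hypothetical check, not part of the original scripts; it relies on `fs_filename` having been set to the old `storage_file.slug`, as the INSERT statements above do:

```python
def count_unmigrated_files(cr):
    """Return the number of storage_file rows with no matching
    ir_attachment (hypothetical post-migration sanity check)."""
    cr.execute(
        """
        SELECT count(*)
        FROM storage_file f
        WHERE NOT EXISTS (
            SELECT 1 FROM ir_attachment a WHERE a.fs_filename = f.slug
        )
        """
    )
    return cr.fetchone()[0]
```

Note that thumbnail sizes other than 550/300/60 are deliberately not migrated by the scripts above, so a non-zero count is not necessarily an error.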
This RFC proposes a refactoring of the storage backend addons to base them on the standard Odoo model (`ir.attachment`) instead of custom models. In the following, the `storage_backend` addon will be renamed `fs_storage` and `storage.backend` will be renamed `fs.storage`.

## Motivation
The `storage_file` addon mainly provides a custom model, `storage.file`, which is used to store file content and metadata in an external file system. Both models share a lot of the same fields; the fields they do not share are mainly related to specific functionality provided by either `storage.file` or `ir.attachment`. Here is a table with the fields of the `storage.file` model and the corresponding fields in the `ir.attachment` model:

The functional differences are:
- `storage.file` allows access to the file content via 2 URLs stored in the database: `internal_url` and `url`. The `url` is the public URL to access the file content and refers to the file in the external file system; the `internal_url` is the URL to access the file content from the same Odoo server.
- `storage.file` is used to transparently store file content in an external file system. When you manually create a `storage.file` record, you can choose the external file system where the file content will be stored.
- `storage.file` is not transactional: if a file is stored within a transaction and the transaction is rolled back, the file will not be deleted from the external file system.
- `ir.attachment` stores the file content in Odoo's filestore.
- `ir.attachment` is transactional: if a file is stored within a transaction and the transaction is rolled back, the file will be deleted from the filestore by a GC mechanism.
- `ir.attachment` can be transparently used to store content from fields of type `Binary`. This field type comes with a specific UI to upload/download.

## How can we reconcile both models?
### fs_attachment

Since the main difference between the two models is where the file content is stored, the idea is to extend the `ir.attachment` model to allow storing the file content in an external file system. With the latest changes in the `ir.attachment` model, Odoo now provides 3 methods that can be used to hook into the process of storing, retrieving and deleting the file content:

- `_file_read`
- `_file_write`
- `_file_delete`

Such an approach has already been implemented in addons from the Camptocamp odoo-cloud-platform repository: https://github.com/camptocamp/odoo-cloud-platform
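To make the hook pattern concrete, here is a minimal, framework-free sketch (all names and signatures are simplified illustrations, not the real Odoo hooks; `DictFs` stands in for an fsspec filesystem and mimics its `cat`/`pipe`/`rm` calls, and the `"<code>://<path>"` convention mirrors the `store_fname` values written by the migration scripts above):

```python
class DictFs:
    """Minimal in-memory stand-in for an fsspec filesystem (illustrative)."""

    def __init__(self):
        self.files = {}

    def cat(self, path):
        # fsspec equivalent: AbstractFileSystem.cat -> bytes
        return self.files[path]

    def pipe(self, path, value):
        # fsspec equivalent: AbstractFileSystem.pipe(path, bytes)
        self.files[path] = value

    def rm(self, path):
        # fsspec equivalent: AbstractFileSystem.rm
        del self.files[path]


class FsAttachment:
    """Routes the _file_read/_file_write/_file_delete hooks to an external
    filesystem, keyed by a storage-code prefix in store_fname."""

    def __init__(self, fs_by_code):
        self.fs_by_code = fs_by_code  # e.g. {"fsprd_eshop": DictFs()}

    def _split(self, fname):
        code, sep, path = fname.partition("://")
        return (code, path) if sep else (None, fname)

    def _file_read(self, fname):
        code, path = self._split(fname)
        if code in self.fs_by_code:
            return self.fs_by_code[code].cat(path)
        raise NotImplementedError("would fall back to the regular filestore")

    def _file_write(self, code, path, value):
        self.fs_by_code[code].pipe(path, value)
        return "%s://%s" % (code, path)  # stored in ir_attachment.store_fname

    def _file_delete(self, fname):
        code, path = self._split(fname)
        if code in self.fs_by_code:
            self.fs_by_code[code].rm(path)
```

In the real addon, the storage code would resolve an `fs.storage` record and open the filesystem through fsspec; the sketch only shows the routing logic.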
The idea is to use the same approach here, but through the API provided by the fsspec library via the new implementation of the `storage_backend` addon proposed in pull request https://github.com/OCA/storage/pull/250

A new addon, `fs_attachment`, will be created to provide the glue between the `ir.attachment` model and the `fs_storage` addon. A new field, `code`, will be added to the `fs.storage` model to allow choosing the file system where the file content will be stored. As is done in the `base_attachment_object_storage` addon from C2C, it will be possible to globally configure the default file system where the file content is stored for all attachments.

### fs_file
The `storage_file` addon will be replaced by the new `fs_file` addon, which provides 2 new specialized field types: `FSFile` and `FSFileName`. These field types let you add fields on your models that store file content in an external file system. Compared to the `Binary` field type, the `FSFile` field requires 2 additional parameters:

- `storage_code`: the code of the file system where the file content will be stored.
- `field_filename`: the name of the `FSFileName` field.

The `FSFileName` field stores the name of the file. It acts as a related field on the `name` field of the `ir.attachment` record behind the `FSFile` field.

To avoid useless resource consumption when the field content is retrieved for display in the UI, the `convert_to_read` method will be overridden to return a URL to download the file content.

A new JS widget will be created to upload/download the file content as a URL. In the same spirit of minimizing resource consumption, this widget will not base64-encode newly uploaded content and embed it in the JSON document posted to Odoo. Instead, it will call a new controller to upload the file content and, when the form is saved, set the resulting URL as the field value prior to the submission of the form.
To avoid polluting the file system with files that were uploaded but never linked to any record in the database (because of a transaction rollback or trouble when a form was submitted), a GC mechanism will be implemented to delete orphan files.

The implementation of the `FSFile` field will be based on the `Binary` field, but will always put the `storage_code` and `field_filename` into the context. These 2 parameters will be used by the `fs_attachment` addon to select the file system where the file content will be stored and to set the name of the file.

### fs_image
The `storage_image` addon will be replaced by the new `fs_image` addon. It will at least provide a new widget to display image content from the URL provided by the `FSFile` field. If it were not for the automatic thumbnail creation mechanism, this module could be summarised as the creation of 2 new field types: `FSImage` and `FSImageAltName`. The `alt_name` would be a related field to a new `alt_name` field on the `ir.attachment` record behind it. The `FSImage` field would be an extension of the `FSFile` field with 1 optional parameter: `alt_name_field`.

TO BE REFINED