Hi! Thank you for the PR. Let's do it: the ability to efficiently download a CSV is a great addition.
Indeed, I think we should be able to do that without Handlebars (using the `csv` library that we already include), and reuse the existing `csv` component name and parameters. This way the component downloads the CSV directly when used in header position, and creates a download button when used in the middle of the page.
We will need to implement a mechanism to handle row-level parameters cleanly in header components (in `HeaderContext`), but it can then be reused for the `json` component.
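Usage could look something like this (a sketch of the proposal; `my_table` and the file names are placeholders):

```sql
-- download.sql: the component in header position, the HTTP response is the CSV file itself
select 'csv' as component, 'export.csv' as filename;
select * from my_table;

-- page.sql: the same component in the middle of a page renders a download button instead
select 'shell' as component, 'My data' as title;
select 'csv' as component, 'export.csv' as filename, 'Download my data' as title;
select * from my_table;
```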
Hi (again),
I would love to reuse the `csv` component, but I'm concerned it might confuse users, since the same component could be used either as a button or as an HTTP response. This new option would require two separate files: one for the download button and another for the endpoint. Is the button a `csv` or a `button` component now?
As for Handlebars, that's what I thought.
Can you provide the mechanism? From what I understand, I may see a way to do it with the `HeaderContext`, but it's not very clean, as the body/stream is handled in this header section...
Moreover, it might take some time, since I don't master Rust as you do 😉
Hello @mtt-artis! I just implemented the usage of `csv` as a header component, with efficient streaming of the CSV file, and proper CSV encoding and escaping.
I would be very grateful if you could test the latest version from master and give your feedback!
Hi @lovasoa 👋
Thank you for being so productive.
I ran some tests with a PostgreSQL database, and the results look accurate: they're consistent with the JSON component's output.
```
serial_col,bigserial_col,smallint_col,integer_col,bigint_col,decimal_col,numeric_col,real_col,double_col,money_col,char_col,varchar_col,text_col,bytea_col,date_col,time_col,timestamp_col,timestamptz_col,interval_col,boolean_col,uuid_col,inet_col,cidr_col,json_col,jsonb_col,jsonb_arr_col
1,1,32767,2147483647,9223372036854775807,99999.99999,99999.99999,3.4028234663852886e38,1.7976931348623157e308,,ZZZZZZZZZZ,ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ,Max Text Value,,9999-12-31,23:59:59,9999-12-31T23:59:59+00:00,9999-12-31T23:59:59+00:00,,true,,,,"{""max_key"":""max_value""}","{""data"":[""1"",""2"",""3"",1,2,3,[],{""max_key"":""max_value""},{}],""max_key"":""max_value""}","[""1"",""2"",""3"",1,2,3,[],{""max_key"":""max_value""},{}]"
```
vs
{"serial_col":1,"bigserial_col":1,"smallint_col":32767,"integer_col":2147483647,"bigint_col":9223372036854775807,"decimal_col":99999.99999,"numeric_col":99999.99999,"real_col":3.4028234663852886e38,"double_col":1.7976931348623157e308,"money_col":"","char_col":"ZZZZZZZZZZ","varchar_col":"ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ","text_col":"Max Text Value","bytea_col":"","date_col":"9999-12-31","time_col":"23:59:59","timestamp_col":"9999-12-31T23:59:59+00:00","timestamptz_col":"9999-12-31T23:59:59+00:00","interval_col":"","boolean_col":true,"uuid_col":"","inet_col":"","cidr_col":"","json_col":{"max_key":"max_value"},"jsonb_col":{"data":["1","2","3",1,2,3,[],{"max_key":"max_value"},{}],"max_key":"max_value"},"jsonb_arr_col":["1","2","3",1,2,3,[],{"max_key":"max_value"},{}]
`quote` and `escape` options could be useful and are not yet included in the documentation.
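If they work like the `quote` parameter in the query further down, setting them would presumably look like this (a sketch; `escape` as a parameter name is my assumption, and the values are just illustrative):

```sql
select
    'csv' as component,
    'test.csv' as filename,
    ';' as delimiter,
    '"' as quote,  -- character used to wrap fields containing the delimiter
    '\' as escape; -- escape character (assumed parameter name)
select * from all_data_types;
```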
I've noticed some differences compared to the Handlebars CSV version:
- The Handlebars version automatically adds a `.csv` extension to the filename, which doesn't happen in the stream version: https://github.com/sqlpage/SQLPage/blob/main/sqlpage/templates/csv.handlebars#L21
- The Handlebars version has a separator at the end of each line.
- Different option names: `separator` vs `delimiter`.
- `"\t"` as delimiter is not possible in the stream version:
```sql
select
    'csv' as component,
    'test.csv' as filename,
    '\t' as delimiter,
    '?' as quote,
    TRUE as bom;
select * from all_data_types;
```
```
An error occurred before starting to send the response body: Invalid csv delimiter: "\\t". It must be a single byte.
```
The fact that this query outputs a valid CSV is super cool 👍
```sql
select
    'csv' as component,
    'Download my data' as title,
    'test.csv' as filename,
    'file-download' as icon,
    'green' as color,
    TRUE as bom;
select * from all_data_types;
select bigserial_col from all_data_types;
```
Thanks for the feedback! I'll try to fix the inconsistencies.
For the tab separator, I think the problem comes from your query: in SQL, `'\t'` represents a literal backslash followed by a literal `t`. You can produce a tab with a literal single tab character in single quotes, or using E-strings: https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS-ESCAPE
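For example, with an E-string the `\t` is decoded into an actual tab character before SQLPage ever sees it:

```sql
select
    'csv' as component,
    'test.csv' as filename,
    E'\t' as delimiter; -- E'...' interprets \t as a single tab byte
select * from all_data_types;
```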
> For the tab separator, I think the problem comes from your query: in SQL, `'\t'` represents a literal backslash followed by a literal `t`. You can produce a tab with a literal single tab character in single quotes, or using E-strings: https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS-ESCAPE
You're right! The query below works for both the Handlebars version and the header component 👍
```sql
select
    'csv' as component,
    'Download my data' as title,
    'test.csv' as filename,
    'file-download' as icon,
    'green' as color,
    E'\t' as separator,
    E'\t' as delimiter,
    TRUE as bom;
```
Another minor inconsistency is that errors in the Handlebars version return an empty file, whereas the CSV stream returns `""\n`.
Hi 👋,
This commit follows the discussion in #260 (better late than never!). It implements the `csv_attachment` component to return data as a CSV file. It works, but it feels a bit hacky due to the use of `is_embedded` and Handlebars. Please feel free to make any changes, rename the component, or point me in the right direction. If you like this implementation, I can add the component to the documentation.
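Usage would look something like this (a sketch; the `filename` parameter is an assumption, mirroring the existing `csv` component):

```sql
select
    'csv_attachment' as component,
    'export.csv' as filename; -- assumed parameter, mirroring the csv component
select * from all_data_types;
```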