Closed: S1SYPHOS closed this PR 2 years ago
There's lots of other stuff I'd like to know, for example why using the `dataBuffer` is necessary, but since I'm not as experienced as you, I just left it as it was. Apart from that, I turned everything into snake_case, moved things around so they appear in the order of the `run()` function, renamed some variables, etc.
This got out of hand quickly, but I think most of it is alright with you - at least I hope it is :confused:
As suggested, I renamed `csvFile` to `dataFile` to avoid confusion in the future.
I guess you are very busy right now; if it's the PR that's bothering you, just let me know!
Hi Martin, sorry for not merging this PR earlier, I completely forgot. Thanks for your help!
> There's lots of other stuff I'd like to know, for example why using the `dataBuffer` is necessary, but since I'm not as experienced as you, I just left it like it was.
Maybe I took a too obscure approach. The purpose is to be able to handle a single document that consumes multiple rows (via `%SG-NEXT-RECORD%`). Imagine the data file holds 15 data entries, and the Scribus template uses 4 of them. The code at https://github.com/berteh/ScribusGenerator/blob/92d63372196f595d2f41624bf88ead72df94a1cb/ScribusGeneratorBackend.py#L168 will then make 4 substitutions starting from the top of the document template.
Any ideas for simplification are welcome.
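To make the buffering idea concrete, here is a minimal, self-contained sketch of how a data buffer lets one template consume several rows per generated document. It is not the actual ScribusGenerator code: the `generate_documents` function, the `%VAR_name%` placeholder handling, and the in-memory `data_buffer` list are all simplified assumptions for illustration; only the `%SG-NEXT-RECORD%` marker comes from the discussion above.

```python
import csv
import io

NEXT_RECORD = "%SG-NEXT-RECORD%"  # marker name taken from the discussion


def generate_documents(template: str, data_file_content: str):
    """Yield one filled document per group of rows the template consumes.

    A single template may contain NEXT_RECORD markers, so one output
    document can consume several data rows. Rows are therefore read
    into a buffer first, then taken off the front as each document
    is produced (hypothetical simplification of the real backend).
    """
    reader = csv.DictReader(io.StringIO(data_file_content))
    data_buffer = list(reader)  # all rows, consumed in order
    rows_per_doc = template.count(NEXT_RECORD) + 1

    while data_buffer:
        chunk = data_buffer[:rows_per_doc]
        data_buffer = data_buffer[rows_per_doc:]
        parts = template.split(NEXT_RECORD)
        filled = []
        # Each template segment between markers is filled from the
        # next buffered row; a short final chunk simply fills fewer
        # segments (a limitation of this sketch).
        for part, row in zip(parts, chunk):
            for key, value in row.items():
                part = part.replace("%VAR_" + key + "%", value)
            filled.append(part)
        yield "".join(filled)
```

With 15 rows and a template that uses 4 rows (3 markers), this yields 4 documents: three that consume 4 rows each and a final one that consumes the remaining 3.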
> Apart from that, I turned everything into snake_case, moved stuff around in order that it appears in the `run()` function, renamed some variables, etc. This got out of hand quickly, but I think most of it is alright with you - at least I hope it is :confused:
This PR adds support for JSON files being passed as `csvFile`, see #10. I'd suggest, though, that everything related to CSV be renamed to 'data', so `csvFile` becomes `dataFile`, etc. BUT CSV-specific variables should keep their names, like `csvSeparator`. Where an equivalent is needed, as with `CSV_ENCODING`, a JSON variant should be added, in this case `JSON_ENCODING`.