Closed sc0ttj closed 5 years ago
After this issue is resolved, these liquid filters should be easy enough:

- `csv_to_array` - convert the given CSV data to a hash (or array of hashes)
- `array_to_csv` - convert the given hash (or array of hashes) to CSV
- `array_to_json` - convert the given hash (or array of hashes) to JSON

done - merged https://github.com/sc0ttj/mdsh/pull/85
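As a rough sketch of the kind of helper these filters imply (the interface here is an assumption, not the implementation merged in the PR), a `csv_to_array`-style function in bash might look like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch only: read CSV from stdin, use the header row as keys,
# and emit one "key=value" pair per field, with a blank line between rows.
# (No quoting/escaping support - a real filter would need more care.)
csv_to_array() {
  local IFS=',' header row i
  read -r -a header                 # first line: column names
  while read -r -a row; do          # remaining lines: data rows
    for i in "${!header[@]}"; do
      printf '%s=%s\n' "${header[$i]}" "${row[$i]}"
    done
    echo
  done
}

printf 'id,name\n1,Bob\n' | csv_to_array
# -> id=1
# -> name=Bob
```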
From https://jekyllrb.com/docs/datafiles/:
## To do

- `assets/data` folder

## Ideas

Structure:
^ Both `foo.json` and `bar.yaml` would be made available to all pages, in the `site_data` .. so their contents are available to the Markdown (and sub-shells), the templates, and build scripts/process generally (as shell vars, arrays, etc).

But the files in `data/my-cool-post` should be parsed only when building from the file `my-cool-post.mdsh`.
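That scoping rule could be sketched in shell like this (variable names and layout are illustrative, not mdsh's actual build code): site-wide files are always parsed, while a page's own data dir is parsed only for the matching `.mdsh` file:

```shell
# Illustrative sketch of the scoping rule (not mdsh's actual build code).
page="my-cool-post.mdsh"
page_data_dir="data/${page%.mdsh}"     # per-page data, e.g. data/my-cool-post

# site-wide data: parsed for every page
for f in assets/data/*; do
  [ -f "$f" ] && echo "parse site-wide: $f"
done

# per-page data: parsed only when building the matching .mdsh file
if [ -d "$page_data_dir" ]; then
  for f in "$page_data_dir"/*; do
    [ -f "$f" ] && echo "parse page-only: $f"
  done
fi
```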
## JSON and YAML parsing
more-data.yaml:
something.json:
In both cases, the data would be made available during the build process as:
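For illustration only (the variable-naming scheme here is an assumption), a parser might turn YAML keys into plain shell variables and arrays along these lines:

```shell
# Hypothetical result of parsing a YAML file like:
#   title: My Site
#   authors:
#     - Bob
#     - Alice
# into shell variables/arrays (the naming scheme is an assumption):
site_data_title="My Site"
site_data_authors=("Bob" "Alice")

echo "$site_data_title"            # -> My Site
echo "${site_data_authors[1]}"     # -> Alice
```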
^ Once the JSON/YAML parsers declare the above, these variables and arrays/hashes should be available in the templates like any others (`$page_title`, etc).

## Using parsed data in templates
Once parsed, the data could be used in templates like so:
Indexed arrays:
Associative arrays:
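As a rough illustration (the array name and its contents are made up, not mdsh's real data), a template helper could walk a bash associative array like so:

```shell
# Illustrative only: walk an associative array as a template helper might.
# "page_meta" and its fields are assumptions, not mdsh's real data.
declare -A page_meta=( [title]="My Post" [author]="Bob" )
for key in "${!page_meta[@]}"; do
  printf '%s: %s\n' "$key" "${page_meta[$key]}"   # iteration order is unspecified
done
```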
See the `site_header` function for an example.

## Easier associative arrays
It will be easier to use associative arrays in templates after the `foreach` function is implemented as an iterator in `mo`:

Associative arrays, using `foreach`:

Indexed arrays, using `foreach`:

## Useful libraries
As shell by default is piss poor with multi-dimensional data structures (like JSON, CSV and YAML), we need to add some Shell, AWK, Python or Perl scripts to the `functions/` dir, which can then be used to make the `yaml`/`csv`/`json` data available to our build process and templates :)

Essentially, the ideal scripts:
```json
{ "ppl": [ { "id": 1 }, { "id": 2 } ] }
```
would let us do:

(Once working, these data processing/filtering scripts could also be used in various new liquid filters.)
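Whatever the final syntax ends up being, the effect on the example JSON above would be something like this awk-based extraction (purely illustrative; a real script in `functions/` would want a proper JSON parser):

```shell
# Illustrative only: pull the "id" values out of the example JSON with awk.
# A real functions/ script would use an actual JSON parser.
json='{ "ppl": [ { "id": 1 }, { "id": 2 } ] }'
ids=$(printf '%s\n' "$json" |
  awk -F'"id": ' '{ for (i = 2; i <= NF; i++) { split($i, a, /[ ,}]/); print a[1] } }')
echo "$ids"    # -> 1 and 2, one per line
```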
Shell scripts:
AWK scripts:
Perl scripts:
Python scripts: