One of the reasons CSV/TSV are so widely used is that they are "line-oriented" formats: an arbitrarily large file can very often be processed without taxing computer memory, either one line at a time or through a suitable "reduction" framework.
There are tools that allow a stream of JSON entities (as envisioned by the JSON Lines specification) to be processed in much the same way that sed or awk process lines of text.
At present, csvq's -f option does not support JSON Lines or any similar streaming output. This severely limits the utility of csvq for converting very large CSV/TSV files to a JSON format.
(Other streaming options for JSON are described on the Wikipedia page: https://en.wikipedia.org/wiki/JSON_streaming )
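For illustration, the kind of bounded-memory conversion this request is about can be sketched in a few lines of Python (the function name here is hypothetical, not part of csvq):

```python
import csv
import io
import json

def csv_to_jsonl(src, dst):
    """Stream CSV rows to JSON Lines, one record at a time.

    Memory use stays bounded regardless of input size, because only
    the current row is ever held in memory.
    """
    reader = csv.DictReader(src)  # first row supplies the field names
    for row in reader:
        dst.write(json.dumps(row) + "\n")  # one JSON object per line

# In-memory streams for illustration; real usage would pass file handles.
src = io.StringIO("a,b\n1,2\n3,4\n")
dst = io.StringIO()
csv_to_jsonl(src, dst)
print(dst.getvalue())
```

The point of the sketch is the shape of the loop, not the language: each input line yields exactly one output line, so the same pattern could be applied inside csvq's writer without buffering the whole result.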