Closed alerque closed 5 years ago
Merging #57 into master will **decrease** coverage by 1.07%. The diff coverage is 80%.

```diff
@@            Coverage Diff             @@
##           master      #57      +/-   ##
==========================================
- Coverage   85.31%   84.23%   -1.08%
==========================================
  Files           2        2
  Lines         177      184       +7
==========================================
+ Hits          151      155       +4
- Misses         26       29       +3
```

| Impacted Files | Coverage Δ | |
|---|---|---|
| `yq/__init__.py` | 84.15% <80%> (-1.08%) | :arrow_down: |

Continue to review the full report at Codecov.

> Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update 4f78ac1...52e89dc.
Hmm, it seems `-j` is actually a `jq` argument already, so let's not clobber that.
Hmmm, this won't work: the code currently makes some assumptions, such as that the input and output formats are always the same. My patch works only when JSON is kept as the output format, not when combined with `-y`.
Okay, I "fixed" the case of output conversion.

```console
$ echo '{"a":true}{"b":false}' | ./yq -p -s -y .
- a: true
- b: false
$ echo '{"a":true}{"b":false}' | ./yq -p -y .
a: true
---
b: false
```
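For what it's worth, the multi-document conversion shown above can be sketched in a few lines of Python. This is only an illustration of the idea, not yq's actual code; `json_stream_to_yaml` is a name I made up, and I'm assuming PyYAML (which yq already depends on) for the YAML side:

```python
# Minimal sketch: convert a stream of concatenated JSON documents to YAML,
# either slurped into one list (like -s -y) or as separate documents (like -y).
# The function name is illustrative, not yq's real API.
import json

import yaml  # PyYAML, which yq already uses


def json_stream_to_yaml(text, slurp=False):
    """Convert concatenated JSON documents to YAML output."""
    decoder = json.JSONDecoder()
    docs, pos = [], 0
    while pos < len(text):
        # raw_decode() raises on leading whitespace, so skip it between docs.
        while pos < len(text) and text[pos].isspace():
            pos += 1
        if pos == len(text):
            break
        doc, pos = decoder.raw_decode(text, pos)
        docs.append(doc)
    if slurp:
        # Like `yq -s -y`: wrap all input documents in a single list.
        return yaml.safe_dump(docs, default_flow_style=False)
    # Like `yq -y`: one YAML document per JSON document, "---"-separated.
    return yaml.safe_dump_all(docs, default_flow_style=False)
```

For example, `json_stream_to_yaml('{"a":true}{"b":false}', slurp=True)` produces the `- a: true` / `- b: false` list output shown above.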
My fix is only temporary; I think a general reorganization would make the whole thing a lot more flexible. I started to mess with it, but I'm out of time. I'll open another issue for that.
Thanks for the input, but I don't think this is needed. The solution to the original problem is to pass the multiple JSON files as separate arguments (`yq file1.json file2.json`) instead of concatenating them all to stdin.
I understand there is a workaround, and I can certainly program around this, but I think you're missing the point here. I don't have JSON files. I have a stream coming in from an API (in my case, lots of results coming in batches of the largest number of queries the API can handle at once). Of course I could write it all out to files (or named pipes or whatever) and funnel them all back in. The issue is that `jq` handles this case natively (that's the whole point of the slurp argument), but by wanting YAML output I am limited to an entirely different (and much more complex) workflow, when a simple fix here would allow `yq` to be used in place of `jq` whenever YAML output is desired.

As it is, I'm piping stuff to `jq` just to turn around and send it into `yq`, which then passes it back to `jq`. Is that really necessary? Or would a pass-through option, so that `jq` only has to see the data once, be better?
This would close #56 using my suggestion of a pass-through option that doesn't parse the input streams at all, hence allowing any JSON that would be valid `jq` input instead of only what can be parsed as if it were YAML.

Note: I'm not a Python programmer, so the implementation may be a bit on the hackish side of things, but I'm willing to fix it if you point me in the direction of something.
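To sketch what I mean by a pass-through mode: hand the raw stdin to `jq` exactly once, and only convert `jq`'s output to YAML afterwards. Everything below is hypothetical — `jq_passthrough` and its parameters are names I invented for illustration, not yq's actual API — but it shows the shape of the idea:

```python
# Hypothetical sketch of a pass-through mode: yq never parses the input;
# jq sees the raw stream once, and only jq's output is converted to YAML.
# jq_passthrough and its parameters are illustrative, not yq's real API.
import json
import subprocess

import yaml  # PyYAML


def jq_passthrough(jq_args, stdin_text, jq_cmd=("jq", "-c")):
    # jq's -c flag prints one compact JSON document per line,
    # which makes splitting its output trivial.
    result = subprocess.run(list(jq_cmd) + list(jq_args), input=stdin_text,
                            stdout=subprocess.PIPE, universal_newlines=True,
                            check=True)
    docs = [json.loads(line)
            for line in result.stdout.splitlines() if line.strip()]
    # One YAML document per jq output document, separated by "---".
    return yaml.safe_dump_all(docs, default_flow_style=False)
```

With something like this, an API stream could be piped straight through (say, `curl ... | yq --pass-through -y .`, where `--pass-through` is a hypothetical flag name) and `jq` would only have to see the data once.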