lbryio / lbrycrd

The blockchain that provides the digital content namespace for the LBRY protocol
https://lbry.com
MIT License

added scripts to generate parsable API docs #187

Closed. BrannonKing closed this PR 6 years ago.

BrannonKing commented 6 years ago

Issue: #131. There are two scripts (a rough sketch of the general approach follows the list):

  1. jrgen, matching https://github.com/mzernetsch/jrgen
  2. "v1" which matches https://github.com/lbryio/lbry/blob/master/scripts/generate_json_api.py
eukreign commented 6 years ago

It seems like there is a mix of py2- and py3-exclusive syntax here: the bare `print` statement in contrib/devtools/clang-format.py is a syntax error under py3, and `def get_obj_from_dirty_text(full_object: str):` uses a function annotation, which is a syntax error under py2.

Is this intentional? Should all of the code target py2 only, py3 only, or work under both?
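
For anyone skimming: the two constructs are mutually exclusive, since a bare `print` statement only parses under py2 and function annotations only parse under py3. If the goal were code that runs under both, the usual fix is a `__future__` import plus a type comment; a minimal sketch (the function body here is hypothetical, not the code from this PR):

```python
# Portable across py2 and py3: print becomes a function via __future__,
# and the annotation moves into a PEP 484 type comment.
from __future__ import print_function

def get_obj_from_dirty_text(full_object):
    # type: (str) -> str
    print("parsing:", full_object)
    return full_object.strip()
```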

BrannonKing commented 6 years ago

@eukreign, the clang-format scripts predate my time here; I just made them run, which they didn't before. Judging from the copyright messages in them, I don't know that anyone here wrote them. As far as I know, they have always been py2 scripts. For this PR, the focus should be on the "docs" scripts.

eukreign commented 6 years ago

Alright, it was in this PR so I thought it was related. Which files should I review as part of this PR?

lyoshenka commented 6 years ago

@BrannonKing can this be merged? Also, as part of this, can you run the script (the one that matches the daemon script) and commit the output as well?

BrannonKing commented 6 years ago

Open questions on this:

  1. Should we check in the generated output as well? (Anyone making changes to the API would then be responsible for rerunning the script and checking in the updated file.)
  2. Should we make the reproducible build script run the document generation?
  3. Should we make TravisCI run the document generation? And tag changes? Or have it check in the output?

We have a request to make the output available on the web.

eukreign commented 6 years ago

Following up on my answer on Slack about how to unit test these scripts: to use the generated output as the test itself, you have to commit and version the output deliberately (not via Travis), so that when the docs do change you can verify the change is correct (did the docs change because the input source changed, or because the generator script changed?). Therefore, my advice would be:

  1. Yes.
  2. I don't know what this means; something lbrycrd-specific, I assume?
  3. If you have extra time to dedicate to this task, then it would definitely be beneficial to run the generate script, verify that the latest output matches the previously committed output, and fail the build if it doesn't (asking the committer to run the doc build script locally, verify the changes make sense, and then commit the new JSON doc file). A sketch of such a check follows.
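
As a concrete shape for point 3, the CI step can regenerate the docs into a scratch file and fail when that diverges from the committed copy. A minimal sketch, assuming the generator can be invoked as a script that writes to the path given as its first argument (the script name and both paths are placeholders):

```python
#!/usr/bin/env python3
"""CI check sketch: fail the build if the generated API docs are stale."""
import subprocess
import sys

COMMITTED = "contrib/devtools/generated/api.json"  # placeholder path
FRESH = "/tmp/api.json"                            # scratch output

def main():
    # Regenerate the docs into the scratch location.
    subprocess.check_call([sys.executable,
                           "scripts/generate_json_api.py",  # placeholder name
                           FRESH])
    with open(COMMITTED) as committed, open(FRESH) as fresh:
        if committed.read() != fresh.read():
            # Exits with status 1 and prints the message to stderr.
            sys.exit("API docs are stale: rerun the generator locally, "
                     "review the diff, and commit the updated file.")
    print("API docs are up to date.")

if __name__ == "__main__":
    main()
```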
lbrynaut commented 6 years ago

Looks good to me. Has this been run through clang-format? It seems Travis is only unhappy with the style.

BrannonKing commented 6 years ago

I have not run the formatter. I'm having to undo 20 years of habit to remember to format before checking in. I still want to make that a report rather than a requirement.

It sounds like we would additionally like a report from Travis (or the local build) stating that the generated API doc needs to be updated. That involves starting the server. Have we done that on Travis before? Do we run any integration tests on Travis?

lbrynaut commented 6 years ago

> Do we run any integration tests on Travis?

No. Possibly related to #164.