bergkvist opened this issue 4 years ago
This would be amazing! Marking help wanted.
Requires the yaml extension to work (screenshot omitted). Note that the circled strings in that screenshot could point to a website hosting the static JSON schema file instead of a local file path.
A potentially useful starting point: https://github.com/chrusty/protoc-gen-jsonschema
The main complexity here is going to be dealing with the Any/extension model within a pure JSON schema. We will need to keep a map of extensions, and then somehow allow the sub-extension config to be built by schema/plugin.
also relevant: https://github.com/redhat-developer/vscode-yaml
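To make the Any/extension problem concrete, here is a minimal sketch (in Python, as plain data) of one way an Any-valued typed_config field could be modelled in JSON Schema: a oneOf over known extensions, discriminated by @type. This is an illustration of the idea only, not the schema this issue ends up generating; the extension name and definition key are just examples.

```python
# Illustrative only: one possible JSON Schema shape for an Any-valued
# `typed_config` field, expressed as a Python dict. Each oneOf branch
# pins the "@type" discriminator and pulls in that extension's schema
# from a shared definitions map.
typed_config_schema = {
    "type": "object",
    "required": ["@type"],
    "oneOf": [
        {
            "properties": {
                "@type": {
                    "const": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
                }
            },
            # The extension's own fields come from its definition entry.
            "allOf": [
                {"$ref": "#/definitions/envoy.extensions.filters.http.router.v3.Router"}
            ],
        },
        # ...one branch per extension registered for this extension point...
    ],
}
```

The hard part alluded to above is maintaining the map of extensions per extension point, so that branches like these can be generated rather than hand-written.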
It seems like there are a couple of different types of plugin we could create here, using Kubernetes as a good example of a project facing a similar set of problems...

The first is schema-based validation/completion of config files. This is definitely the easiest, and probably the most useful; it would provide auto-completion too. The https://github.com/redhat-developer/vscode-yaml plugin provides this, I think (for some reason I've struggled to get it working properly, but I haven't tried too hard).

The second would provide a way of visualizing and perhaps managing active deployments. This would be more akin to what is provided by https://github.com/envoyproxy/envoy-tools/blob/master/envoy-curses/README.md, and is what https://github.com/Azure/vscode-kubernetes-tools provides for Kubernetes. I think in Envoy terms it would require a few assumptions about the type of deployment to be useful.

The first is definitely easier, and I think it would also be the first step towards the second if we wanted to develop that.
@phlax IMO we should focus on the simple code validation effort first. I think this would be the biggest bang for our buck. Note also that we have various plans around an Envoy UI built into Clutch (https://github.com/lyft/clutch), which is somewhat related to this effort. cc @dschaller you might want to sync up with @phlax who is going to be contracting with us for a bit to help improve a bunch of tooling.
> @phlax IMO we should focus on the simple code validation effort first. I think this would be the biggest bang for our buck.
cool, agreed.
On that basis, really it's just a matter of creating the JSON schema, I think, and perhaps posting (and maintaining) it on https://www.schemastore.org/json/
I'm wondering if we would need to create a .vsix or post it on the VSCode marketplace, or whether we just need to document how to use it with vscode-yaml.
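For the "just document how to use it with vscode-yaml" option, the wiring is fairly small. Here is a hedged sketch that drops a schema association into a workspace's .vscode/settings.json using vscode-yaml's yaml.schemas setting; the schema path and file glob are assumptions for illustration.

```python
# Minimal sketch: associate a (hypothetical) generated Envoy schema with
# Envoy yaml files via vscode-yaml's "yaml.schemas" setting.
import json
import pathlib

settings_path = pathlib.Path(".vscode/settings.json")
settings_path.parent.mkdir(exist_ok=True)

# Note: this assumes settings.json is strict JSON (no comments).
settings = json.loads(settings_path.read_text()) if settings_path.exists() else {}

settings.setdefault("yaml.schemas", {})["./bootstrap.schema.json"] = ["envoy*.yaml"]

settings_path.write_text(json.dumps(settings, indent=2) + "\n")
print("Registered ./bootstrap.schema.json for envoy*.yaml")
```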
I've been looking at this again and trying to figure out how best to get this working.
Here is a quick report on progress and what I have found so far.
I had thought that I would need to run protoc against each and every .proto file, and then mangle the output to e.g. set correct links. After experimenting with this further, that seems to be wrong. AFAICT it's only necessary to run protoc against the bootstrap proto file; other required schemas are then added as definitions to the schema file. The only mangling I found necessary to make it work is to update the internal $refs to point to the included definitions.
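For anyone wanting to reproduce this, here is roughly what that workflow looks like as a script. It assumes the chrusty/protoc-gen-jsonschema plugin is on PATH (so protoc exposes it as --jsonschema_out), that the Envoy protos and their dependencies are resolvable from the include paths, and that the plugin emits a Bootstrap.json file; the output name and the exact $ref rewrite rule are assumptions based on the description above.

```python
# Rough sketch of: run protoc against the bootstrap proto, then rewrite
# cross-file $refs to point at the definitions bundled into the schema.
import json
import subprocess

subprocess.run(
    [
        "protoc",
        "--proto_path=.",  # plus paths for xds/udpa/validate deps as needed
        "--jsonschema_out=out",
        "envoy/config/bootstrap/v3/bootstrap.proto",
    ],
    check=True,
)

with open("out/Bootstrap.json") as f:  # output name assumed
    schema = json.load(f)

def rewrite_refs(node):
    """Point any non-local $ref at the schema's own definitions."""
    if isinstance(node, dict):
        ref = node.get("$ref")
        if isinstance(ref, str) and not ref.startswith("#/"):
            name = ref.rsplit("/", 1)[-1]
            if name.endswith(".json"):
                name = name[: -len(".json")]
            node["$ref"] = f"#/definitions/{name}"
        for value in node.values():
            rewrite_refs(value)
    elif isinstance(node, list):
        for item in node:
            rewrite_refs(item)

rewrite_refs(schema)

with open("out/bootstrap.schema.json", "w") as f:
    json.dump(schema, f, indent=2)
```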
I have uploaded the generated schema file (with a .txt extension to allow embedding). This file can be used in vscode (with the .txt extension removed) as outlined here: https://github.com/envoyproxy/envoy/issues/13078#issuecomment-692352798
If anyone is willing to test out and give feedback on the vscode bootstrap jsonschema above, that would be very helpful.
The bootstrap schema doesn't include any extensions, so these will need to be generated separately and then linked in (a rough sketch of the linking step is below). See https://github.com/envoyproxy/envoy/issues/13254#issuecomment-699525377 for discussion of how this might be achieved. This will also require that extensions are associated with extension points, which will need to be addressed first (cf. #13167, #13531).
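As a placeholder for that linking step, here is a hedged sketch of what "generate separately and link in" could look like: merge each extension's definitions into the bootstrap schema's definitions map. The file layout is hypothetical, and it deliberately ignores the harder extension-point association problem discussed in the linked issues.

```python
# Sketch: fold separately generated extension schemas into the bootstrap
# schema's definitions. Paths are hypothetical.
import json
import pathlib

bootstrap = json.loads(pathlib.Path("out/bootstrap.schema.json").read_text())
definitions = bootstrap.setdefault("definitions", {})

for extension_path in sorted(pathlib.Path("out/extensions").glob("*.json")):
    extension = json.loads(extension_path.read_text())
    # Assumes definition names are fully qualified proto names, so they
    # can be merged without collisions.
    definitions.update(extension.get("definitions", {}))

pathlib.Path("out/combined.schema.json").write_text(
    json.dumps(bootstrap, indent=2) + "\n"
)
```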
I'm also seeing a few data quality issues:
- some schemas that should possibly be hidden
- descriptions using annotation fields which are often not that helpful
- it's not clear if all of the fields being shown are valid
- some descriptions (/annotations) have embedded .rst snippets which don't travel well
@phlax personally my feeling is that we should use bazel/python to generate the schema. I honestly think it will be faster/easier to manually write out the JSON schema as a portion of the protodoc process, since it's so closely related to the generated doc tree, documenting extensions, etc.
@htuch @lizan @adisuissa @kyessenov wdyt?
@mattklein123 it is my intention to use bazel, and I'm using python to mangle it already.
For now, though, I'm more interested in what bazel will produce, how well that will work in vscode, and what mangling might be required.
Cool makes sense. Looks like great progress! I agree that there are going to be data quality issues but I think that should be relatively easy to fix once we figure out the types of errors we see.
Re the dodgy renderings, I'm wondering if/what pre-mangling is required before feeding the files to protoc, e.g. stripping annotations.

Regarding rst renderings, I don't think there is an easy fix. One possible medium/long-term fix is to replace the rst with md (IMHO/E md -> rst is kinda easier than the other way round). The md could then be mangled (more easily) before being fed to vscode; a rough sketch of this kind of mangling is below.

Regarding should-be-hidden fields, I guess we want to strip but remember the annotations, so that the fields can be removed in this case.
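As an illustration of the kind of description mangling being discussed, here is a tiny sketch that down-converts a couple of common rst constructs before the text reaches the schema. The patterns are illustrative assumptions, not a real rst parser, and they sidestep the link-checking concern raised below.

```python
# Illustrative only: strip/convert a couple of rst constructs that don't
# travel well once embedded in JSON schema descriptions.
import re

def mangle_description(text: str) -> str:
    # :ref:`link text <target>`  ->  link text
    text = re.sub(r":ref:`([^`<]+?)\s*<[^>]*>`", r"\1", text)
    # ``literal``  ->  `literal`  (closer to markdown)
    text = re.sub(r"``([^`]+)``", r"`\1`", text)
    return text

print(mangle_description("See :ref:`the docs <config_overview>` and ``typed_config``."))
```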
I think one of the killer features of RST is internal link checking. So, we need to preserve this capability in the API docs, no matter what concrete syntax we use for docs.
Re: mangling, I think protodoc already provides some examples of how to do things like annotation stripping (and interpretation if needed).
Re: extension points and bootstrap, this makes sense. One thing I'd keep in mind is that this is an iterative fixed point process; each time you find an extension point, you need to analyse each extension belonging to that extension type, and some of those extensions might themselves contain extension points.
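That fixed-point process might look something like the worklist sketch below; find_extension_points and extensions_for are placeholders for whatever the real tooling provides to enumerate extension points and the extensions registered against them.

```python
# Sketch of the iterative fixed point: keep following extension points
# until no new extensions turn up. The two callables are assumed helpers.
def collect_extensions(root_schema, find_extension_points, extensions_for):
    seen_points = set()
    worklist = list(find_extension_points(root_schema))
    collected = {}

    while worklist:
        point = worklist.pop()
        if point in seen_points:
            continue
        seen_points.add(point)
        for name, extension_schema in extensions_for(point).items():
            collected[name] = extension_schema
            # Extensions can themselves contain extension points, so keep going.
            worklist.extend(find_extension_points(extension_schema))

    return collected
```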
So re rst/md, I'm struggling to ascertain how links should be formatted for vscode extensions.

vscode has support for a custom field, markdownDescription, which should handle this correctly.

(At least on linux/my local machine) support for formatted descriptions in the vscode yaml adapter is patchy: it works in hover mode but not for autocomplete. I have opened a bug here: https://github.com/redhat-developer/vscode-yaml/issues/417
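For reference, the markdownDescription idea amounts to a small transform over the generated schema; a hedged sketch, reusing the hypothetical mangle_description helper from the earlier sketch as the converter:

```python
# Sketch: duplicate each description into vscode's non-standard
# markdownDescription field so hovers render formatted text.
def add_markdown_descriptions(node, convert=lambda text: text):
    if isinstance(node, dict):
        if "description" in node and "markdownDescription" not in node:
            node["markdownDescription"] = convert(node["description"])
        for value in node.values():
            add_markdown_descriptions(value, convert)
    elif isinstance(node, list):
        for item in node:
            add_markdown_descriptions(item, convert)
```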
Re link validation, there is an md -> rst plugin which I have used; I think we can make it do the right thing in terms of links, and then have the links validated during the rst render.
Converting all of the proto descriptions is quite a big task though, so I reckon it's outside the scope of this bug, and can be tackled afterwards if desired.
any updates on this extension?
> any updates on this extension?
Yep, some progress, although still some way off.

There were some upstream issues that meant it wasn't working well for Envoy's config/schema; these have now been resolved, although I haven't retested since.

ATM I'm focused on simplifying the build pipeline and shifting to docs; artefacts from this are needed to create the schema.

I have a kinda-alpha version working with bootstrap config so far; the very WIP PR is here: #15100, although it's ~broken ATM and needs some updating.
I just created this tool for generating the JSON schemas from the protos: https://github.com/jcchavezs/envoy-config-schema. Does that look like something you would like to adopt?
cc @dio
Thanks, @jcchavezs! Looks like this example here: https://github.com/jcchavezs/envoy-config-schema#examples is really helpful!
Hey, is this something still being worked on? Thanks
Intellisense for envoy configuration files (yaml)
Having to look up keywords in documentation causes a slower developer feedback loop. yaml indentation can also be a challenge to get right at the first attempt with deeply nested structures.
Intellisense can help you quickly get feedback on which keywords are available, which also tells you which "depth" you are at within the yaml tree structure.
An example of an extension like this is the Kubernetes extension for VSCode, which provides intellisense for Kubernetes resource definition files (which also use yaml).
Relevant links