h0lg / netAmermaid

An automated documentation tool for visually exploring .NET assemblies along type relations using rapid diagramming.
MIT License

Interactive Version inside ILSpy? #6

Open · christophwille opened this issue 3 months ago

christophwille commented 3 months ago

In preparation for our ILSpy developer days in Vienna, I happened across your repository (we have Mermaid diagrams on our topics list).

Would it be possible to turn your solution into an "interactive" variant inside of ILSpy? By that I mean using our existing tree and maybe a WebView2 component in a view window. Or is the processing too heavy for that?

h0lg commented 3 months ago

Hey @christophwille, that's good news, netAmermaid was trying to bridge that gap exactly :)

If I understand you correctly, you want to use the original ILSpy tree for type selection and then render the selected types and their relations using mermaid in a WebView.

I think you could get somewhere by

Concerning the JSON format (see ClassDiagrammer.cs) that the HTML diagrammer factory creates:

If you have further questions, let me know!

christophwille commented 3 months ago

I had a closer look at your solution during our event - and I noticed that it could be "lift & shifted" into our ILSpyX project where it can be used by ilspycmd (automation) as well as ILSpy (interactive). Would that be something you'd be willing to entertain?

For the interactive bits - I was looking at a super-lightweight solution as well, using e.g. https://github.com/samsmithnz/MermaidDotNet to only render a slim diagram at first (along the lines of dotPeek's dependency graph). Maybe drawing the line somewhere between heavy preprocessing and quick interactive rendering.

h0lg commented 3 months ago

> I had a closer look at your solution during our event - and I noticed that it could be "lift & shifted" into our ILSpyX project where it can be used by ilspycmd (automation) as well as ILSpy (interactive). Would that be something you'd be willing to entertain?

Sounds good to me. More eyes and hands on it!

> For the interactive bits - I was looking at a super-lightweight solution as well, using e.g. https://github.com/samsmithnz/MermaidDotNet to only render a slim diagram at first (along the lines of dotPeek's dependency graph). Maybe drawing the line somewhere between heavy preprocessing and quick interactive rendering.

I think I understand where you're going. The netAmermaid CLI is designed as a factory that takes a pre-selection of types and bakes their info into an HTML diagrammer, where the final selection happens. That final step is not necessary if ILSpy does the selecting. You could generate mermaid syntax directly instead of going through a meta-model.

At that point the netAmermaid CLI may truly not be what you want - unless you still like the shape of the ClassDiagrammer.cs output model. An overload of ClassDiagrammerFactory.BuildModel() that takes a collection of ICSharpCode.Decompiler.TypeSystem.ITypeDefinition is cheap. With that you could directly generate mermaid syntax from the output model in C# (effectively re-creating mermaidExtensions.processTypes from script.js) without the need for much JavaScript other than click handlers for navigation in the HTML for the WebView.
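For illustration, the kind of mapping that would need porting might look roughly like this (a JavaScript sketch with made-up property names - the real shapes live in ClassDiagrammer.cs and mermaidExtensions.processTypes):

```js
// Hypothetical, simplified model shape - the real one is defined by ClassDiagrammer.cs.
const model = {
  types: [
    { name: 'Animal', baseType: null, members: ['+string Name'] },
    { name: 'Dog', baseType: 'Animal', members: ['+Bark()'] }
  ]
};

// Turn selected types into mermaid classDiagram syntax - roughly the job
// mermaidExtensions.processTypes does in script.js today.
function toMermaid(types) {
  const lines = ['classDiagram'];

  for (const type of types) {
    lines.push(`  class ${type.name} {`);
    for (const member of type.members) lines.push(`    ${member}`);
    lines.push('  }');

    // inheritance relations between the selected types
    if (type.baseType) lines.push(`  ${type.baseType} <|-- ${type.name}`);
  }

  return lines.join('\n');
}

console.log(toMermaid(model.types));
```

The resulting string could then be handed straight to mermaid in the WebView, without the intermediate JSON step.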

Not sure whether you want to take the performance hit of having to re-generate the HTML and reload the WebView on every interaction though. Maybe you'd want to "buffer" some info by pre-processing (e.g. all types in the namespaces of the selected types, or their first-degree relations) so that the UI updates immediately while you re-generate the next "buffered" model in the background if necessary?

christophwille commented 3 months ago

I really want to go down two routes:

So maybe something along those lines

That would bring netAmermaid into ILSpy for the first bullet item from the top list. However, if that happens, is it still fruitful to keep netAmermaid standalone? (Inside ILSpy it would get way more exposure, at the expense of you having your own repository.) Going NuGet is problematic because you depend on ics.Decompiler, which is part of ILSpyX. So it really boils down to: do you want netAmermaid to join ILSpy?

h0lg commented 3 months ago

> I really want to go down two routes

I see - for the first option, using or merging netAmermaid makes complete sense.

As I've said, merging it into ILSpy sounds good to me. Continuing it as a stand-alone project for vanity makes little sense, as long as the current capabilities of the diagrammer factory CLI get transferred or replaced by ilspycmd:

christophwille commented 3 months ago

So we need to merge the existing ilspycmd CLI options with those of netAmermaid in a sensible manner (no duplication, no naming inconsistencies). The existing ilspycmd ones:

https://github.com/icsharpcode/ILSpy/blob/master/ICSharpCode.ILSpyCmd/README.md

We already have

And there the easy matches stop, because --type doesn't match the expectations of --include.

We need

Would you be willing to prepare the PR? (vanity) If so, here are a few more pointers in addition to those from previous comments:

Does that sound like a plan?

h0lg commented 3 months ago

> Would you be willing to prepare the PR?

Absolutely. Most of what you say sounds reasonable and good to me. I do have a few questions and remarks:

> the technical background (how it works, do steal parts of this thread from your explanations) maybe as an excluded README in the ILSpyX/Diagrams folder

What do you mean by "excluded"? Excluded from where or what?

the "end user" docs as a Wiki page

Do you mean the HTML diagrammer doco? And do you suggest moving the markdown into a GitHub wiki page? If so, will it be in source control? If not, wouldn't it be better to keep it in source control to make it easier to keep the doco in sync?

> The UI version ("Save Project" lookalike) - maybe start out with only really copying that and ignoring all other options (like the default cmd line you show in the netAmermaid README)

I think I understand what you mean. Please confirm or correct me: In the ILSpy UI, add an assembly tree context menu entry for generating an HTML diagrammer from the selected assembly, similar to the existing "🖫 Save Code..." entry. In the first version, leave out advanced options like type selection, i.e. generate a diagrammer for all types in the assembly.

christophwille commented 3 months ago

> Would you be willing to prepare the PR? the technical background (how it works, do steal parts of this thread from your explanations) maybe as an excluded README in the ILSpyX/Diagrams folder

> What do you mean by "excluded"? Excluded from where or what?

File visible in Solution Explorer, but not copied to output. That's what I meant by excluded (from build).

the "end user" docs as a Wiki page

> Do you mean the HTML diagrammer doco? And do you suggest moving the markdown into a GitHub wiki page? If so, will it be in source control? If not, wouldn't it be better to keep it in source control to make it easier to keep the doco in sync?

Wiki on GH is source-controlled (https://docs.github.com/en/communities/documenting-your-project-with-wikis/viewing-a-wikis-history-of-changes) - the reason for choosing this is mostly that we already have end-user-facing docs in that location. It is a bit easier to maintain as well and doesn't pollute the git history with non-code changes.

> The UI version ("Save Project" lookalike) - maybe start out with only really copying that and ignoring all other options (like the default cmd line you show in the netAmermaid README)

> I think I understand what you mean. Please confirm or correct me: In the ILSpy UI, add an assembly tree context menu entry for generating an HTML diagrammer from the selected assembly, similar to the existing "🖫 Save Code..." entry. In the first version, leave out advanced options like type selection, i.e. generate a diagrammer for all types in the assembly.

Exactly. Our Save Code also doesn't sport all the options ilspycmd has - the UI should offer easily accessible features, and "pro" usage is covered by ilspycmd or, if there is demand (i.e. people opening issues), by adding a "pro" UI as well.

Btw, we also have https://github.com/icsharpcode/ILSpy/tree/gh-pages - ilspy.net. Now, index.md does redirect intentionally, but e.g. we could host some samples for, say, AvalonEdit or SharpZipLib there to show off the diagramming capabilities (as a future PR to that branch).

h0lg commented 3 months ago

Got it - thanks for elaborating, those are fair points.

Concerning something you wrote earlier:

> ./html turned into a node-only-tooling story

What do you mean by this? For context: At the moment, the netAmermaid solution contains two projects: a C# console project building the diagrammer factory, and a Node.js project for developing the diagrammer. But the latter only really uses NPM to pull in development resources - namely eslint for the script and gulp for transpiling the .less into CSS. This is where I could also pull in mermaid.js to get rid of the CDN, as you mentioned. What would you like me to change about that setup?

christophwille commented 3 months ago

> ./html turned into a node-only-tooling story

> What do you mean by this? For context: At the moment, the netAmermaid solution contains two projects: a C# console project building the diagrammer factory, and a Node.js project for developing the diagrammer. But the latter only really uses NPM to pull in development resources - namely eslint for the script and gulp for transpiling the .less into CSS. This is where I could also pull in mermaid.js to get rid of the CDN, as you mentioned. What would you like me to change about that setup?

I meant specifically to get rid of https://github.com/h0lg/netAmermaid/blob/main/html/html.njsproj and instead only use node to do the necessary LESS-to-CSS conversion and whatever else would be necessary - and recommend VS Code for editing those files (JS is way better served by VS Code than VS). Yes, I am voting for two tools - VS only including the (committed) output of the VS Code editing of ./html.
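A node-only setup along those lines could boil down to a gulpfile as small as this (a sketch assuming gulp and gulp-less; the file names are illustrative):

```js
// gulpfile.js - transpile the .less styles into the .css the C# project ships
const gulp = require('gulp');
const less = require('gulp-less');

// turn styles.less into styles.css next to it
gulp.task('css', () =>
  gulp.src('./styles.less')
    .pipe(less())
    .pipe(gulp.dest('./'))
);

// re-run the transpilation whenever the .less source changes
gulp.task('watch', () => { gulp.watch('./styles.less', gulp.series('css')); });
```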

Please add a .gitignore (https://www.toptal.com/developers/gitignore/api/node) in ./html - do not modify the root one.

h0lg commented 3 months ago

I understand, thanks. And you're right, the JS tooling is better in VSCode.

Let's think about what that means for developing the HTML app: in order to test and debug the HTML template, script and styles, you'll need to run them through ilspycmd to generate a diagrammer. And not just any build, but one that uses the updated template and resources. So for a smooth DevEx, and to replace the post-build event I used for this in VS, the VS Code node.js project would need a run script that looks a bit like this:

  1. transpile the .less into .css
  2. either 2.a build ilspycmd using the dotnet CLI (to have a build using up-to-date resources) or 2.b copy the up-to-date resources to where the ilspycmd build used in the next step picks them up
  3. call ilspycmd to generate a diagrammer
  4. open that diagrammer in a browser

2.a would be the cleaner but slower solution. 2.b would mean "modifying" e.g. the resources in the ilspycmd Debug/html build output, but would make rebuilding it unnecessary - which would be a lot quicker (see the sketch below).
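Assuming option 2.b, such a run script might look roughly like this (the paths, the sample assembly and the ilspycmd option are made up for illustration - they'd depend on how the feature lands):

```js
// devloop.js - inner-loop sketch for option 2.b: patch resources into an existing ilspycmd build
const { execSync } = require('child_process');
const { copyFileSync } = require('fs');

const run = cmd => execSync(cmd, { stdio: 'inherit' });

// 1. transpile the .less into .css
run('npx gulp css');

// 2.b copy the up-to-date template and resources into the ilspycmd build output
//     (illustrative paths - wherever that build picks them up)
for (const file of ['template.html', 'script.js', 'styles.css'])
  copyFileSync(`./${file}`, `../ICSharpCode.ILSpyCmd/bin/Debug/html/${file}`);

// 3. call ilspycmd to generate a diagrammer (the option name here is hypothetical)
run('dotnet ../ICSharpCode.ILSpyCmd/bin/Debug/ilspycmd.dll --generate-diagrammer -o ./out SampleAssembly.dll');

// 4. open the generated diagrammer in the default browser (Windows)
run('start ./out/class-diagrammer.html');
```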

Let me know what you think.

christophwille commented 3 months ago

What if we invert the DevEx? Like having generated data (not checked in, obviously) sitting next to the html/css/js? So basically the C# code would not need to run once you have a test data set, and you can keep iterating on the html/css/js, having a nice inner loop.

h0lg commented 3 months ago

> What if we invert the DevEx? Like having generated data (not checked in, obviously) sitting next to the html/css/js? So basically the C# code would not need to run once you have a test data set, and you can keep iterating on the html/css/js, having a nice inner loop.

I have thought about your idea for a bit, and yes, it can be done - but maybe not as easily as you imagine. The reason boils down to the browser security model: you cannot reference .json files from an .html file opened from the file:// system, because cross-origin requests are only supported for HTTP.

So if you want the diagrammer to keep working from the file system, you have to bake the JSON into the document. That means having a pure node.js inner loop as you describe would require two rather smelly pieces of code:

  1. Some C# code and/or config to create the test data (which is not checked in), which half-runs ilspycmd to create a diagrammer but then redirects the model JSON output to a file - an execution path that is otherwise useless (because of the above CORS restriction). ilspycmd would need an extra parameter, switch, or commented/conditional code for that.
  2. A simplified node.js version of the C# code that bakes the JSON into the HTML template and saves it as a diagrammer - and that has to stay in sync with the original to some degree.

I don't know about you, but the localized weirdness of copying the HTML template, script.js and styles.css over to the ilspycmd Debug/html output folder and then running that build from node.js against e.g. itself to create a diagrammer looks a lot more attractive right now.

christophwille commented 3 months ago

I have used https://www.npmjs.com/package/http-server in the past to get around the local-files problem in node. Wouldn't something like "if I don't have JSON in the HTML, look for a defined file outside" work?

Edit: to clarify - I meant to use http-server for DevEx, not at runtime. And the intelligent switch "JSON in HTML, JSON in separate file" would allow that, as well as a future option for users to choose between inline JSON and an external file for publishing to the Web or local consumption via https://www.nuget.org/packages/dotnet-serve/

h0lg commented 3 months ago

I don't know.

Unless you are planning to skip generating the diagrammer from an HTML template in the dev pipeline altogether (which you could do and look at template.html directly in a browser, I guess?), introducing an HTTP server to get around the CORS issue adds unnecessary complexity IMO. When the diagrammer is generated from the HTML template, you might as well bake in the JSON, in C# or node.js. There is little benefit even for a web version to have the JSON in a separate file. Yes, you could lazy-load it and may get a faster initial page load - but you won't be able to do anything until the JSON is loaded anyway. And that web build won't work locally without an HTTP server. Do you really want to open that can of future bug reports? I'd keep it simple. We're talking about static JSON data here.

If I understand your idea correctly, it requires:

Generating a diagrammer in node.js from pre-generated JSON is probably a bit quicker than running ilspycmd on a sample target assembly. But that's the only benefit to that solution I can think of.

At face value, it looks more complex with more moving parts and a new external dependency - and I can think of some other reasons you might not want to go there:

I guess you could split the C# diagrammer generation into

  1. generating a model.json from an assembly and
  2. generating a diagrammer from a model.json and a template.html

which could at least spare you the duplicated node.js diagrammer creation.

But I still don't know - the pipeline I've outlined above seems a lot simpler (i.e. more robust), and as I've pointed out, I don't see a good reason for the layover at JSON airport other than gaining a few seconds in DevEx. If I'm missing something, please point it out to me.

christophwille commented 3 months ago

The main goal is to make the JS/HTML/CSS dev loop independent of running ilspycmd. Here's the idea again:

You'd run ilspycmd once to get a model.json, and that is copied into the ./html folder. Then the web dev can iterate on the JS/HTML/CSS without ever calling ilspycmd again.

The idea is to modify template.html so that, if there is no inline JSON, it goes looking for a model.json. And that's it.
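Something like this in the template's script would express that switch (a rough sketch - the global name is made up):

```js
// script.js sketch: prefer the model baked into the HTML, otherwise fall back
// to a model.json next to the page (dev scenario, served over HTTP)
async function loadModel() {
  // the generator would replace a placeholder with the inlined JSON;
  // the global name used here is hypothetical
  if (window.classDiagrammerModel) return window.classDiagrammerModel;

  // no inline model - fetch the external file instead
  const response = await fetch('model.json');
  return response.json();
}

loadModel().then(model => console.log('model loaded', model));
```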

h0lg commented 2 days ago

@christophwille I've had some time to think about and look into this and succeeded in tightening the inner dev loop for the HTML diagrammer. The solution I came up with works almost like you imagined it, but instead of running the template.html in the browser, there is a gulp task to "fill in" the template with the model.json and generate a class-diagrammer.html output with the model.json baked in. This way you can stay in the JS/HTML/.less editing loop after generating a model.json once, for which there also is a task. No HTTP server required to get around CORS restrictions. Try out the VS Code tasks calling the corresponding gulp tasks and let me know whether this is what you had in mind for the DevEx.
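In spirit, the fill-in boils down to something like this (a simplified sketch - the placeholder token and file locations are stand-ins, not the actual implementation):

```js
// gulpfile.js excerpt - bake model.json into template.html to get a self-contained diagrammer
const gulp = require('gulp');
const { readFileSync, writeFileSync } = require('fs');

gulp.task('fill-template', done => {
  const template = readFileSync('./template.html', 'utf8');
  const model = readFileSync('./model.json', 'utf8');

  // replace the model placeholder with the actual JSON (the token is illustrative)
  writeFileSync('./class-diagrammer.html', template.replace('{{model}}', model));
  done();
});
```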

I noticed there is an issue we didn't consider: the build of the C# project depends on the .less-to-.css transpilation - because it won't be able to copy the .css to any diagrammer output folder if it's not in the build output. I wrote a sketchy pre-build task calling the corresponding gulp task in the html folder. It works on my machine - but I don't know whether it would elsewhere. NPM and VS don't play well together, and I don't know where to fix the paths so all commands and modules are found. I took the inspiration to use %appdata% from the VS Code console - it runs gulp from there despite its local installation, for an unknown reason.

christophwille commented 1 day ago

In the case of ILSpy, the C# projects should not depend on node (not even a hard sell, but more of a complete no-go). Can we do something like dropping the transpiled stuff in a known location where it is actually checked in? In a way that the C# part cares only about the final JS/CSS artifacts, whereas the JS dev is able to modify them in their inner loop.

h0lg commented 1 day ago

Yeah, I figured you wouldn't want to make the C# build depend on that.

Sure, the obvious place to track the transpiled versions would be the html folder, next to their source - where the HTML diagrammer dev loop requires them anyway. We're currently only talking about one .css file - since all the scripts are already in a deployable format.

That would mean including the mermaid.min.js in source control as well, either in its original location inside the normally excluded node_modules folder - or as a copy in the html folder, where the HTML diagrammer's dev loop needs it anyway (I suggest this option). It's currently copied there by the C# export task that generates the model.json, and ignored - but that could be done using another gulp task that updates the tracked version in this location from node_modules.
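That update task could be as trivial as copying the bundle out of node_modules (a sketch; the exact source path depends on the mermaid package layout):

```js
// gulpfile.js excerpt - refresh the tracked mermaid bundle in ./html from node_modules
const gulp = require('gulp');

gulp.task('update-mermaid', () =>
  gulp.src('node_modules/mermaid/dist/mermaid.min.js')
    .pipe(gulp.dest('./'))
);
```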

If that sounds acceptable and the JS/HTML/CSS DevEx is working for you, I'll draft a PR and we can pick it up there.

christophwille commented 15 hours ago

As for mermaid... I was doing some experiments when we last discussed this. It simply is too big to include, even compressed. Reference it from the official CDN?

h0lg commented 4 hours ago

The minified, uncompressed version is about 3.2 MB. I guess you don't want to blow up the repo size by including it. If you mean loading mermaid.js from the official CDN instead of a local copy in the HTML template - sure, I can revert to that.

h0lg commented 3 hours ago

I've removed the npm dependency on mermaid.js, which relieves the C# build process of it as well. The template and diagrammers generated from it now load mermaid.js from the official CDN by default.
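For reference, the CDN-based loading boils down to the standard mermaid bootstrap (the version and URL here are just an example of the general shape, not necessarily what the template pins):

```js
// inside the template, loaded via <script type="module">
import mermaid from 'https://cdn.jsdelivr.net/npm/mermaid@10/dist/mermaid.esm.min.mjs';

// let mermaid pick up and render elements with class="mermaid" on page load
mermaid.initialize({ startOnLoad: true });
```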

For users interested in offline use, I've added a download link with basic instructions to the HTML template.