Closed mwpowellhtx closed 5 years ago
I'm not sure what services you are referring to, but the CodeGeneration.Roslyn.Tests are not only unit tests (I don't think there are any, actually), but rather functional tests. Outside of the DocumentTransformTests, the test is whether it will compile or not, because the generators must run for the code to be correct. The DocumentTransformTests test whether DocumentTransform, given preset inputs, will generate an expected source document. But that includes running almost all of the pipeline except parsing the CLI tool arguments: parsing the compilation (one string), resolving references, finding attributes, instantiating the generator, merging results, and spitting out the source document.
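As a rough illustration of the first few pipeline steps described above, here is a hypothetical, self-contained Roslyn sketch (not the actual DocumentTransform implementation): it parses one source string, resolves a single reference, and collects the attribute names that a generator driver would match against registered trigger attributes. The class and variable names are illustrative only.

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class PipelineSketch
{
    static void Main()
    {
        // "Parsing compilation (one string)"
        var tree = CSharpSyntaxTree.ParseText(
            "[Obsolete] public partial class Foo { }");

        // "Resolving references" - just the core library here
        var compilation = CSharpCompilation.Create(
            "InMemory",
            new[] { tree },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });

        // "Finding attributes" on declarations - a real generator driver
        // would match these names against the registered generator attributes
        // before instantiating an ICodeGenerator and merging its results.
        var attributeNames = tree.GetRoot()
            .DescendantNodes()
            .OfType<AttributeSyntax>()
            .Select(a => a.Name.ToString())
            .ToList();

        Console.WriteLine(string.Join(",", attributeNames)); // prints "Obsolete"
    }
}
```

Everything here runs in memory, which is why these tests can exercise nearly the whole pipeline without touching disk.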
The question I am getting a handle on is a couple-fold.

First, a CompilationManager. Not the first time something like that was done, nor would it be the first time I took a pass at that issue, either. Although this time around I might formalize it as a first-class NuGet package.

Second, the CompilationManager as contrasted with saving generated code to disk. That is, perhaps providing an event that receives the generated code just prior to its being saved.

In general I am finding the test cases, loosely so called, somewhat nonsensical, literally cases like deriving a FooA from a class Foo; I mean, really... :wink: However, that being said, I'm just looking for a way that I can more or less subscribe to and/or watch the CG in action.

The main issue there, I think, is that the dotnet-codegen tooling artifact is invoked separately from any of the active (i.e. test) run time. Perhaps I can at least inject a deterministic output directory into the target(s), which the test then watches for the output.

Which is where, I'm starting to think, the rub lies. My unit tests need to provide file-level comprehension of the projects, and their CG, under test.
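The "deterministic output directory" idea above could be sketched roughly like this; note that everything here is hypothetical, including the directory layout and the `*.g.cs` naming convention, and the direct `File.WriteAllText` call is only simulating whatever the codegen target would actually write.

```csharp
using System;
using System.IO;
using System.Threading;

class OutputDirectorySketch
{
    static void Main()
    {
        // A deterministic output directory the test knows about in advance.
        var outputDir = Path.Combine(Path.GetTempPath(), "cgr-test-output");
        Directory.CreateDirectory(outputDir);

        // Watch for generated files appearing in that directory.
        using var generated = new ManualResetEventSlim();
        using var watcher = new FileSystemWatcher(outputDir, "*.g.cs");
        watcher.Created += (_, e) => generated.Set();
        watcher.EnableRaisingEvents = true;

        // Simulate the codegen target writing its output; in reality this
        // would happen in the build invoking the dotnet-codegen tool.
        File.WriteAllText(Path.Combine(outputDir, "Foo.g.cs"), "// generated");

        // The test then asserts the output appeared within a timeout.
        Console.WriteLine(generated.Wait(TimeSpan.FromSeconds(5))
            ? "generated output observed"
            : "timed out");
    }
}
```

A polling loop over `Directory.EnumerateFiles` would work just as well and avoids `FileSystemWatcher` platform quirks; the point is only that a fixed, injected output path gives the test something concrete to observe.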
@amis92 I'm not sure what a functional test is, but yes, more akin to an end-to-end or integration test, if you will.
@mwpowellhtx Pardon me, but I feel like we're talking through a wall almost every time.

Why do you need to test the whole system? Why not just your generators? What is a CompilationManager? It's all already tested in-memory. Where is the question you're referring to? It all feels like you're just writing down your thoughts on some matter that is alien to me, and somehow I'm supposed to guess the context of your reply.

I feel terribly bad, but I'm exhausted, and I don't see myself being able to drill down into your posts for much longer. I'd suggest that if you want to discuss something, you specify the issue at hand as clearly as possible, instead of referencing some situation of yours we have no knowledge of. You've lately often been including references to your own fork, and yet I can't see anything new there. Therefore I am not able to investigate anything on my own.
@amis92 Yes, I apologize, the context is a bit foreign to this project. I am recasting CGR a little bit, and taking the opportunity to better organize the framework, including the test bits, cleaning them up so that they are less enmeshed, and so on. When my prototype is ready I expect to publish it to g/h.
Studying the document transformation tests, is it fair to say this is less a document transformation test than it is, say, an EmptyPartialAttribute, AddGeneratedUsingAttribute, AddGeneratedAttributeAttribute, etc., test, facilitated by the document transformation, that is?
No, I wouldn't say so. These are actually mostly feature/behavior tests for the ICodeGenerator classes, verifying that the results the generator returns are correctly merged into the generated document. So indeed these are DocumentTransform tests, because that's exactly the job of that class.
> No, I wouldn't say so. These are actually mostly feature/behavior tests for the ICodeGenerator classes, verifying that the results the generator returns are correctly merged into the generated document.
Good point. That is an assumption of the merge with partial classes, structs, etc.
@amis92 I pretty much figured out how to test end-to-end without polluting the actual test environment with extraneous or academic "suffix" bits.
But I am also taking the position, with my recasting efforts, that CGR should only be there to facilitate the actual code generation. Nothing more, nothing less. It is left to the CG authors to determine the namespaces or other child nodes they want to relay through the generated compilation units. Which, I'm finding, greatly simplifies the core offering.
It depends on the fact that the tooling netcoreapp is just an assembly like any other, and we can seamlessly integrate with its Program.Main at the unit test level. Short of invoking it from a command line, never mind from Microsoft Build targets, that is about as comprehensive an integration test as there is. Plus I can gain comprehension of scenarios such as "regenerate when source is updated", things of this nature.
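The in-process invocation idea could look roughly like the following hypothetical sketch. Nothing here reflects the actual dotnet-codegen tool: the entry-point signature (an `int`-returning method taking `string[]`), the argument names, and the stand-in `FakeToolProgram` class are all assumptions made for illustration.

```csharp
using System;

class InProcessToolSketch
{
    static void Main()
    {
        // Illustrative arguments only; the real tool's CLI surface may differ.
        var args = new[]
        {
            "--project", "path/to/project",
            "--output", "path/to/generated",
        };

        // In a real test this would be a direct call into the tool's
        // Program.Main, skipping the command line and MSBuild targets.
        int exitCode = FakeToolProgram.Run(args);

        Console.WriteLine(exitCode == 0 ? "tool succeeded" : "tool failed");
    }
}

// Stand-in for the tool's Program class; the method is named Run here only
// to keep a single entry point in this self-contained sketch.
static class FakeToolProgram
{
    public static int Run(string[] args) => args.Length > 0 ? 0 : 1;
}
```

Calling the entry point directly keeps the whole run inside the test process, so assertions can inspect exit codes and output files without shelling out.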
So-called... I am just trying to grasp what is actually being verified there. What I want to do in verifying my recast of CGR is to check whether services were actually invoked. I suppose that is evident indirectly by whether generators are invoked by virtue of their respective attributes triggering. To me this seems like the bottom line. All the other decoration, renaming derivative elements with suffixes, things of this nature, is secondary; I just wanted to gain a little insight. I'll close this issue, but feel free to respond. Thanks so much!