@jtkech @Jetski5822 I kind of agree with it, but at the same time I have already talked with you about it, and I think you had good reasons to do it, but I can't remember any.
Everything @DanielStolt said makes sense, and I'm open to all suggestions.
That said, I remember there are different things to take into account, but I don't remember the details. Let me re-think about this; I will elaborate.
As a starting point, it may be good to read part of the dynamic compilation wiki page.
Best.
I was not aware of that wiki page (apologies) but now I have read it.
I think the arguable assumption made in that wiki page is this:
Dynamic compilation is useful in a development context: you can update a module source file, a package, or a core project and just hit F5, and all dependent modules will be dynamically re-compiled. When a module has a dependency on a core project which is not part of Orchard.Web, this non-ambient project is also re-compiled at runtime if needed.
Dynamic compilation is not actually needed for this scenario ("just hit F5"). What I always do instead is add a project dependency in the solution (which is not the same as a project reference) from Orchard.Web to the third-party module. This ensures that the third-party module is always built when you hit F5. Mostly I don't do this in the original Orchard.sln, but rather create my own copy of the solution file and make these kinds of additions there, to avoid altering the original Orchard.sln and to make it easier to pull subsequent Orchard updates into my repo down the road without getting merge conflicts.
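For what it's worth, once projects move to the csproj format (discussed later in this thread), a similar build-order-only dependency can also be expressed in MSBuild itself. A sketch, where the module path is a made-up example:

```xml
<!-- In Orchard.Web.csproj: build the module whenever Orchard.Web builds,
     without adding an assembly reference to its output. -->
<ItemGroup>
  <ProjectReference Include="..\Modules\My.ThirdParty.Module\My.ThirdParty.Module.csproj"
                    ReferenceOutputAssembly="false" />
</ItemGroup>
```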
No worries, I'm re-reading the wiki before giving more details on this.
I do the same as @DanielStolt.
Deployment of scenario number 1 should be handled by using containerization like Docker, to be sure everything works before deployment. Deployment scenario number 2 should be managed by the Orchard Gallery, where it will get NuGet packages and/or GitHub releases of "precompiled modules". There should be no reason for us to publish source code on a production server... and it should never try to recompile anything, since doing so adds loading time across the entire application. Of course, Razor views will still have dynamic compilation on prod servers, but that's expected.
To add one more thing here - for people who really care about that "dynamic compilation"-like developer experience, there are things like the dotnet watch command.
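For example, this is roughly how dotnet watch is wired up with the csproj-based tooling (a sketch; the exact version number is illustrative):

```xml
<!-- In the web project file: bring in the file watcher as a per-project CLI tool. -->
<ItemGroup>
  <DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="1.0.0" />
</ItemGroup>
<!-- Then, from the project folder, run:
       dotnet watch run
     Any .cs change triggers a rebuild and restart of the app. -->
```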
Sorry for the length; this is as I remember it.
Dynamic compilation / loading
So, the starting point is that we want a modular app which doesn't specify any dependencies on modules and themes in its project.json file. So I agree, dynamic loading seems to be necessary.
Then I agree, dynamic compilation is more useful in a dev context, e.g. when you update a core project which is not part of the main app, all dependent modules will be dynamically re-compiled. For a prod context, we have already done precompiled modules, which are a kind of package without any project or source files and with all the needed binary assets in the module bin folder. In this case there is no dynamic compilation before loading. I will talk about this below.
Regular project and dynamic compilation
But first, what happens with a regular app project which references all needed projects and packages in its project file? When you do a dotnet pack or a dotnet build, it is compiled and built, meaning in part that referenced project assemblies are output to the app binary folder. Then when you publish the app, more assemblies are output, e.g. referenced packages which are not part of the targeted framework. And this in a structured way: e.g., when a library provides different implementations for different runtime environments, they are stored in a runtimes subfolder (for example runtimes/win7-x64/native or runtimes/unix/lib/netstandard1.3).
So, when we dynamically compile a module, we also do a kind of dotnet build by storing referenced project assemblies which are not part of the main app (not ambient), but also a kind of dotnet publish by also storing non-ambient package assemblies, in a structured way, e.g. for specific runtime assets.
Precompiled modules
So, we have already tested precompiled modules, which embed the needed binaries and therefore are not compiled before loading. But to have all the needed binaries (e.g. non-ambient runtime assets), we needed to use all the binaries generated by our dynamic compilation, which does a part of dotnet build and publish.
I agree, the best would be to use regular NuGet packages to add modules. I didn't work on this, but I see some problems we would need to overcome. Normally we use a NuGet package by referencing it through the project file of an app. Then we do a dotnet restore to update the dependency graph (stored in project.lock.json), then we compile / build / publish ...
But modules are not part of the ambient app; they can be referenced by other modules, which are also not part of the app. So modules are not part of the build and publish processes of the main app.
So, to integrate a third-party module at runtime by pointing to a regular NuGet package, we would need (I don't know, just an idea on the fly) to do ourselves a kind of dotnet restore to resolve the dependency graph, to be able to load it ...
Note: anyway, this could not be done in a prod environment if a module references, e.g., specific packages which are not part of the main app / targeted framework. We would need an existing package storage.
Indeed, to make a precompiled module work, we needed to use all the binaries generated by the dynamic compilation, which does a part of dotnet build and publish (see above). We need to do this even in a dev context, because modules are not ambient. In fact, we do everything necessary to make modules part of the ambient application (which they are not), but at runtime.
So, another idea is to first create a custom dotnet pack to produce NuGet packages which embed precompiled modules (as described above). In part it would do the same as the dynamic compilation, to generate all the necessary binaries. Then, when adding a module with such a NuGet package, we would need to load and unpack it in the right place, so that it sits there as a precompiled module, as described above, with all the needed binaries. Right now, we would then need to restart the app to load it.
Note: here, I would need to learn more about how NuGet packages are implemented.
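To sketch the idea (the package id and paths here are hypothetical, and real dependency metadata would be needed too), such a custom pack step could lay out the published module inside a package like this:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>My.Module</id> <!-- hypothetical module id -->
    <version>1.0.0</version>
    <authors>me</authors>
    <description>A precompiled Orchard module.</description>
  </metadata>
  <files>
    <!-- binaries from a publish-like step, including non-ambient dependencies,
         placed where the module loader expects a precompiled module's bin folder -->
    <file src="bin\Release\publish\**\*.*" target="content\Modules\My.Module\bin" />
    <!-- static assets: views, scripts, stylesheets -->
    <file src="Views\**\*.cshtml" target="content\Modules\My.Module\Views" />
  </files>
</package>
```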
Note: we use the same underlying Roslyn compiler as dotnet build, through a NuGet package. So, e.g., C# 6 syntax is supported.
Best.
Remember that people will use Continuous Integration servers, and modules that reference other modules won't compile on such servers if they are not precompiled/packaged as NuGet packages on a NuGet/MyGet server.
@jtkech Thanks for sharing your thoughts. I'm going to process this in more detail, but I'd like to point out two things already.
to integrate a third-party module at runtime by pointing to a regular NuGet package, we would need (I don't know, just an idea on the fly) to do ourselves a kind of dotnet restore to resolve the dependency graph, to be able to load it
Hm, I'm not sure I follow here. If we assume that the module developer has done a dotnet build before packaging and shipping the module, wouldn't that module's entire dependency graph have been installed into the module at that point, and the resulting .dll files be included in the package?
we use the same underlying Roslyn compiler as dotnet build uses, through a NuGet package. So, e.g., C# 6 syntax is supported.
C# 6 just happens to be the current example, I meant to make a more general point, so let me elaborate on this. Let's say you are building or running an Orchard website, and you want to use my modules. When I (the module developer) develop the module on my machine, I can use any current or experimental version of any language I want. I could use an alpha version of C# 8 if I want. Hell, I can even use Visual Basic. I can also use any custom Roslyn language extensions, with language constructs I invented! It's all fine as long as I compile the module to IL and target the correct framework before I give it to you.
But if I want you to be able to compile my module dynamically, I can't use any of that stuff; I must use only the language versions and features that I can be sure will be installed on your dev machine and/or your production machine. And what are those, anyway? We would essentially need to define an informal contract of what is considered the language baseline.
Again, this is as I remember it, so I'm not 100% sure.
First, I also think that it's better to prevent compilation in a prod context.
Hm, I'm not sure I follow here. If we assume that the module developer has done a dotnet build before packaging and shipping the module, wouldn't that module's entire dependency graph have been installed into the module at that point, and the resulting .dll files be included in the package?
Yes for the list of dependencies in the .nuspec file (but not with resolved paths), but not necessarily all the needed assemblies. I think it's OK if the package only has dependencies on referenced projects or packages which are part of our main app or the targeted framework. Otherwise, to get all the necessary binaries, you would need to reference the package in a regular way through a project, then do a dotnet restore, then build and publish the project.
Maybe we could define modules as standalone projects (currently they are portable libraries); then we could publish them to generate all the binaries, and then maybe we could pack them with all the needed assemblies ... Note: as far as I have seen, the regular dotnet pack only does a build, not a publish.
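As a rough sketch of that "publish, then pack" idea in MSBuild terms (untested, and the target name is made up):

```xml
<!-- Hook a publish in before pack generates the nuspec, so the package
     can pick up everything the module needs at runtime. -->
<Target Name="PublishBeforePack" BeforeTargets="GenerateNuspec">
  <MSBuild Projects="$(MSBuildProjectFullPath)"
           Targets="Publish"
           Properties="Configuration=$(Configuration)" />
</Target>
```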
For the following point about language, I agree, but I have no solution.
Best.
And on the other side, I also agree that it would be better not to rely on all this stuff. So maybe it's better to just add regular packages (which can be modules / themes) as needed in the project.json of our Orchard app.
But if we don't want to be forced to update the main project file at all when adding extensions, we have to make them ambient at runtime (with all that this implies). Here, I'm open to all possible solutions.
Best.
I dunno... we can't keep what we have anyway because we need to shell out to the cli now.
I was thinking of abstracting out all the loaders to separate packages and then in the host, you configure what you need.... that way if you want it... you opt in to it.
This would also allow a better upgrade pipeline. I personally find the extensions project too cumbersome, and think some things need to be optional.
As I remember it. I think about too many things, so I'm sure of nothing ;)
You're right about loaders and things that could be optional. When I did this, I just kept the loaders as they were and added a way to compile an extension when the dynamic loader is used. And the way to configure and use options was not well defined.
Note: currently there is an implicit option: when modules are precompiled, there is no compilation.
we need to shell out to the cli now
Here I think you mean only using the CLI. Yes, you're right, it would be simpler and easier to maintain.
But by only using the CLI, I didn't find an obvious way to generate precompiled modules with all the necessary binaries as described above, so as to fully load them at runtime according to their dependencies.
Note: right now the code for dynamic storing (in probing folders) and dynamic loading is about as large as the code for dynamic compilation.
Best.
Been thinking a lot about this lately.
I realize I am probably biased by how I personally use Orchard, but I would actually prefer if we went all the way and fully embraced the notion of O2 being a development-time extensible CMS rather than a runtime extensible one.
Personally I think it makes sense as a general philosophy for Orchard. It is more of a development framework than an out-of-box product. I'm not saying it shouldn't be good out-of-box - only that it's probably a more favored choice with teams looking to build something on top, because it prioritizes extensibility and flexibility over simplicity.
I think it would benefit the product if we embrace the constraints of the following workflow: you compose your Orchard at development time, you build and publish it, and you deploy the published output; if you want a different set of extensions, you make a new deployment.
This would mean no dynamic extension loading, and of course no dynamic compilation.
So how exactly would you technically "compose" your Orchard at development time? Well, I think everything can be distilled down to two basic cases: modules added in source-code form, and modules referenced as NuGet packages.
The source-code modules would typically be the ones you write as part of your solution, but could also be ones others have shared with you in source-code form, of course. And consequently, the NuGet-referenced ones would typically be third-party, but could of course also be modules you developed and are hosting in a NuGet feed, public or internal.
I think the NuGet-referenced modules are the easiest case. You would add these as package references to Orchard.Web. The packages would contain compiled binaries (which would be installed normally, as assembly references to the DLLs in the package) along with all the content files (Razor views, scripts and stylesheets, etc.), which would be installed into Orchard.Web/Modules/ModuleName/. If a module depends on other modules, this is handled with package dependencies, and any additional dependency modules are also installed in exactly the same way. At the end of the day, after running dotnet restore, all referenced module binaries and their dependencies will be in Orchard.Web/bin and each module's content files will be in Orchard.Web/Modules/ModuleName. No need for any dynamic loading.
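In csproj terms, pulling in such a module could look like this (a sketch; the package name is made up):

```xml
<!-- In Orchard.Web.csproj: reference a third-party module package.
     dotnet restore resolves it along with any module packages it depends on. -->
<ItemGroup>
  <PackageReference Include="ThirdParty.SomeModule" Version="1.2.0" />
</ItemGroup>
```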
Source code modules would be handled slightly differently. You would add these yourself to Orchard.Web/Modules/ModuleName/ and add them to your solution file (either modifying Orchard.sln or creating your own parallel solution file). You would then add a project reference to your source code modules from Orchard.Web, which would make sure they are built and their binaries end up in Orchard.Web/bin. No need for any dynamic loading in this case either.
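A sketch of the corresponding project reference (assuming the module sits under Orchard.Web/Modules):

```xml
<!-- In Orchard.Web.csproj: build the source-code module with the app,
     so its assembly ends up in Orchard.Web/bin. -->
<ItemGroup>
  <ProjectReference Include="Modules\MyCompany.MyModule\MyCompany.MyModule.csproj" />
</ItemGroup>
```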
One small caveat: since source code modules are not installed as packages their dependencies will not be automatically installed; if they depend on other modules, you will have to add those too (either as source code modules or NuGet-referenced modules the same way as any other modules). I don't see this as a disadvantage - it's no different from how source-code modules work today, and it makes more sense for modules you add at a source-code level.
All the built-in modules (shipped as part of Orchard) would be referenced as source code modules exactly as above, of course.
I might be missing obstacles here, but the way I see it, both of these cases would work without any need for dynamic compilation or dynamic loading, as the assemblies would all be referenced from Orchard.Web and thereby automatically loaded. The only issue (and I personally think it's a minor one) is that you need to modify the Orchard.Web project file in order to "compose" your Orchard.
The benefit of all this is that the Orchard host could be made much cleaner and simpler, and it would be fine to make more assumptions about runtime.
We would lose this notion of "installing extensions into an already running site" but I have honestly never liked that notion and have always recommended against it. I think if you want different code in your environment, you should do a new deployment.
Would very much like to hear your reflections on these thoughts of mine @jtkech, @Jetski5822, @Skrypt and @sebastienros. Am I trying to take it too far? Is there a value to runtime extensibility that I'm not seeing? Are there any technical reasons I am missing why these two scenarios of development time extensibility would not work as I envision?
I remember a recent discussion with @sebastienros where I said "wow, all this stuff just because we don't want to modify the Orchard.Cms.Web/project.json file". I need to think more; I will let you know my findings.
But as you said it depends on the philosophy of Orchard.
It is more of a development framework than an out-of-box product.
I have no fixed opinions and it doesn't depend on me, but maybe we really want, e.g., to be able to add a [precompiled] extension without a dev environment to make it part of the app; I don't know.
But after a first reading, what I like about your approach is that it is the simplest one ;)
Best.
My initial thoughts when I read some of this thread were against removing dynamic compilation and extension loading, because they seemed like too powerful a notion to give up. And what about the Modules & Themes gallery?
But then I realized that one or more NuGet feeds would be the new gallery. Although one would not be able to quickly try out a theme at run-time, the same goes for any NuGet package - you have to try it out at development time by installing it, building the project and then executing it. And other than playing around, it is ill-advised to install extensions in production anyway. For one thing, that's almost the same as changing source code in production and then losing those changes after the next deployment. If dynamic compilation and extension loading are to stay, it would be for developers at development time. But the following says it best:
Daniel:
I realize this feature was probably conceived in the WebMatrix era to support a PHP-like development cycle where you can author extensions with no build step. But I think we can agree the .NET Core landscape on which O2 is built is a very different world, where build and publish steps are much more naturally integrated and easier to use.
So I agree, at least for now, that O2 does not need to ship with dynamic compilation (or even dynamic extension loading).
Regarding the following statement:
It is more of a development framework than an out-of-box product.
Though this may be true today, it doesn't mean it will be true tomorrow, at which point perhaps we can revisit dynamic compilation and extension loading. And perhaps .NET itself is not the right platform for dynamic compilation and extension loading in the first place. Perhaps what we need then is a scripting runtime, where scripts are stored in the persistence layer (solving the issue of "changing source code in production and losing it"). There could even be an entire new marketplace around scripted extensions. I get kind of excited just thinking about that.
Anyway, these are my 2 cents. I am curious to hear from others.
Just for more info.
I just realized that, without any change, if you reference a module or a core project (which was previously only referenced by modules) in the app project.json, it is detected as ambient and we use LoadAmbientExtension() (for a module), which simply returns the already loaded ambient assembly. So this scenario already works.
About NuGet-referenced modules, I would need to learn more (I will if needed), but I'm not sure we can do (at least for now) what we want with static content files. See this nuget issue, and see below for some comments.
What I understand is that in the packages.config world, specific PowerShell scripts are used to copy content files to targeted projects. But in the project.json world, content files are considered immutable: they can be referenced (through generated project.lock.json files), but they stay in the package storage. This is to prevent versioning issues and to be cross-platform.
That issue starts with this comment, where item 1 is one of our goals.
Goal: to be able to create a nuget package, which can:
- Copy content (CSS/images/CSHTML files/...) to a project
- Reference set of dependencies (zero or more)
- Transform existing files (config files, source files)
- Run pre-post install scripts
But here is the answer:
1, 2, 4 are not supported and not planned to be supported.
And here, about the NuGet packages.config world and the project.json world:
There are two worlds for NuGet the packages.config world and the project.json world. What you read is referring to the packages.config world. The unfortunate fact is that powershell scripts is a feature that only work in visual studio and sometimes only in specific versions of visual studio. It does not work from command line, and it does not work in any cross platform way. It is a source of a constant stream of bugs around updating in the command line or on a CI machine, it creates unpredictable changes to the project that may never be properly updated/reverted. Similarly the old content model suffers from similar problems. All that stuff worked because NuGet was sort of a "scripting" addon to visual studio. This approach causes merge conflict issues, the inability to move the packages folder/share projects without breaking the world.
And here, about the generated project.lock.json file (after dotnet restore) used to reference e.g. content files, which stay in the package storage because they are considered immutable; then about a possible scaffolding model we would need, but which is not planned to be supported:
Our decision going forward is that NuGet's responsibility is to figure out dependencies and download the packages, leaving a lock file and stepping out of the way.
So the state of things right now is: We support describing references in a NuGet package. We support describing immutable content in a NuGet package.
We are discussing possible options for support for a scaffolding model - But we haven't reached any useful conclusion on what that would look like, so this is not completely dismissed, but it is not in the plan for the RTM release of .net core.
So the model in the immediate term is supporting references, and immutable content (see ContentFiles), but nothing more, longer term we might find a way to support mutable content ...
Best.
@jtkech Very interesting. I didn't know NuGet was being simplified so much for the .NET Core world. But I do think it's a good thing - ultimately I agree with the folks who are saying that NuGet shouldn't try to do scaffolding because those kinds of changes are by nature not reversible/updatable.
However, for referencing themes and modules I don't see why we need any type of scaffolding. As far as I can see we only need three things: a package that carries the module's binaries and content files, an assembly reference from Orchard.Cms.Web to its location in package storage, and the content files installed into Orchard.Cms.Web/Modules/ModuleName. If I'm reading the discussions in the NuGet repo correctly, all of these things are supported, correct?
Some references are made to this blog post for more details on content files, and in that blog post it looks to me like all we need is supported. What am I missing?
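For reference, the contentFiles support that blog post describes is declared in the nuspec roughly like this (a sketch; package paths follow the contentFiles/{language}/{framework}/ convention):

```xml
<package>
  <metadata>
    <!-- id, version, etc. -->
    <contentFiles>
      <!-- files shipped under contentFiles/any/any/ in the package;
           copyToOutput="true" makes them land in the consuming app's output -->
      <files include="any/any/Modules/MyModule/**" buildAction="Content" copyToOutput="true" />
    </contentFiles>
  </metadata>
</package>
```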
I guess I should try and create a simple package with these two things and try to reference it in this way, to see how this all works in reality. I'm just waiting for the updated ASP.NET Core tooling for VS 2017 as my magic milestone to jump aboard the .NET Core train... :)
Also - the information in NuGet pack and restore as MSBuild targets, which was updated more recently, seems clear on the support for including content files.
@DanielStolt
You're right, the best thing is to try it. I will do it, but right now I need to learn a little about creating custom packages.
Yes, they reference this blog post, but as I understood it, somebody was saying that it applies to the 1st world.
What I have understood is that in the 2nd world, immutable files are referenced but stay in the package storage. Hence the need for scaffolding packages which support mutable content, where files are copied into the project and, e.g., are not updated automatically when moving to a new version.
Yes, I've seen some updates to support this, but as I remember it was related to .csproj files, not project.json, and specific to VS 2017, so not cross-platform.
Like you, I would prefer simplicity, but when I saw this nuget issue, I wanted to share it. Hope it will be OK :)
Best
It's my understanding that, by the time O2 is released, everything .NET Core will have moved to .csproj and MSBuild. And I don't think it's specific to VS 2017; it should work fine from the CLI and also cross-platform.
We should forget about project.json completely, as of right now. Most ASP.NET projects have already been migrated to .csproj files. I am personally just waiting for a better build of VS 2017, but at the same time it might be too late, as module loading might just not work with a simple migration.
@jtkech that should be your next effort if I may ask.
I'm waiting to find a decent internet connection for VS2017 :)
@DanielStolt I had heard about .csproj files, but I was thinking it works only through VS 2017 by using specific scripts. But reading the following in the link you posted above, I think you're right, thanks ;)
NuGet 4.0+ can work directly with the information in a .csproj file without requiring a separate packages.config or project.json file ...
@sebastienros OK for .csproj files, but if, as discussed here, we will not need dynamic compilation or dynamic loading either, my next effort would be easy ;) Otherwise, no problem to work on it.
There are too many comments here, and I'm too lazy to read everything. Can someone write a summary, if there is agreement on dropping dynamic compilation and dynamic loading and only supporting package references?
@sebastienros Here is a summary: modules become NuGet packages, with their binaries referenced and their content files installed into Orchard.Cms.Web/Modules/ModuleName. A few notes:
1 - Orchard Core assemblies could be NuGet packages themselves.
2 - The reservations expressed on my side mostly concern making this work with CI. In the scenario where a module references another module directly, this would never work. With NuGet/MyGet packages we know that modules already compile and that all their dependencies resolve. Versioning of packages can then be automated from the CI and/or the MyGet/NuGet server itself.
3 - By using NuGet packages we would know which Orchard Core packages can be updated for a specific module. Module packages could even be updated from the admin UI, by querying which modules are installed and comparing versions with the latest on NuGet (though the production server would get an error about a file already being in use if we tried to update an assembly that is already loaded). Each module could have a metadata module file "like we have" that could tell which version of Orchard Core it works with and which other modules it depends on... though normally the NuGet package should already include all its dependencies. So if a module runs against a specific version of another module, that version should be included in its /bin folder.
4 - This should really just be used for debugging a module locally.
The only problem I can see here is, as always, unloading a module at runtime, which is not a regression from O1. I think that if you change the modules installed on a production server, then you change the entire scope of the application. Reloading the AppDomain just makes the service get interrupted while doing so. Using dockerization makes more sense then.
Using packages will also make this easier for the Orchard Gallery eventually.
All Orchard Core projects will be released as nuget packages, except maybe the clients.
Ideally everything should be a NuGet package, except the web app and custom/customized modules.
Right. The web app could be a separate project itself. In the scenario of a SaaS frontend, the only thing you need is Web API access to the Core backend. HMAC and/or JWT auth.
Thinking about this some more, custom/customized modules should also be NuGet packages if you want them to reference each other and to compile on a CI. Maybe MyGet then. AppVeyor has private NuGet feeds.
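A private feed is then just an extra package source; a sketch of a NuGet.config, where the feed URL is a placeholder:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <!-- placeholder URL for a private MyGet/AppVeyor feed of module packages -->
    <add key="my-modules" value="https://www.myget.org/F/my-modules/api/v3/index.json" />
  </packageSources>
</configuration>
```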
https://github.com/fsprojects/Paket Found this the other day.
I didn't read through this in detail, just searched and scanned, so sorry if I miss a point here, but I think the most important aspect (or for me, rather the only use case) of dynamic compilation is letting developers change code in their own extensions during development time and see the change with a simple browser refresh. In O1 this is a great productivity increase, and if you're working with VS and running your app with IIS Express (or IIS) then it's the same for Core.
I'm specifically talking about the time when you're working on the app (I don't think dynamic compilation should be enabled when the app is deployed), you already have a module project added, and you modify the contents of a .cs file.
What are the arguments against having this? I understand it would need to be built, so I understand it not being a priority necessarily, but why actively decide not to have it at all?
The best argument ever: it doesn't work. And getting something to work took @jtkech a year of effort, with a brittle solution. By this I mean loading modules dynamically, without a package/project reference. There might still be some room for getting projects rebuilt automatically when a .cs file is changed, though I think VS already does that for us, or it was doing it at some point.
So this is about module loading? The opening comment suggests otherwise. I don't want to get rid of project references and everything, that's fine; I just want code changes to be recompiled.
Actually, what I mean by "dynamic compilation" is what dotnet watch does. And while you can't use that directly with Ctrl+F5, I have now learned that you can add a neat shortcut to it in VS. So nothing is needed here.
It was working at some point but, as I remember, some people wanted more complex things, e.g. referencing (even indirectly) a specific runtime library, instead of keeping modules as portable libraries. What's good now is that if we have build errors, we tend to say that we made an error in our own code.
And when they moved from project.json to .csproj files, everything had to be rewritten.
What doesn't work is unloading assemblies at runtime. So you still need to restart the app pool if you want to remove a module... or update a module... which is why this feature will always be flaky.
Yes, I forgot this one. Hmm, maybe now we can, but it does not seem so easy.
Yeah, targeting .NET Core 3.0 and up; I don't mind if we can unload assemblies then... but 3.0 is releasing sometime in 2019... everything before will just be previews.
Hi everyone. First, I was amazed by your work when I saw Mr. Sebastien present Orchard Core on the Visual Studio channel. Dynamic loading of DLLs is the first reason I looked at Orchard. Everyone on this page should take a look at the ExtCore project (I'm not the author).
As of a few weeks ago, dynamic compilation of sorts actually works (not due to a change in Orchard; possibly a change in Visual Studio and/or the .NET Core SDK): you can change C#, refresh the site, and it'll be automatically compiled (similar to dotnet watch). Dynamic extension loading, however, is not something that would make too much sense (it wasn't too useful in Orchard 1 either). (By dynamic extension loading I mean that you could add modules/themes as source to a deployed app and they'd be loaded, instead of having to add them to the app during development and having to deploy it.)
In Orchard 1.x today we have support for dynamic compilation of extensions (modules and themes). For Orchard2 I propose we do not implement this.
NOTE: This is not the same as dynamic loading of extensions - which is of course necessary and a good thing IMO.
For integrating a third-party module into your Orchard website, I see two scenarios: you have the module in source-code form and build it as part of your solution, or you have it in precompiled binary form, built against netstandard.
In my experience from the years I've worked with Orchard 1.x, I have never had any benefit from dynamic compilation, but I have experienced one disadvantage with it: as a module developer you are limited to the compiler functionality and language features that the Orchard dynamic compilation engine supports, which today means, for example, that we cannot use C# 6 or later syntax in Orchard modules.
I realize this feature was probably conceived in the WebMatrix era to support a PHP-like development cycle where you can author extensions with no build step. But I think we can agree the .NET Core landscape on which O2 is built is a very different world, where build and publish steps are much more naturally integrated and easier to use.