Closed dazinator closed 5 years ago
This is interesting, and seems useful, thank you for sharing what you've found @dazinator.
It sounds like one easy way to improve CSLA in this regard would be to formalize a way for you to provide a Func
(as part of configuration) that would be invoked to create a new instance of the DbContext
when necessary. That would avoid your change to the core code.
formalize a way for you to provide a Func (as part of configuration)
Yes, I think that would be helpful - it would save having to implement a custom DbContextManager.
If you allowed a Func<> to be configured for creating the instance, I guess you would also need to allow a boolean to be specified which states whether the DbContextManager should dispose of the DbContext or not on the last deref().
I don't think it could be per-instance, it would have to be a static
that's set on the type at the time of configuration. Otherwise that Func
would be scattered all over your code.
Agreed - so similar to the object factory loader approach then:
FactoryDataPortal.FactoryLoader =
So something like:
DbContextManager.FactoryOptions = new DbContextFactoryOptions() { FactoryMethod=myFunc, LeaveOpen=true}
The idea behind the name LeaveOpen is stolen from Stream. From the MS docs:
Unless you set the leaveOpen parameter to true, the StreamWriter object calls Dispose() on the provided Stream object when StreamWriter.Dispose is called.
So DbContextManager would honour LeaveOpen when it came to considering disposal of the DbContext instance.
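A minimal sketch of what that options type might look like - the type name, `FactoryMethod` and `LeaveOpen` are just the hypothetical names from the snippet above, not an existing CSLA API:

```csharp
using System;
using Microsoft.EntityFrameworkCore;

// Hypothetical options type sketched in this discussion - not an existing CSLA API
public class DbContextFactoryOptions
{
    // invoked by DbContextManager to create the DbContext when one is needed
    public Func<DbContext> FactoryMethod { get; set; }

    // when true, DbContextManager skips Dispose() on the final deref(),
    // mirroring the leaveOpen parameter on StreamWriter
    public bool LeaveOpen { get; set; }
}
```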
Going one step further, I do think there could be an opportunity to address this a bit more broadly. Take this Func idea and make it a Func<T> that is configured at a global level, like CslaServiceProvider.ResolveFunc = myFunc.
Csla would then have a general purpose hook for container adaption / service resolution. For example, in its default DbContextManagers, ObjectFactoryLoaders, or anywhere else it needs to new up an instance of some type or resolve some services, it could use this hook to resolve the instance instead. The default / fallback implementation could perhaps do an Activator.CreateInstance(), to remain consistent with today.
This would then allow something like:
CslaServiceProvider.Func = myFunc
DbContextManager.FactoryOptions = new DbContextFactoryOptions() { LeaveOpen=true}
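Such a global hook could be sketched roughly as follows - everything here (the class, `ResolveFunc`, `Resolve`) is hypothetical, following the naming used above:

```csharp
using System;

// Hypothetical global hook - not an existing CSLA API
public static class CslaServiceProvider
{
    // set once at configuration time to adapt whatever container is in use
    public static Func<Type, object> ResolveFunc { get; set; }

    // fall back to Activator.CreateInstance to remain consistent with today's behaviour
    public static object Resolve(Type type) =>
        ResolveFunc?.Invoke(type) ?? Activator.CreateInstance(type);
}
```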
I have a feeling, though, that Csla could just use the System.IServiceProvider interface for this, as it seems to be available very widely in .NET:
ServiceProvider someProvider = BuildServiceProvider();
Csla.ApplicationContext.ServiceProvider = someProvider;
If something like this was available in Csla then:
I would register my object factories with my ServiceProvider
and would no longer need to implement a custom ObjectFactoryLoader
as the default Csla one could now resolve using Csla's ServiceProvider.
I would register my DbContext with the ServiceProvider, and a similar story for DbContextManagers.
With respect to scoping: this complicates things. We would need the ability to override the IServiceProvider that Csla is using depending on the current scope (Request, Hangfire job, etc). You would end up coming back to something similar to the code I have shown above, where you have the default ServiceProvider at a static (or global context) level, plus the ability to set a scoped one that can be stored in LocalContext and which will override the global one when present.
Many containers today provide an IServiceProvider adaptor, but I am not certain it's necessary for this to be used - Csla could define its own adaptor interface if it needed to.
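If Csla did define its own adaptor, it might be as small as this - the interface name is purely hypothetical:

```csharp
using System;

// Hypothetical container-agnostic resolution hook Csla could define itself
public interface ICslaServiceResolver
{
    object GetService(Type serviceType);
}
```

Any container's existing IServiceProvider adaptor could then be wrapped behind this in a single line.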
Pretty wide change for Csla though, so I appreciate this kind of thing is not entered into lightly. A configurable func for DbContextManager would be better than nothing for now.
I've been thinking about this quite a lot, including conversations with @JasonBock.
First, we need to understand that this will flow from Microsoft's IServiceProvider
scheme - used in ASP.NET Core, but available to all .NET Core software. It is a common denominator.
Second, there are two scenarios to consider. We're only talking about server-side code here, invoked by the data portal, which means DP_XYZ methods and ObjectFactory methods.
For ObjectFactory methods we can use constructor (ctor) injection, because those object instances are created, used, and destroyed specifically for the purpose of acting as a factory.
For DP_XYZ methods things are more complex, because we do not want to give references to these non-serializable/location-dependent objects (like database connections) to the business object as a whole - only to the DP_XYZ method(s) being invoked.
I think this means we need to write our own per-method DI implementation.
Typical IoC frameworks have ctor injection and property injection - but using either one of those will be bug-prone because I can basically guarantee that developers will forget to store those injected values in non-serializable properties/fields and the CSLA forum will be forever plagued by questions about why business object graphs refuse to serialize.
What we need is the ability for the data portal to use IoC concepts when invoking the DP_XYZ methods. Maybe these methods start allowing parameters that are injected?
private void DataPortal_Fetch(int id, ICustomDal myDal) { }
In a sense the data portal already "injects" the criteria parameter. Conceptually we're talking about the data portal now being smart enough to look at the list of defined services from .NET Core and injecting anything that matches the other parameters people might add to their DP_XYZ methods.
I suspect it is harder than it sounds, but we need to agree on a high level design to explore before getting too far down the road.
Thoughts?
For DP_XYZ methods things are more complex, because we do not want to give references to these non-serializable/location-dependent objects (like database connections) to the business object as a whole - only to the DP_XYZ method(s) being invoked.
Yes, that's a key point.
The workaround to avoid this currently is to use the ServiceLocator
anti-pattern within the DP_XYZ method - which is the method I have been using. This is typically where you want to get the current "scope" (i.e container) and locate any needed services (i.e DB Context) from that scope. This puts the responsibility of creating a mechanism for managing what the current scope / container is on the developer - most will create a singleton container instance and go from there to add checks for things like http request scope etc.
I am all for method injection in the dp_xyz methods then, because that should remove that concern from within the bodies of these methods such that they become simpler. However that concern would now have to be one for CSLA - i.e from which scope should it inject from? Providing some API in which the current scope (IServiceProvider) can be set would be good - this is similar to the approach I took above, i.e using LocalContext to store scopes that override the default in GlobalContext.
One minor thing - method injection doesn't solve the concern you mentioned entirely, because a developer could still grab an injected parameter and set a property ;-) - but it's still better, and perhaps that's as close as we can get!
a developer could still grab an injected parameter and set a property
We can only hold people's hands to a certain point :)
Does it mean we are going to abandon the 'constrained construction' anti-pattern for pluggable data access in favor of more 'famous' DI containers?
Regards
Personally I really like using a provider pattern when loading my DAL. But there's no doubt that DI simplifies the code in the DP_XYZ methods by shifting any loading of the DAL up into some other code, and the DAL is just provided as a parameter to your DP_XYZ method.
Following the ASP.NET Core DI model, basically you'll select your DAL in your app's startup.cs
where you initialize the IoC container, and whatever DAL you've configured there is what'll be injected into the DP_XYZ methods.
I'm assuming encapsulated invocation - by far the most widely used data portal model. The other 3 models would inject database contexts or connections or something else.
Of course DI is a "turtles all the way down" thing - so the DAL object that's injected into DP_XYZ would also have been created by having the database connection/context injected into the DAL - again something defined on app startup in the IServiceCollection (typically the ConfigureServices method).
@rockfordlhotka, I couldn't get the provider pattern working when we use a namespace hierarchy between MyProject.DataAccess and MyProject.DataAccess.Sql. The interfaces defined in the DataAccess project and the implementations in DataAccess.Sql have to be at the top-level namespace.
Regards
When @keithdv and I were thinking about Ystari, we spent a fair amount of time designing a way to do DI that was operation-based, and also considered how to make this work with distributed scenarios. The notes are here - I'd suggest reading them, as they're pretty relevant to this conversation.
https://github.com/MarimerLLC/Ystari/blob/master/docs/dependency_injection.md
I'd think the best way to introduce this feature would be to allow an IServiceProvider to be set somewhere in the csla application context. If this is present, then the server-side DataPortal can use it to perform method injection of the DP_XYZ methods on the BO. If it is not present, then it can continue to behave as it does today.
Then, depending on the host platform, if people want to leverage this feature, they'd have to make sure the CSLA IServiceProvider is set to the correct scoped IServiceProvider prior to BO DP_XYZ method invocation. For ASP.NET Core hosts, this means having a UseCsla() method that people can call in startup.cs, which adds some CSLA middleware to the pipeline that takes the service provider from HttpContext.RequestServices and sets it into the appropriate csla property, such that the server-side dataportal will use it. For OWIN based hosts it would be a similar story. For .NET Core applications that aren't web based (i.e. now that WinForms and WPF are coming, or if it's a console app etc), the developer would still configure an IServiceProvider during app startup, but they could create new scoped providers as necessary (for example per form) and set that scoped IServiceProvider into csla application context as appropriate. For WCF hosts on the .NET Framework, I've not looked at IServiceProvider integration.
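The middleware half of that idea could be sketched like this - `SetCurrentScopeServiceProvider` is the extension method described elsewhere in this thread, and `UseCsla()` is the hypothetical wrapper being proposed:

```csharp
// Hypothetical sketch of what a UseCsla() middleware might do, inline form
app.Use(async (context, next) =>
{
    // make the request-scoped provider the one CSLA resolves from for this request
    context.RequestServices.SetCurrentScopeServiceProvider();
    await next();
});
```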
I am already using this approach successfully, just without the method injection of the DP_XYZ calls - I am grabbing the current scoped IServiceProvider within my DP_XYZ method bodies and using it as a service locator.
One caveat of the method injection: it's not just registered services that would need to be injected, but also any object instances you passed directly into DataPortal.Fetch() - e.g. criteria. Microsoft.Extensions.DependencyInjection.ActivatorUtilities.CreateInstance() is an existing method in the framework that does what we would need, at the constructor-injection level.
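For example, something along these lines - `PersonFactory` and `criteria` are hypothetical here, but the `ActivatorUtilities.CreateInstance` call is the real framework API, which mixes explicitly supplied instances with container-resolved services:

```csharp
using Microsoft.Extensions.DependencyInjection;

// ctor parameters matching 'criteria' by type come from the supplied instances;
// everything else is resolved from the IServiceProvider
var factory = ActivatorUtilities.CreateInstance<PersonFactory>(serviceProvider, criteria);
```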
@dazinator you say
I am grabbing the current scoped IServiceProvider within my DP_XYZ method bodies
How are you grabbing that object?
This appears to be the heart of the code we'll need:
```csharp
// Build a provider, then resolve an argument for each parameter
// of the target method from the registered services.
var services = serviceCollection.BuildServiceProvider();
var t = obj.GetType();
var m = t.GetMethod("DoCoolStuff");
var ps = m.GetParameters();
var prms = new object[ps.GetLength(0)];
for (int i = 0; i < ps.GetLength(0); i++)
{
    var pType = ps[i].ParameterType;
    prms[i] = services.GetRequiredService(pType);
}
m.Invoke(obj, prms);
```
@rockfordlhotka
How are you grabbing that object?
At the top of this thread, look at the ServiceProviderCslaExtensions class. I've given an explanation below it, but to recap:

serviceProvider.SetDefaultServiceProvider()

That extension method places the provider into CSLA's GlobalContext. Then, say I want CSLA to use a scoped IServiceProvider rather than the application-wide one:
var someScopedServiceProvider = HttpContext.RequestServices;
someScopedServiceProvider.SetCurrentScopeServiceProvider();

That extension method puts the IServiceProvider into CSLA's LocalContext.
Then lastly there is a static method that I can use within csla DP_XYZ or anywhere else, to get the current service provider. It works by returning the one in LocalContext if present, falling back to GlobalContext.
In asp.net core applications I Use() middleware created for CSLA that, at the start of a request, does HttpContext.RequestServices.SetCurrentScopeServiceProvider();
One other thing I found that works well using the above mechanism is unit testing. I can run xunit async tests in parallel rather than one at a time, thanks to the fact that LocalContext flows with async operations: each async unit test sets up an IServiceProvider with its dependencies and then sets it into LocalContext using SetCurrentScopeServiceProvider().
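A sketch of what those extension methods might look like, based on the description above - the class shape and context key are assumptions, but GlobalContext/LocalContext are the real CSLA context dictionaries:

```csharp
using System;

// Sketch of the extensions described above, assuming a single well-known context key
public static class ServiceProviderCslaExtensions
{
    private const string Key = "__serviceProvider";

    // application-wide default, stored in GlobalContext
    public static void SetDefaultServiceProvider(this IServiceProvider provider) =>
        Csla.ApplicationContext.GlobalContext[Key] = provider;

    // scope override (request, job, test), stored in LocalContext
    public static void SetCurrentScopeServiceProvider(this IServiceProvider provider) =>
        Csla.ApplicationContext.LocalContext[Key] = provider;

    // the current scope wins; fall back to the application default
    public static IServiceProvider GetServiceProvider() =>
        (IServiceProvider)(Csla.ApplicationContext.LocalContext[Key]
            ?? Csla.ApplicationContext.GlobalContext[Key]);
}
```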
Based on the thread in #1102:
So the algorithm when criteria is provided needs to be something like:
object, or match the types of supplied criteria parameters
The algorithm when NO criteria is provided needs to be something like:
If there is a need to support criteria objects that implement interfaces, maybe use of IsAssignableFrom could help solve this issue? Maybe also add an exception if zero matching methods are found to help developers find mistakes?
So have CSLA define an empty ICriteria interface and require that people extend it in their interfaces?
public interface ICriteria { }

public interface IPersonCriteria : ICriteria
{
    // blah, blah, blah
}
It would simplify my algorithm slightly, but it seems like it adds a burden to everyone using CSLA. @ajj7060 is using interfaces - what do you think of this idea?
I totally agree - exceptions are necessary if the method is ambiguous or if none is found.
After reading the algorithm, I think I better understand all the edge cases that need to be accounted for. Would this work (and possibly make things much simpler)?
private Task CreateAsync(ISomething criteria1, IWhatever criteria2, [Inject] IService service, [Inject] IService2 service2)
You could assume any parameter not marked with an InjectAttribute is supposed to be a criteria, and do your matching based on the number of criteria.
Also, regarding ambiguities, I think most DI containers will pick the greediest ctor if you don't explicitly tell them which one you want. Maybe it would be best to go that route here too (and throw if there are still ambiguities).
I like the idea of a FromServices-like attribute. Definitely eliminates the guesswork.
FromServices is consistent with how you inject services into MVC action methods on asp.net core. So +1 from me also.
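For reference, this is how it looks on an MVC action method - `IPersonService` is a hypothetical service, but `[FromServices]` on an action parameter is real ASP.NET Core MVC:

```csharp
using Microsoft.AspNetCore.Mvc;

public class PeopleController : Controller
{
    // 'id' is bound from the request; 'service' is resolved from the DI container
    public IActionResult GetPerson(int id, [FromServices] IPersonService service)
    {
        return Ok(service.Get(id));
    }
}
```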
Not everyone is a fan of that attribute: https://github.com/aspnet/Mvc/issues/3578
At the same time, as I'm trying to implement the algorithm from earlier in this thread, I struggle with disambiguating these (for no criteria at all):
private void Create() { }
private void Create(IMyCriteria c) { }
Both could match a zero-parameter scenario, because we could assume the second method's parameter (as an interface) could be provided via DI.
So I'm rather settling on the idea of the FromServices attribute to cull the DI parameters and separate them from the criteria parameters.
Or we could go the other way, and require a Criteria attribute for all criteria parameters. But controversial as it might be, FromServices is apparently already a pattern, so that's probably the way to go.

Though I can't use the actual FromServices attribute, as it appears to be a part of ASP.NET Core, not generally available to .NET Core 😞
Interesting read on that controversy, thanks Rocky. So FromServices is really just a model-binding feature in asp.net core MVC - it tells the model binder that the binding source for that parameter is not the HTTP request but the IServiceProvider, and with that source known, it uses a different strategy to resolve the parameter - one that uses the HttpContext.RequestServices IServiceProvider.
I'm wondering if perhaps CSLA should use its own attribute name for this concept to avoid confusion with MVC's? Or should it try for parity, even though it's going to have to be a separate attribute anyway (as, like you say, we shouldn't depend on MVC's!). I'm not good at naming, though here are some other ideas thrown about:
Could possibly omit "Service" from those for brevity.
At the same time, as I'm trying to implement the algorithm from earlier in this thread, I struggle with disambiguating these (for no criteria at all):
private void Create() { } private void Create(IMyCriteria c) { }
Both could match a zero-parameter scenario, because we could assume the second method's parameter (as an interface) could be provided via DI.
Well, if you didn't want to go with an attribute, my take is that if you have to pick one and there's no criteria, go with the 'greediest' one, which is the IMyCriteria, which is what DI containers would do with ctor resolutions. Then try to resolve via the DI container.
Some DI containers I think will throw if they can't resolve something for IMyCriteria, which I think is fine; perhaps then you could throw if the DI container returns null, since you probably always expect that something gets passed in.
Although it seems like the controversy is about using FromServices on controller properties or ctor parameters, not for action method parameters (which is basically what Csla is doing). I didn't know about any of this because I haven't kept up with MVC core 😆
Or we could go the other way, and require a Criteria attribute for all criteria parameters. But controversial as it might be, FromServices is apparently already a pattern so that's probably the way to go.
I feel like that could/would break people's existing stuff. Since DI support for data portal methods is new, I think the new thing should get the attribute, not the old thing (criteria parameters).
One more thing: can we please not have Service as part of the attribute's name, if that's the route taken. I feel like "service" is so overused/abused that it's almost meaningless, to the point that every object with even one method is now a "service."
The problem with having no attribute, @ajj7060, is that it would make your life more challenging (I think), because you could never have a zero-parameter create/fetch method - it would never be invoked.
So in your case, where you use interface-based criteria, this method would always be the zero and one parameter selection:
private void Create(IMyCriteria c) { }
So both of these would resolve to that method:
var obj = await DataPortal.CreateAsync<MyType>();
var obj = await DataPortal.CreateAsync<MyType>(new MyCriteria(123));
What would happen on the server (when no criteria is provided) is that the DI container would fail to resolve IMyCriteria, so the parameter value would be null.
And maybe that's OK?
Though over the years I've gone through a lot of work to ensure null works as expected - as in, null is not the same as no criteria at all. Empty and null aren't the same concepts.
For attribute naming: the FromServices name I suspect flows from the fact that when you configure the .NET Core DI container you are explicitly setting up a ServiceCollection, so the naming is consistent.
However, I'm fine with us choosing our own name. I like Inject and Resolve personally.
Yeah, the lame thing is, all those parameters are technically being "injected" or "resolved". It's just whether the source for that resolution is stuff sent in the request through the dataportal (i.e. ICriteria) or services registered in a container. The default source should be data portal parameters to preserve backwards compat, and this attribute just lets you point CSLA's "resolver" (for want of a better term) to a different source. Viewed like that, [FromServices] actually makes perfect sense for MVC, as the source is the ServiceCollection and "From" implies you are specifying a source location. So the attribute is not saying this param should be [injected] and the others shouldn't be, because technically they all are - just with different strategies.
However if I had to pick one of the above, I'd go with Resolve
;-)
Or even (I know someone above will hate this) [Service] just because the source is an IServiceCollection.
The default source should be data portal parameters to preserve backwards compat, and this attribute just lets you point CSLA's "Resolver" (for want of better term) to a different source.
We should consider whether we want to allow an object provided in the DataPortal request (i.e. ICriteria) to override a parameter that has [Resolve].
For example, I can see a case where it might be handy to register a default criteria instance, so that you don't have to specify any criteria when you make a DP request unless you want something non-default. Crudely written example:

// registration: a default criteria instance
services.AddSingleton<MyCriteria>();

private void DataPortal_XYZ([Resolve] MyCriteria criteria)
{
}

// uses the registered default criteria
var boThatUsedDefaultCriteria = DP.XYZ();
// explicitly supplied criteria would override the resolved default
var boWithNonDefaultCriteria = DP.XYZ(new MyCriteria(false));
Hmm, @dazinator that throws a wrench in things 🔧
So having done a bit of reading around constructor DI, it sounds like it is an accepted anti-pattern to have multiple ctors when using injection. DI containers just pick the one with the most parameters and ignore all others, throwing an exception if more than one ctor ties for "the most parameters".
What I've been doing thus far with my algorithm is supporting both the concept of overloads for various criteria sets and variation in DI-resolved parameters.
So I'm left wondering. Perhaps we should say that there can be exactly and only one create/fetch/insert/update/delete/deleteself method per business class. Entirely do away with the concept of "empty" or no criteria or variations in criteria.
The result of this would be that you'd have exactly one CreateAsync method that would accept parameters for any variation of a create call you require, and my resolver will do its best to match all supplied criteria parameters to that method, and also try to resolve each interface-based parameter via DI.
btw, I now have a decent set of unit tests for my current (overly complex?) implementation that does allow overloads of multiple criteria parameters:
So I'm left wondering. Perhaps we should say that there can be exactly and only one create/fetch/insert/update/delete/deleteself method per business class.
Yes, this would make things much simpler to reason about, but I'm trying to think of the impact that would have on how you go about defining criteria classes in general. BOs can often be fetched in various ways - by id, by name, by date range, etc. In this new world, rather than having separate fetch criteria for each of those, are you saying you'd have one fetch criteria and one DP_Fetch method? I like it, but that would increase the complexity of fetch criteria classes, as they would have to become all-encompassing?
Unless you could do something like either of the following:
DP.Fetch(new IdCrit(1), null);

private void DataPortal_Fetch(IdCrit id, NameCrit name)
{
    if (id != null)
        GetById(id);
    else
        GetByName(name);
}

As you add new criteria, this one DataPortal_Fetch method signature could then expand?
Or failing that: maybe this new single / all purpose criteria object could itself be a composite of more granular ones:
DP.Fetch(new FetchCriteria(new IdCrit(1), null));

private void DataPortal_Fetch(FetchCriteria crit)
{
    if (crit.Id != null)
        GetById(crit.Id);
    else
        GetByName(crit.Name);
}
I think either could work - I prefer the latter though; it looks cleaner and probably means no changes for the data portal.
This, btw, is the reasoning behind why the original implementation only supports 0 or 1 parameter. 0 for simplicity, 1 because you can always pass in a complex type if you need multiple values.
I'd like to get @JasonBock's thoughts on this whole single vs multiple method discussion as well. It was his idea to have the data portal pass multiple parameters, which seems so nice on the surface, but clearly makes things more complex on the DI side...
I see!
Does that mean, for the super lazy, you could provide a new general-purpose FetchCriteria class out of the box that acted like a dictionary or even an expando / dynamic object?
var fetchCrit = new FetchCriteria();
fetchCrit["id"] = new IdCrit(1);

private void DataPortal_Fetch(FetchCriteria crit)
{
    var idCrit = crit.Id;     // expando-style access idea
    var nameCrit = crit.Name; // non-existent property access would just return null
    // or the same as above, but dictionary-style rather than dynamic-object-style access
}
I appreciate it's not the nicest, and you lose compile-time checks and safeguards. I just suggest it because it almost gets you to being able to define a standard interface for the DP_ methods, and everyone loves interfaces :-)
I'd like to get @JasonBock's thoughts on this whole single vs multiple method discussion
Ok. I look forward to seeing what the outcome is for the next version of csla.
@dazinator you and I are essentially reviewing the thought process that went into the implementation of the data portal we all know and love (?) today. Which is valuable, because that was all thought through back in 2001 or something 😄
I think what is making this change complex is that I'm looking at both DI and the multiple parameters from #1176 all at the same time.
@rockfordlhotka Sorry for any confusion; I'm 100% good with the attribute approach. I was just trying to reason it out without one, in case there was heavy opposition to it here.
I think without it, though, in the scenario with Create() and Create(IMyCriteria), if the Create() never got resolved that'd be fine; just like today I might also have Create(string), and if I never call DP.Create(string) it'd never get called either.
I think, though (if we're talking about no attribute, which I think isn't the way to go), that if Csla tries to resolve IMyCriteria and the container actually returns null (some will throw if they can't resolve), Csla should throw and not pass null to the Create method.
So I'm left wondering. Perhaps we should say that there can be exactly and only one create/fetch/insert/update/delete/deleteself method per business class. Entirely do away with the concept of "empty" or no criteria or variations in criteria.
Well, we actually do have multiple Fetches/Creates now, to support slightly different ways to get the same data back. I'd really hate to lose that, as the places we've done that have been to simplify the Create/Fetch code - having one god fetch was overly complex.
So I am really liking the attribute to mark DI-injected parameters. And if having multiple criteria parameters is making things more complex, I think saying the DP methods can have zero or one criteria parameter as the first parameter in the method is fine... that's what we have today anyway.
Is there a value to allowing more than one criteria, when all this time we've been bundling things into a CriteriaBase<T> subclass all along anyway?
Does that mean for the super lazy you could provide a new general purpose FetchCriteria class out of the box that acted like a dictionary or even an expando / dynamic object? I appreciate it's not the nicest and you lose compile-time checks and safeguards. I just suggest it because it almost gets you to being able to define a standard interface for the dp_ methods and everyone loves interfaces :-)
Sorry but I hate the idea of losing compile time static type checks 😄
@ajj7060 you may want to look at that unit test code I linked to earlier in the thread - the current prototype implementation probably meets all your needs (I hope!). I took your suggestion and built the algorithm to be greedy, so these aren't ambiguous (even though they sort of are):
[Create]
private void Create() { }

[Create]
private void Create([FromService] IDal dal) { }
Both can technically handle the "no criteria" case, but because the algorithm is greedy it will consider this not ambiguous, and will select the second overload.
@rockfordlhotka ah ok, i will check that out tomorrow, thanks!
Oh happiness! I now have a test that invokes a method with a combination of criteria and DI parameters!
This is coming together pretty nicely so far.
Just chucking this out there - another approach might be to serialise an expression tree and then 'deserialize' it, compile it, and invoke it, passing in the service provider as an argument.
End effect would be something like:
dp.Fetch((sp) => this.GetById(1, "foo", sp.GetRequiredService<ILogger>()));
I think that could do away with DataPortal_XYZ methods entirely, and gain compile-time checking as well as remove any ambiguity around which method is going to be called.
Probably a fair bit of work though. Looking at the bottom answer here, it should be possible: https://stackoverflow.com/questions/23253399/serialize-expression-tree
I'm pretty sure Hangfire (popular job runner) uses this approach for scheduling jobs.
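For reference, Hangfire's public API does take an expression and serializes the call for later invocation, along these lines:

```csharp
using Hangfire;

// Hangfire captures the expression tree, serializes the type/method/argument info,
// and replays the call later on a background server
BackgroundJob.Enqueue(() => Console.WriteLine("Hello from a background job"));
```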
@rockfordlhotka If I'm understanding the code, I think that covers everything that was discussed by everyone in the thread. Glad to hear it's coming along nicely!
Still need to finalize the attribute name.
Despite my earlier rant, I'm not too worried about the attribute name 😆 I think I like Inject, followed by Resolve, followed by FromService. Hopefully, if someone is reading the code, Inject will immediately make them think of dependency injection. Resolve should too, but maybe not as immediately obviously, and FromService is good because that's what exists in ASP.NET.
I'm really excited about these features, they will really make Csla easier to use I think!
Great news @rockfordlhotka
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Apologies for the length of this :-)

I'd like to describe some issues / challenges I have had to work through recently for an asp.net core application. I think there is some potential here for adding some support to CSLA to help enable DI in asp.net core to flow through CSLA scenarios. This could be in the form of adding some support classes, which I will describe below.

First a brief overview of my setup: I have an asp.net core web application, which also hosts a hangfire job service (runs jobs on background threads within the same AppDomain).

My aims:
- Configure my container in startup.cs. Use whatever container I like as long as it supports the IServiceProvider adaption (many do).
- Use DbContextManager in my CSLA DataPortal_XYZ methods. DbContextManager should resolve the DbContext instance that is in scope. More on scopes below. This could be Application scope, Request scope or some arbitrary scope.
- Use ObjectFactories - and be able to resolve them from the current scope.

The reason I say "current scope" is because the current scope can vary depending on where your CSLA usage is:
- Application scope: services (the application IServiceProvider) are all configured on startup. This is the default scope if no other scope is overriding it.
- Request scope: asp.net core creates a scoped IServiceProvider in HttpContext.RequestServices.
- Background job scope: hangfire creates a ServiceScope for each job executing on a background thread.

Now consider a couple of CSLA extension points that potentially need to resolve dependencies. Here are the ones that were important to me:
- IObjectFactoryLoader - this creates the ObjectFactory instances. For DI purposes this needs to resolve the factories from the "current scope". If it doesn't respect the current scope, you get bugs - for instance, shared instances of DbContexts being used across multiple factories. You see this in the background job scenario.
- DbContextManager<C> - when not using ObjectFactories, I often just use DbContextManager<C> in the DataPortal_XYZ methods. However, it should resolve the DbContext instance from the current scope.

The problem I have had to work through, then, is this issue of "current scope". There is no method (to my knowledge) of just arbitrarily detecting what the "current scope" is - i.e. you can't just do ServiceScope.Current. Consider the following code; imagine both methods are executing at the same time, on different threads:
There is no way to write an ObjectFactoryLoader or DbContextManager that automagically discovers these arbitrary scopes. And so my conclusion is: CSLA (or more precisely, the hooks I am talking about like DbContextManager and IObjectFactoryLoader) have to be told about the current scope at the time of usage.

The current implementation of DbContextManager for EF7 / Core will always create a new instance of DbContext, which it then disposes of on the final deref(). My need is to use the instance within the current scope. This may mean one is created, or it may mean an existing instance is re-used - but it's the current scope that dictates the lifetime.

This eventually led to me implementing some additional helper classes to handle this concept of scope with CSLA in asp.net core scenarios:
Basically:

- The default IServiceProvider gets stored in GlobalContext - by calling SetDefaultServiceProvider.
- An IServiceProvider for whatever the "current scope" is gets stored in LocalContext, and CSLA will use that if it is present; otherwise it will fall back to the default one.
- Middleware runs on each request and calls SetCurrentScopeServiceProvider with the IServiceProvider that is created for the current request scope.

I made a tweak to the existing CslaBootstrap class to set the default service provider:

To get this working with Hangfire background jobs, after hangfire creates its scope for the job, I do this:
My IObjectFactoryLoader implementation looks like this:

Note it takes a Func<IServiceProvider>. This is created like so:

This means whenever ObjectFactory instances are created, they will be resolved from the current scope.

I changed DbContextManager so that, rather than creating a new instance of DbContext which it then disposes of, it now resolves from the current scope - and doesn't need to dispose.

The nice thing about using LocalContext is that, with your latest ContextManager implementation, this uses AsyncLocal. So even when CSLA is used across tasks that switch threads, the current scope flows with it.

Hopefully this might help someone who hits a similar scenario!