Good! That factory pattern from HTTP is really nice. However, one point:

> it would make the assumption that a client library must be configured in the same way as the app host

I don't want to replay the whole discussion, as you already linked it here, but I never tried to give the impression or imply that the client library must be configured the same way as in the host.
The whole discussion boils down to injecting cross-cutting services (like the `ILoggerFactory` passed down from the host to the client) into the cluster client's container. All of that can be easily achieved if, regardless of having an `IClusterClientFactory` or not, the `ClientBuilder` has an optional constructor parameter that takes an `IServiceCollection`, period.
It doesn't matter where it comes from. The root application, a hand-made collection, a custom container implementation, the generic host: it just doesn't matter. Allowing people to inject their own service collection saves us from configuring common dependencies twice, and at the same time gives the outside system/framework/host the control over whether to build its dependencies or not.
Orleans should not force people to use its internal DI container; otherwise it goes against the whole point of using the `Microsoft.Extensions.DependencyInjection` abstractions. Using the abstractions means we don't care what the actual DI container implementation is, nor do we manage its lifecycle.
That said, I still think the cluster client factory is a good idea if it really follows the `HttpClientFactory` idea. Otherwise it will be another thing invented here, and when we realize we need to do it the right way, it will be too late and we will fall into the dance of "we can't change it that way, otherwise we will break people".
Both the `IClusterClientFactory` and adding an optional `IServiceCollection` parameter are no-brainer changes that will never break anyone. They are totally opt-in, and they both come with benefits and no downsides, IMHO.
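To make the suggestion concrete, here is a minimal sketch of the proposed opt-in overload. The `ClientBuilder(services)` constructor is hypothetical and does not exist today; `UseLocalhostClustering` is assumed from Orleans 2.x:

```csharp
// Hypothetical opt-in overload: ClientBuilder(IServiceCollection) does not exist today;
// the current ClientBuilder always creates its own ServiceCollection internally.
var services = new ServiceCollection();                 // or the host's own collection
services.AddLogging(logging => logging.AddConsole());   // common dependencies, configured once

var client = new ClientBuilder(services)                // reuse the caller's collection
    .UseLocalhostClustering()
    .Build();
```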
I don't want to re-start the debate with the same arguments, as we already had it in the linked thread, but the summary you made is just your side of the discussion, heh. I'll just say that I agree with sharing the container on a silo host, because it is indeed hosted. The client library is just a library. The fact that it happens to use a container, so that its configuration is similar to that of its silo counterpart, is just an implementation detail; it's not because we expect to host it in the same way as a silo. We could have implemented a similar API without a container, and the design should and would still apply. This is using DI as it is supposed to be used; it's just that the internal implementation of the client runtime is isolated from the outside by design.
The proposed `IClusterClientFactory`, on the other hand, makes sense to host with the frontend app, so its registration should interact with the app's container. The factory would automatically pass the pre-built logger factory to the cluster clients it creates (unless specified otherwise in the client's configuration), and I really think that is not a bad thing to do but actually a very good one: you can choose how to log if you don't like the default, or share the same logging across all your clients or co-hosted services. I really haven't heard a valid argument on why passing the pre-built logger is a bad thing (only the "use DI containers in the correct way" argument, without a backing reason that is a little less subjective; I'm open to learn and change my viewpoint on that).
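As a rough illustration of that flow, everything here is hypothetical (`IClusterClientFactory` doesn't exist, and a real version would also cache clients per name), but `ClientBuilder.ConfigureServices` is the existing Orleans 2.x API:

```csharp
// Hypothetical sketch: the factory lives in the app's container, so DI hands it
// the app's pre-built ILoggerFactory, which it flows into every client it builds.
public class ClusterClientFactory : IClusterClientFactory
{
    private readonly ILoggerFactory _loggerFactory;

    public ClusterClientFactory(ILoggerFactory loggerFactory)
        => _loggerFactory = loggerFactory;

    public IClusterClient GetClient(string name)
    {
        return new ClientBuilder()
            // Reuse the app's logging pipeline inside the client's own container,
            // unless the named client's configuration overrides it.
            .ConfigureServices(s => s.AddSingleton(_loggerFactory))
            // ... apply the configuration registered under `name` here ...
            .Build();
    }
}
```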
BTW, keep the discussion coming if there are new arguments, as I think we are converging on a better design, but let's try not to repeat what was said in the chat without new insights.
Yeah, I see your points, as we discussed on Gitter.

I'm not saying it is wrong without arguments. The biggest one is the fact that `IHttpClientFactory` does exactly that: it leverages the external DI container, it doesn't care how you created it as long as it complies with `IServiceCollection`, and logging just works, without reinventing the wheel and adding it to its own DI.

If a library is DI-friendly, I would expect it to play nicely with my own container without duplicating the same machinery internally. That is basically what we've seen in other libraries, like `IHttpClientFactory`, for example.
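For reference, this is the real `IHttpClientFactory` registration pattern the thread keeps citing (from `Microsoft.Extensions.Http`); it plugs straight into the app's `IServiceCollection`:

```csharp
// Real IHttpClientFactory usage: registered on the app's own IServiceCollection,
// so the app's logging configuration applies to the factory's clients for free.
services.AddHttpClient("github", client =>
{
    client.BaseAddress = new Uri("https://api.github.com/");
});

// Later, anywhere in app code that received an IHttpClientFactory:
// var httpClient = httpClientFactory.CreateClient("github");
```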
But anyway, as we already discussed, it can be done whatever way you guys decide; just please don't change `IClientBuilder` or make what it does internal. That way people like myself are able to (reinvent the wheel and) re-implement it in a way that works.
FWIW, my vote goes for passing an `IServiceCollection` into both `ClientBuilder` and `SiloHostBuilder`. We've been building a new service that needs two instances of `IClusterClient`, where Orleans is cohosted with ASP.NET Core, and the main pain point is the separate containers. It's annoying to have to think about which dependencies are registered where in that scenario.
For the `ClientBuilder` debate, as far as I can see, if it just used the existing container, you wouldn't have to configure anything extra, whereas if you had to pass in an `ILoggerFactory`, you'd have to configure it separately, as in `ConfigureLogging`? That is:
```csharp
services.AddOrleansClient(loggerFactory); // need this for the ClientBuilder thing
```

vs

```csharp
services.AddOrleansClient(); // it's already in services here
```
I don't really understand the need for it to be explicitly attached to the `ClientBuilder` when you don't have to; that's at least not what you'd expect in this ASP.NET Core / `Microsoft.Extensions.DependencyInjection` world, where `IHttpClientFactory` is indeed a good example.
So for us, needing two `IClusterClient`s, one from the new `EnableDirectClient` awesomeness and one connecting to an external service, we've ended up wrapping the external one in a class like `SomeServiceClusterClient` so that we can differentiate (sketched below). It seems like if we want an `IClusterClientFactory` to hand out both of these clients, we'd need a shared container, right?
Our setup today is the following, in the ASP.NET Core `ConfigureServices` call:
```csharp
services.AddOrleansSomeServiceClient(); // Add and connect the SomeServiceClusterClient, as it is a dependency for this service
services.AddOrleans(); // Add and start the cohosted Orleans service; this also adds the internal IClusterClient from EnableDirectClient
```
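And a condensed sketch of the wrapper workaround, assuming Orleans 2.x builder APIs; `SomeServiceClusterClient` and `AddOrleansSomeServiceClient` are our own names, not Orleans APIs, and the cluster/service IDs are placeholders:

```csharp
// Our own workaround types, not Orleans APIs: wrap the external client so its
// registration doesn't collide with the direct client that AddOrleans registers.
public class SomeServiceClusterClient
{
    public SomeServiceClusterClient(IClusterClient client) => Client = client;
    public IClusterClient Client { get; }
}

public static class SomeServiceClientExtensions
{
    public static IServiceCollection AddOrleansSomeServiceClient(this IServiceCollection services)
    {
        var client = new ClientBuilder()
            .Configure<ClusterOptions>(o =>
            {
                o.ClusterId = "some-service"; // placeholder values
                o.ServiceId = "SomeService";
            })
            // ... clustering, application parts, etc. configured by hand, again ...
            .Build();

        return services.AddSingleton(new SomeServiceClusterClient(client));
    }
}
```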
@martinothamar thanks for describing your scenario; indeed, what I described originally would cover it. I think you are getting confused by implementation details, and perhaps I wasn't clear: the `IClusterClientFactory` registration would indeed participate in the app's container (whether that's an ASP.NET frontend, an Orleans silo, or both), and would automatically get injected (and pass along) the logging factory, as well as the configuration providers, so that they are readily usable in each client's configuration. Notice how in the code snippet I shared, there is nothing related to logging in the `AddClusterClient(...)` section.
That doesn't require the client runtime itself to share the container with the app's container (and in fact, if we were to do that, it wouldn't work correctly in some cases, since here you have two very distinct clients). What is required is that the client entry points (i.e. `IClusterClient`) are made accessible in the app container, and that would be exposed via the `IClusterClientFactory` (in the case of the client to an external cluster).
Then in your app code, whether you have a controller or a grain that wants to communicate with an external cluster, you would just inject the factory:
```csharp
public class MyController : ApiController
{
    private readonly IClusterClient _clusterClient;

    public MyController(IClusterClientFactory clusterClientFactory)
    {
        // I'm not defining the API on how to get the client yet (whether it's a ValueTask<IClusterClient>
        // so that the client is readily usable and connected, or a synchronous response that would
        // auto-connect on the first grain call, or what). This is just the general idea,
        // to show that you'd get the client fully configured (with the app logging and all) from the factory.
        _clusterClient = clusterClientFactory.GetClient("my external cluster");
        // then use it in an action or whatever
    }
}
```
Ahh, makes sense. Thanks.
How about the `IClusterClient` from the `EnableDirectClient` call? Do I still have to retrieve that one from the Orleans container into the app container when cohosting Orleans and ASP.NET Core, if `IClusterClientFactory` is just for external clients?

Edit: never mind; if `SiloHostBuilder` gets the service collection, of course it will be there.
Putting this in the backlog, as we will likely not address it until we start looking at updating the host builder. Not that they are linked, only that both sets of changes should be scheduled in the same Orleans release.

In the interim, it would be helpful to settle on the public surface, like what extension methods (`AddClusterClient`?) and interfaces (`IClusterClientFactory`) we'd need, and their definitions. Thoughts?
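To seed that discussion, a rough sketch of what those definitions could look like; every name and signature here is a proposal, not an existing API:

```csharp
// Proposed shapes only; none of this exists in Orleans today.
public interface IClusterClientFactory
{
    // Returns the client registered under the given logical name,
    // creating (and caching) it on first use.
    IClusterClient GetClient(string name);

    // The default (non-named) client, if one was registered.
    IClusterClient GetClient();
}

public static class ClusterClientFactoryServiceCollectionExtensions
{
    // Registers the factory (idempotently) plus one named client configuration,
    // mirroring how AddHttpClient registers named HttpClient configurations.
    public static IServiceCollection AddClusterClient(
        this IServiceCollection services,
        string name,
        Action<IClientBuilder> configureClient)
    {
        throw new NotImplementedException(); // implementation elided in this sketch
    }

    // Default (non-named) client overload.
    public static IServiceCollection AddClusterClient(
        this IServiceCollection services,
        Action<IClientBuilder> configureClient)
    {
        throw new NotImplementedException(); // implementation elided in this sketch
    }
}
```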
Since most modern frontends use either ASP.NET (MVC or WebApi), and with the new generic Host that shipped in `Microsoft.Extensions.Hosting` 2.1, we can expect most Orleans clients to be using the `IServiceCollection` abstraction for building their frontend hosts. It would be good to provide a simple way to configure an `IClusterClientFactory` that can build one or more cluster clients. This can benefit from passing the cross-cutting services configured in the app's container into each of the `IClusterClient`s that get built. In particular, for now, pass in the logger factory and the host's configuration (whatever config providers were configured using `Microsoft.Extensions.Configuration`), and potentially then use Polly for connection retries, etc.

Note that I'm explicitly not suggesting we should share the app's container with the `IClusterClient`(s), but instead share it with the factory and keep the cluster client's container as an internal implementation detail that is isolated from the app's container and lifecycle.

We can use `HttpClientFactory` as an inspiration.
Then, configuring an Orleans cluster client factory would be a matter of calling an extension method on an `IServiceCollection` and configuring the client (or named clients) normally, except without explicitly configuring Logging or Configuration. Of course, we can also support a default (non-named) client.
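For example, something along these lines; `AddClusterClient` is a hypothetical extension, while `Configure<ClusterOptions>` and `UseLocalhostClustering` are assumed from the existing Orleans 2.x builder API:

```csharp
// Hypothetical registration, mirroring AddHttpClient; AddClusterClient is not a real API.
services.AddClusterClient("my external cluster", builder => builder
    .Configure<ClusterOptions>(options =>
    {
        options.ClusterId = "external";      // placeholder values
        options.ServiceId = "SomeService";
    }));
// Note: no ConfigureLogging or configuration wiring here; the factory flows the
// app's ILoggerFactory and configuration providers into the client automatically.

// A default (non-named) client could also be supported:
services.AddClusterClient(builder => builder.UseLocalhostClustering());
```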
Then you can just inject the `IClusterClientFactory` in your app code to get hold of a particular client (or just inject the `IClusterClient` if it was registered as the default). The factory can also act as a hosted service (if using the generic Host) to gracefully shut down all the clients when the host is shutting down.

As for reasons why I don't advocate sharing the container, one is obvious: it would make the assumption that a client library must be configured in the same way as the app host, and it wouldn't even support multiple clients. For others, there's this longer chat thread with @galvesribeiro