Closed WhitWaldo closed 4 months ago
/assign
Amazing to see this added!
@yaron2 I've been spending the afternoon trying to figure out what the intent was behind the watch method and I'd really appreciate any insights because the comments are otherwise a bit... lacking.
My guess is that the SDK is intended to engage with the sidecar and set up a long-running listener for any jobs that flow through. It identifies itself with an App ID and Namespace when it registers the watch, and presumably the scheduler then limits precisely which jobs flow back through to the app's watcher method.
Assumptions:
So this is where I'm unclear on what the vision at the SDK level is, and I'd appreciate your thoughts (@philliphoff too):
In this scenario, the developer is expected to set up all jobs only during an initialization step, along with some sort of delegate that handles whatever happens whenever the watcher matches one of the registered jobs. If the service crashes or otherwise has to restart, the job is simply overwritten when the new job with the same name shows up (I haven't confirmed whether this is the behavior that takes place), and as such, it's always known precisely how to handle any invoked methods and what type the payload is expected to deserialize to.
The downside here is that if the developer ever wants to schedule additional jobs, they'd need to rebuild and redeploy the service. So should the developer ever have a mechanism to register jobs on the fly?
The developer is expected to be able to register new jobs at any point via the SDK/API and perhaps passes along a delegate that's registered somewhere to handle it. When the watcher recognizes a job come back in, it looks up the delegate from its registry, invokes it and sends back the request result message with the job ID.
The downside here is that if the service crashes, there's no mechanism to rebuild the job invocation delegates and that doesn't seem in keeping with the reliable and distributed nature of Dapr.
Further, there's a chance that the developer will change the schema of the returned type between whenever it was registered and when it was triggered, causing the method to throw (perhaps indefinitely since it'd never signal back that the job was completed successfully).
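For concreteness, the sort of in-memory registry this second scenario implies might look like the following sketch (every name here is illustrative, not an existing SDK surface):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Sketch of the in-memory handler registry described above; every name
// here is illustrative rather than an existing SDK surface.
public sealed class JobHandlerRegistry
{
    private readonly ConcurrentDictionary<string, Func<ReadOnlyMemory<byte>, Task>> _handlers = new();

    // Called when the developer registers a job (and its handler) on the fly.
    public void Register(string jobName, Func<ReadOnlyMemory<byte>, Task> handler) =>
        _handlers[jobName] = handler;

    // Called by the watcher when a trigger for the named job arrives.
    public Task InvokeAsync(string jobName, ReadOnlyMemory<byte> payload) =>
        _handlers.TryGetValue(jobName, out var handler)
            ? handler(payload)
            : throw new InvalidOperationException($"No handler registered for job '{jobName}'.");
}
```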
While pondering each of the above, it occurred to me that this is very much like a problem I've already spent a good deal of time learning about - the Reaqtor project and its distributed register of on-the-fly-invokable methods. While one needn't rebuild their whole concept here, a rudimentary version of the capability might be a great fit here. The high level idea is that the user would submit an expression that performs whatever they're trying to do and we'd use the Bonsai library to serialize it into a string. The job is submitted with this serialized expression and the data and when it's recalled, the expression is re-hydrated, the data passed into it and something is done with the result.
To my knowledge, they only have a .NET version of Bonsai publicly released, but their whitepaper suggests there's also a JavaScript version and perhaps others for other languages in Microsoft somewhere. But this would facilitate a cross-platform approach to storing the job method itself alongside its data so it doesn't need to be registered in memory somewhere.
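As a rough sketch of the shape this could take (the BonsaiSerializer below is a placeholder stub, not the real Bonsai API):

```csharp
using System;
using System.Linq.Expressions;

// Placeholder stub standing in for a Bonsai-style expression serializer;
// the real Bonsai API differs - this only illustrates the shape of the idea.
public static class BonsaiSerializer
{
    public static string Serialize(Expression expression) =>
        throw new NotImplementedException("stand-in for Bonsai serialization");

    public static Expression Deserialize(string bonsai) =>
        throw new NotImplementedException("stand-in for Bonsai deserialization");
}

public static class ExpressionJobSketch
{
    public static void Demo()
    {
        // At registration time: capture the handler as an expression tree
        // and serialize it to a string stored alongside the job.
        Expression<Func<string, string>> handler = payload => payload.ToUpperInvariant();
        string serialized = BonsaiSerializer.Serialize(handler);

        // At trigger time: rehydrate the expression, compile it and pass
        // the job's data into it.
        var rehydrated = (Expression<Func<string, string>>)BonsaiSerializer.Deserialize(serialized);
        string result = rehydrated.Compile()("job payload");
    }
}
```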
The downside here is that at least in .NET, expressions are woefully behind current language capabilities, and the latest on a long-running thread about them is that there are no plans to modernize them. A brief lap through the Reaqtor samples will illustrate that while it's possible to do fairly simple operations there, a lack of trivial async support would likely induce a lot of complaints about the .NET SDK on this front.
Is there another approach I've missed that is preferred?
@WhitWaldo Do I understand things right in that applications need to implement a gRPC "app callback" service in order to receive job events (rather than just specifying an HTTP route to be invoked a la pub-sub)?
@philliphoff That's what it looks like to me - it appears to expect that there's some long-running service that subscribes to receive notifications (perhaps partitioned by app ID and namespace since that's passed along). When it receives job triggers, it's supposed to invoke them, then return an ID so it can be marked as completed.
Going in, I had assumed this would just repurpose pubsub in some manner, but that doesn't appear to be the case.
Further, because all jobs are registered with the app ID and namespace (as the requestor app), and the watch registration also requires these to be passed along, it suggests only the app that registered the job can receive notifications about job triggers.
HTTP triggering of apps is also supported in addition to a gRPC appcallback
Can you point me to any interface that suggests how to do this? I don't see this on any of the protos.
The job definition only provides for the schedule and payload.
The job metadata suggests that it's just identifying the requestor app based on the comments.
The schedule job request only includes those two properties and a job name.
It's not part of the protos because these are the interfaces for gRPC. We are updating the docs tomorrow to add the entire specification for HTTP.
Could you speak at all to how the app callback is envisioned to work on the client?
Yes, the same as pub/sub with the user registering an endpoint in the format of /job/{job-name} to receive the callback.
You can see a sample in our test code here for the endpoint: https://github.com/dapr/dapr/blob/7ea7016c0150e6a7b7bc7c30e34d56f0001c5e90/tests/apps/schedulerapp/app.go#L189
And here is the handler of the method: https://github.com/dapr/dapr/blob/7ea7016c0150e6a7b7bc7c30e34d56f0001c5e90/tests/apps/schedulerapp/app.go#L154
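In .NET terms, the equivalent endpoint would presumably look something like this minimal-API sketch (only the /job/{job-name} route shape is confirmed above; everything else is illustrative):

```csharp
using System.IO;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Dapr POSTs the trigger to /job/{job-name} on the app that scheduled it;
// "myJob" is a hypothetical job name.
app.MapPost("/job/myJob", async (HttpRequest request) =>
{
    using var reader = new StreamReader(request.Body);
    var payload = await reader.ReadToEndAsync();
    // Handle the job payload here, then return 200 to mark it handled.
    return Results.Ok();
});

app.Run();
```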
@yaron2 Could you ping me when the docs are live?
One more question - am I right in reading that only the requestor of the job can receive the job trigger notification? Or can any service opt into receiving all the jobs (presumably with some sort of namespace filtering on the server)?
I'll ping you, yes
Re: question, only the requestor of the job will get the callback.
Pushed my latest local commits:
* Moved the client builder into a new Dapr.Common shared project as an abstract class. Each of the Dapr client projects going forward would have its own proto-generated client that is built atop this builder (keeping with the idea that we expose the Dapr*Client publicly but implement it as an internal Dapr*GrpcClient).
* Moved DaprException into this shared project, but didn't change the namespace so it doesn't break existing references for anyone.
* Added a ToTimeSpan method to go with the FromTimeSpan method so Golang's interval string can be bidirectionally parsed (and added tests).
* RegisterJob, GetJob and DeleteJob are fully implemented via gRPC (though register might need tweaking for HTTP invocations). Register supports each of the scheduling approaches suggested and the various overloads do the necessary validation of each cross-combination of options.
* The WatchJobs implementation is a work in progress and is presently stalled as I figure out how this was intended to be realized independent of an HTTP invocation. Again, the biggest hold-up here is the following scenario: the developer sets up endpoints to handle jobA and jobB ahead of time. During execution, the developer would like to set up jobC and calls RegisterJob accordingly. Presumably, the developer would now like to provide a mechanism by which jobC should be handled, but it seems like poor DevEx to simply say "sorry, recompile and redeploy with a static endpoint for newly registered jobs". That works in PubSub because the publisher and the various topics exist independent of whatever is pushing the data, but here, it's all created in Dapr, so it feels like there should be some sort of on-the-fly handling mechanism too.

@WhitWaldo Wait, I'm thoroughly confused now. WatchJobs() is in the scheduler protos, right? Isn't that API used only between the sidecar and the new scheduler service? I thought that the app would use the new alpha methods on the dapr runtime protos, which don't have a streaming "watch" equivalent (that I can see). Perhaps I'm just missing something fundamental.
@philliphoff To be honest, I didn't even see the Scheduler methods on the runtime proto. They look to be the same shape as those on the scheduler proto. I implemented against the scheduler proto, so sure, I could just as easily drop the watch method. But there's no dependency from this project on Dapr.Client, and as a result it doesn't include the runtime proto.
Still don't see anywhere to register an HTTP invocation even peeking at the runtime proto, so I'll have to sit tight on what that looks like.
The APIs that SDKs use to invoke Dapr are in this proto: https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto
And for Dapr calling the app via gRPC, it's this: https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/appcallback.proto
The protos don't register HTTP operations - like pub/sub, bindings and other APIs that call into the app, receiving events is decoupled from sending events. An app can, for example, create a job via the protos and receive it with its choice of protocol: if the app is configured to run in HTTP, Dapr will deliver the message on an HTTP endpoint as I mentioned above. If the app chose to receive via gRPC, it will use the AppCallback proto. This is the same and consistent with pub/sub, bindings and service invocation.
This all makes a lot more sense now that I realize I was looking at entirely the wrong protos. Latest commit fixes all this to point to the correct runtime protos and fix a bunch of the references. Working on adding the callback support right now.
Working on adding the callback support right now.
@WhitWaldo I think the big question is which model to use:
* Have users create a route and then decorate it with the job it's intended to be used by (pub-sub style)
* Have users register a handler for a job and then the SDK generates an appropriate route which calls the handler (actor style)
* Both?

@philliphoff
I'm building a sample for the gRPC approach where the user just adds a class that implements AppCallback.AppCallbackBase and adds an override for OnJobEventAlpha1, but that's a very prescribed approach that I feel the SDK could improve on.
I'm inclined to favor the attribute approach and just put a [Job("jobname")] on each method with documentation ensuring that the user sets the first argument as whatever type they're deserializing the job payload to (though I'd like to figure out how ASP.NET Core does this in their minimal API request handler delegates so the arguments could be passed in any order).
Thoughts?
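For illustration, the shape I have in mind is something like this sketch (none of these types exist in the SDK today):

```csharp
using System;

// Hypothetical marker attribute mapping a handler method to a job name;
// nothing like this exists in the SDK yet.
[AttributeUsage(AttributeTargets.Method)]
public sealed class JobAttribute : Attribute
{
    public JobAttribute(string jobName) => JobName = jobName;
    public string JobName { get; }
}

public sealed record CleanupOptions(int RetentionDays);

public sealed class OrderJobs
{
    // By convention, the first argument is whatever type the job
    // payload deserializes to.
    [Job("dailyCleanup")]
    public void HandleDailyCleanup(CleanupOptions payload)
    {
        // Handle the triggered job here.
    }
}
```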
I'm building a sample for the gRPC approach where the user just adds a class that implements AppCallback.AppCallbackBase and adds an override for OnJobEventAlpha1, but that's a very prescribed approach that I feel the SDK could improve on.
@WhitWaldo While I think we'd eventually want this, I think it requires apps to go all-in on gRPC for everything. I don't believe an app can, say, use the gRPC callback model for jobs but use the HTTP route approach for pub-sub in the same app. There's also the issue that only one service of that gRPC service type can be registered, but that type lumps all of the callbacks together (method invocation, state store, jobs, etc.), which means there needs to be some way to "share" the service with potentially other Dapr packages (e.g. the original Dapr.AspDotNet, any new Dapr.PubSub, etc.). It's possible (we could use similar ideas to what we did for pluggable components) but probably requires splitting it out into a separate common package.
I'm inclined to favor the attribute approach and just put a [Job("jobname")] on each method with documentation ensuring that the user sets the first argument as whatever type they're deserializing the job payload to (though I'd like to figure out how ASP.NET Core does this in their minimal API request handler delegates so the arguments could be passed in any order).
@WhitWaldo Thinking a bit more, given that it looks like job handlers are on fixed routes like method invocation (vs. pub-sub, where the subscription includes the route), attribute-based mapping may not work. If that's the case, then a callback-based approach where the route is auto-generated behind the scenes may be better.
I'm building a sample for the gRPC approach where the user just adds a class that implements AppCallback.AppCallbackBase and adds an override for OnJobEventAlpha1, but that's a very prescribed approach that I feel the SDK could improve on.
@WhitWaldo While I think we'd eventually want this, I think it requires apps to go all-in on gRPC for everything. I don't believe an app can, say, use the gRPC callback model for jobs but use the HTTP route approach for pub-sub in the same app. There's also the issue that only one service of that gRPC service type can be registered, but that type lumps all of the callbacks together (method invocation, state store, jobs, etc.), which means there needs to be some way to "share" the service with potentially other Dapr packages (e.g. the original Dapr.AspDotNet, any new Dapr.PubSub, etc.). It's possible (we could use similar ideas to what we did for pluggable components) but probably requires splitting it out into a separate common package.
@philliphoff Ah, I don't know anything about this. Definitely worth digging into as that would be an excellent reason to both not expose/document that today until that can be resolved. I'm not super familiar with all things gRPC and how it works with .NET, so that's worth some homework in the future to figure out a better approach.
I'm inclined to favor the attribute approach and just put a [Job("jobname")] on each method with documentation ensuring that the user sets the first argument as whatever type they're deserializing the job payload to (though I'd like to figure out how ASP.NET Core does this in their minimal API request handler delegates so the arguments could be passed in any order).
@WhitWaldo Thinking a bit more, given that it looks like job handlers are on fixed routes like method invocation (vs. pub-sub, where the subscription includes the route), attribute-based mapping may not work. If that's the case, then a callback-based approach where the route is auto-generated behind the scenes may be better.
I was considering doing a reflection-based registration at startup to find any methods decorated with [Job] and map the names, then invoke accordingly when prompted. I'm also not super-familiar with how Actors presently does this either, so I'll similarly have to dig into that.
Unrelated - I don't know why the DCO check is failing. I double checked all the commits just a moment ago and they all show the DCO message.
I was considering doing a reflection-based registration at startup to find any methods decorated with [Job] and map the names, then invoke accordingly when prompted. I'm also not super-familiar with how Actors presently does this either, so I'll similarly have to dig into that.
@WhitWaldo I'd avoid reflection if at all possible--its role has pretty much been supplanted by generators that can do the equivalent but at compile-time instead, which is important both for performance as well as being friendly to AOT.
I think the catch is that the job route is either statically mapped for each job name, in which case an attribute seems a bit superfluous, or is dynamically mapped such that it handles all possible jobs and would then have to have multiple attributes, which feels a bit weird.
Because of the expectation of job handlers having static routes, it may just work best to dynamically generate them under the covers. If and when the jobs feature evolves to allow a true mapping between job name and handler route, then it might be easier to add attributes that enable such maps.
Unrelated - I don't know why the DCO check is failing. I double checked all the commits just a moment ago and they all show the DCO message.
@WhitWaldo I dislike the whole DCO concept. You may just have to rebase/squash commits, amend with signature, and then force push it back to resolve it.
I was considering doing a reflection-based registration at startup to find any methods decorated with [Job] and map the names, then invoke accordingly when prompted. I'm also not super-familiar with how Actors presently does this either, so I'll similarly have to dig into that.
@WhitWaldo I'd avoid reflection if at all possible--its role has pretty much been supplanted by generators that can do the equivalent but at compile-time instead, which is important both for performance as well as being friendly to AOT.
I think the catch is that the job route is either statically mapped for each job name, in which case an attribute seems a bit superfluous, or is dynamically mapped such that it handles all possible jobs and would then have to have multiple attributes, which feels a bit weird.
Because of the expectation of job handlers having static routes, it may just work best to dynamically generate them under the covers. If and when the jobs feature evolves to allow a true mapping between job name and handler route, then it might be easier to add attributes that enable such maps.
This makes sense and I'm not opposed to it. I've got a lot of projects that use Metalama for compile-time generation - is there any opposition to using an open source license for them or do you know if there's a preference to use the dramatically less flexible Roslyn source generators?
Unrelated - I don't know why the DCO check is failing. I double checked all the commits just a moment ago and they all show the DCO message.
@WhitWaldo I dislike the whole DCO concept. You may just have to rebase/squash commits, amend with signature, and then force push it back to resolve it.
I agree - I'd much rather see a single-sign-off process like the one necessary to commit to any of Microsoft's OSS packages and not have to deal with adding the sign-off text (especially since I can't get VS to consistently apply it, so I have to resort to manually typing it with each commit).
I'll wait until we're happy with everything on this thread and then close it, squash it all and re-open the PR ready for a final review. If it's unhappy now with any of the DCO messages despite not seeing what it doesn't care for, I suspect it'll be unhappy about the several upcoming commits as well.
@philliphoff I mentioned this on the other thread, but I thought I'd repeat it here for posterity.
What are your thoughts on dropping the serialization altogether here and just accepting a byte[] for the job payload? It puts the responsibility back on the developer to handle serialization however they want and means that we avoid the inevitable bug report of someone experiencing a runtime error because they've specified a different type to deserialize from than they used whenever they created the job. By instead only accepting a byte[], they can use whatever serialization they desire on their side and it gets the SDK out of having to support pluggable serialization.
It also means here that we can drop the second overload for each method and just have a single method for each kind of schedule, each with their different (and mostly optional) arguments.
@WhitWaldo We could take the .NET runtime approach and have the client offer a "plain" method that accepts the raw input (probably better to be ReadOnlyMemory<byte> rather than byte[]) and then offer extension methods that wrap the method with JSON-specific flavors (see HttpClient.PostAsync() and HttpClientJsonExtensions.PostAsJsonAsync<TValue>()). Those extension methods could have an additional overload that accepts, for example, a JsonSerializerOptions instance to configure things, which then offers very similar behavior to today's Dapr client class but without embedding all of the serialization logic in the "main" methods. While we still technically implement the serialization, it's more directly configurable by the user.
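A rough sketch of that layering (names here are illustrative, not a final API):

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;

public class DaprJobsClient
{
    // The "plain" core method accepts the raw payload bytes.
    public Task ScheduleJobAsync(string jobName, string schedule, ReadOnlyMemory<byte> payload)
    {
        // ...call into the sidecar here...
        return Task.CompletedTask;
    }
}

public static class DaprJobsClientJsonExtensions
{
    // JSON-flavored wrapper, mirroring HttpClient.PostAsync() vs.
    // HttpClientJsonExtensions.PostAsJsonAsync<TValue>().
    public static Task ScheduleJobAsJsonAsync<TValue>(
        this DaprJobsClient client,
        string jobName,
        string schedule,
        TValue value,
        JsonSerializerOptions? options = null)
    {
        byte[] bytes = JsonSerializer.SerializeToUtf8Bytes(value, options);
        return client.ScheduleJobAsync(jobName, schedule, bytes);
    }
}
```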
@philliphoff Now that's a neat idea - it leaves the door open to have a collection of supported extensions and if there's a third-party option that wants to complement/supplement with more, there's no reason they can't augment the namespace with more from their own package.
Agreed on ReadOnlyMemory<byte> - I thought about that while typing it out, but couldn't immediately remember what it was about it that didn't work on the Crypto side and didn't want to immediately go look it up.
@yaron2 Another question for you while waiting on the docs (though I'd be surprised if this made it into the first draft): The Job proto uses a google.protobuf.Any for the data property:
message Job {
// The unique name for the job.
string name = 1;
// The schedule for the job.
optional string schedule = 2;
// Optional: jobs with fixed repeat counts (accounting for Actor Reminders).
optional uint32 repeats = 3;
// Optional: sets time at which or time interval before the callback is invoked for the first time.
optional string due_time = 4;
// Optional: Time To Live to allow for auto deletes (accounting for Actor Reminders).
optional string ttl = 5;
// Job data.
google.protobuf.Any data = 6;
}
Any is documented here. The Any definition cites a string for type_url and a bytes for value:
message Any {
string type_url = 1;
bytes value = 2;
}
But in the Go example you cited, it indicates that the data property is comprised of a string for type and a string for value:
type jobData struct {
DataType string `json:"@type"`
Expression string `json:"expression"`
}
type job struct {
Data jobData `json:"data,omitempty"`
Schedule string `json:"schedule,omitempty"`
Repeats int `json:"repeats,omitempty"`
DueTime string `json:"dueTime,omitempty"`
TTL string `json:"ttl,omitempty"`
}
If I'm converting a byte array in C# to an Any from the generated proto file and registering that in Dapr when I schedule the job, what are you doing to it such that it's returning a string (so I might reverse it and get my byte[] back)?
I appreciate it!
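For reference, the packing I'm doing on the C# side amounts to something like this (a sketch using Google.Protobuf; the type_url value is just an illustrative placeholder):

```csharp
using Google.Protobuf;
using Google.Protobuf.WellKnownTypes;

// Wrap raw payload bytes in an Any for the job's data field; the
// type_url value here is just an illustrative placeholder.
byte[] payload = { 1, 2, 3 };
var data = new Any
{
    TypeUrl = "type.googleapis.com/bytes",
    Value = ByteString.CopyFrom(payload)
};

// And reversing it to get the original bytes back out.
byte[] roundTripped = data.Value.ToByteArray();
```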
@philliphoff I started writing a code generator for the handler registration and then spent the morning reading up on what the ASP.NET Core team is doing with regard to optimizing minimal APIs and their delegate handlers. Put simply, they wanted to use source generators there so they can achieve better compatibility with AOT than they can with delegates (hidden behind an opt-in flag: <EnableRequestDelegateGenerator>true</EnableRequestDelegateGenerator> in a PropertyGroup).
But they're also looking at using the interceptors introduced with C# 12 to augment this further and, I suppose, replace each of the overloads with a specific method for whatever the mapping should be for even better performance (I think you get this if you just opt into AOT regardless of the other flag, since it's also considered preview).
Anyway, given that this effectively gives us the benefit of their work having already implemented the source generators, I just opted to handle registration via a minimal API (for now, unless we need others) so that path registration is as easy as:
app.MapScheduledJob("myJob", (ILogger logger, JobDetails details) =>
{
// Do something
});
The delegate supports dependency injection as well as model binding, and the first argument would let them specify the name of the job they're targeting. This extension then just changes the path to "job/{jobName}" and registers it on the IEndpointRouteBuilder, not dramatically unlike how it's done in Dapr.Actors.AspNetCore. Finally, I removed all the attributes as we're not doing any reflection any longer and thus they weren't necessary.
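Under the covers, the extension amounts to something like this simplified sketch (not the exact implementation):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Routing;

public static class JobEndpointRouteBuilderExtensions
{
    // Maps the user's delegate onto the fixed route shape Dapr uses for
    // job triggers, so the caller never sees the "job/{jobName}" path.
    public static IEndpointConventionBuilder MapScheduledJob(
        this IEndpointRouteBuilder endpoints,
        string jobName,
        Delegate handler) =>
        endpoints.MapPost($"job/{jobName}", handler);
}
```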
I added a sample project and locally installed the preview bits, but haven't been able to get it to run to trial this out. Going to try rebooting later and see if that helps.
I think it'd be nice to add some helper extension methods to convert types into the ReadOnlyMemory<byte> (perhaps just strings and types that can be serialized into JSON) and back, but otherwise, I think this is just about complete once I can get an E2E test working and add some more unit testing outside of the extensions.
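Something along these lines, with purely illustrative names:

```csharp
using System;
using System.Text;
using System.Text.Json;

// Illustrative helper names only - not the actual SDK surface.
public static class JobPayloadExtensions
{
    public static ReadOnlyMemory<byte> ToJobPayload(this string value) =>
        Encoding.UTF8.GetBytes(value);

    public static T? FromJobPayload<T>(this ReadOnlyMemory<byte> payload,
        JsonSerializerOptions? options = null) =>
        JsonSerializer.Deserialize<T>(payload.Span, options);
}
```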
Anyway, given that this effectively gives us the benefit of their work having already implemented the source generators, I just opted to handle registration via a minimal API (for now, unless we need others) so that path registration is as easy as
@WhitWaldo I think this is a good place to start, given the static nature of job routes, and it minimizes the need for users to understand the internals of Dapr job dispatching.
Oh good, all the tests are passing again. Glad to see that! I'll see if I can't hammer out some unit tests to go with and get this to run locally so I can trial out that sample project too and get this wrapped up.
@philliphoff In the latest batch of commits, I added a collection of extension methods that'll support JSON and string serialization (with a JsonSerializerOptions passed in as an optional argument) and deserialization, along with unit tests for the latter set.
As I discovered after a few failing unit tests, Moq doesn't work with static methods, which makes this project rather difficult to get full coverage on. Is there any interest at some point in seeing if there's a commercial mocking framework that would grant a license for this open-source project (e.g. Telerik's JustMock)? I've got to imagine there are a great many opportunities for richer coverage we just can't test with open-source mocking tooling.
Finally, I added a change to the generic Dapr client builder that took a dependency on Microsoft.Extensions.Http so we could source the HttpClient from its IHttpClientFactory.CreateClient method instead of creating one every time the builder is used, and it'd be registered in DI alongside anything else. Unfortunately, this package takes a hard dependency on the 8.0 logging package, which broke Dapr.Extensions.Configuration because of its chain of references and its existing reference to 3.1.2 for Microsoft.Extensions.Configuration. Updating to 8.0 on the configuration project introduces a nullable incompatibility minefield, so instead I've fixed it for now by reverting Microsoft.Extensions.Http to 3.1.32 (and upgrading the configuration project to the same). All that to say that when we drop .NET 6, it'd be a great time to both review #1004 and keep working through #1316, because with package updates, we're likely going to introduce more nullability work.
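For reference, the factory wiring amounts to something like this sketch:

```csharp
using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Registers IHttpClientFactory so the builder can pull pooled clients
// instead of newing up an HttpClient on every use.
services.AddHttpClient();

var provider = services.BuildServiceProvider();
var factory = provider.GetRequiredService<IHttpClientFactory>();
HttpClient client = factory.CreateClient();
```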
Unclear why the build is failing on CI/CD - it's building all the way through locally.
@philliphoff Closing this PR in favor of #1331 now that all the unit tests are passing. Did a squash merge per your suggestion to fix the DCO error.
Description
Added a Dapr Jobs client for the new Jobs API
Issue reference
We strive to have all PRs opened based on an issue where the problem or feature has been discussed prior to implementation.
Please reference the issue this PR will close: #1321
Checklist
Please make sure you've completed the relevant tasks for this PR, out of the following list:
This isn't yet ready to be merged as I need to figure out how the sidecar signals job triggers back to the app.