netcorefan1 opened 3 years ago
Hi,
Thank you very much for your feedback. I'm sorry for the delayed response; I didn't get any notification about your comment.
We focus a lot on communication performance in this project, but we have not yet been focusing on the startup performance. Dynamic code generation will cause a performance hit, but it should just be a one-time cost. As long as each short-lived connection doesn't use a separate service interface, the connection should be very fast after the first one. Are you seeing slow connections even after making the first connection to a service?
The same caveat as for GrpcDotNetNamedPipes exists in the Lightweight named pipe implementation. The Lightweight RPC implementation uses a different protocol than gRPC, so you will not be able to make gRPC connections to a Lightweight server.
The InprocRpcEndPoint allows you to use RPC communication within the same process. Its usage is rather limited, but we needed it in our other projects (where the services could sometimes be published in the same process and sometimes in an external process). It is not related to Pipelines.Sockets.Unofficial.
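As a rough sketch of how in-process publication can look (LightweightRpcServer, AddEndPoint, PublishSingleton and Start exist in SciTech.Rpc, but the exact InprocRpcEndPoint construction shown here is an assumed shape, not verified usage):

// Rough sketch only: the InprocRpcEndPoint construction below is an
// assumption about the API shape, not verified SciTech.Rpc usage.
var server = new LightweightRpcServer();
server.AddEndPoint(new InprocRpcEndPoint());  // in-process transport (assumed ctor)
server.PublishSingleton<IGreeterService>(new GreeterServiceImpl());
server.Start();
// A client connection is then created against the same in-process end point,
// bypassing sockets and pipes entirely.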
The difference between the grpc-dotnet and gRPC C# implementations is just which gRPC framework is used. If you are running .NET Core 3.0 or later, I recommend using the gRPC C# implementation. We will investigate whether we can remove the ASP.NET Core dependency in .NET 5.0.
There's no immediate plan to implement MessagePack or BinaryPack. However, implementing a new serializer is very easy, so if there's a demand this is something we can do. (Maybe someone can send a pull request?)
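As a sketch of what such an addition could look like (IRpcSerializer is an assumed name for the extension point, not the confirmed SciTech.Rpc interface; the MessagePack calls are the real MessagePack-CSharp API):

// Hypothetical adapter; the interface shape is assumed.
public interface IRpcSerializer
{
    byte[] Serialize<T>(T value);
    T Deserialize<T>(byte[] data);
}

public sealed class MessagePackRpcSerializer : IRpcSerializer
{
    public byte[] Serialize<T>(T value)
        => MessagePack.MessagePackSerializer.Serialize(value);

    public T Deserialize<T>(byte[] data)
        => MessagePack.MessagePackSerializer.Deserialize<T>(data);
}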
I don't have any experience with vs-streamjsonrpc or nng.NETCore, so I cannot compare with these projects.
protobuf-net.Grpc is mainly focused on gRPC and follows the gRPC specification more closely. I have not looked closely at MagicOnion, but it also seems focused on gRPC, and the server setup example uses the gRPC project template.
SciTech.Rpc aims to be a more general-purpose RPC framework, which supports gRPC as just one option. For instance, SciTech.Rpc easily supports interprocess and intranet communication without requiring a gRPC server (or similar). SciTech.Rpc also offers support for a more object-oriented approach (if wanted), includes easy exception handling through exception converters, built-in Windows authentication, and other features that may or may not be included in the other frameworks. When (if) I get the time I will try to make a more detailed comparison.
SciTech.Rpc was created because we needed an RPC framework that could replace .NET remoting for intranet communication in a large legacy project (.NET Framework 2.0), and none of the available frameworks supported our needs. It will also be used in the next major release of .NET Memory Profiler (which currently uses WCF).
I'm not sure how AddLocalRpcMethod works, but currently SciTech.Rpc offers client notifications through standard event handlers (and soon other delegates). Streaming methods are also available, which may also be used for client notifications. For version 1.1 we will investigate other client callback options.
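As a rough illustration of the event-based notifications (a sketch: [RpcService] is the SciTech.Rpc service attribute, but the exact event pattern shown is an assumption):

using System;
using SciTech.Rpc;

// Sketch: the server raises ValueChanged, and connected clients receive it
// through an ordinary .NET event handler on the proxy.
[RpcService]
public interface IMonitorService
{
    event EventHandler<ValueChangedEventArgs> ValueChanged;
    int GetCurrentValue();
}

public class ValueChangedEventArgs : EventArgs
{
    public int NewValue { get; set; }
}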
SciTech.Rpc is still in beta, and we have not done anything to promote it yet. There are some minor refactorings that we want to do, and then the documentation must be updated. Hopefully we will be able to release an official version within a few weeks.
Best regards,
Andreas Suurkuusk
Hello, and don't worry about the delay. I also responded late because I am pretty busy.
Startup can take up to two seconds, while with non-dynamic code it is around 300 ms or a bit more. Subsequent (warm) connections are a bit better, but still slow. A good amount of the time I only need to connect, execute a method, and close the connection, so I have to care about this. The comparisons are with interfaces. I haven't tried raw data with your method yet (I was not even aware it was possible). It's good to know, but I'm afraid interfaces are the main feature we all need from RPC. It will be nice when your tool is integrated into the build process, so that everything becomes automatic. I tried running it and it seemed to work well; it's just the hassle of being forced to do everything by hand. You had a very nice idea with this tool!!
Yes, its usage is rather limited, but it's a great bonus. One day I may need it too.
Some people recommend grpc-dotnet because it takes full advantage of the .NET Core framework, while gRPC C# is more for the old .NET Framework, although it seems to support .NET Core too. I tried grpc-dotnet; it's extremely configurable and allows you to do so many things with the fluent interface (beautiful, but not very intuitive). I will try the C# version as you recommended, to see how much it differs. IPC under ASP.NET carries quite a bit of overhead, although they say that performance is very high. Let's see what happens if and when they remove this dependency.
It's a bonus to have such serializers, but I noticed that in quick connections of the "execute a method and close" type they perform worse. I suppose they only bring advantages with a very high flow of data.
Thanks for the explanation. I like your approach much more because it covers all of the needs without forcing you to split the code base into other third-party code. And the idea of LightweightRpcServer is amazing; it is the first one I am going to use in my projects. It works well. My only complaint is that the service interface allows a maximum of nine parameters in a method, and I couldn't find a way to include "params" (it gave me errors). I didn't have this problem with other libs. I worked around it by using ValueTuple, which added some complexity to the code. It would be nice if you could solve this.
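For reference, my ValueTuple workaround looks roughly like this (names are made up for illustration; [RpcService] is SciTech.Rpc's service attribute):

// Illustrative only: bundling arguments into ValueTuples to stay under
// the nine-parameter limit.
[RpcService]
public interface IBulkConfigService
{
    // Instead of Configure(p1, p2, ..., p12):
    void Configure(
        (int Width, int Height, int Depth) size,
        (string Host, int Port) endpoint,
        (bool Compress, bool Encrypt) options);
}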
Well, I am also not sure how it works, but basically I assume the client can expose a function that the server can call when needed, to obtain local data. Actually, I can't think of any use for this, but I am also new to IPC. If I realize that I may need it, I will find more info and let you know. Uahhh... delegates, callbacks and even .NET events... this is amazing!
You said that you created SciTech.Rpc because you needed an RPC framework that could replace .NET remoting. Well, I am in exactly the same situation, and after days of trials with your libs and others I still haven't found a way to make it work. Maybe you can point me in the right direction. I am trying to upgrade EasyHook from .NET Framework to .NET 5, and I am having problems because it uses the old .NET remoting. Basically, this is what happens: EasyHook creates an IPC server like this:
// Entry point: create the remoting IPC server for the helper interface.
var Channel = IpcCreateServer<HelperServiceInterface>(ref InRemoteInfo.ChannelName, WellKnownObjectMode.Singleton);

public static IpcServerChannel IpcCreateServer<TRemoteObject>(ref String RefChannelName, WellKnownObjectMode InObjectMode, params WellKnownSidType[] InAllowedClientSIDs) where TRemoteObject : MarshalByRefObject
{
    return IpcCreateServer<TRemoteObject>(ref RefChannelName, InObjectMode, null, InAllowedClientSIDs);
}

// Excerpt from the overload called above (Properties, SecDescr, ipcInterface
// and ChannelName are set up in the omitted part of that method):
BinaryServerFormatterSinkProvider BinaryProv = new BinaryServerFormatterSinkProvider();
BinaryProv.TypeFilterLevel = TypeFilterLevel.Full;
IpcServerChannel Result = new IpcServerChannel(Properties, BinaryProv, SecDescr);
ChannelServices.RegisterChannel(Result, false);
if (ipcInterface == null)
{
    // No existing instance supplied: let remoting activate the well-known type.
    RemotingConfiguration.RegisterWellKnownServiceType(typeof(TRemoteObject), ChannelName, InObjectMode);
}
else
{
    // An existing instance was supplied: marshal it onto the channel.
    RemotingServices.Marshal(ipcInterface, ChannelName);
}
RefChannelName = ChannelName;
return Result;
The client passes a RemoteInfo structure containing the channel name (along with other data) to the native DLL. On injection, the DLL raises a remote event in the loader, and it is there that code execution starts. The most I have achieved is a successful injection, but no remote event is received.
if(!RTL_SUCCESS(NtCreateThreadEx(hProc, (LPTHREAD_START_ROUTINE)RemoteInjectCode, RemoteInfo, FALSE, &hRemoteThread)))
I have no idea how to make this work. I tried with LightweightRpcServer with no success. The error is: "C++ completion routine has returned success but didn't raise the remote event." Remoting works in a completely different way, and it is very difficult for me to figure out how to replicate its behaviour with your libs and others. Since remoting has not been ported to .NET Core, I have to replace it with something else. Do you have an idea? Some directions?
One last thing... There is another great project called PolyMessage. The author is a nice person. I saw that his project has features that yours doesn't, and vice versa. It would be nice if you could cooperate. Joining both projects into one big project would be amazing, but even if you don't feel like that, I am sure it would be of great benefit to both projects if you became collaborators. https://github.com/cvetomir-todorov/PolyMessage Feel free to contact him; he is a very nice person. Thanks
Hi,
I'm sorry for the delay again. I'm really busy working on this project, and several others.
I have not seen performance as slow as two seconds, or even 300 ms. I have not profiled first-time connections for a service interface, but for smaller service interfaces it seems to be around a few ms. I have run a few benchmarks where connections are made through a previously connected service interface. For named pipes the connection time was around 250µs (connect, make a simple RPC call, disconnect); for TCP connections it was around 400-500µs, but the benchmark failed to finish. This is something we need to investigate.
Do you have any additional information about the performance tests you've made? I'm not sure what you mean by "raw data". What are you referring to?
I have published a few benchmark results at the end of this post; see below.
(6) It's easy to extend the protocol so that it supports more than nine parameters, but the request packets are based on predefined RpcRequest types, so we need to have some limit. Adding support for "params" should be possible and not too much work.
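To illustrate the limit (illustrative shape only, not the actual SciTech.Rpc types): each parameter count has its own predefined request type, so every additional arity means another predefined type:

using System.Runtime.Serialization;

// Illustrative shape only.
[DataContract]
public class RpcRequest<T1, T2>
{
    [DataMember(Order = 1)] public T1 Value1;
    [DataMember(Order = 2)] public T2 Value2;
}
// ...and so on up to RpcRequest<T1, ..., T9>. Supporting ten parameters
// would mean adding a predefined RpcRequest<T1, ..., T10>, and so forth.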
I'm not sure how to help you with your EasyHook problem. The error message "C++ completion routine has returned success but didn't raise the remote event." does not seem related to the RPC communication. If you believe this is RPC related, you can maybe open up another issue with more details?
Below you will find a few benchmark results from SciTech.Rpc. The ConnectDisconnect benchmark was commented on above (1).
SingleCall: A single simple RPC service call (Add(int,int)). Waits for the result before making the next call.
SingleCallInline: A single simple RPC service call executed inline (on the same thread as the request reader). Waits for the result before making the next call.
ParallelCalls: Parallel simple RPC service calls. Makes 8 requests on the same connection and then waits for all responses.
ParallelCalls2: Parallel simple RPC service calls. Makes 8 requests on separate connections and then waits for all responses.
The fastest call benchmark is SingleCallInline for the inproc connection. Essentially, this shows the raw performance of the SciTech.Rpc implementation, with no communication overhead.
Making a single RPC call using a lightweight TCP connection takes ~60µs, compared to the "raw" gRPC time of 130/180µs (i.e. about twice as fast).
Making parallel calls using separate connections is also about twice as fast (20µs vs 42/46µs). When using a single connection the performance is about the same, but then all communication is serialized through a single pipe in SciTech.Rpc, which I don't believe is the case for gRPC.
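For reference, the SingleCall benchmark has roughly this BenchmarkDotNet shape (a sketch: the service interface and proxy creation are placeholders, with a local stand-in so the snippet runs on its own):

using BenchmarkDotNet.Attributes;

public interface ISimpleService { int Add(int a, int b); }

public class RpcCallBenchmarks
{
    private ISimpleService client = null!;

    [GlobalSetup]
    public void Setup()
    {
        // In the real benchmark this is an RPC proxy over a Lightweight or
        // gRPC connection; a local stand-in keeps the sketch self-contained.
        client = new LocalService();
    }

    [Benchmark]
    public int SingleCall() => client.Add(2, 3); // wait for result before the next call

    private sealed class LocalService : ISimpleService
    {
        public int Add(int a, int b) => a + b;
    }
}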
BenchmarkDotNet=v0.12.1, OS=Windows 10.0.19041.746 (2004/?/20H1)
Intel Core i7-9700K CPU 3.60GHz (Coffee Lake), 1 CPU, 8 logical and 8 physical cores
.NET Core SDK=5.0.200-preview.20601.7
[Host] : .NET Core 5.0.2 (CoreCLR 5.0.220.61120, CoreFX 5.0.220.61120), X64 RyuJIT
DefaultJob : .NET Core 5.0.2 (CoreCLR 5.0.220.61120, CoreFX 5.0.220.61120), X64 RyuJIT
Method | ConnectionType | Mean | Error | StdDev |
---|---|---|---|---|
ConnectDisconnect | LightweightTcp | NA | NA | NA |
ConnectDisconnect | LightweightNamedPipe | 254.5 μs | 2.13 μs | 1.66 μs |
Method | ConnectionType | Mean | Error | StdDev |
---|---|---|---|---|
SingleCallInline | LightweightInproc | 5.310 μs | 0.0311 μs | 0.0291 μs |
SingleCall | LightweightInproc | 7.992 μs | 0.1318 μs | 0.1233 μs |
ParallelCallsInline2 | LightweightInproc | 5.466 μs | 0.0242 μs | 0.0215 μs |
SingleCallInline | LightweightTcp | 54.195 μs | 0.4513 μs | 0.3523 μs |
SingleCall | LightweightTcp | 63.843 μs | 1.2266 μs | 1.5512 μs |
ParallelCalls | LightweightTcp | 46.946 μs | 1.2012 μs | 3.4850 μs |
ParallelCalls2 | LightweightTcp | 20.133 μs | 0.3338 μs | 0.3123 μs |
ParallelCallsInline2 | LightweightTcp | 19.659 μs | 0.1251 μs | 0.1109 μs |
SingleCallInline | LightweightNamedPipe | 58.703 μs | 1.1615 μs | 1.7025 μs |
SingleCall | LightweightNamedPipe | 60.795 μs | 1.1967 μs | 1.6381 μs |
ParallelCalls | LightweightNamedPipe | 27.645 μs | 0.5453 μs | 0.7821 μs |
ParallelCalls2 | LightweightNamedPipe | 18.525 μs | 0.4301 μs | 1.2680 μs |
ParallelCallsInline2 | LightweightNamedPipe | 18.332 μs | 0.3646 μs | 0.9981 μs |
SingleCallInline | Grpc | 136.433 μs | 1.1560 μs | 1.0248 μs |
SingleCall | Grpc | 136.332 μs | 0.9791 μs | 0.8679 μs |
ParallelCalls | Grpc | 48.769 μs | 0.9566 μs | 1.0236 μs |
ParallelCalls2 | Grpc | 48.368 μs | 0.9604 μs | 1.1061 μs |
ParallelCallsInline2 | Grpc | 48.327 μs | 0.9192 μs | 1.3758 μs |
SingleCallInline | NetGrpc | 189.671 μs | 1.9388 μs | 1.8135 μs |
SingleCall | NetGrpc | 193.009 μs | 2.4925 μs | 2.3315 μs |
ParallelCalls | NetGrpc | 46.735 μs | 1.2360 μs | 3.6445 μs |
ParallelCalls2 | NetGrpc | 46.901 μs | 1.3396 μs | 3.9287 μs |
ParallelCallsInline2 | NetGrpc | 45.229 μs | 1.4331 μs | 4.1805 μs |
SingleCall: A single simple RPC service call (Add(int,int)). Waits for the result before making the next call.
ParallelCalls: Parallel simple RPC service calls. Makes 8 requests on the same connection and then waits for all responses. I suspect the gRPC implementation may use separate HTTP/2 connections, even if only one gRPC connection is created.
Method | ConnectionType | Mean | Error | StdDev |
---|---|---|---|---|
ParallelCalls | Grpc | 46.71 μs | 0.920 μs | 1.348 μs |
SingleCall | Grpc | 129.23 μs | 1.196 μs | 0.999 μs |
ParallelCalls | NetGrpc | 42.27 μs | 1.297 μs | 3.722 μs |
SingleCall | NetGrpc | 179.38 μs | 1.518 μs | 1.345 μs |
Hello friend, do not worry! I am also involved in other projects and have not found the time to continue testing your libraries, but I can tell you that I am very satisfied with what I have seen. Regarding the performance issue, I have not gone into complicated tests and comparisons because what my own eyes saw was enough to understand the difference. I am always talking about quick connections of the "return the result of a function and close" type. Under heavy pressure I assume there probably wouldn't be a noticeable difference.
The results I see in your benchmarks are amazing, especially the ones from InProc. I am completely new to this technology, but I can't wait to try it. I hope that in the release you will be able to update the repo with an (easy) sample. I am also surprised that LightweightTcp performs similarly to LightweightNamedPipe. Are you using raw memory-mapped files for LightweightNamedPipe, or the .NET classes, which still use MMF to exchange data, except that it is hidden inside the kernel? What I have read is that raw memory-mapped files are very, very fast and named pipes can't even compare. You may find the Interprocess library interesting. It is a cross-platform shared-memory queue developed by folks at Microsoft and used internally by Microsoft. Sending and receiving messages is almost free of heap allocations, reducing garbage collections. They spent a lot of time optimizing it. Maybe this lib could become a new powerful weapon to add to your arsenal?
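For anyone unfamiliar, this is the kind of raw access the built-in .NET memory-mapped file classes give you (a minimal illustration, unrelated to either library; named maps like this are Windows-only):

using System;
using System.IO.MemoryMappedFiles;

using var mmf = MemoryMappedFile.CreateNew("demo-map", 1024);
using var accessor = mmf.CreateViewAccessor();
accessor.Write(0, 42);              // one side writes an int at offset 0
int value = accessor.ReadInt32(0);  // the other side (or another process) reads it
Console.WriteLine(value);           // 42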
Anyway, the benchmarks seem good to me. Are you satisfied with the results, or do you want to continue improving?
By raw data I mean exchanging raw bytes without interfaces or serializers, just plain bytes. For example, there could be a situation where the client only needs to pass a byte representing a particular number and get back another byte as a response. That kind of communication should be much faster.
Regarding the nine-parameter limit: honestly, the need to pass so many parameters is very rare, but this time I had it. If you could add a couple more without breaking anything it would be nice; otherwise it doesn't matter. "params" is also something I see very rarely, but strangely this time I had to deal with it. Nothing I couldn't resolve; it's just an annoyance. If one day you feel like improving this it would be great, but no one will really care if you leave it as is.
For EasyHook, do not worry. I played with it for several days until I realized it was too tied to remoting and still uses the old Framework. I had to find something else and now I am busy with that. As soon as I complete the project I will implement your library, but I would much prefer to see whether inproc can be even better.
I prepared a sample for you. It compares your LightweightNamedPipe with GrpcDotNetNamedPipes. SciTechTest.zip
On cold start GrpcDotNetNamedPipes is three times faster.
On second start GrpcDotNetNamedPipes is more than two times faster. On subsequent runs the differences are barely noticeable.
Did you finish the tool that autogenerates the assemblies and removes the dynamic code? I am curious to see whether they would then perform equally (or maybe even better than GrpcDotNetNamedPipes!).
You said that interfaces can slow things down, and I must admit that I am a little worried because I didn't expect this.
Thanks for your help!
Finally I found the time to take a look at your test project.
You are correct that GrpcDotNetNamedPipes is faster in your test, even though I got very different numbers, around 200ms for GrpcDotNetNamedPipes and 400ms for the SciTech.Rpc implementation (see below).
However, your test includes process startup times, and thus dependency loading, which will differ between the implementations. This is not something we are (currently) optimizing for.
Starting GrpcDotNetNamedPipesServer server...
GrpcDotNetNamedPipe server listening...
I am GrpcDotNetNamedPipes server!
Elapsed time (ms): 188.8635
Starting SciTechPipe server...
Server listening...
I am SciTech server!
Elapsed time (ms): 395.7777
Starting GrpcDotNetNamedPipesServer server...
GrpcDotNetNamedPipe server listening...
I am GrpcDotNetNamedPipes server!
Elapsed time (ms): 135.0431
Starting SciTechPipe server...
Server listening...
I am SciTech server!
Elapsed time (ms): 327.425
If I change the test so that the server is pre-started, and then run the connect/disconnect a few times, the timing looks different (a sketch of the timing loop follows after the output):
I am GrpcDotNetNamedPipes server!
Elapsed time (ms): 44.9794
I am GrpcDotNetNamedPipes server!
Elapsed time (ms): 0.7886
I am GrpcDotNetNamedPipes server!
Elapsed time (ms): 0.4919
I am GrpcDotNetNamedPipes server!
Elapsed time (ms): 0.4253
I am GrpcDotNetNamedPipes server!
Elapsed time (ms): 0.3892
I am GrpcDotNetNamedPipes server!
Elapsed time (ms): 0.2661
I am GrpcDotNetNamedPipes server!
Elapsed time (ms): 0.2735
I am GrpcDotNetNamedPipes server!
Elapsed time (ms): 0.2563
I am GrpcDotNetNamedPipes server!
Elapsed time (ms): 0.2345
I am GrpcDotNetNamedPipes server!
Elapsed time (ms): 0.262
I am SciTech server!
Elapsed time (ms): 143.2847
I am SciTech server!
Elapsed time (ms): 5.454
I am SciTech server!
Elapsed time (ms): 2.5751
I am SciTech server!
Elapsed time (ms): 1.8409
I am SciTech server!
Elapsed time (ms): 1.1496
I am SciTech server!
Elapsed time (ms): 1.1735
I am SciTech server!
Elapsed time (ms): 1.113
I am SciTech server!
Elapsed time (ms): 1.0487
I am SciTech server!
Elapsed time (ms): 0.9625
I am SciTech server!
Elapsed time (ms): 1.0084
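The timing loop above is roughly of this shape (a sketch with hypothetical helper names, not the actual test project code):

using System;
using System.Diagnostics;

// Sketch only: CreateNamedPipeConnection and IGreeterService stand in for
// the test project's actual setup; the server is already running.
for (int i = 0; i < 10; i++)
{
    var sw = Stopwatch.StartNew();
    using (var connection = CreateNamedPipeConnection())
    {
        var service = connection.GetServiceSingleton<IGreeterService>();
        Console.WriteLine(service.SayHello());  // "I am ... server!"
    }
    sw.Stop();
    Console.WriteLine($"Elapsed time (ms): {sw.Elapsed.TotalMilliseconds}");
}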
As can be seen, the elapsed time settles around 1 ms for SciTech.Rpc and around 0.25 ms for GrpcDotNetNamedPipes. 1 ms is slower than the benchmarks I've run (see previous comment), which I cannot explain at the moment (I will need to investigate further). However, the named pipe connection in SciTech.Rpc is similar to a WCF named pipe connection and is probably more complex than the GrpcDotNetNamedPipes connection. The SciTech implementation can probably be improved, but it's not a high-priority issue.
I have planned to implement a memory-mapped file transport as well, so I will definitely take a look at the "Interprocess" library for this. Thanks for bringing it to my attention.
I'm not sure why you would want "raw data" communication when using an RPC library. The whole point of an RPC library is to add an abstraction layer above the raw communication. If you want raw communication, you can use sockets or streams directly. This might allow you to reduce communication overhead, but then you will need to implement some other type of communication protocol (which may again add some overhead). The benchmarks in the previous comment indicate that the overhead is about 5µs for a simple call (serializing/queuing/dequeuing/deserializing). When using a TCP or named pipe connection, the call takes about 60µs, so the overhead is < 10%. When using gRPC the call takes about 130-170µs, so the overhead is less than 5%. So I definitely don't think raw communication will be much faster.
Pre-generating assemblies will probably improve startup time, but it comes with a lot of difficulties. First of all, .NET Core/.NET 5.0 does not support emitting IL to a file, so the code generation will need to use another technology when pre-generating (e.g. Roslyn). Furthermore, for fully pre-generated assemblies, the serializer may also need to be pre-generated, e.g. protobuf-net. I'm not sure about the current state of pre-generation support in protobuf-net. Anyway, this is a feature that I would like to implement if possible, but it will not be included in the v1.0 release.
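For the curious, the Roslyn route looks roughly like this: compile generated C# source to a DLL on disk, since Reflection.Emit cannot save assemblies on .NET Core (a sketch; the generated source here is a trivial placeholder):

using System;
using System.IO;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

var tree = CSharpSyntaxTree.ParseText("public class GeneratedProxy { }");
var compilation = CSharpCompilation.Create(
    "Generated.Proxies",
    new[] { tree },
    new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
    new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

// Unlike Reflection.Emit on .NET Core, this can persist the assembly to disk.
using var peStream = File.Create("Generated.Proxies.dll");
var result = compilation.Emit(peStream);
Console.WriteLine(result.Success);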
Best regards,
Andreas Suurkuusk
SciTech Software AB
Hello my dear friend, I am glad to see you again and to bring my little contribution to this wonderful project. I believe the lower numbers you get are most probably due to a faster processor than mine, but it seems to me that the proportions remain the same. Your new results surprise me. I was expecting reduced times due to the good things the runtime does when it sees heavy usage, but I was not expecting the difference between the implementations to become even more noticeable.
Thanks for your info on raw communication. Currently I don't need that kind of communication, but I was wondering what the right way would be if one day I just needed to run a server, get a boolean value from it, and close. For this reason I was asking whether it was possible to expose the raw bytes and bypass all the layers. From what you say, I'm much better off just exposing an interface containing the required function.
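Something like this, I suppose (illustrative names; [RpcService] is SciTech.Rpc's service attribute):

// Illustrative only: a tiny service for the "fetch one boolean and close" case.
[RpcService]
public interface IProbeService
{
    bool GetStatus();
}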
Uahh, it seems more difficult than I thought. Did you try taking a look at MonoMod? I'm not sure how much stuff you can do with it, but it may be worth a look. You may also want to look at the source code of grpc-dotnet-namedpipes; it uses a custom wire protocol under the hood, and maybe you can find something interesting there.
Dynamic code generation surely plays a role. I'm not sure how big it is, but I have experienced such delays in all the libraries implemented this way, and I can see the delay happening directly in VS as soon as I see the loading of "Anonymously Hosted DynamicMethods Assembly".
How much stuff does this little guy do? Ah ah
Yes, I also think that protobuf may do some sort of pre-generation. A while ago I retried gRPC over UnixDomainSocket and was surprised at how fast it was. I wanted to include it in the sample to compare against GrpcDotNetNamedPipes and the Lightweight server, but I ran into this issue and was forced to abort (I could not find a way to reduce the waiting time without running into severe exceptions, and without the connection being established immediately I can't measure elapsed time reliably; this time my eyes are not enough to judge). If you're interested, you can find there a sample project which also contains my failed attempts to reduce the polling time to nearly zero. I don't have the required skills to make this work, but you do, and maybe we will be surprised to see that the winner is another player.
However, even if it is faster, I am afraid that the memory usage will increase a lot, and this would defeat the main purpose of LightweightRpcServer.
Keep me informed of your discoveries, and let me know if I can help in some way.
Thanks
I just want to post this quick update. You may want to consider compiling the .NET 5 version of the libs with NativeAOT. I started experimenting with it only a couple of days ago, and what I saw is so amazing that I decided to convert my projects. Basically, it compiles C# into a native assembly, and performance, reduced memory, etc. are on par with native code. This was something exclusive to UWP (the day I am forced to develop for UWP is the day I change OS!), but now .NET 5 has opened the doors to the rest of the world. It takes time to get used to, but it's really worth it. It compiles everything into a single assembly, but you can play with assembly trimming and many other things (I have been able to obtain an assembly size nearly the same as the original non-AOT .NET 5 version, and I could have gone even further). Of course, this will not let us win if GrpcDotNetNamedPipes gets compiled the same way. We still need to understand why it is so fast (4 times faster) and, as you suspect, it may have to do with pre-generation, but I also suspect the author has done something else to the protocol. What exactly, I really don't know, and I can't discover it just by looking at the source code because I don't have such deep knowledge. You talked about the issues you may have with IL Emit. You may want to take a look at this article to see NativeAOT in more detail and how it could help you (the article is two years old and still refers to the old archived repo, but it still contains interesting info).
Also, looking at your published NuGet packages, I saw that you publish .NET Framework and .NET Standard versions (along with the old .NET Core). I'm not sure if this also applies to .NET Core, but if I were you I would not lose any more time and effort developing for those frameworks; the countdown to their death started a long time ago. In addition, I am not sure whether the .NET 5 versions of the SciTech packages are somehow tied to those packages (or have some dependency on them). If so, then I believe you should remove any dependency and treat the packages as completely separate frameworks rather than trying to reuse components. Apart from other troubles, any such dependency may cause two runtimes to be loaded at the same time.
I would like to retry the sample compiled this way, but I suspect that your library will also need to be compiled the same way (if not, I suspect it may end up running in its own managed runtime). So, better to wait for your news and see what you think. Thanks
P.S. In some test projects I am currently debating whether I should run most of the code directly from the unmanaged assemblies or through C# wrappers. Then I realized that if I go native I simply can't use SciTech for IPC, but with NativeAOT this would be possible and even easy. In fact, functions can be exported with a simple decoration attribute, and they become available to the unmanaged world. It's just a matter of calling LoadLibrary and casting the exported function (not only does it work fine, but I also experienced much quicker access times than P/Invoking the same function from C#, although I must say that I haven't done any optimization of the P/Invoke calls). Interfaces are also supported, but I am not sure if there is something else you would have to deal with for a complex framework like yours. There are some nice samples in their repos, and you will find other sample projects in the issues I opened with them (they will save you time, since it is likely you will face some of the problems I had at the beginning).
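The decoration attribute I mean is UnmanagedCallersOnly. A minimal example of exporting a function from a NativeAOT-compiled library (the method and entry-point names here are mine):

using System.Runtime.InteropServices;

public static class NativeExports
{
    // With NativeAOT, this becomes an exported native symbol that unmanaged
    // code can resolve via LoadLibrary/GetProcAddress and call directly.
    [UnmanagedCallersOnly(EntryPoint = "add_numbers")]
    public static int AddNumbers(int a, int b) => a + b;
}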
Hello, I want to congratulate you on this ingenious project. I have some questions...
I am wondering why I am the first one to post on such a wonderful project. I am afraid it is not well known because it does not appear in GitHub search results. Days ago I scanned all the GH search results to find similar libraries, and yours didn't come up. I discovered it on VS NuGet, and just by accident. It's a shame; a project like this should appear at the top of the search results! I am tempted to stop researching other frameworks and stick with yours, but I think it is better to talk with the author first, to be sure that mine is an informed choice.