ni / grpc-labview

gRPC client and server support for LabVIEW
MIT License

User Workflow of just using the Library #208

Open drjdpowell opened 1 year ago

drjdpowell commented 1 year ago

In #203 , @gregr-ni wrote:

Our intended workflow is to start with a proto file and generate the client code. Isn't all the code you describe generated in this case? If so, then this seems invisible to end users and at most it is tech debt that simplifies the generated code. What is the end user workflow where this is important? If there is a workflow that you think is underserved, then that should be the issue. This allows us to prioritize based on the workflow and consider all solutions that might improve that workflow.

I believe there is a strong risk that your intended workflow will not serve a large proportion of Users, especially experienced Users, for the following reasons:

1) It is very hard to get people to trust a scripting black-box tool. No experienced professional is going to risk potential project failure on your tool; they are going to have to be confident they have a fallback option of fixing issues manually. That means they are going to look at the generated code to see if they understand it well enough to hack it if needed. That means it will not be "invisible to end users", at least until you have demonstrated that your tool can be 100% trusted.

2) Your scripting tool doesn't script up an architecture that people want, because it can't. Actor Framework people want an AF Actor that is a gRPC Server, not your generated design. DQMH people want a DQMH Module. I want a Messenger Library Server. The guy with a home-brew design that has worked well for him for 10 years wants that home-brew design. People are going to have to figure out how to adapt your Server design into their own ways of working. That is expensive, and it has the potential to break the ability of your scripting tool to update to a new Proto file definition.

To support the workflow of people who want to add gRPC to their existing architectures, I propose that effort be spent making the API library clean and easy to use (and understand). I can help you with that. I believe you can make a library that is easier to use in an existing architecture than adapting your generated code would be. This will also support the original workflow, as its scripting tools will be simpler, and the generated code will be more readable and clean, and something Users will have confidence they could fix or extend should there be a problem with the scripting tool (addressing that problem of trust).

AB#2273294

neilpate commented 1 year ago

I do agree with this comment. Being new to gRPC, it took me quite a bit of puzzling to unpack the LabVIEW code that was auto-generated for the simple Greeter example. Eventually I stripped everything down to just the gRPC wrapper VIs and it became relatively understandable.
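For comparison, the same Greeter call in the underlying C++ gRPC library is only a handful of lines once the generated stubs exist. A minimal sketch, using the names from the official helloworld example (shown purely as a reference point for how small the essential surface is, not as anything this LabVIEW library exposes):

```cpp
#include <grpcpp/grpcpp.h>
#include "helloworld.grpc.pb.h"  // generated from helloworld.proto

int main() {
  // Channel plus generated stub: everything else comes from codegen.
  auto channel = grpc::CreateChannel("localhost:50051",
                                     grpc::InsecureChannelCredentials());
  auto stub = helloworld::Greeter::NewStub(channel);

  helloworld::HelloRequest request;
  request.set_name("LabVIEW");
  helloworld::HelloReply reply;
  grpc::ClientContext context;

  // Blocking unary call; the generated SayHello wraps all the wire details.
  grpc::Status status = stub->SayHello(&context, request, &reply);
  return status.ok() ? 0 : 1;
}
```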

The code generated by the scripting tool is going to be of very little use to me; I would prefer effort to be spent making the lowest layer really robust and nice to use.

AlbertGeven commented 1 year ago

I do not use any scripting tool if an alternative library handles the low-level details. Don't waste any more time on this; spend your money/time on developing a nice API.

drjdpowell commented 1 year ago

I've taken the liberty of trying to get input from other LabVIEW experts, both on LAVA and in the Champions private forum. This has helped me organize my thoughts enough to try to get my point across in a better way than in all these issues I have been filing. Here is what I see as the different parts of your gRPC project, and my opinion on each of them:

1) Using an existing C++ gRPC library underneath: Good, I like it.
2) Wrapping the shared library with Message datatypes as Clusters and Malleable VIs to pass the strictly typed clusters: Love it. Really good.
3) Scripting up Message Cluster Typedefs from the Proto file: Good.
4) Scripting up the defining of Messages and Procedures: Great idea, but really awkward execution.
5) Scripting up an entire architecture to be the Server: This is awful. Very poor readability and far too much boilerplate obscuring everything. Partly this is because of the problems at (4).

Basically, I want you to improve (2) so you can use it to fix the issues of (4), and put aside (5) until the rest of the issues are better developed.

drjdpowell commented 1 year ago

Here is a proof-of-principle project (attached; requires only gRPC Library version 0.5.2.1). The project: [Screenshot: JDP gRPC.lvproj in Project Explorer]

Here I am trying to address issue (4), in that the entire Proto definition is translated into a single VI, "gRPC Proto.vi", that is dropped by the User into their Client or Server design (the same VI in both clients and servers). There is no issue-(5) scripted architecture; instead it is easy for the User to create their own architecture (including using existing packages like DQMH or Actor Framework). [Screenshot: Client.vi block diagram, JDP gRPC.lvproj / My Computer]

JDP gRPC 2.zip

drjdpowell commented 1 year ago

Last comments before I forget everything. This is on (5), an eventual scripted server.

I would expect such a server to inject User code via some kind of OOP extensibility (which I think is how most languages' gRPC servers work). So I would expect to see, in such a solution, a scripted server that I don't need to modify, plus a parent class that implements the methods (which I override in a child class). Like this: [Screenshot: JDP gRPC.lvproj in Project Explorer]

The Dynamic Dispatch methods would have Message Typedefs as input and output: [Screenshot: Route Guide Service Methods.lvclass:GetFeature.vi front panel]

This would be quite good for a synchronous server, I think. Could work with your limited-async design possibly, but note that this would not be good enough for a true async interface to an actor-oriented messaging framework like Messenger Library, Actor Framework, or DQMH.
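For reference, this is the override pattern that generated gRPC servers use in most text-based languages. A minimal C++ sketch using the names from the canonical RouteGuide example (not this library's API): the generated scaffolding never needs editing, and user logic lives only in the overridden methods.

```cpp
#include <grpcpp/grpcpp.h>
#include "route_guide.grpc.pb.h"  // generated from route_guide.proto

// The generator emits RouteGuide::Service with virtual methods;
// user code overrides only the RPCs it actually implements.
class RouteGuideImpl final : public routeguide::RouteGuide::Service {
  grpc::Status GetFeature(grpc::ServerContext* context,
                          const routeguide::Point* point,
                          routeguide::Feature* feature) override {
    feature->set_name("example feature");        // user logic lives here
    feature->mutable_location()->CopyFrom(*point);
    return grpc::Status::OK;
  }
};

int main() {
  RouteGuideImpl service;
  grpc::ServerBuilder builder;                    // generated/scripted scaffolding
  builder.AddListeningPort("0.0.0.0:50051",
                           grpc::InsecureServerCredentials());
  builder.RegisterService(&service);
  builder.BuildAndStart()->Wait();                // the server itself needs no edits
}
```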

drjdpowell commented 1 year ago

Minor Comments:

JamesMc86 commented 1 year ago

I would expect such a server to inject User code via some kind of OOP extensibility (which I think is how most languages' gRPC servers work). So I would expect to see, in such a solution, a scripted server that I don't need to modify, plus a parent class that implements the methods (which I override in a child class).

Yes, this would be my preferred workflow, as it lets the library focus on server structure and handling and neatly decouples the gRPC library from the user code.

drjdpowell commented 1 year ago

Any thoughts from NI on this?

gregr-ni commented 1 year ago

While I agree with most of the points brought up in this thread, I don't feel an API-first approach is in line with the guiding principles of gRPC. gRPC was designed from the start around the concept of code generation to multiple languages from a single definition. I expect this to continue to be the top priority. However, that doesn't mean we will ignore the API.

This also aligns with a general philosophy that while the main use cases must be accessible to all users, it is frequently acceptable to make advanced use cases possible for advanced users. Advanced users are more capable of dealing with more complex workflows. I want to see gRPC as the dominant mechanism for interprocess communication. That allows us to focus our investment in one thing but it puts a premium on usability for whatever is considered the main use cases.

drjdpowell commented 1 year ago

In that case, your main current problem is that the thing you deliver for the main server use case is quite poor and is very difficult for a User to effectively understand and use. So you need to rethink the mechanism for how the User is going to interact with the scripted code. It could be the OOP method I described above or some other method of cleanly separating the User code from the scripted server. You might want to get @JamesMc86 involved, as he is particularly interested in that and (I believe) has some gRPC experience in other languages.

AndrewHeim commented 11 months ago

In general, I'm an advocate for everything else @drjdpowell has to say in this thread and similar issues here. But on the question of forcing actors or any other architecture, I would like to advocate for RT as a use case of equal weight. The current implementation (user events) is already subject to jitter problems, like #235. It's easy to say that TCP already dynamically allocates memory, but there's still a big difference between having one layer of this and having several.

I would much prefer a Queue to user events. We can choose what architecture to use to wrap it, as well as have more control over the queue itself (as opposed to event queues, which barely have the basic functionality needed to interact with them as a queue).

Whatever the implementation, this is a plea for keeping the intervening layers as fast and light as possible. I'm fine rewriting boilerplate servers or stripping out a layer of higher-level code, but there needs to be access to something fast and light. Otherwise all of this work on gRPC gets wasted when it's not good enough for use on RT. We could spend an extra few percent making this library great for that use case too, or we have to throw it out altogether (which has happened far too much in the past).

JamesMc86 commented 11 months ago

I agree that RT should be a first-class use case for this, but I think more evidence would be needed to justify such a fundamental change of approach away from user events.

There are many potential sources of jitter in a gRPC/HTTP stack running in a standard-priority task on LinuxRT, and I'm not aware of any reason that events are worse than queues for jitter.

As you mention, you get a richer API with queues, and perhaps that is worth the change. But I doubt you will see any change in those jitter tests by converting to a queue. gRPC is always going to have some jitter, especially with multiple clients. Looking at the test code, there are other changes that are more likely to show improvements, like running the handler as a re-entrant VI in a higher-priority execution system, which would reduce jitter from the handler.

I suspect the reason user events are used is because there is a C API for them but there isn't a public C API for queues.
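For context, a rough sketch of that C API. PostLVUserEvent and LVUserEventRef come from LabVIEW's extcode.h (shipped with CINTOOLS); the payload struct and function below are purely illustrative, not what grpc-labview actually passes, and there is no comparable public C entry point for LabVIEW queues:

```cpp
// Illustrative only: how a C/C++ shared library can signal a LabVIEW
// diagram through a user event refnum. Verify names and types against
// your LabVIEW version's extcode.h.
#include "extcode.h"

struct RpcNotification {
  int32_t callId;   // hypothetical payload; must match the user event's typedef
};

MgErr NotifyDiagram(LVUserEventRef eventRef, int32_t callId) {
  RpcNotification data = {callId};
  // PostLVUserEvent copies the data into every Event Structure
  // registered for this user event refnum.
  return PostLVUserEvent(eventRef, &data);
}
```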

AndrewHeim commented 11 months ago

I suspect the reason user events are used is because there is a C API for them but there isn't a public C API for queues.

Ah - I hadn't thought about that, but that makes sense. Events are often a preferred style in C.

In LabVIEW, events are adequate as long as messages are sparse. But as soon as they cease being sparse enough, the tooling starts to become inadequate compared to Queues. But enough about that.

Jitter happens in RT code, and I get that. Critical code will be off in another loop; this just has to be good enough to stay responsive. Yes, there are things that could be done in that example, but again, this shouldn't need to be the process of highest concern. It's probably unreasonable to design something expecting turnaround times in the sub-10 ms range in such situations.

And yet, seeing jumps of 6 ms on an 8-core PXI-8881 (using loopback and a dead-simple server reply, no less) is enough to be concerning as a bellwether. There's simply not much going on there; how bad will it get when there is a lot going on (and a time-critical thread too)? But like you said, there might not be many other meaningful options to consider as alternatives.