SharePoint / PnP-JS-Core

Code moved to https://github.com/pnp/pnpjs. This repository is archived.

Graph Support #396

Closed: patrick-rodgers closed this issue 7 years ago

patrick-rodgers commented 7 years ago

Category

[X] Enhancement

[ ] Bug

[ ] Question

Expected / Desired Behavior / Question

This is a thread to discuss the idea of supporting MS Graph in sp-pnp-js. This was always planned (we have pnp.sp and would have room for pnp.graph). So is there interest in helping develop this functionality?

Areas for discussion:

- What support does SPFx have and how can we leverage it for getting a graph auth context - how do we handle this outside of SPFx?
- Do we want to try and write a generator to take in the file https://graph.microsoft.com/v1.0/$metadata and create the API - explore the idea of class/namespace merging in TypeScript, similar to partial classes in C#.

russgove commented 7 years ago

are there plans for spfx to expose any graph hooks?


joelfmrodrigues commented 7 years ago

The need for the library is definitely real, and can only increase as deeper integration between apps is required. Question: Can we rely 200% on the metadata file? I assume so, but...

KEMiCZA commented 7 years ago

Great idea. I used this project https://github.com/microsoftgraph/msgraph-sdk-javascript last week. You have to specify the method yourself as a string, but other OData methods (like filter) are built in.

SPDoctor commented 7 years ago

I talked to Vesa about this in Helsinki, and my thinking was that it would be nice to wrap the MS Graph using the same approach as PnP-JS-Core, which is a "by design" API. The problem with APIs generated from metadata (and these already exist, presumably autogenerated using VIPR) is that you may not get an optimal API design, so perhaps okay for a first pass only.

I also think this could give us an opportunity to shim out anything that is missing from MS Graph by calling the SharePoint REST API, for example, until such time as MS Graph supports it. That would give us an API that supports future MS Graph functionality without requiring code changes - it would continue to work, but you would potentially get fewer auth prompts as the MS Graph surface increases.

patrick-rodgers commented 7 years ago

@russgove - yes, but no public details available currently

@SPDoctor , @KEMiCZA , @joelfmrodrigues - agree with all these comments. If we did have a generator I envision that it would create some version of partial classes that we would then augment with non-generated code. So get the objects, basic properties, and relationships from the metadata - and then "wrap" or "combine" that with code we have written to provide a lot of the operations and value.
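The "partial classes" idea above can be sketched with TypeScript's declaration merging, where a hand-written interface with the same name merges into a generated class. Everything below is hypothetical, invented for illustration - none of it is real generator output or pnp API surface:

```typescript
// Sketch only: "Group" stands in for a class a generator might emit
// from $metadata; the merged interface is the hand-written half.

// --- generated: objects, basic properties, relationships ---
class Group {
  constructor(public id: string, public displayName: string) {}
}

// --- hand-written: an interface with the same name merges into the
// class declaration, the closest TypeScript gets to C# partial classes
interface Group {
  describe(): string;
}

Group.prototype.describe = function (this: Group): string {
  return `${this.displayName} (${this.id})`;
};

const g = new Group("42", "Marketing");
console.log(g.describe()); // "Marketing (42)"
```

The generated half could then be regenerated from fresh metadata without disturbing the hand-written operations.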

The graph API will never 100% contain the functionality for SharePoint so we'll just have to decide as we go where we add SP graph functionality or just point to the native API. I'd like to avoid supporting two ways to do everything.

As I said in the call I think we should wait a bit to start development on this - but I might have a pass at a generator just to see how hard that would be. Unsure when I'll have time, so if someone wants to have a go, please do.

johnnliu commented 7 years ago

I feel the Graph team will build their own API, which means this work may not get the contributions it could benefit from.

https://techcommunity.microsoft.com/t5/Tech-Summit-All-Sessions/BRK3113-Dive-into-the-power-of-the-Microsoft-Graph-API/m-p/20058#M43

Slide 20 - looks like C# with the await syntax, but I can easily see it working as TypeScript.

SPDoctor commented 7 years ago

I think they already do, but these are generated so not as easy to use. I like the way Patrick has designed PnP-JS-Core. Also I think Graph team will be supportive.

VelinGeorgiev commented 7 years ago

Agree with @SPDoctor, some previous experience shows that the js SDKs created by some of the Microsoft teams are too generic and definitely not as fluent as pnp js is.

I also think we should wait until we have a stable SharePoint Graph release. I would like to see some solid stuff related to SharePoint in the Graph (not just a few endpoints) and then give it a go.

patrick-rodgers commented 7 years ago

Talked to some folks today and the way to do this "correctly" will be to use the graph http client once it is available in SharePoint Framework. It will handle all the plumbing to correctly get the needed tokens - and that plumbing might change, so not great for us to try and write and maintain our own thing. But this means these features will only work from inside the SharePoint Framework.

I have to agree it doesn't make sense to duplicate any work here. Not working outside of SharePoint Framework is OK. Graph doesn't have a hybrid story yet, and as things evolve likely most SP Dev will be done within the SPFx, though this is a long tail.

I do want to see what we can do around support for node, but it would require registering the node app as an AAD app. Not a big deal, but a thing we can't automate (or can we...?)

Anyway, a few more thoughts to throw in the mix.

SPDoctor commented 7 years ago

Yet more thoughts: Dan Silver has written an MS Graph JavaScript wrapper library. I think he has done a great job, but it is really just an abstraction of REST - you still have to supply the REST path as a string so it doesn't give you a contract in the same way as PnP-JS-Core.
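The contrast being drawn - REST path as a string versus a typed contract - can be mocked up without either real library. Both clients below are local stand-ins invented for illustration, not the actual SDKs:

```typescript
// String-path style (as in the MS Graph JS wrapper): the caller
// supplies the REST path, so a typo only surfaces at runtime.
function api(path: string): { get(): string } {
  return { get: () => `GET ${path}` };
}

// Contract style (as in PnP-JS-Core): the path is built from typed,
// chainable members, so the compiler rejects a misspelled resource.
class GraphMock {
  get groups() {
    return { get: () => "GET /groups" };
  }
}

console.log(api("/groups").get());         // "GET /groups"
console.log(new GraphMock().groups.get()); // "GET /groups"
// api("/gruops") compiles fine; new GraphMock().gruops does not
```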

The Vipr API generation tool has a writer for C# and a template-based writer that is used to support Java and Objective-C. The best option would be to create a TypeScript writer, but this looks like quite a big task, and in terms of documentation Vipr has the feel of a project written to solve a specific problem rather than as a general-purpose tool. There isn't much evidence of active development - I suspect the authors have moved on to other projects at Microsoft.

I think it makes sense to use the metadata from MS Graph because this will be higher quality and more consistent than the metadata from underlying APIs (I mean underlying MS Graph). But it is probably easier to create our own tooling, possibly using OpenAPI, that will generate a first rough cast of the API and then do manual polishing as @patrick-rodgers suggests.
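A "first rough cast" generator might look like this in miniature: scan a $metadata (EDMX) fragment for entity names and properties and emit class stubs for hand-polishing. The regex scanning and the all-string typing are deliberate simplifications; a real tool would use a proper EDMX/OpenAPI parser and map the Edm.* types:

```typescript
// A toy $metadata fragment (real Graph metadata is far larger).
const metadata = `
  <EntityType Name="group">
    <Property Name="id" Type="Edm.String"/>
    <Property Name="displayName" Type="Edm.String"/>
  </EntityType>`;

// Emit one class stub per EntityType; for brevity every property
// becomes a string field here.
function emitStub(edmx: string): string {
  const name = /EntityType Name="(\w+)"/.exec(edmx)![1];
  const props: string[] = [];
  const re = /Property Name="(\w+)"/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(edmx)) !== null) {
    props.push(`  ${m[1]}: string;`);
  }
  return `export class ${name} {\n${props.join("\n")}\n}`;
}

console.log(emitStub(metadata));
```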

One thing is clear - the proposed tool will need to be called Serialization Protocol for Normalized Transcoding, abbreviated to 'Serpnt' ;-)

phillipharding commented 7 years ago

Not sure that an API/Interface generated by a metadata tool adds much to the developer experience, except that you'd get to use a tool with the coolest name ever; Serpnt :-)

I've written a number of wrappers (C#) across many of the Graph surfaces, which attempted to abstract away the REST'ness of the underlying API and present an abstract data model, Videos, Groups, Messages etc.

I think the PnP JS fluent 'model' as you've done it can add an enormous amount to the dev experience, and the advantage of doing it against the Graph is that, to a large extent, the underlying base data model for the Graph surfaces is known. Dealing with some of the trickier parts of the API can be difficult, particularly with Groups and its sub-surfaces (Messages, Drive, Conversations etc) where the URL addressing gets, shall we say, "verbose"; the fluent style interface can help a lot.

Dealing with data extensions is also tricky; these pain points and others like them are, I think, where PnP-Graph could add great value.

How far PnP-Graph should go to "aid" the developer in gaining access tokens I'm not sure; you run the risk of duplicating effort with other libraries. Though given that there are probably 2 basic flows (implicit grant and client credentials), that may not be so bad if there was some kind of plugin arrangement whereby PnP-Graph could ask such a plugin "give me a token please". Then perhaps there could be some interface wrappers around popular existing auth libraries (MSADAL, node-sp-auth etc).

@patrick-rodgers: "I do want to see what we can do around support for node, but would require registering the node app as a AAD app" - an AAD app registration would be required regardless, wouldn't it?

SPDoctor commented 7 years ago

@phillipharding a metadata tool can take a lot of the grunt work out of building and maintaining the API, but I agree that the PnP JS model is a good one that we can build on. I see the tool as just being a way to reduce the amount of work. And the potential for time saving is large when you consider the size of the beta MS Graph REST API surface area.

On Auth, these were also my thoughts. We could have a default token fetch or http client that works in page context as and when that is implemented. We can have a way to override the token fetch and/or transport method using ADAL libraries appropriate to the situation or custom method for OAuth ninjas. Not sure where the correct abstraction layer is, maybe both token and transport layer.
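The plugin arrangement both comments describe could reduce to a single contract that the library core calls and any auth stack implements. The interface and class names below are invented for illustration, not part of any shipped API:

```typescript
// The one question the library asks of a plugin: "give me a token".
interface TokenProvider {
  getToken(resource: string): Promise<string>;
}

// A trivial provider; real ones would wrap ADAL, node-sp-auth, the
// SPFx graph client, or a custom OAuth flow.
class StaticTokenProvider implements TokenProvider {
  constructor(private token: string) {}
  getToken(_resource: string): Promise<string> {
    return Promise.resolve(this.token);
  }
}

// The library core only sees the contract, never the auth details.
class GraphClientCore {
  constructor(private auth: TokenProvider) {}
  async authHeader(): Promise<string> {
    const token = await this.auth.getToken("https://graph.microsoft.com");
    return `Bearer ${token}`;
  }
}

new GraphClientCore(new StaticTokenProvider("abc123"))
  .authHeader()
  .then(h => console.log(h)); // "Bearer abc123"
```

Whether the abstraction sits at the token layer, the transport layer, or both is exactly the open question raised above.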

Agree that in addition to SPFx the library should support Node, but also HTML5 hybrid apps (e.g. Cordova, Electron), client-side web applications and Office Add-ins. That would be a pretty useful library wouldn't it?

patrick-rodgers commented 7 years ago

This library looks very interesting. Perhaps when it is ready that solves our auth challenges. Inside SPFx we default to use their client, outside SPFx we can use this library to handle auth. Obviously I just hand waved over a lot of detail - but seems a viable path to start thinking about. We just leave a stub for "get auth token" and can start writing the API wrapper parts. I'll try and start a prototype PR to start looking at code.

SPDoctor commented 7 years ago

Out of curiosity I spent a couple of hours to see how hard it would be to scaffold something using TypeScript. It's at https://github.com/SPDoctor/Serpnt if you want to have a look.

patrick-rodgers commented 7 years ago

This is cool @SPDoctor - just forked it and will play with it some tomorrow time permitting. Looks like a great start from stepping through the code once. If I have any ideas I'll submit a PR.

patrick-rodgers commented 7 years ago

As discussed on the call today - I have made some first steps on this. Right now the big question is how we want to shape the API below. I've added a few options; of course, more are welcome.

    // this would be explicit about which version you want to use
    // easy to gather the appropriate methods into the v1 object and beta object; methods can also be shared across versions, so no need to duplicate the full code base
    pnp.graph.v1.groups.get().then(r => {

      // ...

    });

    // this could just use the "latest" stable version
    pnp.graph.groups.get().then(r => {

      // ...

    });

    // this could specify a version and other properties
    pnp.graph.configure({ v: "blah" }).groups.get().then(r => {

      // ...

    });

    // could also use pnp.setup to globally set the graph version & other properties, matching sharepoint

s-KaiNet commented 7 years ago

Personally I like the idea with pnp.graph.v1.groups.etc.
From my POV there might be an issue when using the "latest stable version" approach, i.e. pnp.graph.groups, because a new version of graph might have some changes which are not compatible with the previous one, or have some side effects on your code. In that case, when you upgrade pnp-js, your application will effectively use the latest version of graph instead of v1, for example, which might lead to unpredictable results.
I think it's better from a stability perspective if your app always relies on a given API version.

russgove commented 7 years ago

I like the configuration option better. Specify the version number in the pnp setup for global option.

Then allow overriding for an individual call:

pnp.graph.groups.get({ v: "blah" }).then(r => {

I like that option better for overriding the options on a per-call basis in 'old fashioned' pnp sharepoint calls as well. I know that is being discussed on another thread.

The "blah" above is just a string, so a developer could use any number of methods to determine which version to use if they chose to do so.

KEMiCZA commented 7 years ago

If you pass the graph version through these methods (get(), configure(), etc), how can we be sure the returned object would be parsed correctly as a type? What if the pnp.graph.groups.get method returns a different structure/object in v2 than in v1? Or are we just returning the raw response and letting the user deal with the object as any?

So what I'm thinking is that I agree with @s-KaiNet in using pnp.graph.v1.groups, because in this case I assume we can manage the data structures/types/objects that are returned?

SPDoctor commented 7 years ago

Returning type any would rather defeat the point.

Not sure why we can't just version the library. I don't like the idea of versions being explicitly part of the API. The semantic versioning should give us enough flexibility to keep track of changes to the underlying REST API. I like the idea of a custom code generator that embodies the thinking behind PnP-JS-core so that it should be a case of re-running the generator and running the tests. Easy to say, of course. We will learn in time how practical that is in reality :-)

VelinGeorgiev commented 7 years ago

> I like the configuration option better. Specify the version number in the pnp setup for global option. Then allow overriding for an individual call: pnp.graph.groups.get({ v: "blah" }).then(r => {

If we go with an explicit v1 (pnp.graph.v1.groups) and we have to move the code base from v1 to v2 (knowing that nothing bad would happen), then this will result in a find-and-replace-all from "v1" to "v2" in the code. That does not seem right to me.

giuleon commented 7 years ago

Agree with @VelinGeorgiev I think that the configuration option gives more flexibility from every perspective.

s-KaiNet commented 7 years ago

I can see only one drawback when using the pnp.graph.configure method (something @KEMiCZA talked about).

Imagine that in v2 a new method is added to the groups endpoint, say groups.blah. Then when trying to use pnp.graph.configure( {v:'1'} ).groups.blah() I will receive a runtime error (not a compile-time error).

Another situation: what if some method is removed? Say method1() is removed in v2. Then in order to support v1 we still need this method available on the groups object, but when trying to write pnp.graph.configure( {v:'2'} ).groups.method1() you will receive a runtime error.

What if some method is renamed? In that case we need to hold two methods on the groups object - one with the old name for v1 and one with the new name for v2.

What if some method receives additional parameters in a newer version? As a result our groups object will hold all possible combinations of methods and parameters between different API versions. I'm not sure this is a good option.

And the last point mentioned by @KEMiCZA - what if data structures are changing between versions?

From my POV, with the configure approach you always need to think about which method (and which params) is available in which version. When using pnp.graph.v1 you get intellisense for the concrete methods of a concrete version.
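The compile-time argument can be made concrete with a tiny sketch. The namespaces and shapes below are invented; the point is only where the error surfaces:

```typescript
// Each version namespace exports its own return types, so drift
// between v1 and v2 is a compiler error, not a runtime surprise.
namespace v1 {
  export interface Group { id: string; displayName: string; }
  export function getGroups(): Group[] {
    return [{ id: "1", displayName: "Team A" }];
  }
}

namespace v2 {
  // hypothetical rename in v2; a shared configure()-style client
  // could only expose one of these shapes to the type system
  export interface Group { id: string; name: string; }
  export function getGroups(): Group[] {
    return [{ id: "1", name: "Team A" }];
  }
}

console.log(v1.getGroups()[0].displayName); // "Team A"
console.log(v2.getGroups()[0].name);        // "Team A"
// v2.getGroups()[0].displayName would not compile
```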

patrick-rodgers commented 7 years ago

My thinking on this matches @s-KaiNet's for the same reasons. Doing a global replace of .v1. with .v2. isn't really an issue - and moving to a new version of an API should be more than a config change, as you would (or should) retest every call to ensure it performs the same. It would also allow you to mix your calls across versions while you transition. There is the added benefit of what I think of as "self documenting code" - meaning I can tell exactly which API version is called with pnp.graph.v1.* vs pnp.graph.*.

The .v1. way also provides an easy way to group objects behind the scenes and maintain proper calls if/when there are differences.

We also may have different return values coming back from methods, alluded to by @KEMiCZA. So if the code was simply done with configuration on the library side we would have to potentially maintain various code paths and return types - with no way to expose them to callers (everything becomes any). The .v1. way locks in a return type.

Think about this scenario: I have written a giant application that calls ms graph and using the configuration option I have selected v1. Something like pnp.setup({ graph: v1 }) which I call globally somewhere at app start. Now v2 is released and I want to migrate. v2 changes the return types as well as some method signatures. If I change the global config option to v2 my application now has some number of bugs. OK, so I can go to each method call and add a local configure - but now the library has no way to change its exposed API, meaning you don't get updated return types or intellisense.

OK you say - I see, but then we can just version pnp js to track the graph versions. Well, I think we are left with the same issue: how do I do a partial upgrade? My company (Contoso in this context) isn't going to create a project to upgrade all of this currently working code just because we want to call one method that isn't in v1 and so have to update everything to v2.

My main points in favor of .v1.* approach are:


@SPDoctor has another idea entirely: we don't maintain a library so much as we create a library generator that will read the API and create some code for us. This idea is interesting, but it makes it much harder to add value beyond wrapping the API.

This is a great conversation and I really appreciate everyone's thoughts and ideas! We've built such a fantastic community.

SPDoctor commented 7 years ago

Does this scale though? What if we get to pnp.graph.v37.*? Does the library include 37 versions of everything?

If we can rely on semantic versioning in Graph itself then they shouldn't break backwards compatibility within the same major version, so the same should be true for the API. Then you can reference the library version that matches the endpoint version. Match a specific version if you need that level of control, or latest minor version, etc.

I think the key to keeping a generator achievable is to limit the scope to generating this specific API, as opposed to trying to build a general purpose generator for any REST API metadata or schema, which is what tools like VIPR are trying to do. A more realistic approach might be to use a generator for heavy lifting and then do the fine tuning manually.

patrick-rodgers commented 7 years ago

I would think if we get to that point we would establish a deprecation scheme of some sort. I.e. in the next release .v1 will no longer be supported. Or maybe not, as you say much of the code should remain shared. But we still have the typing needs that we couldn't maybe support. Perhaps I need to go a little further with what I imagine code wise as something to show more of the story. Not too far, but enough to show my real implementation thoughts.

VelinGeorgiev commented 7 years ago

Both approaches are fine for me.

In the case of not having the explicit .v1. we have to build common interfaces that would not change much, so we are not as flexible when a new endpoint is added, for example, but it is somehow consistent. This is how I did it in the past, and I did not take into consideration the new API cloud era where every API can change at any moment. If we go with the explicit .v1. we are more flexible and maybe more fluent, but not as consistent, and we may have to maintain a somewhat bigger code base. This is just fine for me.

If we know more about the process of adding new endpoints to the graph, and if there is a common process with naming conventions and rules (e.g. an OData-like standard), then the common interfaces may not be a big pain because we know what to expect; otherwise the explicit .v1. is a good option.

The idea of generating code with a tool is also good. If the Graph APIs are documented (swagger) then we can grab the endpoint definitions and turn them into code, but I think it may be a complex challenge to parse the API definitions/metadata and then adjust that into a fluent js library.