Hey @jeddeloh, off to a great start! Here's some feedback on some of the questions:
About namespacing. I think not using namespacing, and exposing a single user-facing module ApolloClient, is fine. With the rest as ApolloClient__Query aliased in the library ApolloClient.Query (or whatever the submodules are called). If we use namespacing we cannot expose functions or values to the main module, so I think that is a big downside in the user-facing API, and even packages like reason-react don't use a namespace for that reason.
Raw is the raw datatype without conversion. It is used because graphql-ppx also targets native. Also it maps to the JSON GraphQL data and not specifically JavaScript. The upside is that raw types, parse and serialize all shouldn't specifically be exposed to the user in the main apollo API.
I think useMemo is used just as a performance optimization in the library (in order not to parse the data on every render), so I think it's used correctly. If you use refs, you have to invalidate them, and with memo that is done for you with the dependency array.
I think using records is preferable for mapping to typescript objects. Usually optional keys are compatible with a key that has an undefined value (unless the implementation specifically checks for key existence, if that is the case we can send a PR to apollo-client).
It seems one challenge we have is creating new objects when converting types. I don't think this is an issue, but something to be aware of. If we were to, say, wrap the onComplete option passed to useQuery such that it received parsed data, we would be creating a new function on every render. Obviously we could memoize, but I think this complicates things because toRaw for queryOptions would have to become a hook? Again, I don't think this matters in practice, but I don't love it. My strategy on this pass was to not wrap stuff like onComplete. You need to parse it yourself. All the types are there.
Can you elaborate on this (I don't really get this example).
We are currently catching the parse exception and returning None in that case. I'm sure there are good reasons for this, but a parse failure, given all the type assurance I have, seems like a catastrophic failure I'd personally want to know about. For this pass, I've used straight parse that throws an exception to reduce work until I understand the history of this or we decide on an error handling strategy.
I think you can even not catch a parse exception, as it is exceptional (it shouldn't happen).
I made the assumption that variables coming in will have already been converted to Raw types through makeVariables, especially given the upcoming work around use in graphql-ppx-re.
Yes that is a good assumption to make.
I wonder if it's worthwhile to declare any internal stuff that can be uncurried. I would only worry if these are functions that are called very often in a short amount of time, like in a loop, where it could be a meaningful optimization. Usually, data fetching and parsing, etc. doesn't happen that often.
By the way. I think that we don't have to religiously stick to mapping 1-1 to the @apollo/client API. I guess for some patterns we can opt for more idiomatic Reason approaches.
Another nice thing that we can make use of is that graphql-ppx can now output tagged template literals. This makes some compile time optimizations available to @apollo/client with graphql-tag.
And I would suggest to collaborate tightly so we can offer the best experience. For some things there might need to be some changes/additions to the ppx. I suggest it would be best to release the 1.0 of both graphql-ppx and reason-apollo-hooks at the same time, so that migrating would be the best experience for users.
Naming wise, perhaps we can try to get this published under the npm package @reasonml-community/apollo-client?
hey there @jfrolich, I haven't been following the whole discussion about graphql-ppx@1 up close. I'm catching up with everything right now. About moving the lib to reasonml-community: it is something I have been thinking about lately. As the lib is important for the community, and also used for onboarding Reason newcomers, it looked like a good idea! Let's discuss the details about the move.
About namespacing. I think not using namespacing, and exposing a single user-facing module ApolloClient, is fine. With the rest as ApolloClient__Query aliased in the library ApolloClient.Query (or whatever the submodules are called). If we use namespacing we cannot expose functions or values to the main module, so I think that is a big downside in the user-facing API, and even packages like reason-react don't use a namespace for that reason.
Okay, so the proposal is this: keep it like it is, but our top-level module would be ApolloClient not Apollo_Client? Sounds good to me 👍
Raw is the raw datatype without conversion. It is used because graphql-ppx also targets native. Also it maps to the JSON GraphQL data and not specifically JavaScript. The upside is that raw types, parse and serialize all shouldn't specifically be exposed to the user in the main apollo API.
I was excited about having more agnostic language around conversion at the edges, but reading this, I'm now back to thinking we should stick with toJs and fromJs.
I think useMemo is used just as a performance optimization in the library (in order not to parse the data on every render), so I think it's used correctly. If you use refs, you have to invalidate them, and with memo that is done for you with the dependency array.
Agreed on the performance/parsing part, but looking through some of the PRs/issues that led to the addition of useMemo, it looked like people were specifically looking to use the output in useEffect. If it were critical that someone's effect run once and only once for each change, we'd be forcing users to be very careful about the values they check against since they can't rely on useMemo, wouldn't we?
FWIW, if I'm understanding you correctly, the useGuaranteedMemo hook does do the ref invalidation (I think we only ever have one dependency). I had this same discussion over in reason-urql and now I'm beginning to feel like I just have some huge misunderstanding about useMemo and am being an annoyance about something that doesn't matter. 😅 If people are confident useMemo is what we want here, I won't press the issue.
It seems one challenge we have is creating new objects when converting types. I don't think this is an issue, but something to be aware of. If we were to, say, wrap the onComplete option passed to useQuery such that it received parsed data, we would be creating a new function on every render. Obviously we could memoize, but I think this complicates things because toRaw for queryOptions would have to become a hook? Again, I don't think this matters in practice, but I don't love it. My strategy on this pass was to not wrap stuff like onComplete. You need to parse it yourself. All the types are there.
(jfrolich): Can you elaborate on this (I don't really get this example).
I was just trying to point out that if we're recreating functions on every render that are passed into the Js Apollo Client (the onComplete option in this example), we have to be okay with whatever internal re-computations occur there. I think it's not a big deal, but was trying to get at some conceptual purity of toJs. I'm now of the opinion I shouldn't have brought it up. I think we can ignore it.
I think you can even not catch a parse exception, as it is exceptional (it shouldn't happen).
I think that is also my preferred error handling strategy here.
Ah ok. You are right that you can’t necessarily be certain it changed when you use it in useEffect as a dependency. I don’t really know when that should be an issue though? But I might be missing something!
By the way. I think that we don't have to religiously stick to mapping 1-1 to the @apollo/client API. I guess for some patterns we can opt for more idiomatic Reason approaches.
Actually, this is a big one. I completely agree here and think we should remove that as a goal; I've struck it through in the description.
However, it exposes to me the true reason I wanted it in the first place. I feel like reason libraries mix two concerns: binding to the Js library, and the extra, reason-specific library code around those bindings. It’s the Js library bindings that I’m really proposing have a 1:1 mapping. Those bindings don’t have to be complete, but the idea is that you should be able to describe some repeatable patterns anyone could use to contribute to bindings in a consistent way.
The fact that those two concerns are mixed might be one of the reasons the community is littered with abandoned, half-finished bindings to libraries I rarely feel like contributing to. If AstroCoders/reason-apollo-hooks moves to reasonml-community, this might be an opportunity to iterate toward better patterns. I propose we think about the possibility of publishing three reason packages: @bindings/graphql, @bindings/apollo-client, and @reasonml-community/apollo-client (npm namespaces made up). Maybe that's unrealistic, I haven't thought that much about it yet, but would love to hear if there are any thoughts.
Another nice thing that we can make use of is that graphql-ppx can now output tagged template literals. This makes some compile time optimizations available to @apollo/client with graphql-tag.
And I would suggest to collaborate tightly so we can offer the best experience. For some things there might need to be some changes/additions to the ppx.
Agreed and very excited about this! But I’m deferring entirely to the maintainers at this point.
Ah ok. You are right that you can’t necessarily be certain it changed when you use it in useEffect as a dependency. I don’t really know when that should be an issue though? But I might be missing something!
I had this same question! I haven't had a need for this since I've been able to be very specific with effect dependencies, especially with ones I need to ensure run only once. However, context from an issue here about mutations, and from reason-urql, is that people have the expectation to be able to use the output here to trigger hooks in a reliable way. Having had this discussion, I'm personally fine with useMemo, but I want to make sure we're not introducing booby-traps for people. 🤷
@fakenickels, any thoughts on how to proceed here? I don't mean to rush anyone, but I've been (perhaps incorrectly) waiting on confirmation from you if this general direction looks good before moving further. After that, next steps might look like:
- apollo-client-3 (name?) branch up
- apollo-client-3 tag for issues
- apollo-client-3
- apollo-client-3 going forward

I can also just do everything in this PR if you'd rather.
Hey there @jeddeloh! Thanks for working on this. We were figuring out the lib move to reasonml-community. Now we are here finally!
Creating a new branch and keeping pushing there, along with an npm beta tag for it (as @apollo/client@3 is still a beta too), seems the way to go. I still need to take a better look at the new modifications of the client API.
As the lib is now at reasonml-community, creating a separate package or turning this one into a monorepo which would include the hooks lib and the client one seems to be a more maintainable approach.
Okay, so the proposal is this: keep it like it is, but our top-level module would be ApolloClient not Apollo_Client? Sounds good to me 👍
ApolloClient for sure
I was excited about having more agnostic language around conversion at the edges, but reading this, I’m now back to thinking we should stick with toJs and fromJs.
toJs and fromJs as well
About the hooks and client as separate packages: I am in favor of having a single package because that is also equivalent to the @apollo/client package, and it makes maintenance easier (people don't need to keep two libraries in sync).
@jeddeloh In the meantime I am working on getting #117 working with the new graphql-ppx API that was discussed and decided on last week.
Hey there @jeddeloh! Thanks for working on this. We were figuring out the lib move to reasonml-community. Now we are here finally!
Nice! Thanks for doing that!
@jeddeloh In the meantime I am working on getting #117 working with the new graphql-ppx API that was discussed and decided on last week.
@jfrolich Sounds great. Is this just switching the hooks to take the FCM self instead of definition or does it include the "extends" API as well?
My plan is to update this PR to account for comments, then experiment quickly with putting the js bindings in their own module, and finally move on to ApolloClient (meaning the actual class, not this library) bindings because they shouldn't be affected at all by the new graphql-ppx API.
@jfrolich Sounds great. Is this just switching the hooks to take the FCM self instead of definition or does it include the "extends" API as well?
Supporting the self FCM.
I just read the PR description comment more carefully and saw you linked the #3 issue, and I believe there is some confusion here. #3 was about not using an Apollo Hooks package provided by the community as a dependency, because ReasonApolloHooks was created before the official @apollo/hooks was out. So it was not about namespacing. Just clarifying that.
About the folder structure, we need to take care not to make things too verbose. I don't think TS type defs out there try to mimic the original lib folder structure either.
About the extend, I still need to finish my review with care to give an opinion about this one.
Looks good to me
Still need to finish my review to give an opinion.
Will still take a look
I never used this one either, but I think with Js.Dict.t we lose the point of having type safety. Maybe leave it out for now and figure that out in the future. Not sure if there are a lot of people out there using it.
Records look good to me and keep the type safety as well. I'm not sure if using Js.Nullable.t is necessary as option types are now represented as undefined by BS.
It was basically introduced because it wouldn't be ideal to catch the exception in a component render. Maybe returning None isn't the best approach as we hide the actual failure from our consumer; perhaps we can consider using a result type now that it is available by default?
The library code has grown a bit disorganized indeed and doesn’t feel very pretty. Thanks a lot for pushing this out and trying to tidy things up.
My only concern is that the Reason ecosystem can push beginners away because we keep changing things and introducing breaking changes, even when it’s for the better. We should offer a way to at least gracefully upgrade from the past version if all the mods get merged.
Those are big changes and I don’t think we should rush them a lot. I’ll still finish my review about the other points and the code itself ASAP this week.
I made a few quick updates based on feedback so far:
- raw to js
- [@bs.deriving jsConverter] for string enums
- Js.nullable to option where appropriate

@fakenickels
I just read the PR description comment more carefully and saw you linked the #3 issue, and I believe there is some confusion here. #3 was about not using an Apollo Hooks package provided by the community as a dependency, because ReasonApolloHooks was created before the official @apollo/hooks was out. So it was not about namespacing. Just clarifying that.
OMG, sorry. I was referring to goal number 3 in the original list of goals in the PR description. I didn’t even notice it linked to the issue. How unfortunate that it could be considered relevant! 😞
I never used this one either, but I think with Js.Dict.t we lose the point of having type safety. Maybe leave it out for now and figure that out in the future. Not sure if there are a lot of people out there using it.
Did you see I used Js.Json.t? It was previously typed as Js.Dict.t(string). Regardless, my recommendation would be to definitely include it.
It was basically introduced because it wouldn't be ideal to catch the exception in a component render. Maybe returning None isn't the best approach as we hide the actual failure from our consumer; perhaps we can consider using a result type now that it is available by default?
Maybe so. I do feel like it might be important to figure out what to do about truly exceptional situations, though (maybe this is an exceptional circumstance). Take for example that I used Belt.Option.getExn (temporarily) in Apollo_Client__Core_NetworkStatus.re instead of returning something like Unknown. Unknown doesn't feel like a valid value for a user to consider handling. Instead, I think this library should have some mechanism for returning an ApolloError that can say "Reason-specific library exception: things are broken, submit an issue!" or something. :shrug:
About the folder structure, we need to take care not to make things too verbose. I don't think TS type defs out there try to mimic the original lib folder structure either.
I think this is a big one! This was the main thing I was trying to explore in this PR. Personally, coming into this, I felt I wouldn't have a chance of maintaining this manually without a very specific method to the madness and this was the most sane thing I could come up with. If you’re not liking how that’s turning out, this is probably a good place to bow out and let you form your own opinion of how to carry things forward.
Thanks for all the feedback and consideration! I think you're right that things should proceed carefully. I'm going to hold off on doing the ApolloClient class bindings until the path forward becomes clearer.
Context
I never used this one either, but I think with Js.Dict.t we lose the point of having type safety. Maybe leave it out for now and figure that out in the future. Not sure if there are a lot of people out there using it.
Did you see I used Js.Json.t? It was previously typed as Js.Dict.t(string). Regardless, my recommendation would be to definitely include it.
Hello! Just butting in with a thought here. Really happy that you've got this started @jeddeloh!
I'm not sure either Js.Json.t or Js.Dict.t(string) are good choices. Js.Json.t forces decoding the json. That would ofc give safety (not really type-safety) but would not be particularly ergonomic. Js.Dict.t(string) would give safety and have better ergonomics, but would constrain the values too much (might want to pass all manner of things which aren't strings).
The typescript type is export type Context = Record<string, any>, which I think for most Reason users would be a record that can be defined in whatever way suits the needs of the application. A use case I've been thinking about is tracing, where a query sets a transaction id as a header in the network request, which can be used for request tracing if there are errors.
The only way I can think of right now to get both flexibility, safety, and ergonomics is to use functors:
module Context = {
  type t = ...;
};

module Apollo = ReasonApolloHooks.MakeApollo(Context);

Apollo.Client.make( ... ) // or Client.new( ... )
To make sure the same Context.t is used everywhere, all hooks, links, consumers, and providers, etc., need to come from the created Apollo modules:
Apollo.Hooks.useQuery(...
Apollo.ErrorLink.onError(...
and so on.
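A sketch of what the functor's parameter could look like, with an example config for the tracing use case above (ContextConfig, toJson, and MakeApollo are assumed names here, not an existing API):

module type ContextConfig = {
  type t;
  /* How the user's context record becomes the plain JSON handed to @apollo/client. */
  let toJson: t => Js.Json.t;
};

module TracingContext = {
  type t = {transactionId: string};
  let toJson = ({transactionId}) =>
    Js.Json.object_(
      Js.Dict.fromList([("transactionId", Js.Json.string(transactionId))]),
    );
};
/* module Apollo = ReasonApolloHooks.MakeApollo(TracingContext); */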
Yes, that is probably a great approach. Also it would be good to have a default Apollo module with an empty context.
@mbirkegaard Let's try it and see how it works out! My choice of Js.Json.t was one of expediency. My plan (and it certainly doesn't have to be me) would be to do one pass at the api as complete as we want to make it, and then come back on a second pass at stuff like this to get the best experience possible. Thoughts? In the short term, if it's really inconvenient and you know the relationship, I might just cheat:
module Context = {
  type t = {transactionId: string};
  external toJs: t => Js.Json.t = "%identity";
  external fromJs: Js.Json.t => t = "%identity";
};
Maybe it would be interesting to look at this: https://github.com/jsiebern/re-typescript
@mbirkegaard Let's try it and see how it works out! My choice of Js.Json.t was one of expediency. My plan (and it certainly doesn't have to be me) would be to do one pass at the api as complete as we want to make it, and then come back on a second pass at stuff like this to get the best experience possible. Thoughts?
Sounds fine to me. I'm good with whatever makes progress.
Maybe it would be interesting to look at this: https://github.com/jsiebern/re-typescript
At first, trying it with a random sample of types, I didn't think it would really be all that helpful, but my brain must have been doing some background processing and I'm beginning to feel like there is something here if we just go about it right. I'll try and take a more serious look this weekend.
I was trying out re-typescript, it's a great tool but doesn't cover all cases just yet! Not sure if it would be able to parse all the complicated types from Apollo
Actually, I'm gonna keep forging ahead here because it's already useful to me. I think this PR may have evolved into more of a proposal for an entirely new repo of bindings to @apollo/client rather than a refactor of reason-apollo-hooks, especially with #117 getting things working. Let me know if that's a direction people would rather see happen and I'm happy to reduce the noise here.
@jfrolich Now that you have the new API going, I added a few bindings on the client as well as useSubscription. Biting off a little larger chunk of work this time, I found that the amount of parsing and serializing was more than I expected, but I found it doable with a consistent pattern. This new API is so great!
I was trying out re-typescript, it's a great tool but doesn't cover all cases just yet! Not sure if it would be able to parse all the complicated types from Apollo
Yeah, my workflow has been very procedural so far, though, and given how much Apollo uses extends, I hoped that copying/pasting types into groups as I came across them and tweaking a few things could still be pretty nice. Ultimately, you were right though. As it stands, the inconsistency (doing simple types one way and avoiding the complex ones) wasn't worth it. 😭
@jeddeloh: I would suggest to remove the simple variant response from useQuery and useMutation. The useQuery response doesn't neatly map to variants (it can return data and still be loading more, for instance). Now that we have records we can simply pattern match on them (that was not possible with objects, which is the reason for having the simple variant response). This makes the API nicer because we don't have two of the same result types in a tuple. In my PR I created useQueryDeprecated and useMutationDeprecated as a migration path.
I would suggest to remove the simple variant response from useQuery and useMutation.
Oh, nice. I much prefer this. I guess I always took the pattern matching for granted and assumed it was for aesthetics and exhaustiveness (in the common case).
We had this discussion in the past! I do think the variant mode is still a keeper. Most of the time it is used as a demo of ReasonML out there because of its exhaustiveness and aesthetics, as @jeddeloh said: https://twitter.com/alexgomesdev/status/1264327721397649409?s=21
It is an important onboarding and advertising feature.
It's just not really a fit for this data structure, since the states are not mutually exclusive. And with suspense we lose the loading variant and error variant. So then it would just be Data and NoData, which is better represented by an option type.
But if we choose to add it, we can have a useQueryVariant hook?
I thought a bit more about this. We could make the variants correct by:
- Loading => Loading(option(data))
- NoData => why is this variant even there? If I read the spec correctly, empty data should only happen when there are errors
- Error(e) => Errors(e, option(data)), where errors is an array + it can also still return partial data.

If you like this more we can get rid of the record to contain the state completely.
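For concreteness, a sketch of what that corrected variant could look like (the Data constructor and the 'error parameter are my assumptions, not settled API):

type queryResult('data, 'error) =
  | Loading(option('data))
  | Errors(array('error), option('data))
  | Data('data);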
The advice I've given to anyone wanting to do anything (semi-)serious with reason-apollo-hooks is to steer clear of the simple variants and always use the full records. As @jfrolich mentions, the states are not mutually exclusive and they abstract away too many cases that you want to be able to handle. I think the variant is a trap that invites people to learn bad patterns and write code that is a) wrong (the representation is not correct) and b) harder to rewrite, expand, and refactor later. (Also this)
I've even started wrapping useQuery and useMutation with a small helper module that always throws away the variant (and sets errorPolicy to All and converts the mutation promise to Promise.t).
If you like this more we can get rid of the record to contain the state completely.
I think we should definitely keep the record for the state. Even if we make the variant correct, something like this (a modified version of the FilterByAgeErrorHandling.re component) seems pretty cumbersome to do with the variant:
[@react.component]
let make = (~age) => {
  let (_, full) =
    useQuery(
      ~fetchPolicy=CacheAndNetwork,
      ~errorPolicy=All,
      ~variables=PersonsOlderThanQuery.makeVariables(~age, ()),
      PersonsOlderThanQuery.definition,
    );

  <div>
    {switch (full) {
     | {loading: true, data: None} => <p> "Loading"->React.string </p>
     | {loading, data: Some(data), error} =>
       <>
         {loading ? <p> "Refreshing..."->React.string </p> : React.null}
         {switch (error) {
          | Some(_) =>
            <p>
              "The query went wrong, data may be incomplete"->React.string
            </p>
          | None => React.null
          }}
         <h3>
           {"There are "
            ++ (data##allPersons->Belt.Array.length |> string_of_int)
            ++ " people older than "
            ++ string_of_int(age)
            |> React.string}
         </h3>
       </>
     | {loading: false, data: None} =>
       <p> "Error loading data"->React.string </p>
     }}
  </div>;
};
The compiler will still enforce exhaustiveness when using the record and I think the simpler aesthetics is a siren song.
If we want to keep it, I think we're better off either having separate hooks for the simple and full types (perhaps in different modules, much like bs-decode does for the result types) or a helper function to convert full to simple.
A bit of history FYI: we discussed this in the past when modelling the current variant API. If we were to model all possible state combinations with the variant it would be very cumbersome to use and to maintain, so we analysed the most common way JS users consumed the value returned by apollo/hooks and modelled the simple variant while offering the full usage alongside it. It was something I insisted on quite a bit in the past.
Now I do see your point. Removing the variant mode would give this lib less stuff to maintain and make it a more direct 1:1 binding to apollo/hooks.
My concern was to lose DX ergonomics doing so, but sinking in more about it I think it's the best solution. So let's go for it.
We can offer the legacy API as useQueryLegacy/useQueryDeprecated for a less painful migration and also a codemod script. I can handle that as I have a huge production codebase using it.
We can offer the legacy API as useQueryLegacy/useQueryDeprecated for a less painful migration and also a codemod script. I can handle that as I have a huge production codebase using it.
Same here.
I will be very sorry to see the simple variant go, but my experience has been almost exactly the same as mbirkegaard's. I've removed it from this PR and haven't added in the legacy functions yet.
I also added a CONTRIBUTING file that is a distillation of the patterns I've been following in case anyone wants this to move faster :)
At some point I switched back to preferring normal variants over polymorphic because I thought they were more approachable to newbies and I wanted to be consistent across the library. Feel free to steer me back in the direction of mixing/matching and jsConverter.
I'm leaning toward wrapping all our stuff in a try block and converting exceptions to an ApolloError. Thoughts? I have definitely worked with backends that have problems and parse errors were not uncommon.
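A minimal sketch of that direction (safeParse and ParseError are hypothetical names, not the library's API; the idea is just to turn a throwing parse into a result the library can translate into an ApolloError-style value):

type parseFailure =
  | ParseError(exn);

let safeParse = (parse, rawJson) =>
  switch (parse(rawJson)) {
  | data => Ok(data)
  | exception exn => Error(ParseError(exn))
  };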
@jeddeloh: In parallel I am trying to get most reason-apollo functionality and a bit more into my PR (basically syncing my PR with the custom bindings we have), so you can have a look and reuse. Especially the Observable bindings were a bit tricky to get right.
Status update:
- With ApolloClient creation and @apollo/link-ws, the branch is in a good enough state to support me on personal projects, but there is a ton here that may be forever untested by me. :shrug: I've included a basic client example if anyone wants to help or play around.
- useMutation, useClient, useLazyQuery, etc. They should be a comparatively small effort now that a lot of types are in, though.
- Things like link-ws that are not in the @apollo/client package are missing bindings.
- useQuery just to see how bad it would be. It seems problematic for a number of reasons, but I'm gonna keep it there for a sec.
- Using option on a record property to represent a property that may not exist did indeed bite me, for InMemoryCache as well as when the variables type coming out of graphql-ppx is unit. Some(()) does not compile to undefined :) It looks like we might get more annotations, but it's worth noting you should be very careful when doing this.

At this point I'm tempted to start exploring the configuration idea as mbirkegaard suggested for context. Personally, and when considering new users, I would love this library to provide some sane promise transforms out of the box. We could choose reason-promise by default and people can just provide an identity function for promiseTransform if they want to stick with vanilla Js promises, or could use a stream library or whatever they want.
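A sketch of how that configuration could look, purely illustrative (PromiseConfig and promiseTransform are hypothetical names from this discussion, not an existing API):

module type PromiseConfig = {
  type t('a);
  /* How a vanilla Js promise coming back from @apollo/client is exposed to users. */
  let promiseTransform: Js.Promise.t('a) => t('a);
};

/* Sticking with vanilla Js promises is just the identity transform. */
module JsPromiseConfig = {
  type t('a) = Js.Promise.t('a);
  let promiseTransform = jsPromise => jsPromise;
};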
See #121 and #117. This is an exploration intended for discussion and alignment. Nothing has been tested at this point (#117 is required and it's missing entirely).
Goals

1. @apollo/client 3.0 and graphql-ppx-re 1.0
2. @apollo/client structure
3. Make it easy to consume

Style Choices

Module & Submodule Naming

The style I most often see is to prefix submodules with the parent name, MyModule_Utils.re (but maybe my selection is biased). Given the many modules and directories contained in @apollo/client, this seemed like the sanest strategy for managing module names.

Namespacing

In an ideal world all the @apollo modules could be grouped under the same top-level module. Modules are not extensible...so what can we do? It doesn't seem totally crazy that we recommend that be done manually. If we follow the submodule naming pattern from above, our module should be named Apollo_Client and someone could do this themselves (see the sketch at the end of this section).

There are other views, of course. Taking some inspiration from yawaramin's article, I found it rather helpful (at least for readability) to use a double underscore to prefix the modules with the namespace I would have liked: Apollo_Client__ApolloClient. I would much prefer to use Bucklescript's namespacing as this would cut down on the verbosity of file names and not dirty up the global module namespace. However, I wouldn't be able to add things to the namespace module for goal 3 (make it easy to consume). Also, AFAIK, the namespaces are camel-cased from the hyphenated bsconfig "name", which would make it impossible to have a namespace of Apollo_Client. Maybe there will be bucklescript namespacing options for submodules in the future.

The other option would be to abandon goal 3 and namespace under ApolloClient and expect people to find stuff in other modules like ApolloClient.React.useQuery, ApolloClient.ApolloClient.make, or recommend they make their own module again, as in the sketch below.
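A sketch of the kind of manual grouping described above (the exact aliases are my own guess):

module Apollo = {
  module Client = Apollo_Client;
  module React = Apollo_Client__React;
};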
Raw vs Js Naming

The convention in Bucklescript is to use toJs and fromJs, and I personally tend to use a Js_ module to tuck all that stuff away. However, Raw is the pattern that has been established with graphql-ppx-re and I think I like it. I've chosen to go with toRaw and fromRaw here, but it definitely seems worth a discussion. parse and serialize are also options, but they feel like they have special weight and meaning coming out of graphql-ppx. Maybe if the need for definition is removed, this won't be an issue.
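To make the naming concrete, an illustrative module using toRaw/fromRaw (the string values mirror @apollo/client's fetch policies, but this exact module is just a sketch, not code from this PR):

module FetchPolicy = {
  module Raw = {
    type t = string;
  };
  type t =
    | CacheFirst
    | NetworkOnly;
  let toRaw =
    fun
    | CacheFirst => "cache-first"
    | NetworkOnly => "network-only";
  let fromRaw =
    fun
    | "network-only" => NetworkOnly
    | _ => CacheFirst;
};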
Directory Structure

I mirrored the @apollo/client directory structure exactly. index.js files should be represented by a module of the parent folder's name: @apollo/client/react/index.js -> Apollo_Client__React.re, for example.

Types

Type definitions also mirror where they are defined in the @apollo/client directory structure exactly. The only wrinkle is that of course we can't model TypeScript extends (at least to my knowledge), so we ignore representing any of those as separate types.
Subtypes

Sometimes multiple types were required to represent a single type in TypeScript. In order to help make it clear what is a binding to an actual type and what is just needed by Reason, I've taken a similar naming approach to the modules. For instance, Apollo_Client__React_Types.QueryResult.Raw has a type t that uses t_fetchMoreOptions, which in turn uses t_fetchMoreOptions_updateQueryOptions.
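A sketch of that naming convention in isolation (the field names are invented purely to show the shape; they are not the real QueryResult fields):

module Raw = {
  type t_fetchMoreOptions_updateQueryOptions = {variables: option(Js.Json.t)};
  type t_fetchMoreOptions = {
    updateQuery: option(t_fetchMoreOptions_updateQueryOptions),
  };
  type t = {fetchMore: option(t_fetchMoreOptions)};
};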
Prefer Modules for Types

Most types need some sort of conversion one way or the other, so I personally favor wrapping the type up in a module with a type t and the conversion functions inside.

Prefer Annotating the Whole Function Vs. Arguments
I actually have no preference on this, but I went with the approach of annotating the entire function with a type when certain relationships needed to be enforced rather than annotating the arguments and introducing a type in the function arguments themselves. I actually kinda like the latter since you can't mistype it, but I didn't know that syntax for a long time. :)
TypeScript Error

I used Js.Exn.t despite it not having quite the same shape.

Prefer Grouping Raw Stuff

I found putting the Raw/Js stuff in its own module preferable to having it scattered throughout the code since it's unlikely to ever be accessed directly. This should make it easier to parse and create interface files as well.
dependencies and peerDependencies

I prefixed these with the namespace, but otherwise approached these like they were separate modules, taking the same mirroring approach. Honestly, from a community perspective I'd like to see these as independent projects even if it's just a type t in reasonml-community, but maybe it's not worth it.

Changes to Current Behavior
useMemo

We are using useMemo on the output of hooks and it has no semantic guarantee. From the React docs: "You may rely on useMemo as a performance optimization, not as a semantic guarantee."

I really wish they'd called it something else. I think the intent where we use it is to have a semantic guarantee rather than a performance optimization, so I added a helper hook that uses refs instead (please check me on this).
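My own sketch of such a ref-based helper for the single-dependency case, assuming reason-react's React.Ref API; the actual useGuaranteedMemo in this PR may differ:

let useGuaranteedMemo = (f, dependency) => {
  let cache = React.useRef(None);
  switch (React.Ref.current(cache)) {
  /* Recompute only when the dependency actually changes, so the returned
     reference is a real guarantee rather than a performance hint. */
  | Some((previousDependency, value)) when previousDependency === dependency => value
  | _ =>
    let value = f();
    React.Ref.setCurrent(cache, Some((dependency, value)));
    value;
  };
};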
Context.t = Js.Dict.t(string)?

I've never used it, but my assumption is this is meant to be anything and not just strings? Anyway, I typed it as Js.Json.t for this pass since I didn't know the effort/value of threading a type variable everywhere.

General Questions/Comments
Binding to Objects
With every Bucklescript release we get better and better options to bind to stuff and I become more and more confused about which one to use. I opted to use straight records here because they're simple and it seems like this is the direction things are going over the long term. However, the typescript bindings explicitly define some objects with optional keys, not just optional values, which we can't model with this. Does it matter? It's javascript, probably not. If we exposed these types to the end user I would lean harder toward a @bs.obj or @bs.deriving approach.

Specific examples:
Should we say data: is option('tData) since it's specifically undefined? Or should we type it as Js.nullable just to be safe? FWIW, I chose to be safe. The same question applies for error, but in the case that we're going in the direction from Reason to Apollo, do we want to approximate {data} (no error) with {data, error: undefined}? I doubt it's going to cause any issues, but it's also not a big deal to use [@bs.obj] or [@bs.deriving abstract] either.
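To illustrate the optional-key difference, a small sketch (the field names are placeholders, not the real QueryResult fields):

type asRecord = {
  data: option(string),
  error: option(string),
};
/* A record value like {data: Some("hi"), error: None} compiles to a JS object where
   the error key is present with the value undefined. */

[@bs.obj] external makeOptions: (~data: string=?, ~error: string=?, unit) => _ = "";
/* With [@bs.obj], leaving an optional argument off produces an object without that key. */
let withoutErrorKey = makeOptions(~data="hi", ());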
toRaw and fromRaw

It seems one challenge we have is creating new objects when converting types. I don't think this is an issue, but something to be aware of. If we were to, say, wrap the onComplete option passed to useQuery such that it received parsed data, we would be creating a new function on every render. Obviously we could memoize, but I think this complicates things because toRaw for queryOptions would have to become a hook? Again, I don't think this matters in practice, but I don't love it. My strategy on this pass was to not wrap stuff like onComplete. You need to parse it yourself. All the types are there.

Proposal: the properties on the output of a toRaw should always be able to be checked for referential equality?

parse Exception Handling

We are currently catching the parse exception and returning None in that case. I'm sure there are good reasons for this, but a parse failure, given all the type assurance I have, seems like a catastrophic failure I'd personally want to know about. For this pass, I've used straight parse that throws an exception to reduce work until I understand the history of this or we decide on an error handling strategy.

'raw_t_variables
I made the assumption that variables coming in will have already been converted to Raw types through makeVariables, especially given the upcoming work around use in graphql-ppx-re.

Uncurrying?
I wonder if it's worthwhile to declare any internal stuff that can be uncurried.
Summary
It's pretty ugly! But the Stockholm syndrome is already starting to set in. :) While it's helpful to have the current code for a sanity check, there's enough attention to detail that needs to be paid here that you can't really just copy/paste and tweak some things. I wouldn't commit to this direction lightly.
Sorry for the length of this PR. I just documented whatever decisions I made over the course of binding to useQuery and this is the result. It's possible some of these topics are better discussed in an issue than directly in this PR. Anyway, I'm looking forward to hearing if people think this gets at the goals and if it's worth the effort.