Closed: derek-rmd closed this issue 5 years ago.
I found a really simple alternative approach as a workaround. Rather than a new `Lifetime` and having to look up the family tree, I just memoized the resolver. I'm not certain, but I think only `TRANSIENT` should be used to avoid other caching; `SINGLETON` would break overrides.
```js
container.register({
  inherited: resolveOnce(asFunction(() => scopedCounter++)),
});

// Helpers
function resolveOnce(resolver) {
  resolver.resolve = once(resolver.resolve);
  return resolver;
}

function once(func) {
  let ran = false;
  let memo;
  return function () {
    if (ran) return memo;
    ran = true;
    memo = func.apply(this, arguments);
    func = null; // release the reference once it has run
    return memo;
  };
}
```
However, I don't think this will work with `dispose` without some additional ref counting.
I'm not entirely sure what you are trying to achieve that the `SINGLETON` lifetime doesn't do. Do you have a concrete example of a use case where the current set of lifetime managers falls short?
If I am not mistaken (I haven't used Unity in years):

- `Lifetime.SINGLETON` is the same as `ContainerControlledLifetimeManager`
- `Lifetime.SCOPED` is the same as `HierarchicalLifetimeManager`
- `Lifetime.TRANSIENT` is the same as `TransientLifetimeManager`
Hi @jeffijoe, I just updated the proposal, as the test given would have passed with singleton and the definitions weren't clear.

The problem with singletons is that sibling or child containers can't register their own instances (using singleton, so that their children also inherit). Children can register their own instances if they aren't singletons, but then any grandchildren won't be able to inherit.
```js
it('inherits instances in child containers', function() {
  const root = createContainer();
  let scopedCounter = 1;

  const container1 = root.createScope();
  container1.register({
    inherited: asFunction(() => scopedCounter++).singleton(),
  });
  expect(container1.cradle.inherited).toBe(1);

  const child1 = container1.createScope();
  expect(child1.cradle.inherited).toBe(1); // FAILS if transient or scoped

  const container2 = root.createScope();
  container2.register({
    inherited: asFunction(() => scopedCounter++).singleton(),
  });
  const child2 = container2.createScope();

  // one of these will fail
  expect(container2.cradle.inherited).toBe(2); // FAILS if singleton: receives 1
  expect(child2.cradle.inherited).toBe(2); // FAILS if transient or scoped: receives 3
});
```
Yeah this is because the resolutions are cached by registration name.
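To make this concrete, here is a toy container model (my own sketch, not awilix's actual implementation) showing how the location of the resolution cache produces the behavior in the failing test: a singleton cache at the root, keyed by registration name, shadows a sibling's override, while a per-container (scoped) cache stops children from inheriting.

```js
// Toy container sketch (NOT awilix internals) illustrating how the
// cache location determines the lifetime semantics.
function createContainer(parent = null) {
  const registrations = new Map();
  const self = {
    parent,
    cache: new Map(),
    register(name, factory) {
      registrations.set(name, factory);
      return self;
    },
    createScope() {
      return createContainer(self);
    },
    // Find the nearest factory by walking up the parent chain.
    lookup(name) {
      if (registrations.has(name)) return registrations.get(name);
      return parent ? parent.lookup(name) : undefined;
    },
    resolve(name, lifetime) {
      const factory = self.lookup(name);
      // singleton: one shared cache at the root, keyed by name.
      // scoped: each container caches its own value.
      // transient: no caching at all.
      const store =
        lifetime === 'singleton' ? rootOf(self).cache :
        lifetime === 'scoped' ? self.cache :
        null;
      if (store && store.has(name)) return store.get(name);
      const value = factory();
      if (store) store.set(name, value);
      return value;
    },
  };
  return self;
}

function rootOf(container) {
  return container.parent ? rootOf(container.parent) : container;
}
```

With this model, a second scope that registers its own factory under the same name as a singleton still receives the first scope's value, because the root cache already holds an entry under that name.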
Implementation-wise, do you have any ideas how this would work?

More importantly, do you have a concrete use case where this makes sense? Adding another lifetime makes the complexity go up and the learning curve steeper.
> - `Lifetime.SINGLETON` is the same as `ContainerControlledLifetimeManager`
> - `Lifetime.SCOPED` is the same as `HierarchicalLifetimeManager`
> - `Lifetime.TRANSIENT` is the same as `TransientLifetimeManager`
SINGLETON is different from ContainerControlledLifetimeManager because it doesn't allow for nested singletons. It is actually more like SingletonLifetimeManager, except that in awilix you can override with transient or scoped (just not singleton), while in Unity overrides replace the singleton for all. (Both approaches have their pros/cons, imo.)

> Child or parent containers could have their own instances registered for the same contract.

(from Unity.Lifetime.ContainerControlledLifetimeManager, Remarks section)
I get that you want to avoid added complexity. I was actually surprised this issue hadn't been raised already, since it follows a common approach to structuring web/UI applications.
Use case: `module1` and `module2` have a common parent `root`, and each has its own children that need to share the store:
```js
const root = createContainer().register('id', 123);
// ...
const module1 = root.createScope().register('store', asFunction(store1Factory).inherit());
// ...
const module2 = root.createScope().register('store', asFunction(store2Factory).inherit());
```
Implementation-wise, `INHERIT` should be doable similarly to how `SINGLETON` was done pre-v4, by walking up the parent containers. I would need to think about it more, and I'm pretty curious whether the memoization technique I used might be more appropriate (i.e. cache on the resolver instead of the container). One of the areas I've been thinking about is extending Awilix to do modules like Angular/InversifyJS, where you would end up doing something like `container.register(moduleContainer.registrations)` (the container may already have a parent). I've just started, so it's not yet clear which approaches work and which don't.
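As a rough sketch of the walk-up idea (my reading of the intended `INHERIT` semantics, not an existing awilix API): resolution walks up to the nearest container that owns a registration for the name and caches the value there, so that container and all of its descendants share one instance, while a sibling with its own registration gets its own.

```js
// Hypothetical INHERIT lifetime sketch: the value is cached on the container
// that OWNS the registration, so all of its descendants share the same
// instance, and sibling scopes may override by registering their own factory.
function createContainer(parent = null) {
  const registrations = new Map();
  const cache = new Map();
  const self = {
    register(name, factory) {
      registrations.set(name, factory);
      return self;
    },
    createScope() {
      return createContainer(self);
    },
    resolveInherit(name) {
      if (registrations.has(name)) {
        // This container owns the registration: cache the value here so
        // every descendant scope resolves to the same instance.
        if (!cache.has(name)) cache.set(name, registrations.get(name)());
        return cache.get(name);
      }
      // Not registered here: walk up the family tree.
      if (parent) return parent.resolveInherit(name);
      throw new Error(`Could not resolve '${name}'.`);
    },
  };
  return self;
}
```

This sketch passes the "inherits instances in child containers" expectations from the proposal: children inherit their parent's cached value, and siblings that register their own factory get a fresh one.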
btw @jeffijoe, what were your main influences for awilix? My interest is with the browser-side only and I really like how it has great utility but still manages to be simple (proxy ftw!). (great blog articles btw)
Thanks for the kind words!
I've steered clear of "module" systems in DI frameworks personally. It's why I consider Awilix a library and not a framework: you can probably build the module system yourself on top using scopes. I wouldn't want to add an opinionated module system when I'm not using it myself.
I would argue that in your example, you should be registering them under different names instead of overriding.
My main influence was in fact Unity! I started using Node coming from C# / ASP.NET Web API, and I couldn't find a DI library that was simple but unobtrusive (didn't litter your code with library annotations), so I had to build it myself.
Awilix v1 was literally just passing in the container object and dereferencing; everything was a singleton because you registered instances, not resolvers. Then I saw the Proxy API, thought that would work, and Awilix v2 was born.
I use Awilix for server-side JS only, and I use classic mode instead of Proxy because it's more idiomatic TypeScript.
Regarding your resolver approach: that's actually the main benefit of the resolver system, you can compose them! You might want to use the `buildResolver` utility to make your resolver factory immutable.
Here's an example of a time-based caching resolver. I use this for my external data stores that cache everything internally but after a certain amount of time I create a new store with an empty cache.
Note that some resolvers will not be compatible with disposers. With a little extra work put into the resolver you can make it so, but not something that can be assumed by the library.
```ts
import {
  Resolver,
  BuildResolver,
  DisposableResolver,
  createDisposableResolver,
  createBuildResolver
} from 'awilix'

const CachingMap = require('caching-map')

/**
 * Options for the TTL Resolver.
 */
export interface TtlResolverOpts {
  ttl: number
}

/**
 * A resolver that caches the resolution of the inner resolver
 * for a specific amount of time.
 *
 * This does not alter the behavior of Scoped or Singleton lifetimes; that is,
 * if the resolved value is still cached by the lifetime manager by the time
 * the TTL runs out, the lifetime-cached value is still used.
 *
 * This resolver is useful for caching stores for short periods of time.
 *
 * @param resolver the inner resolver.
 * @param opts.ttl time to live in milliseconds.
 */
export function withTimeToLive<T>(
  resolver: Resolver<T>,
  opts: TtlResolverOpts
): BuildResolver<T> & DisposableResolver<T> {
  const ttl = opts.ttl
  const cache = new CachingMap(Infinity)
  return createDisposableResolver(
    createBuildResolver({
      resolve(container) {
        if (cache.has('value')) {
          return cache.get('value')
        }
        const value = container.build(resolver)
        cache.set('value', value, { ttl })
        return value
      }
    } as Resolver<T>)
  )
}
```
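The same TTL idea can be sketched without awilix as a plain memoizing wrapper. This is a minimal sketch, not the resolver above; the injectable `now` parameter is my addition so it can be exercised with a fake clock:

```js
// Memoize a factory's result for `ttl` milliseconds, then rebuild it.
// `now` defaults to Date.now but can be injected for testing.
function withTimeToLive(factory, { ttl, now = Date.now }) {
  let value;
  let expiresAt = -Infinity;
  return function resolve() {
    if (now() < expiresAt) return value; // still fresh: reuse the cached value
    value = factory();                   // expired (or first call): rebuild
    expiresAt = now() + ttl;
    return value;
  };
}
```

Just like the resolver version, a lifetime cache sitting in front of this wrapper would keep serving its own copy until it is invalidated, so the TTL only governs how often the factory itself runs.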
@derek-rmd I'm going to close this for now. Without a common use case I'm not comfortable adding another lifetime type, simply because of adding more cognitive overhead for people new to the concept of DI containers. We already have a bad rep for "over-complicating things".
@jeffijoe
I'm working on an experimental code organization pattern for GraphQL applications following DDD principles, and this issue seems to be of particular interest to me.
I'm trying to integrate this library, which is essential for building scalable apis with GraphQL.
I will leave out the internals from here (you can glance over the docs, if interested, for the specifics), but in general, it's used like this:

You create Dataloaders for the entities that you wish to batch-load, you put them within a factory function, and on every request to the GraphQL endpoint they are initialized and injected through the GraphQL context object. These dataloader objects are also supposed to be request-scoped and short-lived (after the request resolves, they are destroyed).
Then, from within GraphQL resolvers, they are used in a similar fashion:

```js
async resolve(_, __, context) {
  const product = await context.dataloaders.productById.load(1)
  return product
}
```
In the experiment that I am performing, this creates a bit of an issue, because you are supposed to be using Repositories for queries, not these separate entities.

I would much rather have this variant:

```js
async resolver(_, __, context) {
  const productRepository = context.cradle.productRepository
  return productRepository.getProductById(1)
}
```
At first sight, it appears to me that, ideally, these Dataloader instances should be incorporated within the Repository, so that the batching functionality becomes an implementation detail of the Repository itself. Each repository would then have one or more Dataloader instances to work with.
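A minimal sketch of that idea, with a tiny hypothetical stand-in for DataLoader that collects keys for one microtask tick and issues a single batch call. `TinyBatchLoader` and `findProductsByIds` are my own illustrative names, not part of any real API:

```js
// Tiny stand-in for DataLoader (illustration only): collect the keys
// requested during the current tick, then call the batch function once.
class TinyBatchLoader {
  constructor(batchFn) {
    this.batchFn = batchFn;
    this.queue = [];
  }
  load(key) {
    return new Promise((resolve) => {
      // Schedule a single flush for this batch on the next microtask.
      if (this.queue.length === 0) queueMicrotask(() => this.flush());
      this.queue.push({ key, resolve });
    });
  }
  async flush() {
    const batch = this.queue.splice(0);
    const results = await this.batchFn(batch.map((entry) => entry.key));
    batch.forEach((entry, i) => entry.resolve(results[i]));
  }
}

// The repository owns its loader, making batching an implementation detail.
class ProductRepository {
  constructor({ db }) {
    this.loader = new TinyBatchLoader((ids) => db.findProductsByIds(ids));
  }
  getProductById(id) {
    return this.loader.load(id);
  }
}
```

As long as the same `ProductRepository` instance is used for the whole request (i.e. it is request-scoped), all `getProductById` calls made in the same tick collapse into one batched lookup.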
This idea brings me here.
I am already using request-scoped containers (to provide the user object, among others).

If my Repository implementation follows a similar pattern:

```js
class ProductRepository {
  constructor({ ... }) {
    // ...
    this.dataLoader = new Dataloader(...)
  }
}
```
And the DataLoader must be request-scoped, it appears that I want a request-scoped singleton of the `ProductRepository` class.

Because the GraphQL resolvers are called once per entity, calling `container.resolve("productRepository")` from a resolver would create a separate instance for each call, each with its own dataloader. But for the dataloader to actually work properly, you need to call `.load` on the same instance during the lifetime of the request; otherwise it will not batch anything.
The way I am thinking about approaching this situation (which according to simple tests works, but I'm a first-timer with your library) is like so:
```js
const diContainer = awilix.createContainer()
  .register({ ... })

app.use("/api", graphqlHTTP(async (req, res) => {
  const container = diContainer.createScope()
  container
    .register({
      user: awilix.asValue(req.user),
    })
    .register({
      // Here I am instantiating the repository myself, passing in the `cradle`
      // proxy so that its constructor can resolve its dependencies, instead of
      // passing them directly from here.
      productRepository: awilix.asValue(new ProductRepository(container.cradle)),
    })

  return {
    ...,
    context: {
      // Attach the request-scoped container to the context, so that GraphQL
      // resolvers have access to it through the context object.
      cradle: container.cradle,
    },
  }
}))
```
Is this okay?
The lengthy explanation is for a potential use-case example.
You should be able to create a scope and use that scope to build your resolver instance, the same way it's done for Express/Koa.
Do you mean this kind of scope?
I'm supposed to register all dependencies when the application initializes and then create a scoped container on every request, adding the current user? If that is so, it seems like it would be perfect for me.
PS: You have done amazing work on this library and its maintenance.
Yes, exactly as in that example!
Why: to have behavior like `SINGLETON`, but to be able to introduce a new `SINGLETON` for a container and all its children.

Proposal:

- `Lifetime.INHERIT`: The registration is scoped to the container. That means the resolved value will be reused when resolved from the same scope (or a child scope). Child containers may override the inheritance by registering their own instances.
- `Lifetime.SCOPED`: Like `Lifetime.INHERIT`, except child containers each get their own instance.

Possible names: `Lifetime.INHERIT`, `Lifetime.SCOPED_SINGLETON`, ???

Background

Scoped lifetime was a bit wacky before v4.0.0. Compared to UnityContainer (see below) it was a mix of `ContainerControlled` and `Hierarchical`:

- If you resolved the dep on the container it was registered on, it behaved like `ContainerControlled`: all child containers get the same instance.
- If you resolved on the children first, it behaved like `Hierarchical`.

The change in v4 (#92) makes the behavior consistent. Now `Lifetime.SCOPED` behaves like `Hierarchical`. I think supporting behavior like `ContainerControlled` is needed for completeness.

Awesome library btw! Thanks!

UnityContainer source: https://unitycontainer.github.io/api/Unity.Lifetime.html