rowanmulder closed this issue 3 years ago
Umm... that may need to be locked. Moving the flag fixes the crash, but could result in separate instances of the service, depending on how they were registered. And that could lead to unexpected behavior.
If you want, try pulling 1.4.2, currently in the develop branch. I was able to replicate the issue in my multi-threading test code and added some locking to fix it. Seems to work for me.
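For anyone curious how locking removes the race, here is a minimal, generic sketch of the idea. This is not the actual 1.4.2 change inside @LazyInjected; the wrapper name, `factory` closure, and usage are hypothetical stand-ins for the container resolution call:

```swift
import Foundation

// Sketch only: a generic lock-guarded lazy wrapper illustrating the idea,
// not the actual Resolver 1.4.2 code. `factory` stands in for the
// container resolution call.
@propertyWrapper
struct LockedLazy<Service> {
    private let lock = NSRecursiveLock()
    private let factory: () -> Service
    private var service: Service?

    init(factory: @escaping () -> Service) {
        self.factory = factory
    }

    var wrappedValue: Service {
        mutating get {
            lock.lock()
            defer { lock.unlock() }
            // The check and the resolution happen inside the same critical
            // section, so a second thread can never observe the "already
            // initialized" state while the cached service is still nil.
            if service == nil {
                service = factory()
            }
            return service!
        }
    }
}

// Usage (hypothetical service type):
// @LockedLazy(factory: { NetworkService() }) var network: NetworkService
```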
Tested it and can't reproduce the crash anymore. Looks good! 👍
@hmlongco Do you plan to release 1.4.2 in the near future? I've just stumbled upon the same issue, caused by @LazyInjected data races from concurrent threads. P.S. I'll probably use the develop branch for now, but it would still be good to rely on something officially released :)
Ever have something you've been meaning to do and keep forgetting to do it? ;)
1.4.2 released to master.
@hmlongco I just experienced this crash during the nightly test run. Great to hear there's a fix coming up. Thanks!
Hi all,
In our own code we uncovered a crash caused by a potential race condition in the @LazyInjected property wrapper (this can also apply to @WeakLazyInjected).
```swift
mutating get {
    if initialize {
        self.initialize = false
        self.service = container?.resolve(Service.self, name: name, args: args)
            ?? Resolver.resolve(Service.self, name: name, args: args)
    }
    return service
}
```
When the code enters the getter, initialize is true and is immediately set to false, while self.service has not necessarily been resolved yet. Another thread may then enter the getter, see initialize as false, and return service while it is still nil, resulting in a crash. In the older Resolver 1.2 the crash didn't occur because the order was different (e.g. in this case you could move the self.initialize = false below the assignment of self.service, preventing the crash, but that would mean an additional resolve).
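For illustration, the reordering mentioned above would look roughly like this (same identifiers as the getter quoted above, shown only as a sketch). It removes the nil return, but two threads racing past the initialize check could each resolve the service, which is the duplicate-instance concern raised earlier and why 1.4.2 uses locking instead:

```swift
// Reordered variant described above (illustration only, not the shipped fix):
// the flag is cleared only after service has been assigned, so a racing
// thread either sees initialize == true and resolves again, or sees a
// non-nil service. No nil is returned, but the service may be resolved
// (and instantiated) more than once.
mutating get {
    if initialize {
        self.service = container?.resolve(Service.self, name: name, args: args)
            ?? Resolver.resolve(Service.self, name: name, args: args)
        self.initialize = false
    }
    return service
}
```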