pdblasi-google opened this issue 1 year ago
I don't think this is something we can do half-way. We would probably have to remove the requirement that a subclass of a `base` or `final` superclass in the same library must again be marked `base`, `final`, or `sealed`, turning the error into (at most) a warning, which can be ignored. (Or which the analyzer can choose to ignore if it can see, say, that the violating class is not exported from the package and is only used in tests, or which simply uses the `@reopen` annotation.) It's no longer a language requirement, it's just an analyzer-provided recommendation.
That would mean that you can have a library containing

```dart
library src_foo;

final class Foo {
  // Something something
}

@reopen
abstract interface class FooForTesting implements Foo {}
```

and then you would just only export the `Foo` declaration from the package:

```dart
library foo;

export "src/foo.dart" show Foo;
```

and only use the interface `FooForTesting` in your own tests, by importing `"package:myself/src/foo.dart"` directly.
We can't, currently, prevent others from using `import "package:myself/src/foo.dart";` (no package-private libraries, yet!), but we can, in good conscience, ignore anyone who does so, and not care if their code breaks.
There is no technical requirement that you can't reopen your own `final` or `base` classes. If your class subclasses a `base` class from another library, you can't reopen, since it's not your interface to reopen; but if all restricted superclasses are in the same library, reopening is just not considered a good idea, not a real, technical problem.

Reopening a `base` or `final` class might affect some possible optimizations, and maybe eventually some possible field promotions, but the requirement that you cannot reopen your own classes was added mainly to make it impossible to do so by mistake: to make `final class C` or `base class C` actually guarantee that all subtypes are subclasses and inherit implementation from `C`, e.g., of private members.
Breaking that guarantee yourself just means it's not guaranteed, but if you use the sub-interface only for testing (and assume nobody else reaches into your `src/` directly to get to the interface), then your clients can't break the guarantee.
> the requirement that you cannot reopen your own classes was added mainly to make it impossible to do so by mistake
This is why I personally prefer the `reopen` keyword option. It requires you to be explicit in breaking that contract, and you can then combine that with `@visibleForTesting` to further narrow down the supported use cases of the reopened class.

It'd also allow for guaranteeing what capabilities a reopened class has, such as requiring it to only be implemented, narrowing the edge cases that would have to be supported by an author who decides to reopen a class.
There's an inherent question here of whether we want to encourage (the continuation of) an ecosystem where test code is considered outside the bounds of semver, and thus conversely to encourage writing convenient-in-the-short-term but fragile-in-the-longer-term tests.
Or to put it another way:

> Many apps were written with the previous capabilities of Dart in mind, and as such they didn't wrap package APIs locally to allow for mocking/faking in tests.
Do we consider the status quo of people writing this kind of fragile test to be something we want to continue indefinitely? Making an escape hatch designed to perpetuate it almost guarantees that it will, in fact, continue to be the case.
> This is why I personally prefer the `reopen` keyword option. It requires you to be explicit in breaking that contract, and you can then combine that with `@visibleForTesting` to further narrow down the supported use cases of the reopened class.
Isn't `@visibleForTesting` only supposed to be used from enclosing-package tests? That wouldn't cover the use case of apps mocking packages they use for test DI, which is the use case a lot of the discussion I've seen (including your comments here, I think?) is about.
> Isn't `@visibleForTesting` only supposed to be used from enclosing-package tests?
Per that question specifically, it seems like it's intended to work for any tests, not just those for the defining package. At the very least, it behaves that way, whatever the actual intention. It will give a warning in a project that doesn't define the `@visibleForTesting` member, but doesn't give a warning in the same project's tests.

For example, Flutter's `MultiChildRenderObjectElement.children` is marked with `@visibleForTesting`, but you can reference it in tests in any project without getting a warning.
> There's an inherent question here of whether we want to encourage [...] writing convenient-in-the-short-term but fragile-in-the-longer-term tests.
TL;DR: Yes, I think we want to encourage fragile-in-the-longer-term tests over dissuading people from writing tests at all. Or at least make that an option for package authors to provide.

I'm coming at this from a couple of different angles. First: do we want to encourage people to write tests? Second: do we want to encourage package authors to use class modifiers?

I'd argue that a fragile (not flaky, but fragile) test is better than no test, especially considering that a lot of mock libraries out there are code-generated and/or implemented with `noSuchMethod`, and therefore pretty simple to update on the whole.
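(For context on the `noSuchMethod` point, here is a hand-rolled sketch of the technique, not taken from any particular mocking package; the names are illustrative:)

```dart
// A hand-rolled fake: every unimplemented member of the implemented
// interface is routed to noSuchMethod, so adding a method to Service
// doesn't break this class at compile time.
class Service {
  String fetch(int id) => 'real:$id';
}

class FakeService implements Service {
  @override
  dynamic noSuchMethod(Invocation invocation) {
    // Return a canned value for anything that gets called.
    return 'fake';
  }
}
```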
In my experience, getting people to write tests is like pulling teeth. Even worse than getting people to write tests is trying to justify the time and effort needed to set up an architecture that ensures your app can be tested.
From C#/Xamarin land, every single dependency we had needed to be wrapped in its own local API, which we then had to keep up to date with the backing dependency. That had to be done in two parts: an interface, and an implementation that used the dependency. Then you had to set up dependency injection such that you could easily inject a different version of the interface when you were writing tests.
None of these things are bad. In fact, it encourages loose coupling, relatively simple refactoring, and some interesting ways to do feature logic. That said, they are tedious, especially when most of the benefits will only pay off in the long term, if ever. And not every project is given the go-ahead to build that architecture, generally leading to a lack of tests, or testing hacks that make tests flaky (e.g., static instances).
Moving to Dart from C#-land, testing became almost easy in comparison. I wasn't prevented from using any of the above patterns and could still get the benefits out of it when needed, but I no longer had to wrap every dependency. I no longer had to ensure that my API was completely interface based, nor split out multiple interfaces for specific implementations when additional functionality was needed.
Creating and injecting fakes/mocks started to feel like a feature of the language, rather than something you had to force yourself and your team to do in order to have effective tests. With class modifiers, it's back to the same boilerplate requirements that other languages enforce by design. Which brings us to the second perspective:
As it stands currently, many package authors are likely faced with a few options when it comes to testing with class modifiers:

1. Don't add class modifiers at all, keeping testing easy but forgoing their benefits.
2. Add class modifiers and restructure the entire package with interfaces in such a way that users can create mocks of the interfaces to use, but not extend/implement the actual implementations that they want to have locked down.
3. Add class modifiers as-is, and leave it to users to wrap the package's APIs themselves for testing.
The first option I don't like very much. Class modifiers were added for many very good reasons, and forgoing all of those benefits in the production use case just to make testing easier leaves a sour taste in my mouth personally.
While the second option could potentially make the code more "technically correct", not every package out there actually needs the ability to swap out implementations outside of a testing context. It's a whole lot of work to make things testable, and I think many authors will be driven to options 1 or 3 because of it.
The last option I think is bad for the overall community. I think it'll either encourage people not to use specific packages if those packages use class modifiers, or make it so people can't feasibly write tests when packages they depend on introduce class modifiers. Yes they can go through the work to be able to write the tests, but similarly to option 2, I worry that the expected work to rework everything will just drive people to not write tests.
Contrasting:

> Especially considering that a lot of mock libraries out there are code-generation [...] and therefore pretty simple to update on the whole.

with:

> From C#/Xamarin land, every single dependency we had needed to be wrapped in its own local API, which we then had to keep up to date with the backing dependency. That had to be done in two parts, an interface and an implementation that used the dependency. [...] None of these things are bad. [...] That said, they are tedious.
it seems like one solution would be to make a tool that generates the initial wrapper the way mockito generates mocks. If making a wrapper and default implementation were as simple as creating mocks is now, then they wouldn't be tedious. But unlike generated mocks, they wouldn't need updating unless you actually needed to change your usage; they wouldn't be broken by every new parameter or method. (And when you do choose to update them, it's just re-running, like mockito.)
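(To illustrate the shape of what such a tool might emit; the names here are hypothetical, not an existing generator:)

```dart
// Hypothetical generated wrapper around a `dart:io` HttpClient
// dependency: an app-owned interface plus a default implementation that
// delegates to the real class. Tests mock `HttpWrapper`; production code
// uses `DefaultHttpWrapper`. Only the members the app actually calls are
// wrapped, so upstream additions don't break the wrapper.
import 'dart:io';

abstract interface class HttpWrapper {
  Future<HttpClientRequest> getUrl(Uri url);
  void close({bool force = false});
}

class DefaultHttpWrapper implements HttpWrapper {
  DefaultHttpWrapper(this._client);
  final HttpClient _client;

  @override
  Future<HttpClientRequest> getUrl(Uri url) => _client.getUrl(url);

  @override
  void close({bool force = false}) => _client.close(force: force);
}
```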
> - Add class modifiers and restructure the entire package with interfaces in such a way that users can create mocks of the interfaces to use, but not extend/implement the actual implementations that they want to have locked down.
I don't think this is actually a very attractive option in general; if your actual API is the interface, then you're right back to not being able to add methods or optional parameters without it being a breaking change. That seems like the much bigger win for `final` than locking down the implementation class.
I think if we can make option 3 easy, then that would be a win for test health. If we can't, I agree it's a tricky tradeoff.
> it seems like one solution would be to make a tool that generates the initial wrapper the way mockito generates mocks
I like this idea! I think it'd be good on its own, whether or not the language allows exposing a `final`/`base` class implementation for testing. I still think having something akin to `reopen` is useful as a first-level language feature, though, as opposed to a separate tool that works around not having something like `reopen`.

My concern with offloading testing architecture to end users is that the additional effort dissuades users from writing tests. Adding a tool to make that significantly easier would help, but not bring us back around to what Dart supported in Dart 2. Something like `reopen` would bring us a lot closer to Dart 2's capabilities while still providing the benefits of class modifiers.
Personally, I'd be fine with requiring it to only be used in a testing context, like `@visibleForTesting` does, even escalating it to an error instead of a warning. I think that might be a bit heavy-handed, though. I can see some potentially useful cases for production code as well, but I am quite biased in wanting this specifically to support testing the way Dart 2 could, without dissuading package authors from using class modifiers.
I disagree that your package should have special testing capabilities. Anyone can write tests against your APIs. It's important to make sure external packages are working, especially for legacy and large systems.
Have you considered this approach?

```dart
sealed class MyClass {
  // Implementation here.
}

sealed class SubClass extends MyClass { // or `implements`
  // Implementation here.
}

@visibleForTesting
abstract interface class MockableMyClass implements MyClass {}

@visibleForTesting
abstract interface class MockableSubClass implements SubClass {}
```
[Edit, I forgot to mention: `sealed` implies abstract, so you'll need to use factory constructors returning a trivial subclass in order to pretend that a class like `SubClass` is concrete.]
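(A minimal sketch of that factory-constructor trick; `_ConcreteMyClass` is an illustrative name:)

```dart
sealed class MyClass {
  MyClass._(); // Only subclasses in this library can call this.

  // Lets callers write `MyClass()` even though `sealed` implies abstract.
  factory MyClass() = _ConcreteMyClass;
}

final class _ConcreteMyClass extends MyClass {
  _ConcreteMyClass() : super._();
}
```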
I'm assuming that the `final` modifier was well justified, in the sense that the maintainers of `MyClass` actually want to ensure that other libraries can't create any subtypes of `MyClass`, except via `MockableMyClass`, which is only `@visibleForTesting`.

The approach generalizes to hierarchies, based on declarations like `SubClass`. Each class in the hierarchy can only be subtyped in other libraries via those `@visibleForTesting` classes, and only via `implements`.

The `sealed` modifier is not used for exhaustiveness checks in this case (although there's nothing wrong with switching on those classes). The point is that we want to prevent direct subtypes from being declared in other libraries, and we take care to maintain the same restrictions for all subtypes of `MyClass`.
The `Mockable...` classes could also be unrestricted (`class` rather than `interface class`), or they could be `base`, and they could contain some kind of implementation, if that turns out to be helpful in some scenario. For instance, they could have mock-friendly implementations of certain private members. In any case, they'd need to be `abstract` if any member isn't implemented (which is probably always true).
Of course, as mentioned here, there is also the possibility that the strict control on `implements` subtypes isn't really necessary, in which case you could just replace `final` by `interface`:

```dart
interface class MyClass {
  // Implementation here.
}

interface class SubClass extends MyClass { // or `implements`
  // Implementation here.
}
```
Those classes can be freely mocked or reimplemented in other libraries, which might be OK.
> Of course, as mentioned here, there is also the possibility that the strict control on `implements` subtypes isn't really necessary

If you ever want to be able to add anything to your class (new methods, new optional parameters) without a breaking change, which almost everyone does for package API surfaces, and you want to actually strictly follow semver, then you do need to prevent `implements`. So in a world where people use qualifiers to express the actual intent of their API surface, `final` would be extremely common.
If you prevent `implements` on a class, other libraries cannot mock that class using `implements`-based mocks. That includes your own test libraries, which are other libraries. (We do not have friend libraries, and if we did, we also wouldn't want dependencies from `lib/` code to `test/` code, if that's even possible to express.)

What I think would be sufficient is preventing `implements` for others in general, while still allowing `implements` for your own tests. Or for others' tests too, perhaps. That is, you're not allowed to implement without adding a `noSuchMethod` to ensure that adding or changing members doesn't fail. Which sounds like the description of a `base` subclass with a non-trivial `noSuchMethod`.
You might consider exposing a `final class FooMock extends Mock implements Foo {}` declared in the same library as `final class Foo`. But that requires making `mockito` a non-dev-dependency, which is probably a no-go.
So alternatively, declare

```dart
library src_foo;

final class Foo { ... } // Exported by package.
base mixin FooMockHelper implements Foo {} // Not exported by package.
```

in the same library as `Foo`, but don't export it in the package's public API, and instead provide a library `package:mypackage/src/mocks.dart` which declares

```dart
library src_mocks;

import "package:mockito/mockito.dart";
import "foo.dart";

// "Implements" Foo through mixing in an empty subclass.
final class FooMock extends Mock with FooMockHelper {}
```

which relies on `mockito` even though it's only a dev-dependency for your own package.
If you can mark the entire library as "public for testing", so you don't accidentally use it in production code, that would be nice too.
Just to be pedantic: the phrase "to strictly follow semver" assumes a specific interpretation of semver, namely that a non-major version increment cannot cause any errors in any existing program. That's not actually what it promises.
Semantic versioning is an API versioning system, where API can cover anything externally visible, including behavior. You have to define what the API you are versioning is, and what it isn't. Whether a change is breaking (what semver calls "incompatible") depends on what you consider the API that you are versioning.
If a `sort` method doesn't promise which algorithm it uses or whether it's stable (or better, explicitly states that it doesn't guarantee anything), changing that behavior is not an incompatible change, even if someone might have chosen to depend on the specific implementation and their code stops working. They're depending on something which is not part of the versioned API; it's implementation-specific behavior, not guaranteed to be stable at all.
If someone uses declarations that are part of your package's API, which in Dart have so far had to be very permissive, it's still fair to say that uses that are not intended (or, preferably, actively discouraged) are not covered by semver versioning.
If a class is documented as not intended to be implemented, it actually being implementable, or any concrete implementation being valid, is not part of the versioned API. It can be broken in a minor version increment. Heck, even in a patch.
So if your package has a class which is defined as

```dart
/// Must not be implemented outside of tests, and only as a mock.
interface class Foo { ... }
```

then adding members to the interface in a minor version upgrade is strictly following semver, even if it might break code which uses the class in ways that are not part of the guaranteed and versioned API.
The edge cases are when a usage is not documented as intended, not insinuated to be intended, and not actually intended, but the documentation also doesn't say that you can't. That's where you have to decide for yourself whether such usage is reasonable or unreasonable. If reasonable, you should probably consider it part of the API until the next major version. If not, don't.
Semver doesn't guarantee that no downstream code can break in a minor increment. It just guarantees that all currently promised usages, by themselves, will stay valid. You can still get integration errors, naming conflicts with other packages, subtle changes in timing, or even fixing of bugs that breaks someone's workaround (the bugged behavior was actually not the promised behavior, relying on it is not guaranteed).
As @eernstg mentions, and as was further discussed on Discord, using `sealed` you can get a version of this behavior with what is available now. It requires a decent amount of boilerplate and IMO feels more like taking advantage of a quirk of `sealed` than an intended design of the language, but it is possible, and the external-facing API is clean.

I created a gist that expands a bit on it, with a basic but working example of the workaround: https://gist.github.com/pdblasi-google/c287ada45464ef381812fd9b7771fc5e

Also as discussed on Discord, this would be something we'd only want to work within the library that defined the `final`/`base` class. That was not made clear in the original proposal, so I've updated that comment as well to make it (hopefully) clearer.
> You have to define what the API you are versioning is, and what it isn't.

Thanks, I'd missed the part about the docs explicitly being part of the API, and was assuming the API was defined by the code itself.

That said, if we thought comments saying "Hey, don't do that" were actually a good way of declaring restrictions on class usage, we probably wouldn't have a new language feature for formally defining them.

So if it ends up that everyone hates losing the ability to mock things in tests so much that there's enough pressure on package authors from clients that basically nobody uses the new language feature (and I'm not saying that's a foregone conclusion, but it's at least a possible outcome based on the discussions that have cropped up), that seems like it would be a bad outcome.
You can't really enforce class restrictions. Anyone can fork the project, remove the modifier, and use the project or re-export the classes to other users of their package. In the end, people might circumvent your precious contract and extend your final class.

> "In object-oriented programming, the open/closed principle states that "software entities should be open to extension, but closed to modification"; that is, the entity can allow its behavior to be modified without modifying its source code." - Wikipedia

`final` and `base` could behave like annotations, in my opinion. Considering all classes are known at compile time, these modifiers won't have any significant performance impact.

According to Baeldung, even in Java, "There are no reported performance benefits of applying final to classes and methods."
> That said, if we thought comments saying "Hey, don't do that" were actually a good way of declaring restrictions on class usage, we probably wouldn't have a new language feature for formally defining them.
How strictly you enforce or respect "prose restrictions" depends on the risks you're willing to take. The platform libraries have to be very careful about not changing anything (to the point where it's almost impossible to optimize async code if it changes timing in the slightest way). Other libraries can choose to care less about breaking downstream users doing things they're not supposed to do.
We also don't prevent importing libraries in `lib/src/` of other packages, but I'd personally not care if changing such code breaks someone.
The reason for adding the restrictions to the language is not just to give ourselves more ability to enforce intentions; it's also to give the compiler more information.

The `sealed` modifier is special: it is recognized at the language level as a guarantee that can be used for exhaustiveness checking in switches.

The other modifiers can still allow a smart compiler to do local optimizations based on analysis of just the code in the same library, because it can know that there is no class anywhere else which can violate the properties. Maybe it can even implement method dispatch more efficiently for a `base` type, because (in Java terms) it's an instance method invocation, not an interface invocation. So there is that too, but it's not something that a whole-program compiler couldn't deduce without annotations. (Unlike Java, though, Dart is ahead-of-time compiled, so it's not something we can just fix in a JIT. There might be a real benefit to a class being `final`, if not being `final` means that someone, somewhere, implements it and makes member access slower for everybody.)
But it's true that the new modifiers are mainly about enforcing restrictions that could otherwise be written in prose, and mainly for people who have to be extra careful about not breaking downstream users. There should be no real need to add modifiers to application code, except to keep yourself, and your co-workers, honest.

I think the biggest use-case for `base` is to allow calling private members on other objects of the same type. Which is safe if everybody extends the type, and nobody implements it. If you don't need that, probably don't use `base`. And it's definitely something that could just be written with words, because the code only crashes if someone ignores the recommendation.
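(To make the crash concrete, a minimal sketch of the private-member pattern that `base` protects; `Counter` and `_value` are illustrative names:)

```dart
// Counter calls a private member on *another* instance. That's safe for
// subclasses created with `extends` (they inherit _value), but an
// `implements`-based fake from another library has no _value (it can't
// even declare one, since private names are library-scoped), so
// `other._value` would throw NoSuchMethodError at runtime. Marking the
// class `base` rules the fake out at compile time instead.
base class Counter {
  int _value = 0;

  bool sameAs(Counter other) => _value == other._value;
}
```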
The biggest use-cases for `final` are classes that are not built to be extended (in Java-land they say "design for extension, or prevent it"). I've used it for implementation classes, which are just implementing another interface (you should always be using the interface type in APIs anyway), and for value classes, which don't need to be mocked, because you can just create a real object with any value you want.

Again, you could just tell people not to implement (and prevent extending by not having a public constructor). That's what we've done so far, and it has occasionally been a pain when we forgot to prevent extending, and someone relied on the default constructor.
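(For reference, a sketch of the "no public constructor" trick; the class is illustrative:)

```dart
// Pre-Dart-3 way to block `extends` from other libraries: with only a
// private generative constructor, outside subclasses have no super
// constructor to call. `implements` is still possible, though, which
// is exactly the gap `final` now closes.
class Temperature {
  Temperature._(this.celsius);

  final double celsius;

  static Temperature fromCelsius(double c) => Temperature._(c);
}
```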
(It's quite possible that `interface` is a valid alternative to `final` for data classes. For implementation classes not designed for extension, there is no benefit in `interface`; just implement the actually useful superinterface.)
And the main usage for `interface` so far is `abstract interface class` with no concrete members, which is just the interface. It's possible that people will use `interface class Foo { /* complete implementation */ }` to have a class that's not intended to be subclassed in production, but which can still be mocked. (If that's what people choose to do, I expect an `@interfaceForTesting` annotation to show up.) Or as an alternative to `final` for value classes, where it doesn't matter that someone provides a new implementation of the interface; the current implementation is just not designed to be extended.
> I think the biggest use-case for `base` is to allow calling private members on other objects of the same type. Which is safe if everybody extends the type, and nobody implements it. If you don't need that, probably don't use `base`.

Why don't you think people should use `base` for the relatively common case where a class is fine to extend, but not implement, because new methods can be added at any time?
> You can't really enforce class restrictions. Anyone can fork the project, remove the modifier, and use the project or re-export the classes to other users of their package.

In Dart, what you are describing creates a new, unrelated type.

> In the end, people might circumvent your precious contract and extend your final class.

No, they would be extending their own, non-final class. Which is fine, because it has no effect on any code using the original class, since they aren't the same type.
The title of this issue includes 'for testing', and mocks are mentioned many times. So I wrote a comment focusing on being more flexible with mocks, in an issue with a related topic: https://github.com/dart-lang/language/issues/3108#issuecomment-1562778341.
> Why don't you think people should use `base` for the relatively common case where a class is fine to extend, but not implement, because new methods can be added at any time?

It's a perfectly reasonable use. I just haven't, personally, seen it much, so I tend to forget it. Maybe that's just because I'm working with fairly old APIs that don't change much any more; I don't write framework code or widget superclasses. The few times the platform libraries have tried saying that you "should extend this class", we haven't actually added any methods to the class later.
(Also, adding methods to a `base` class can still break subclasses, if they used the name for something else. Only `final` is completely safe!)
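(A sketch of what such a name collision looks like; `Logger` and `flush` are illustrative names:)

```dart
// v1.0 of a package:
base class Logger {
  void log(String msg) => print(msg);
}

// A downstream subclass, adding its own unrelated method:
base class BufferedLogger extends Logger {
  final _buffer = <String>[];
  void flush() => _buffer.clear();
}

// If v1.1 of the package adds `void flush()` to Logger with a different
// meaning, BufferedLogger's method now overrides it: a compile error if
// the signatures are incompatible, or a silent behavior change if they
// happen to match.
```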
> Also, adding methods to a `base` class can still break subclasses, if they used the name for something else. Only `final` is completely safe!
Sure, but there's a huge difference in practical risk between "if we happen to add a method that collides with one of your methods, you'll be broken" and "you will be broken every single time we add a method".
> According to Baeldung, even in Java, "There are no reported performance benefits of applying final to classes and methods."
Minor point, but the reality is a lot more nuanced than the above suggests. It is true that in a whole-program compiler (and a JIT is, for this purpose, a whole-program compiler), there should be no performance benefit to adding or removing `final` on a class which is not already subclassed/implemented. However, even with a whole-program compiler/JIT, there is a significant performance difference between a class which has no subtypes (and hence for which all typed calls can be statically dispatched and/or inlined) and a class which has one or more subtypes (and hence for which every call site is polymorphic until proven otherwise). And the point is that marking a class `final` prevents people from adding subclasses and thereby causing a performance regression. This was precisely the reason that we recently chose to make the typed data classes ad hoc final, and likely (one of) the reason(s) that types like `int`, `double`, and `String` were ad hoc final from the beginning.
In a modular compilation world (which we do not currently live in, but which I think it is highly likely we will eventually have to deal with if we are successful as a language), there are, of course, significant performance benefits to adding `final` regardless of whether subclasses actually exist.

All of this is not to say that people should go around sprinkling `final` on all of their classes for performance reasons; in most cases it really isn't going to matter. But it is to say that there are existing, legitimate reasons, in at least some narrow cases, where having the ability to make a class provably `final` is valuable for performance reasons.
Thank you very much, Leaf. Good to know the performance degrades when the class is subclassed or implemented. I now understand why some SDK classes must not be inherited.
@pdblasi-google, @stuartmorgan, @Wdestroier, you have created / commented on this issue. In https://github.com/dart-lang/language/issues/3111 I explored the idea that mock classes could be allowed to have `implements BaseClass` where `BaseClass` is a `base` class in a different library. (It's a separate discussion how we define 'mock class', but I'm using "has a non-trivial `noSuchMethod`" as the criterion.)

Basically, nobody is happy about allowing these relaxations of the current class modifier rules. If you wish to argue against or in favor of that idea, it would be interesting to get your input in a comment on #3111.
> So if it ends up that everyone hates losing the ability to mock things in tests so much that there's enough pressure on package authors from clients that basically nobody uses the new language feature [...] that seems like it would be a bad outcome.
Balancing mocking with encapsulation is a hard problem, and I suspect there are no silver bullets. The goals of "I can author a class with invariants I control" and "My tests can do whatever they want with a class" are opposed.
So, yes, the class modifiers make it harder to mock everything to your heart's content. Conversely, mocking makes it harder to encapsulate everything to your heart's content. Which of those facts is a feature and which is a bug is mostly a question of perspective: are you the class author or the mock author? :)
We chose the default behavior for class modifiers deliberately and not just for backwards compatibility. Classes are permissive and mockable by default and that's probably the right default for the majority of application developers. Don't spend a second thinking about modifiers and just write your code.
Likewise, if you're a package author and you're fine with people extending, implementing, and mocking your stuff, just leave the classes as is and don't put any modifiers on them.
But if you are a package maintainer and you do want to be able to enforce some constraints or invariants on how your API is used, we have given you a few opt-in tools to let you do that. Yes, when a package author uses those tools, they take away some flexibility from the consumers of that package. API design always involves negotiating liberties like this. Every time a package author makes a function private, or stores a variable instead of accepting it as a parameter everywhere it's used, or, heck, defines a method with a certain name, they are giving themselves some flexibility at the expense of their consumers.
In well-designed packages, this is a net benefit, because when package maintainers have more flexibility and control, they are able to evolve and maintain their packages more quickly and package consumers get bug fixes and new features faster.
I don't know what best practices we'll settle on for when to use class modifiers in public APIs. It may be that the best practices are somewhat domain-specific. But I do think giving package maintainers the ability to express these invariants is better than we had before. I've seen many cases where a package maintainer wants to add some new feature or clean up the API in some way but have been unable to because the change might be breaking for people using the package in ways it was never intended. If you aren't a maintainer of a widely used package, it's easy to be blind to this friction, but it's very real.
I think many classes can be locked down in ways that make packages easier to evolve, understand, and use without sacrificing much in the way of useful flexibility. You don't necessarily need to be able to mock everything in order to make it easy to write tests.
> I've seen many cases where a package maintainer wants to add some new feature or clean up the API in some way but have been unable to because the change might be breaking for people using the package in ways it was never intended.
But the primary driver I've seen for this conversation is that package maintainers are getting a lot of back-pressure from clients not to use these modifiers because it breaks mocking. So my comment that you quoted there stands: are we okay with a potential outcome where package maintainers who want to solve the issue you are describing here feel that they can't because all their clients will yell at them, so it stays unsolved in practice?
> Likewise, if you're a package author and you're fine with people extending, implementing, and mocking your stuff, just leave the classes as is and don't put any modifiers on them.
But what if you are a package author who is not fine with people extending and implementing your stuff in production code (for the reasons you describe above), but is okay with people mocking your stuff, because you don't mind breaking people's test code (if they choose to write inherently fragile tests by mocking code they don't control) but don't want to create cascades of production-code breakage in the ecosystem?
That's the case that's being described by lots of people asking for this, and there's not a great answer for it.
@stuartmorgan wrote:
> there's not a great answer for it
It is indeed a substantial clash of opposing design forces, but I believe the best answer that doesn't change anything about the language is the one that @lrhn mentioned:
```dart
// --- Library 'foo.dart'.
final class Foo { ... }

@visibleForTesting
base mixin MockFooMixin implements Foo {} // Could also be in 'src/foo.dart'.

// --- Library 'mock_foo.dart'.
import 'package:mockito/mockito.dart';
import 'package:something/foo.dart';

base class MockFoo extends Mock with MockFooMixin {}
```
This basically means that there is no protection beyond what you'd get with `interface class Foo`, except that anyone who wants to implement `Foo` will instead have to use `with MockFooMixin`, and that's subject to checks outside the Dart language itself, because some tools will emit diagnostic messages based on `@visibleForTesting`.
## Background
Testing in Dart has generally been a more streamlined process than in most languages, primarily due to the flexibility of a class. Up until class modifiers were introduced, a class could be treated as any or all of the following:

- a constructible class
- a superclass to be `extend`ed
- an interface to be `implement`ed
- a mixin to be applied with `with` (subject to some restrictions)
Because of this, testing was streamlined and required less boilerplate than in many other languages. Instead of needing to create an interface separately from an implementation, or requiring a user to wrap an API with their own set of classes, a user could `implement` a service or part of a package and inject that into their stack easily.

The tradeoff was that this flexibility causes many potential issues for package creators, from implementations that completely subvert the intended behavior, to issues where libraries depend on unimplemented private members of classes. These issues are addressed quite well by class modifiers.
As a note, this was one of the options available in the original proposal either explicitly or implicitly in the Transitive restrictions section.
## Proposal
We add a new keyword or annotation to allow a publicly visible class to expose a previously closed class, i.e., one that was closed through `final` or `base`, not through `sealed`. This keyword would only work within the same library where the `final`/`base` class was defined, not be generally available.

Alternatively, we update compilation to allow a class marked with `// ignore: subtype_of_base_or_final_is_not_base_final_or_sealed` to actually compile. This would likely not be able to be limited to the same library as the `final`/`base` class, so it would likely be less desirable.

This could be something that is used outside of testing, but I'd think the most common use case would be to create an easy entry point for testing, so most places where this is exposed would also include a `@visibleForTesting` annotation.

This could be limited to an `abstract interface` that `implements` a `final`/`base` class, instead of allowing `extends`, to prevent incomplete replacements from being created and to limit the number of known issues that would have to potentially be worked around by an author.

In my opinion, the major differences between this and just not including the `final`/`base` keywords are twofold:

1. The author has to explicitly opt in to breaking the contract, rather than the class being open by default.
2. The author can constrain what capabilities the exposed class has (for example, implement-only, or visible only for testing), rather than leaving the class fully open.
keywords are twofold:Code example
## Use case
Primarily, my use case is simplifying testing.
### From a package author standpoint
Many packages were written with the previous capabilities of Dart in mind, and therefore the migration to Dart 3 may require rewriting packages to include explicit `interface` classes, plus breaking changes to their APIs to handle adding `final` or `base` modifiers while still allowing mocked/faked implementations for testing. This proposal would allow them to minimize breaking changes to their packages while still taking advantage of class modifiers.

### From a user perspective
Many apps were written with the previous capabilities of Dart in mind, and as such they didn't wrap package APIs locally to allow for mocking/faking in tests. As more packages begin to use class modifiers, more and more users are going to see migrating to Dart 3 as a task that requires heavy rewrites, and it may drive down adoption. Giving package authors the ability to provide much of the benefit of unmodified classes, at least for testing, would greatly reduce churn around a feature that is objectively good for both authors and users.