DanTup closed this 2 years ago.
Looks like both the Linux and Windows tests aren't passing yet.
@jakemac53 where can I see the test results? It just shows "1 workflow awaiting approval" here.
nm - it was because I pushed another change =D
@jakemac53 where can I see the test results? It just shows "1 workflow awaiting approval" here.
I have to approve running the bots each time since you are a new contributor to this package, but you should be able to see them now (you may need to refresh - the GitHub UI is sometimes out of date).
@jakemac53 it seems like there's a difference in behaviour (prior to the change) when using the `PollingDirectoryWatcher` when a directory doesn't exist - it seems not to generate errors, and it completes `ready`:
```dart
import 'package:watcher/watcher.dart';

void main() async {
  // Ensure the app doesn't quit for at least 10s.
  Future.delayed(const Duration(seconds: 10));
  // Try to watch a folder that does not exist.
  final watcher = PollingDirectoryWatcher('/tmp/foo123');
  // Print any events/errors from the stream.
  final sub = watcher.events.listen(
    (event) => print(event.type),
    onError: (e) => print('ERROR ON STREAM: $e'),
  );
  // Wait for the watcher to become ready.
  print('waiting for ready...');
  try {
    await watcher.ready;
    print('watcher is ready!');
  } catch (e) {
    print('ERROR WAITING FOR READY: $e');
  }
  print('DONE!');
  sub.cancel();
}
```
For `DirectoryWatcher`, this never completes (without my change), but for `PollingDirectoryWatcher` it does. Is this a bug?
For `DirectoryWatcher`, this never completes (without my change), but for `PollingDirectoryWatcher` it does. Is this a bug?
I would say so, yes, but it's not clear and obvious which implementation is really wrong here. For the polling watcher it kind of makes sense, because for that implementation empty directories don't pose a problem.
Regardless, the inconsistency is bad, and given we can't really implement it efficiently for the non-polling watcher, I would say both should complete `ready` with an error.
@jakemac53 on second thoughts, I'm not certain we can do that (suppress the errors from the stream and only use them to complete `ready`) reliably. The errors are coming down an async stream, so it seems possible we'll get an error after we've already completed the `readyCompleter` and then have nowhere to put it (or we put it on the stream only if `ready` already completed, which seems inconsistent).
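The race being described can be sketched like this (a minimal illustration using hypothetical names, not this package's actual internals): because errors arrive asynchronously, each one has to pick a destination based on whether `ready` already completed.

```dart
import 'dart:async';

/// Hypothetical sketch of a watcher that routes errors to `ready` first.
/// None of these names come from package:watcher itself.
class SketchWatcher {
  final _readyCompleter = Completer<void>();
  final _eventsController = StreamController<String>.broadcast();

  Future<void> get ready => _readyCompleter.future;
  Stream<String> get events => _eventsController.stream;

  void _markReady() {
    if (!_readyCompleter.isCompleted) _readyCompleter.complete();
  }

  void _onError(Object error) {
    if (!_readyCompleter.isCompleted) {
      // The error arrived before `ready` completed: fail `ready` with it.
      _readyCompleter.completeError(error);
    } else {
      // The error arrived after `ready` already completed successfully:
      // the events stream is the only place left to report it.
      _eventsController.addError(error);
    }
  }
}
```

The inconsistency is the `else` branch: consumers would see some errors via `ready` and others via the stream, depending only on timing.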
As for the Linux failure, it seems like the error is generated from the error handler in this code:

`_listen(innerStream, _onBatch, onError: _eventsController.addError);`

The places where I'm completing `ready` with an error have all already called `close()`, but this code does not. That means I'm not sure it's safe to call `emitError` here (I don't know for sure it won't start closing on some errors that it did not previously). The call to `close()` in that case seems to come from the `handleDone` call in the constructor (which seems like it would fire for non-error cases too). I'm not sure what's different between this and the other platforms.
I wonder if it may be simpler for `ready` to always complete successfully, regardless of whether there are any errors, and keep the errors in the stream (fatal errors can be detected by the stream closing)? That would make this change non-breaking; it would just mean that `ready` is now guaranteed to complete when the watcher has been initialized (regardless of whether it will then error or not).
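Under that scheme, consuming code would look something like this (a sketch assuming the proposed semantics; the error/close behaviour is exactly the part under discussion):

```dart
import 'package:watcher/watcher.dart';

void main() async {
  final watcher = DirectoryWatcher('/some/path');
  final sub = watcher.events.listen(
    (event) => print(event.type),
    // Errors stay on the stream rather than failing `ready`.
    onError: (e) => print('error: $e'),
    // A fatal failure is signalled by the stream closing.
    onDone: () => print('watcher stopped'),
  );
  // Under the proposed semantics this always completes once the watcher
  // has initialized, even if watching subsequently fails.
  await watcher.ready;
  sub.cancel();
}
```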
Regardless, the inconsistency is bad, and given we can't really implement it efficiently for the non-polling watcher, I would say both should complete `ready` with an error.
I've done this, however... there was explicit code to filter out directory-not-found errors, so the fix involved removing that. I'm not certain this isn't regressing something (I believe it was originally added at https://codereview.chromium.org//22999008 but I can't see an obvious description of the reasoning).
The errors are coming down an async stream, so it seems possible we'll get an error after we already completed the `readyCompleter` and then have nowhere to put it (or we put it on the stream only if `ready` already completed, which seems inconsistent).
I was initially thinking we should put errors on the stream if the `ready` completer is already completed. But I also realize now that you could just listen to the stream and never wait for `ready`, so in that case we probably want to make sure the errors also go to the stream.
I wonder if it may be simpler for `ready` to always complete successfully, regardless of whether there are any errors, and keep the errors in the stream (fatal errors can be detected by the stream closing)? That would make this change non-breaking; it would just mean that `ready` is now guaranteed to complete when the watcher has been initialized (regardless of whether it will then error or not).
This also makes me wonder what the behavior is if the directory being watched gets deleted while it is being watched? In that case we could only send an error on the stream, and it may make sense to model this in the same way (just send an error on the stream).
It doesn't feel right to just complete with success when we know it failed, but maybe it is ultimately the right choice.
It doesn't feel right to just complete with success when we know it failed, but maybe it is ultimately the right choice.
Yeah, it does seem odd, but I'm not sure we can be consistent the other way (at least, the code appears to allow for it if we get some types of fatal errors on the underlying stream after we fired `ready` - but I don't know if that's possible or common).
I'll change it to this for now (and ensure `ready` is described as working this way) to see if we can get everything else working, but if you decide it's not right and have an idea for how to handle it, I'm happy to tweak it.
Pushed these changes. Also reverted my change to the polling watcher that un-swallowed the directory-not-found errors, since it's no longer necessary for this change (assuming we complete `ready` normally), although it still seems slightly odd that this differs.
This also makes me wonder what the behavior is if the directory being watched gets deleted while it is being watched?
I tested like this:
```dart
import 'dart:io';

import 'package:watcher/watcher.dart';

void main() async {
  // Ensure the app doesn't quit for at least 30s.
  Future.delayed(const Duration(seconds: 30));
  final testDir = Directory.systemTemp.createTempSync('watcher_testing');
  await Future.delayed(const Duration(seconds: 1));
  // Watch the temporary folder.
  final watcher = DirectoryWatcher(testDir.path);
  // Print any events/errors from the stream.
  final sub = watcher.events.listen(
    (event) => print(event.type),
    onError: (e) => print('ERROR ON STREAM: $e'),
    onDone: () => print('Watcher closed!'),
  );
  // Wait for the watcher to become ready.
  print('waiting for ready...');
  try {
    await watcher.ready;
    print('watcher is ready!');
  } catch (e) {
    print('ERROR WAITING FOR READY: $e');
  }
  await Future.delayed(const Duration(seconds: 1));
  print('Deleting directory!');
  testDir.deleteSync();
  await Future.delayed(const Duration(seconds: 1));
  print('Re-creating directory!');
  testDir.createSync();
  await Future.delayed(const Duration(seconds: 1));
  print('Done!');
  sub.cancel();
}
```
The output was:

```
waiting for ready...
watcher is ready!
Deleting directory!
Watcher closed!
Re-creating directory!
Done!
```
So as far as I can tell, you don't get the Delete event(!), but the watcher is closed. I tested on both macOS and in an Ubuntu Linux container (running in Docker on macOS) and got the same results. I'm surprised there's no delete event, though I think closing the watcher seems like reasonable behaviour.
@jakemac53 I noted above that there doesn't seem to be a Windows test file in the `ready` folder. It does run the polling watcher tests, but it doesn't have its own. Is this an error? I did try adding such a file, but the existing shared tests fail with strange errors; I didn't debug much yet in case it was known and deliberate.
Regarding the Windows test in the `ready` folder, I don't really know (I don't have much historical context on this package; the original authors are no longer on the team). If you can add a Windows test and get it running that would be ideal, but it isn't a regression not to have them, as they didn't exist before 🤷♂️.
OK, I'll make a note to have another look when I'm next on the PC, but it's probably not worth holding this up (I have manually tested it, and the changes are now much simpler than originally, too).
Thanks!
How complicated is it to get this into the SDK? Does it require a release? (I see it's listed in `DEPS` but I'm not very familiar with how that all works.)
Yes, we need to update DEPS to get it into the SDK, which will also update it internally when we roll the SDK (that happens multiple times a week, usually).
Once that happens successfully we can also do a release on pub (which you may or may not actually be blocked on).
@DanTup did you want to try doing that process? You can send me the CL to the SDK. You just update the hash here https://github.com/dart-lang/sdk/blob/main/DEPS#L170 generally. There may be a required package config update also but it will warn you if so and should point to the script to do it.
You can also do a `gclient sync` to actually pull down the package and make sure it's working right. Sometimes you need to wait a few minutes for our GitHub mirror to pick up the new hashes.
Sure, I'll have a go.
https://dart-review.googlesource.com/c/sdk/+/227822/
A `gclient sync` did pull the expected contents. Thanks!
See #115.
@jakemac53 re the `ready` tests: I tried adding one, but the existing tests fail with strange, vague errors ("Access is denied") and I'm not sure what the expectations are here.
When errors are emitted and the stream will be closed, the ready completer is completed with that error. As far as I can see, the original code (that completes it without an error) should not also fire in this case, but if you think that's a risk, we could add a condition around that so it doesn't complete if it's already completed.
I had to also pass the error through in resubscribable for the tests to pass, and I noticed there is some additional code that emits errors without closing the watcher (in children?), but I didn't touch it as I assumed in those cases the normal ready code is firing.