jdtsmith closed this issue 2 years ago.
Yes, this is not a bad idea. I had something like this in mind in the longer term. For now I only implemented a very basic Company adapter, which forces synchronization by polling. It sounds reasonable to add an asynchronous extension to the capf specification. In any case we should synchronize such efforts with Dmitry! cc @dgutov
Thanks. Not really a CAPE issue, but your TODO made me think of it. Plus I've been trying to get completions from a process and wishing I had an async approach available.
BTW, doesn't vertico have some async support for rgrep etc.?
BTW, doesn't vertico have some async support for rgrep etc.?
No, in Vertico it works differently. Vertico is explicitly refreshed by the backend. See https://github.com/minad/consult/blob/37096d4bc4031a6a3d020ae225b6c675fa5ec0b7/consult-vertico.el#L50.
We could implement something similar here, where the Capf asks the frontend to retry. See my comment in the code: https://github.com/minad/cape/blob/31acb66152f271357a54af47fcaa4b43f2c9220b/cape.el#L550-L553
The one addition we could consider is adding an `:async-abort` property, that specifies a callback letting the UI notify the backend if it has "already moved on" and is no longer interested in the results. That would allow lsp clients to abort their requests, for example.
When the ACAPF eventually has some candidates (from a process or remote server, say), it can complete the CAPF operation by calling this callback with a regular CAPF response, i.e. (funcall callback `(,start ,end ,table ,@extra-props)), and then everything proceeds as a normal CAPF would have.
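A minimal sketch of what such an ACAPF might look like, assuming a hypothetical fetcher `my-fetch-candidates` (not a real function) that hands results to a callback:

```elisp
;; Illustrative sketch only: `my-fetch-candidates' is hypothetical.
;; An ACAPF receives a callback and finishes the capf operation
;; later by passing it a regular capf response list.
(defun my-async-capf (callback)
  (let ((beg (point)) (end (point)))
    ;; Kick off a request to some external process (illustrative).
    (my-fetch-candidates
     (lambda (candidates)
       ;; When the results arrive, complete the capf operation.
       (funcall callback (list beg end candidates))))))
```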
Leaving aside the fact that Company really wants to know `start` and `end` in advance, before the full completion request is made, processed (with a certain latency) and the response is received, the `table` is usually expected to be a lazy computation...

(list start end nil :async callback-receiver)

...this doesn't look ideal. First of all, aside from the completion table, the `@extra-props` tail contains operations that are themselves costly, per-item, and could as well be improved by an asynchronous way of working. E.g., some `company-docsig` resolver could be made to abort on user input when the user scrolls down through the completions in the displayed popup.
And both `completion-at-point` and `company` are known to call the `c-a-p-f` hook just to fetch the updated `start` and `end` after every user input, while the rest of the data might be stored longer.
Further, if we're talking about LSP, the delayed resolution of the "correct" `start` and `end` isn't going to be a panacea, since IIUC in that protocol every individual completion can use a different "prefix".
Completion table can also be more complicated than a list of strings, with "fields" and "boundaries", where it's not feasible to return all completions in all namespaces together (too many, too much memory used). And LSP also has a thing like "incomplete completion responses".
I would introduce a data type like "abortable promise", where the caller can set the callback, the errback, and abort the computation itself, and make it a possible return value for different operations. Completion table operations among them.
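Such an "abortable promise" might be sketched as a minimal record type; the names below are illustrative, not part of any actual proposal:

```elisp
;; Illustrative sketch only: a minimal "abortable promise" record.
(require 'cl-lib)

(cl-defstruct my-promise
  callback   ; called with the result on success
  errback    ; called with an error object on failure
  abort-fn)  ; called by the consumer to abort the computation

(defun my-promise-resolve (p value)
  "Deliver VALUE to promise P's callback, if any."
  (when (my-promise-callback p)
    (funcall (my-promise-callback p) value)))

(defun my-promise-abort (p)
  "Abort the computation behind promise P, if possible."
  (when (my-promise-abort-fn p)
    (funcall (my-promise-abort-fn p))))
```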
Thank you for your response, Dmitry! I haven't thought much about the async issue yet, therefore I am just forcibly synchronizing the company calls in my wrapper. Could you point me to a good and simple asynchronous company backend, which I can use for experimentation?
I may try to implement a prototype capf extension at some point. The abortable promise you proposed sounds like the right direction. A long term plan could look like this:
1. Prototype for an asynchronous table. The table should still support fully synchronous operation for backward compatibility.
2. Add asynchronous support to `cape-company-to-capf`.
3. Support it in Corfu.
4. Refine the prototype.
5. Agree on the design.
...
40. Support it in `company-capf`.
50. Possibly port over some asynchronous Company backends to Capfs with asynchronous completion tables.
...
600. Very long term: deprecate company backends. (Not sure if feasible/realistic, since I've heard there is also Company-specific fuzzy filtering etc.)
Could you point me to a good and simple asynchronous company backend, which I can use for experimentation?
That would be `company-clang`, for example.
But the asynchronous convention as implemented in "native" Company backends is a more simplistic one, without the capability to abort computations, for example.
- Support it in company-capf.
`M-x completion-at-point`, icomplete, etc?
- Very long term - deprecate company backends. (Not sure if feasible/realistic, since I've heard there is also Company specific fuzzy filtering etc.)
Yeah, capf completion tables also need native support for fuzzily matched completions. Maybe through an official "style", maybe in completion tables themselves, I'm not sure.
And completion sources' grouping.
The three features above should about cover the missing features compared to "native" company backends.
But the asynchronous convention as implemented in "native" Company backends is a more simplistic one, without the capability to abort computations, for example.
Yes, the final design would probably be more full featured.
Support in M-x completion-at-point, icomplete, etc?
I'd argue to implement this first outside Emacs itself. The implementation from Corfu should transfer to Icomplete at least.
Yeah, capf completion tables also need native support for fuzzily matched completions. Maybe through an official "style", maybe in completion tables themselves, I'm not sure.
Can you explain this in a bit more detail? What does "native support" mean? Does it mean that filtering is performed on the side of the backend?
And completion sources' grouping.
Is this really needed? I can already implement merging/grouping for Capfs, see `cape-super-capf`. It is primitive and may have edge cases, but in the simple scenarios where one may want to use merging it works.
I'd argue to implement this first outside Emacs itself. The implementation from Corfu should transfer to Icomplete at least.
Sure, but deprecating company-backends should depend on the acceptance of the above inside Emacs proper too, right?
Can you explain this in a bit more detail? What does "native support" mean? Does it mean that filtering is performed on the side of the backend?
Yes, pretty much. Whether it's inside its Lisp code, or even in the external process, without additional (costly) post-filtering in any completion style.
Is this really needed? I can already implement merging/grouping for Capfs, see cape-super-capf. It is primitive and may have edge cases, but in the simple scenarios where one may want to use merging it works.
It's a fairly popular feature in Company.
Sure, but deprecating company-backends should depend on the acceptance of the above inside Emacs proper too, right?
Of course.
Yes, pretty much. Whether it's inside its Lisp code, or even in the external process, without additional (costly) post-filtering in any completion style.
From the side of the completion style it is no problem to implement this; one just needs a pass-through style. I am not exactly sure where the difficulty lies for the backend, since it can use the begin/end positions and send the string to the server for filtering?
It's a fairly popular feature in Company.
I don't doubt that. What I meant is that it seems to me that not all backends can be combined with each other. For example, `cape-super-capf` can combine dict, ispell, keyword, symbol, dabbrev etc., but one cannot combine dabbrev+file, for example, due to the complexity of the file completion table and the completion boundaries. For which backends does grouping work well in Company?
From the side of the completion style it is no problem to implement this; one just needs a pass-through style. I am not exactly sure where the difficulty lies for the backend, since it can use the begin/end positions and send the string to the server for filtering?
I don't think this is technically difficult either. But it needs a well-defined way to use such a style. Perhaps a built-in category in `completion-category-defaults`. And the style itself to be in the core, of course.
For which backends does grouping work well in Company?
Those which return prefix strings `equal` to each other. That's not an insurmountable limitation, however, but it raises questions of how to treat and display combinations that do not fit that rule. I've just written a comment about it: https://github.com/company-mode/company-mode/issues/1260#issuecomment-980480084
`company-backends` is inherently simpler than completion tables, though (no "fields", or at least not yet), so that makes combination easier. We might want to add some similar feature, though, and I haven't really thought about all the possibilities there.
Okay, so `cape-super-capf` has essentially the same limitation. It also requires the same prefix and it requires the sub-capfs to not use boundaries. This means `cape-super-capf` should already be powerful enough for the current Company-style grouping. This may change in the future when you implement support for mixed prefixes.
@dgutov great point about asyncifying the property functions as well. But those requests, like for documentation, are coming only after you select a candidate, yes? So they could include their own async versions of the property (`:async-docsig` etc.), kicked off upon a show-doc command. I suppose you could "read ahead" and spawn some async requests for the first few docs/docsigs/etc. in advance to be ready. By adding `:async-`-flavored properties, an ACAPF could asyncify only what it needs to.
But really the only reason to stick to the current CAPF specification format is legacy value. It seems to me the synchronous completion system is so deeply ingrained, that any CPS-style async CAPF extension will need to interoperate with it, or at least not break things if ACAPFs and CAPFs get mixed and matched.
Yeah, capf completion tables also need native support for fuzzily matched completions. Maybe through an official "style", maybe in completion tables themselves, I'm not sure.
I mean lsp servers already return ~fuzzy completions of their own design. By native support do you mean affecting the backend server's output? Or just advanced filtering on the emacs/client side?
So they could include their own async versions of the property (:async-docsig etc.), kicked off upon a show-doc command.
Possibly, yes, if those extra properties were the only problem.
OTOH, if we standardize on some "promise" value type, we wouldn't need to introduce any new properties. That value type could also be used by the completion function only when needed.
By native support do you mean affecting the backend server's output?
Yep. And without having additional post-filtering through a style like `flex`, which does add overhead, and might even filter out some completions unnecessarily, given a sufficiently exotic filtering approach on the backend.
@dgutov @jdtsmith Can someone explain why asyncing all the functions is valuable? I only see a value in asyncing the call which initially checks if completion is possible at the current point and obtains the candidates. This only matters for the corfu-auto/company-idle mode, where these calls can be performed in the background. But as soon as the popup is visible it is better to use synchronous calls to obtain the additional metadata. Otherwise the UI would just have to synchronize everything itself. There is certainly a minor performance advantage if you can start the few required calls asynchronously and wait until all results have arrived, but does this matter and is it worth the complication? For the popup only a handful of calls are made (annotations, deprecation, kind, etc).
But as soon as the popup is visible it is better to use synchronous calls to obtain the additional metadata.
Both `docsig` and `doc-buffer` (for example) could be used in an asynchronous-ish fashion: displaying the info when Emacs is idle, and aborting the request when the user continues typing.
Both docsig and doc-buffer (for example) could be used in asynchronous-ish fashion: displaying the info when Emacs is idle, and aborting the request when the user continues typing.
Okay, I agree with those two! Also for the others there is a theoretical but minor advantage, but I doubt it justifies the complication. I would prefer a design where only the functions which are relevant are actually made asynchronous. Maybe you prefer a uniform interface as in Company where every call can either return the value right away or a future? However most of the calls are forcibly synchronized, which then defeats any advantage.
Maybe you prefer a uniform interface as in Company where every call can either return the value right away or a future? However most of the calls are forcibly synchronized, which then defeats any advantage.
A uniform interface is what I usually opt for, yes. Most of the implementations are indeed synchronized, though I don't see that as much of a problem (the performance overhead is negligible).
In any case, whether every such function must support async or not, is not a very important decision, I think. It probably doesn't affect any of the other design aspects. So there's not much point in talking about it at this stage.
I even think the overall arrangement should work even if the set of async-supporting functions varied by frontend.
Yep. And without having additional post-filtering through a style like flex which does add overhead,
Interesting. So there would have to be some mapping between LSP "styles" and emacs "styles". Sounds challenging/moving target.
and might even filter out some completions unnecessarily, given sufficiently exotic filtering approach on the backend.
In fact I was just experiencing this a few minutes ago by setting the style to `basic`. Many of the lsp-server's candidates are dropped. Orderless mostly retains everything. And then once you have the candidates and can operate from cache, it's much faster to select among them locally.
@jdtsmith
In fact I was just experiencing this a few minutes ago by setting the style to basic. Many of the lsp-servers candidates are dropped. Orderless mostly retains everything. And then once you have the candidates and can operate from cache, it's much faster to select among them locally.
See what I wrote in https://github.com/minad/corfu/issues/92#issuecomment-980490964. I guess you should either use flex or enable the orderless-flex matching style to retain most candidates. In order to retain all candidates one can also install an orderless style dispatcher which drops the first word entirely, since this is what triggered the completion and led to filtering in lsp. I consider the split server/client filtering an almost solved issue with orderless.
I agree that getting anything working async on the CAPF side would be a good goal.
You beat me to it... I turned on orderless-flex for the 1st term only, and it works great. Honestly most of the candidates the server returns are pretty far-fetched: ar -> FLOATING_POINT_SUPPORT? Sure.....
I'll probably personally demand at least substring matching. Dispatching first match term as "" also works and is wicked fast... you just don't get match highlighting (for that term). [Update: extending the first term after completion starts does not do any filtering, you have to add a space-separated match term... so "" is not ideal. Possibly could configure the dispatcher as a one-time latch which only returns "" on 1st call "during" completion; not sure what "during" marker to take].
All these async capf ideas sound nice but I keep wondering if it is not better to try to get away with a simple targeted extension. The problem with async tables is that they would require an async extension to all completion styles.
A possible simple extension could look like this:
Add a single :async-complete function to the plist returned by the capf. If this function is present, the capf is treated as matching successfully at the given point in the buffer. Only the results are not available now but in the future.
Then the frontend would call the :async-complete function with a callback. The :async-complete function will return another function/closure which can be called by the frontend to request cancellation.
When the refresh callback is called, processing continues as usual, synchronously. If the buffer is modified before the callback returns, the frontend can cancel the async request. It seems that this extension is all that is needed to fit asynchronous completion into the current framework. (One may also add :async-docsig/:async-docbuffer functions which get a callback and return a cancellation function.)
Since querying the capf list must be fast one could always query the capfs immediately in the post command hook. The synchronous completion tables will be used after a timeout. The asynchronous tables can be used as soon as the asynchronous callback returns.
Basically one can write an adapter which transforms synchronous backends into fake asynchronous backends. The fake asynchronous backend's async function will return a function which cancels the timer, and the callback will be called when the timer has elapsed.
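Such an adapter could be sketched like this; the function name and the timer delay are illustrative, and the returned closure plays the role of the cancellation function described above:

```elisp
;; Sketch (names are illustrative, not part of any API): wrap a
;; synchronous completion function so that it follows the proposed
;; :async-complete convention.
(defun my-sync-to-fake-async (sync-fun)
  "Return an async function calling SYNC-FUN from a timer."
  (lambda (callback)
    (let ((timer (run-with-timer
                  0 nil
                  (lambda () (funcall callback (funcall sync-fun))))))
      ;; The returned closure requests cancellation: if the buffer
      ;; changed before the timer fired, the frontend calls it.
      (lambda () (cancel-timer timer)))))
```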
Now the problem is that this design does not fit well to the company design, since the candidates action of a company backend cannot be asked selectively to return a future. And it is not possible to ask a company backend if it is indeed asynchronous.
Reading through @jdtsmith's initial comment here - I think what I just described is mostly the same. But @jdtsmith's description sounds even more elegant and simpler. Downside is of course also that it does not fit nicely to Company.
I looked at the Company code and it seems to me that the whole async idea is pointless if one treats Capf completion tables as interruptible. As I argued in https://github.com/minad/corfu/issues/71#issuecomment-980150849 and the discussion there, completion tables should be treated as interruptible since this is the only way one can use them in a responsive UI. Icomplete and Vertico treat completion tables as interruptible. Problematic completion tables can be wrapped with `cape--noninterruptible-table` and `cape-noninterruptible-capf`.
Company seems to explicitly disallow interruption of completion backend calls. In order to still allow responsive UIs, async future return values have been introduced. The frontend can wait for these futures to complete while still allowing interruption. See the `sit-for` in `company--fetch-candidates` (https://github.com/company-mode/company-mode/blob/6abb232acde15e1a7bf402a57f7bb982edf2de7d/company.el#L1439-L1440).
Now I can just do the same in `cape-company-to-capf`, such that the resulting capf is interruptible. I treat Company backend calls as non-interruptible, but still allow interruption during the async fetching/synchronization:
https://github.com/minad/cape/blob/e341f81a79be8d6880b9cf608432e6098af034f8/cape.el#L738-L753
Therefore it seems to me that the introduction of asynchronous Capfs is unnecessary. The Capfs produced by `cape-company-to-capf` still profit from the asynchronous nature of Company backends. Furthermore, usual Capf backends are just as efficient, as long as they are interruptible.
Copying my https://github.com/minad/corfu/issues/71#issuecomment-980150849 here for context:
- Completion tables should be interruptible via `while-no-input`. This is relevant if internal state is manipulated. Completion tables are treated as interruptible by Icomplete, Vertico and Corfu. See the Cape completion tables for reference.
- See the `cape-symbol` and `cape-keyword` tables for reference.
- See `cape-ispell` for reference.
- Such a table only supports the `basic` completion style and is incompatible with advanced completion styles like `substring`, `flex` or `orderless`. This implies that `completion-table-dynamic` and `completion-table-with-cache` should not be used if the goal is to support advanced completion styles.

So in summary, these are the differences between Company backends and Capfs:

- Asynchronous Company backends can be converted with `cape-company-to-capf`.
- Backends can be merged/grouped with `cape-super-capf`.

There is a Capf equivalent for every Company feature.
No. 3 (see above): really, as I discovered, only if you can return "" just once does this work, since extending the 1st term then does nothing; not sure what convenient "I'm in the same completion" marker we could rig for the dispatcher. orderless-flex works fine (if adding overhead, as @dgutov mentioned).
To me, a difference between an "interruptible sync CAPF" and an async CAPF is that for the latter (if they can be made to work), you could continue to make small edits to the buffer that do not affect the validity of the original completion call, such that its results can still be shown, even after some input has happened. You could of course cancel the request as soon as you see that the buffer is no longer valid, but you don't have to do so on every new input.
And the other big difference: brute force interruption is really not so kind. Remember when I was mentioning that my ipython mode was suffering from non-local exits to accept-process-output, the source of which I simply could not understand? I even posted on emacs-devel about it (they said: run under gdb and set a break in Fthrow).
After a long investigation, I ended up giving each call to A-P-E a timeout, and looping up to a maximum delay to fix it. I had thought that corfu had helped me uncover the bug by turning down corfu-auto-delay to a very short time, like 0.05s. Well, it turns out corfu itself was responsible for the interruption and non-local exit! I could have fixed it trivially with `(let (throw-on-input) ...)`!
Why is a non-local throw a problem for a process-facing CAPF in the middle of its operation? Because process output is itself async, and will call the process filter function when new data arrive, whether or not you've pulled the rug out from under a corralling "output capturing" stanza (e.g. `(while (A-P-E proc))`). If you've set up temporary process filters to redirect and gather that output, a non-local throw does not "magically halt" the process output. It just halts the loop you are using to gather that info, and sends it where you don't want it (straight into the shell buffer!). Put another way, while-no-input does not cancel the entire async process call; instead it just cancels the async -> sync "transducer" you have set up to capture the output of that call! I.e., the opposite of what you want.
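The trivial fix mentioned above amounts to masking `throw-on-input` around the output-gathering loop (`proc` here stands for whatever process handle the CAPF is reading from):

```elisp
;; Bind throw-on-input to nil so while-no-input cannot interrupt
;; the output-gathering loop, which would otherwise leave the
;; temporary process filter installed and the output misdirected.
(let (throw-on-input)
  (while (accept-process-output proc 0.05)))
```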
So I hope we can come up with a cancellation scheme more forgiving than brute force interruption (like a cancel callback).
In terms of supporting company async backends (which I haven't used) vs. extending the CAPF format to include ACAPFs, is there one advantage not being mentioned? An ACAPF allows an "easy upgrade path" for current or future CAPFs which later realize they should be async for some of their functions.
Reading through @jdtsmith's initial comment here - I think what I just described is mostly the same.
You added a key feature in a rather simple way: "give a refresh callback, get a cancellation callback".
@jdtsmith I agree that cancellation is not nice and it will lead to brittle systems, in particular if the backend author is not prepared that interruption may happen. All I am saying is that if you use interruptible tables you are already equivalent to company in capabilities if we consider the status quo.
Now we could decide to generally treat completion tables as non-interruptible and avoid the potential nightmare. We could then introduce some opt-in mechanism for completion tables such that they declare themselves as interruptible. But besides that we wouldn't need any other API extensions whatsoever. An asynchronous table which starts a process in the background should then simply declare itself as interruptible. Internally it could operate using async futures to handle the interrupts safely. Company demonstrates how to do this in `company--fetch-candidates` and I use the same procedure in `cape--company-call`. It all boils down to a simple function which waits for a future in an interruptible way.
(defun wait-for-future (future delay timeout)
(let ((res 'trash)
(start (time-to-seconds)))
(let (throw-on-input) (funcall future (lambda (arg) (setq res arg))))
(while (eq res 'trash)
(sleep-for delay)
(when (> (- (time-to-seconds) start) timeout)
(error "Async timeout")))
res))
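For illustration, a toy future (a function taking a result callback; `my-toy-future` is made up) that resolves via a timer could be consumed like this:

```elisp
;; A future here is a function that accepts a result callback.
;; This toy future resolves with a list of candidates after 0.5s.
(defun my-toy-future (callback)
  (run-with-timer 0.5 nil callback '("foo" "foobar")))

;; Poll every 50ms, interruptibly, giving up after 2 seconds.
(wait-for-future #'my-toy-future 0.05 2.0)
```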
Regarding implementing asynchronous completion tables, I think this is an upgrade nightmare, since the entire infrastructure which sits on top does not support the asynchronous future return values. Therefore I would suggest not going this route. (EDIT: This argument only holds if the frontend, completion styles etc. have to know about asynchronous tables. We can avoid this with `cape-async-to-interruptible-table`, see later.)
As an alternative to the aforementioned `interruptible` flag for completion tables which opt in to interrupts, one could add a side channel for a completion table where it can report that it has been interrupted internally. The completion table should then itself take care of wrapping certain parts of the code with `while-no-input`, which is apparently what eglot and lsp-mode already do. But they do not report the interrupt, which is wrong, since the frontend gets a wrong list of results, or no result at all due to the interrupt, but is not made aware of this.
But I have to fiddle with this a bit longer. I may come up with something reasonable...
We can easily define asynchronous tables, which return an `(:async . future)` or a plain value in the style of Company backends. As an extension to the Company calling convention I added an explicit cancellation function. These asynchronous tables can be converted to synchronous interruptible tables as follows.
(defun cape-async-call (fun &rest args)
"Call asynchronous FUN with ARGS."
(let ((old-toi throw-on-input)
;; Guard against interrupts
(throw-on-input nil))
(pcase (apply fun args)
(`(:async . ,future)
(let* ((res 'trash)
(start (time-to-seconds))
(cancel (funcall future (lambda (arg) (setq res arg)))))
(unwind-protect
;; Force synchronization. The synchronization is interruptible!
(let ((throw-on-input old-toi))
(while (eq res 'trash)
(sleep-for cape-async-wait)
(when (> (- (time-to-seconds) start) cape-async-timeout)
(error "Cape async timeout"))))
(when (and (eq res 'trash) (functionp cancel))
(ignore-errors (funcall cancel))))
res))
(res res))))
(defun cape-async-to-interruptible-table (table)
"Convert asynchronous completion TABLE to interruptible completion table."
(lambda (str pred action)
(cape-async-call table str pred action)))
The resulting synchronous interruptible tables can be used as-is in Vertico, Corfu or Icomplete. Note that asynchronous extension functions are not supported by this primitive prototype, but this can be added easily. I could add the transformers `cape-async-to-interruptible-table` and `cape-async-to-interruptible-capf` to Cape.
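For illustration, an asynchronous table in this `(:async . future)` convention (the table name and its contents are made up) and its conversion might look like:

```elisp
;; Sketch of an asynchronous table in the (:async . future)
;; convention.  Only the `all-completions' action (t) answers
;; asynchronously; all other actions answer synchronously.
(defun my-async-table (str pred action)
  (if (eq action t)
      (cons :async
            (lambda (callback)
              (let ((timer
                     (run-with-timer
                      0.2 nil callback
                      (all-completions str '("alpha" "beta") pred))))
                ;; Return the cancellation function.
                (lambda () (cancel-timer timer)))))
    (complete-with-action action '("alpha" "beta") str pred)))

;; Usable wherever a plain table is expected:
;; (completing-read "Word: "
;;                  (cape-async-to-interruptible-table #'my-async-table))
```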
Sorry that I wrote so much and flooded you with a wall of text.
@dgutov Would the design I described in my previous comment be satisfactory from your perspective? It is a straightforward extension of plain old capfs and completion tables. Every function (the table, the annotation, and the company-* functions) is simply allowed to return an `(:async . function)` future (in the style of Company) in addition to the usual synchronous values. It is no problem to support this in Corfu. The async completion table is backward incompatible but can be transformed to a synchronous table by the transformer defined above. Furthermore, thanks to the uniformity of the design, this is very easy to understand.
To address a couple of older comments first:
@jdtsmith
So there would have to be some mapping between LSP "styles" and emacs "styles". Sounds challenging/moving target.
There needs to be a mapping now, for it to work perfectly. Otherwise it works okay-ish, mostly by accident, most likely missing completions like the one you provided in an example.
You beat me to it... I turned on orderless-flex for the 1st term only, and it works great. Honestly most of the candidates the server returns are pretty far-fetched: ar -> FLOATING_POINT_SUPPORT? Sure..... <...> I'll probably personally demand at least substring matching.
I get that personal preference, and I haven't even transitioned to the `flex` completion style in most of my usages yet... But as a designer of a completion package, of a completion table, I would rather have either complete control myself, or delegate said control to the external completion logic, rather than leave it basically up to chance that the user will see the expected completions, that their local configuration of `completion-styles` is compatible enough. Especially if I'm writing a package like lsp-mode which works off the concept of being very compatible with VS Code in behavior.
@minad
I agree that cancellation is not nice and it will lead to brittle systems, in particular if the backend author is not prepared that interruption may happen. All I am saying is that if you use interruptible tables you are already equivalent to company in capabilities if we consider the status quo.
This doesn't sound right: after all, the current async backends of Company don't have similar correctness and debuggability problems resulting from having code interrupted in an arbitrary location, or even interrupting a random timer or a process sentinel belonging to another process that might be running at the present time (which is unlikely, but not unimaginable).
Furthermore, "real" asynchronous backends (with callbacks and stuff) allow multiple completion sources to run in parallel. So one could launch a request to a background language server, then let the second one do the same or (more likely) do some in-process computation (like company-dabbrev does) and then, when the response comes, combine their results. Grouping on top of async, boom. Mic drop.
Of course, I don't really think this is crucial now, but limiting ourselves to sequential processing of completions for the future evolution of the API seems wrong. Who knows what our descendants could manage to do with it.
Anyway, writing code that is resilient against being interrupted is hard (here's some essay on the subject: https://adamhooper.medium.com/in-ruby-dont-use-timeout-77d9d4e5a001). We can end up with unfinished assignments, connections (in the `url` library) that are not closed properly and not returned to the "free" pool. Or one-off processes which don't need to run anymore (and could have been killed in a proper errback handler) but continue running because the execution thread which knows about them has been killed off. I'm not saying this approach couldn't work, but I'm definitely not brave enough to try that road myself.
Would the design I described in my previous comment be satisfactory from your perspective? It is a straightforward extension of plain old capfs and completion tables. Every function (the table, the annotation, and the company-* functions) is simply allowed to return an (:async . function) future (in the style of Company) in addition to the usual synchronous values.
It sounds very similar to what Company does already, so sure (though we'll probably end up with some other container for futures than an anonymous function). But like you mentioned previously, these values would need to be passed through the completion styles, which can definitely raise the complexity of their implementations. I don't see a way around it, though. Aside from someone going in and magically rewriting a lot of that stuff (completion styles) in some simpler way first.
Thanks for your answer, Dmitry! I didn't mean to write it in an adversarial tone.
This doesn't sound right: after all, the current async backends of Company don't have similar correctness and debuggability problems resulting from having code interrupted in an arbitrary location, or even interrupting a random timer or a process sentinel belonging to another process that might be running at the present time (which is unlikely, but not unimaginable).
There is also manual interruption. If you want to write a robust process handler you should make sure that it is cleaned up afterwards.
Furthermore, "real" asynchronous backends (with callbacks and stuff) allow multiple completion sources to run in parallel. So one could launch a request to a background language server, then let the second one do the same or (more likely) do some in-process computation (like company-dabbrev does), and then, when the responses come, combine their results. Grouping on top of async, boom. Mic drop.
Of course, it is obvious that one should take advantage of parallel fetching of asynchronous sources. Do you take advantage of that in Company currently? Is it a common use case that people combine asynchronous sources and group them? I have looked at the Company code, but it is likely that I missed the code where you do this parallel fetching. Currently my cape-super-capf does not support parallel fetching, since it is synchronous. But this could be added without an issue if an async calling convention is adopted.
Anyway, writing code that is resilient against being interrupted is hard.
Of course, I am aware of that. But anyways - I haven't observed significant issues in practice. Minibuffer completion tables are usually pure and interruptible. In Corfu the situation may be a bit different, since there communication with an external process is much more common. And for the few completion tables which make problems, one could wrap them on a case by case basis with a capf transformer like cape-noninterruptible-capf.
It sounds very similar to what Company does already, so sure (though we'll probably end up with some other container for futures than an anonymous function). But like you mentioned previously, these values would need to be passed through the completion styles, which can definitely raise the complexity of their implementations. I don't see a way around it, though. Aside from someone going in and magically rewriting a lot of that stuff (completion styles) in some simpler way first.
One could also use post filtering via the completion style: fetch all candidates asynchronously and then run the completion style on the resulting list. The downside is that this approach is less efficient (in particular for regexp filtering) and does not work for the more complicated dynamic tables. On the other hand, there are probably few asynchronous completion tables or dynamic completion tables which make heavy use of boundaries. In any case such an asynchronous capf design seems okay; it does not have the potential correctness issues that interruptible tables have.
To bring the discussion back on track - I think I didn't manage to convey my main point. It does not matter if we agree or disagree on interruptible/noninterruptible tables for what I am going to describe now. I am only using interruptibility as a trick to reuse the existing synchronous infrastructure.
If you query a completion table one could use the following call chain:
while-no-input (make the call interruptible)
|
v
completion-all-completions (use the completion style)
|
v
<mystyle-all-completions>
|
v
cape-async-to-interruptible-table (add support for async, everything below is async, noninterruptible and safe)
|
v
the actual table, nice and safe, can return async futures
;; called like this in the frontend:
(while-no-input (completion-all-completions str (cape-async-to-interruptible-table the-actual-table) ...))
If you want to query multiple tables one could use:
while-no-input
|
v
completion-all-completions
|
v
<mystyle-all-completions>
|
v
cape-async-to-interruptible-table
|
v
cape-async-super-table (fetch in parallel)
| | |
v v v
table1 table2 ....
The only requirement here is that the frontend must execute the completion table within while-no-input to take advantage of the async cancellation. Maybe this is a minor point of disagreement; however note that below completion-all-completions sits the async transformer, which makes sure that we are safe. The huge advantage of this layered extension of the design is that completion styles can just be used as-is.
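To make the layering concrete, cape-async-to-interruptible-table could be little more than a thin adapter; a sketch, assuming a cape--async-call helper (shown later in this thread) that passes plain return values through and synchronizes (:async . future) values:

```elisp
(defun cape-async-to-interruptible-table (table)
  "Wrap the async TABLE, forcing its futures synchronously.
The returned table is safe to run under `while-no-input', since the
waiting loop only reacts to new input between `sit-for' polls."
  (lambda (str pred action)
    ;; cape--async-call passes synchronous return values through and
    ;; blocks (interruptibly) on (:async . future) values.
    (cape--async-call table str pred action)))
```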
Random reactions:
a single latching “pass-through-once” style that does nothing on 1st invocation but present “all the results”, then converts itself to flex/substring/orderless with further input against the same table could be very helpful with speed/completeness. The flex highlighting is basically a lie anyway against server-provided results. And an expensive lie.
We are speaking implicitly of different capabilities of an asynchronous setup:
1. interruptible synchronization with a single asynchronous backend,
2. parallel fetching from multiple asynchronous sources, and
3. letting the backend know when its results are no longer relevant.
@minad your wrapper design sounds like an interesting possible solution to 1. So it would be wrapped in while-no-input, use unwind-protect to catch non local exits, and call the cancellation callback(s)?
all of this would be infinitely easier if Elisp had some coroutine/cooperative multitask/non-thread yield primitives. Are we sure that’s not on the horizon?
@jdtsmith
a single latching “pass-through-once” style that does nothing on 1st invocation but present “all the results”, then converts itself to flex/substring/orderless with further input against the same table could be very helpful with speed/completeness.
Let's keep out the completion style discussion for now. As I see it, it does not affect the design of an async capf very much. I agree with you that there is improvement potential, but I think orderless with all its flexibility goes a long way.
your wrapper design sounds like an interesting possible solution to 1. So it would be wrapped in while-no-input, use unwind-protect to catch non local exits, and call the cancellation callback(s)? But what of 2,3?
(i) is supported by cape-async-to-interruptible-capf, (ii) is supported by a potential cape-async-super-capf, which can be written in a similar style as the existing cape-super-capf, the only difference being that it processes futures and returns futures.
(iii) cannot be handled by this design, and the capability also seems too unspecific. How can I determine if candidates are still relevant? I would like to exclude (iii). Both (i) and (ii) are at least well defined and, as I've described, also possible to implement without huge effort.
See cape--company-call for how to achieve the interruptible synchronization. (For Company, without unwind-protect future cancellation, but this is easy to add.)
all of this would be infinitely easier if Elisp had some coroutine/cooperative multitask/non-thread yield primitives. Are we sure that’s not on the horizon?
As far as I know it is not on the horizon. But I am also not sure how it is significant to the discussion; we would still need some kind of async future to wait for.
Both (i) and (ii) are at least well defined and as I've described also possible to implement without huge effort.
i) and ii) alone would represent good progress. But I'm still not fully sure how using while-no-input to pull the plug out randomly in the middle of process interaction will be improved with a cancel callback. I guess you could keep a buffer-local variable which the cancel callback sets, and if found, just swallow all output, but this could get confused with "real" process output you do want to keep. The way I see it, while-no-input is not the cooperative multitasking we seek, it's preemptive interruption. Once you've already non-locally exited deep (blocking) process IO, your state is problematic and hard to recover gracefully from. It's certainly better to have a "cancel" callback, but a more cooperative approach from frontend to backend would be "I noticed some input arrived, please cancel your current operation at your earliest convenience".
Let's keep out the completion style discussion for now. As I see it, it does not affect the design of an async capf very much.
It may seem ancillary, but it impacts the way in which external async completion providers like LSP can be dealt with. As a proof of principle, here is an example orderless dispatcher which delivers one time pass-through, only on a new table or same 1st match pattern:
(defun my/orderless-dispatch-passthrough-first ()
  (let (last-table initial-pattern)
    (lambda (pattern index _total)
      (if-let (((eq index 0))
               (table (caddr completion-in-region--data)))
          (if (not (eq table last-table))
              (progn
                (setq last-table table
                      initial-pattern pattern)
                "")
            (and initial-pattern
                 (string-equal pattern initial-pattern)
                 ""))))))
I.e. it does no filtering on first results from the server, but then permits more filtering right away with that set. If you extend the 1st match, you can "get back to" the original full results by deleting back to it.
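If it helps, hooking such a dispatcher up might look like the following sketch (it assumes the orderless package is loaded; note that the function above returns a closure, so it must be called once to produce the actual dispatcher):

```elisp
;; Prepend the one-shot pass-through dispatcher to orderless's list.
(setq orderless-style-dispatchers
      (cons (my/orderless-dispatch-passthrough-first)
            orderless-style-dispatchers))
```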
But I am also not sure how it is signficant to the discussion, we would still need some kind of async future to wait for.
Presumably those would be developed in short order for general Emacs use if yield-style async capabilities were built in natively.
i) and ii) alone would represent good progress.
Indeed.
But I'm still not fully sure how using while-no-input to pull the plug out randomly in the middle of process interaction will be improved with a cancel callback.
There won't be any pulling the plug randomly - no interrupts. You can kill the process in the cancel function.
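For example, a future backed by an external process could hand back a cancel function that kills it; a sketch with a hypothetical my-completer program and simplified output parsing:

```elisp
(defun my-process-future (str)
  "Return a future fetching candidates for STR from a subprocess.
The value returned from the future function is the cancel function,
which kills the process cleanly when the frontend loses interest."
  (lambda (callback)
    (let ((proc (make-process
                 :name "my-completer"
                 :command (list "my-completer" str) ;; hypothetical program
                 :filter (lambda (_proc output)
                           ;; Deliver candidates (parsing simplified).
                           (funcall callback (split-string output "\n" t))))))
      ;; Returned value: the cancel function.
      (lambda ()
        (when (process-live-p proc)
          (kill-process proc))))))
```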
The way I see it, while-no-input is not the cooperative multitasking we seek, it's preemptive interruption.
No, you misunderstood how this is supposed to work. There won't be any preemptive interrupts during the execution of the asynchronous completion table (except manual interrupt C-g). The cancel function is called the next time input arrives, but by the event loop. Not randomly, not preemptively. Please take a look at the following code. I commented it to be more explanatory.
(defun cape--async-call (fun &rest args)
  "Call FUN with ARGS and handle async future return values."
  ;; Backends are non-interruptible. Disable interrupts!
  (let ((toi throw-on-input)
        (throw-on-input nil))
    (pcase (apply fun args)
      ;; Handle future return value
      (`(:async . ,future)
       (let ((res 'cape--waiting)
             (start (time-to-seconds))
             cancel)
         (unwind-protect
             (progn
               (setq cancel
                     (funcall future (lambda (arg)
                                       ;; Enqueue cape--done event
                                       (when (eq res 'cape--waiting)
                                         (push 'cape--done unread-command-events))
                                       (setq res arg))))
               ;; Force synchronization.
               (while (eq res 'cape--waiting)
                 ;; When we've got input, interrupt the computation.
                 (when (and unread-command-events toi)
                   (throw toi nil))
                 (when (> (- (time-to-seconds) start) cape-company-timeout)
                   (error "Cape async timeout"))
                 ;; Wait for input or cape--done
                 (sit-for 0.1 'noredisplay)))
           ;; If the future didn't finish, ask it for cancellation.
           (when (and (eq res 'cape--waiting) (functionp cancel))
             (ignore-errors (funcall cancel)))
           ;; Remove cape--done introduced by future callback
           (setq unread-command-events (delq 'cape--done unread-command-events)))
         res))
      ;; Plain old return value (synchronous)
      (res res))))
(Also compare with cape--company-call, which is essentially the same, only without cancellation support.)
It's certainly better to have a "cancel" callback, but a more cooperative approach from frontend to backend would be "I noticed some input arrived, please cancel your current operation at your earliest convenience".
This is what the cancel callback is - it asks the backend to cancel.
Hi Daniel,
Thanks for your answer, Dmitry! I didn't mean to write it in an adversarial tone.
I didn't mean to imply that either. No more than would be sportsman-like, at least.
There is also manual interruption. If you want to write a robust process handler you should make sure that it is cleaned up afterwards.
This is a fair argument, but it probably just means that Emacs' commands are unreliable in the face of keyboard-quit. Yes, it's usually a good idea to write Elisp in a way resilient against aborts, but as we've discussed already, that's rather hard to do. So relying on aborts as an alternative to asynchronous computations probably isn't going to fly.
Of course, it is obvious that one should take advantage of parallel fetching of asynchronous sources. Do you take advantage of that in Company currently? Is it a common use case that people combine asynchronous sources and group them? I have looked at the Company code, but it is likely that I missed the code where you do this parallel fetching. Currently my cape-super-capf does not support parallel fetching, since it is synchronous. But this could be added without an issue if an async calling convention is adopted.
Check out the definition of company--multi-backend-adapter-candidates. It tries to support that scenario, at the very least.
But as for taking advantage -- alas, most of the popular backends out there are synchronous. The popular asynchronous ones fell to the wayside when LSP arrived.
One could also use post filtering via the completion style. Fetch all candidates asynchronously and then run the completion style on the resulting list. Downside is that this approach is less efficient (in particular for regexp filtering) and does not work for any of the complicated dynamic tables.
That sounds like it will require changes in how completion styles are organized. And anyway, yes, by design they are supposed to query the completion table themselves. But maybe it would still be able to retain the power and (most of) performance if things were done in a different way.
To bring the discussion back on track ... (while-no-input (completion-all-completions str (cape-async-to-interruptible-table the-actual-table) ...))
No, you misunderstood how this is supposed to work. There won't be any preemptive interrupts during the execution of the asynchronous completion table (except manual interrupt C-g). The cancel function is called the next time input arrives, but by the event loop. Not randomly, not preemptively. Please take a look at the following code. I commented it to be more explanatory.
I was going to ask the same question as @jdtsmith.
With the illustration, it becomes a little clearer. So the only times the "synchronized" async backends would be able to throw is between their sit-for calls, when manually detecting that some input has arrived? That might work, I guess. The design is a little exotic, I'd say, but some MVP and testing it in practice might as well prove it viable.
Only... perhaps don't use while-no-input? And instead catch a certain predefined symbol that the asynchronous "synchronized" tables are supposed to throw? This way the "normal" completion tables won't receive arbitrary interruptions, since they're not always prepared for them.
I'm still not crazy about it (as opposed to consistently using an async primitive and some standard functions and/or macros to manipulate them), but the tradeoffs could be worth it.
@jdtsmith
Speaking of
all of this would be infinitely easier if Elisp had some coroutine/cooperative multitask/non-thread yield primitives. Are we sure that’s not on the horizon?
The "threads" in Emacs Lisp are actually more like cooperative coroutines than "real" threads. They might be lacking some convenience in the existing abstractions, but otherwise they are what you described.
Not sure how they are supposed to help, though.
Oh, I thought you were still set on throw-on-input; this looks quite good.
In a backend you could be doing something like:
(while (and (not completion-cancelled)
            output-filter-in-progress
            (or (eq max-cnt t) (< cnt max-cnt)))
  (accept-process-output process delay nil 1)
  (cl-incf cnt))
and just set completion-cancelled in your cancel callback. Then clean up if you exited due to cancellation; no throwing. Would it perhaps be useful for the backend ACAPF to have the ability to specify its own timeout (including disabling it)? Some backends are slower than others, so probably no one-size-fits-all is possible.
@jdtsmith
Speaking of
all of this would be infinitely easier if Elisp had some coroutine/cooperative multitask/non-thread yield primitives. Are we sure that’s not on the horizon?
The "threads" in Emacs Lisp are actually more like cooperative coroutines than "real" threads. They might be lacking some convenience in the existing abstractions, but otherwise they are what you described.
Not sure how they are supposed to help, though.
Yeah, not very familiar, but what I've read about them hasn't been too encouraging. I know for example tramp had a thread branch for several years and hasn't been able to get it to work. Maybe there is some way to do async-completion in a thread. But the callback passing style @minad has proposed looks pretty flexible.
That sounds like it will require changes in how completion styles are organized.
You just need a self-morphing dispatching style like orderless ;). My "passthrough-once" approach above actually works surprisingly well, and cuts out all the overhead of flex-matching just to show all results.
@dgutov
I didn't mean to imply that either. No more than would be sportsman-like, at least.
Good, I wasn't sure - I also didn't mean it in any different way than sportsman-like. Btw, today I learned about elp-instrument-package :)
This is a fair argument, but it probably just means that Emacs' commands are unreliable in the face of keyboard-quit. Yes, it's usually a good idea to write Elisp in a way resilient against aborts, but as we've discussed already, that's rather hard to do. So relying on aborts as an alternative to asynchronous computations probably isn't going to fly.
I agree with that. It is different if quitting is slightly broken, in comparison to hiccups because of new input. One could also make the async futures resilient against quitting, such that the cancel function would then be called. However this may lead to hangups; maybe one shouldn't do that.
Check out the definition of company--multi-backend-adapter-candidates. It tries to support that scenario, at the very least. But as for taking advantage -- alas, most of the popular backends out there are synchronous. The popular asynchronous ones fell to the wayside when LSP arrived.
Good to hear that you have this supported. But I expected that there are not many asynchronous backends around in widespread use.
With the illustration, it becomes a little clearer. So the only times the "synchronized" async backends would be able to throw, is between their sit-for calls, when manually detecting that some input has arrived? That might work, I guess. The design is a little exotic, I'd say, but some MVP and testing it in practice might as well prove it viable.
Yes, there won't be any interrupt possible within the async table calls. The only interrupts can happen after the sit-for.
Only... perhaps don't use while-no-input? And instead catch a certain predefined symbol that the asynchronous "synchronized" tables are supposed to throw? This way the "normal" completion tables won't receive arbitrary interruptions, since they're not always prepared for them.
I see the problem with always wrapping with while-no-input. My idea was that capfs could report an :interruptible property and then the wrapping is applied. For other tables no while-no-input etc. wrapping is applied.
On the other hand, one could just treat all completion tables as interruptible and always wrap. The advantage is that this improves responsiveness for existing static tables, at the risk of breaking some other tables. Furthermore there is a small advantage if the completion style performs expensive work: it could also be interrupted. My current solution is to use a cape-noninterruptible-capf wrapper. I don't know how often this is needed in practice.
But it seems the explicit symbol you propose is a good idea; maybe we should use that.
I'm still not crazy about it (as opposed to consistently using an async primitive and some standard functions and/or macros to manipulate them), but the tradeoffs could be worth it.
Well, I see it like this - the wrapping inside an interruptible completion table via the transformer allows us to basically keep all the existing infrastructure. And from the perspective of the backend, everything is asynchronous consistently.
See https://github.com/minad/cape/pull/14 for my recent version.
Do you envision backend authors needing to tailor themselves to support either sync or async? Or is the idea that you only use a given async backend by wrapping it and presenting it as "synchronous, but unusually responsive given the slow process it connects to"?
I'm also not clear on why while-no-input would be much of an advantage vs. the optional cooperative approach. It could easily be defeated by backend authors, and static in-process tables aren't really the performance issue driving the need for async. I suspect many developers working on slow backends would be happy to have a simple way to increase responsiveness without breaking UI support, if things work as planned.
Do you envision backend authors needing to tailor themselves to support either sync or async?
Backend authors can either write synchronous or asynchronous backends. Both could be supported by Corfu/Cape/Company. So the answer is yes, but I don't know what you are actually asking.
Or is the idea that you only use a given async backend by wrapping it and presenting it as "synchronous, but unusually responsive given the slow process it connects to"?
I am not sure I understand you correctly. Do you ask: who is doing the wrapping? Is it the backend author, or is the wrapping always applied by the frontend? The status quo in Corfu/Cape is that Corfu only knows interruptible capfs. Async capfs can be supported by wrapping them with cape-async-capf.
I'm also not clear on why while-no-input would be much of an advantage, vs. the optional cooperative approach. This could easily be defeated by backend authors.
For this I have the invincible cape-noninterruptible-capf; however, it must be applied on a case by case basis.
But I have a good argument which may convince backend authors to write interruptible backends: performance/responsiveness. Noninterruptible Capfs lead to decreased responsiveness. However, if it is too difficult or problematic to write a safe interruptible backend, we can point them to cape-async-capf to the rescue. This is arguably a much nicer API. But note that for the async API the danger/difficulty lies elsewhere. Since this is a cooperative instead of a preemptive API, it relies on the backend author to write code which does not hang. Of course this is much less likely than writing code which behaves badly in the presence of interrupts.
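Concretely, registering such a wrapped backend might look like this sketch (cape-async-capf and my-async-backend-capf are the hypothetical names from this discussion, not a finalized API):

```elisp
;; Wrap an async backend once; any frontend then sees a plain Capf.
(add-hook 'completion-at-point-functions
          (cape-async-capf #'my-async-backend-capf)
          nil 'local)
```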
and static in-process table aren't really the performance issue driving the need for async.
This is incorrect. For large static tables filtering can be slow. There is a reason why I use interruptible tables in Vertico/Corfu. In particular, in Vertico C-h f and C-h v have large obarray tables with expensive predicates. Orderless filtering can also become slow if you use complicated regexps.
I suspect many developers working on slow backends would be happy to have a simple way to increase responsiveness without breaking UI support, if things work as planned.
Yes, they just have to return async values then; it should all be quite easy to use. Arbitrary frontends can be supported by wrapping the capf with cape-async-capf. This would work everywhere: Company/Corfu/default UI.
However the frontend actively has to opt in to the interruption, either by using while-no-input like Corfu currently does, or by setting the cape-async-throw-on-input variable to a symbol which signals interruption. This means that with my prototype, currently only Corfu profits from the interruptibility. But see https://github.com/minad/cape/pull/14 where I write something about how the issue could be solved in company-capf.
The problem I am seeing is really to convince anyone of the backend authors to adopt this calling convention.
I see you have as a TODO supporting company's async-flavoring. An interesting idea is to extend this to vanilla CAPFs as well, with a simple extra :async property, to make... async-capfs or ACAPFs.
The idea would be to extend the CAPF mechanism to allow it to return (if it thinks it can eventually complete something there) a null table, but include an :async callback-receiver property in the return. This would "break nothing" for non-supporting UIs (though you'd get no completions). A supporting UI would pass the callback-receiver function it receives its own (unique!) callback function: a function which will accept just one argument, capf-result. The unique callback could for example be a closure that includes a request-id or some other disambiguator, so the UI knows which request is giving it results.
When the ACAPF eventually has some candidates (from a process or remote server, say), it can complete the CAPF operation by calling this callback with a regular CAPF response ((funcall callback `(,start ,end ,table ,@extra-props))), and then everything proceeds as a normal CAPF would have. The table could be a function (if it wants to specify some metadata and do something fancy) or just a simple collection, like always. The UI can either drop this request if the buffer has in the meantime changed, such that the completions are "no longer relevant", or pop up the UI if they have arrived "in time". Or a UI could allow the ACAPF to itself make that call in its table function (e.g. if the original region text includes a prefix of the current probe text), and only provide "relevant" completions. I'd suspect the former would be more robust.
To support both async-capable and "regular-old" completion UIs, the CAPF author could wrap the async CAPF with a simple "polling" wrapper which supplies the ACAPF its own simple callback, and polls for completion (with timeout), before returning the full capf-result, seemingly synchronously.
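Such a polling wrapper might be sketched as follows (hypothetical names, simplified timeout handling; it assumes the ACAPF returns (start end nil :async receiver) as described above):

```elisp
(defun my-acapf-to-capf (acapf &optional timeout)
  "Wrap ACAPF, polling until its callback fires or TIMEOUT expires.
Non-async UIs then see an ordinary, if slower, synchronous Capf."
  (lambda ()
    (when-let ((res (funcall acapf))
               (receiver (plist-get (nthcdr 3 res) :async)))
      (let ((result nil)
            (start (float-time)))
        ;; Hand the ACAPF our own callback, then poll for the result.
        (funcall receiver (lambda (capf-result) (setq result capf-result)))
        (while (and (not result)
                    (< (- (float-time) start) (or timeout 2.0)))
          (sit-for 0.05 'noredisplay))
        ;; Either the full capf-result or nil on timeout.
        result))))
```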