prompt-toolkit / python-prompt-toolkit

Library for building powerful interactive command line applications in Python
https://python-prompt-toolkit.readthedocs.io/
BSD 3-Clause "New" or "Revised" License

heavy Processor called multiple times #1915

Open planetis-m opened 1 day ago

planetis-m commented 1 day ago

Hi,

I've developed a prompt that performs spellchecking by integrating the pyspellcheck library. However, I've noticed that underlining misspelled words causes a significant lag. I've optimized the spell-checking process by using caches, but the lag persists.

Upon further investigation, I added a print statement to the apply_transformation function and found that it is being called four times per keystroke. I attempted to create a ConditionalProcessor to address this, but there is no key_inserted event. Instead, I implemented a check based on a hash of the buffer's text, but this approach breaks the underlining functionality entirely.

Any advice on how to resolve this issue would be greatly appreciated.

Thank you!

Sample: https://gist.github.com/planetis-m/ad078e0e184439e2712f3b853c192d01
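
For reference, here is a stripped-down sketch of the direction I'd like to go (this is not the code from the gist; is_misspelled stands in for the pyspellcheck call): cache the computed fragments per line of text, so it shouldn't matter how often apply_transformation runs for unchanged text.

import re

from prompt_toolkit.layout.processors import Processor, Transformation, TransformationInput


class CachedSpellCheckProcessor(Processor):
    """Underline misspelled words, caching the result per line of text so that
    repeated apply_transformation calls for unchanged text stay cheap."""

    def __init__(self, is_misspelled):
        # is_misspelled: callable(str) -> bool, backed by the spell checker.
        self.is_misspelled = is_misspelled
        self._cache = {}  # line text -> list of (style, text) fragments

    def apply_transformation(self, ti: TransformationInput) -> Transformation:
        line = ti.document.lines[ti.lineno]
        fragments = self._cache.get(line)
        if fragments is None:
            fragments = []
            pos = 0
            for match in re.finditer(r"[A-Za-z']+", line):
                if match.start() > pos:
                    fragments.append(('', line[pos:match.start()]))
                word = match.group()
                style = 'class:misspelled' if self.is_misspelled(word) else ''
                fragments.append((style, word))
                pos = match.end()
            if pos < len(line):
                fragments.append(('', line[pos:]))
            self._cache[line] = fragments
        # Note: this rebuilds the line from plain text, so any styling added by
        # earlier processors is dropped; good enough for a sketch.
        return Transformation(fragments)

With a style that maps misspelled to underline (e.g. Style.from_dict({'misspelled': 'underline'})) and the processor passed via input_processors=[...], the underlining is kept and only lines whose text actually changed get re-checked, at the cost of an unbounded cache.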

planetis-m commented 7 hours ago

I tried creating an async Filter, but this didn't work:

venv/lib/python3.12/site-packages/prompt_toolkit/completion/base.py:346: RuntimeWarning: coroutine 'AsyncSleepFilter.__call__' was never awaited
  if self.filter():
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
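
As far as I can tell, filters are evaluated synchronously while rendering, so __call__ has to return a plain bool and can't be a coroutine. A synchronous sketch of what I was trying (untested; spell_processor stands for whatever Processor does the underlining):

import time

from prompt_toolkit.filters import Condition

last_edit = 0.0


def note_edit(_buffer) -> None:
    """Remember when the buffer last changed."""
    global last_edit
    last_edit = time.monotonic()


# True once the user has stopped typing for 300 ms.
idle = Condition(lambda: time.monotonic() - last_edit > 0.3)

# Wrap the heavy processor so it only runs while idle, e.g.
#   ConditionalProcessor(spell_processor, filter=idle)
# and register the handler with
#   session.default_buffer.on_text_changed += note_edit

The catch is that the condition is only re-evaluated on the next repaint, so the underlining would only come back on a later keystroke or an explicit get_app().invalidate().
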
planetis-m commented 6 hours ago

I have implemented a rate-limiting completer, but I could use a review:

import asyncio
from typing import AsyncGenerator, Iterable

from prompt_toolkit.application import get_app
from prompt_toolkit.completion import CompleteEvent, Completer, Completion
from prompt_toolkit.document import Document


class ThrottledCompleter(Completer):
    def __init__(self, completer: Completer, delay: float = 0.3):
        self.completer = completer
        self.delay = delay
        self.last_completion_time = 0.0
        self.pending_task = None

    def get_completions(
        self, document: Document, complete_event: CompleteEvent
    ) -> Iterable[Completion]:
        # This method is not used directly, but is required by the Completer base class.
        return []

    async def get_completions_async(
        self, document: Document, complete_event: CompleteEvent
    ) -> AsyncGenerator[Completion, None]:
        if self.pending_task:
            # The user is still typing: cancel the previous pending completion.
            self.pending_task.cancel()
        if complete_event.text_inserted:
            current_time = asyncio.get_running_loop().time()
            # Schedule a new, delayed completion.
            self.pending_task = get_app().create_background_task(
                self._delayed_completion(document, complete_event, current_time)
            )
            try:
                # Wait for the task to complete and yield its results.
                completions = await self.pending_task
                for completion in completions:
                    yield completion
            except asyncio.CancelledError:
                # The task was cancelled because a newer request superseded it.
                pass
        # The user explicitly requested completion (e.g., by pressing Tab).
        elif complete_event.completion_requested:
            # Run the completion immediately.
            async for completion in self.completer.get_completions_async(
                    document, complete_event):
                yield completion

    async def _delayed_completion(
        self, document: Document, complete_event: CompleteEvent, request_time: float
    ):
        # Wait for the delay.
        await asyncio.sleep(self.delay)
        # Check whether this is still the most recent completion request.
        if request_time > self.last_completion_time:
            self.last_completion_time = request_time
            # Generate completions from the wrapped completer.
            return list(self.completer.get_completions(document, complete_event))
        else:
            return []

And maybe this could be a good PR?
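
For what it's worth, this is roughly how I'm wiring it up (WordCompleter is just a stand-in for my real completer):

from prompt_toolkit import PromptSession
from prompt_toolkit.completion import WordCompleter

session = PromptSession(
    completer=ThrottledCompleter(WordCompleter(['apple', 'apricot', 'banana'])),
    complete_while_typing=True,
)
text = session.prompt('> ')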

planetis-m commented 5 hours ago

Unfortunately, the code above is not really rate limiting (it is closer to a debounce), and it certainly feels that way when typing. Any suggestions are welcome.
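
If "rate limiting" is read literally (at most one completion run per interval, with extra calls dropped rather than delayed), a simpler sketch, untested, would be something like:

import asyncio
from typing import AsyncGenerator, Iterable

from prompt_toolkit.completion import CompleteEvent, Completer, Completion
from prompt_toolkit.document import Document


class RateLimitedCompleter(Completer):
    """Run the wrapped completer at most once per `interval` seconds; calls
    that arrive too soon are dropped, but explicit Tab requests always run."""

    def __init__(self, completer: Completer, interval: float = 0.3):
        self.completer = completer
        self.interval = interval
        self._last_run = 0.0

    def get_completions(
        self, document: Document, complete_event: CompleteEvent
    ) -> Iterable[Completion]:
        # Not used directly; required by the Completer base class.
        return []

    async def get_completions_async(
        self, document: Document, complete_event: CompleteEvent
    ) -> AsyncGenerator[Completion, None]:
        now = asyncio.get_running_loop().time()
        if complete_event.completion_requested or now - self._last_run >= self.interval:
            self._last_run = now
            async for completion in self.completer.get_completions_async(
                    document, complete_event):
                yield completion

The trade-off is that keystrokes inside the window get no completions at all, so in practice this would probably need to be combined with the debounce above.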