c0sogi / llama-api

An OpenAI-like LLaMA inference API
MIT License

Huggingface downloader & Simpler log message & InterruptMixin #2

Closed · c0sogi closed this 1 year ago

c0sogi commented 1 year ago

This pull request contains several usability enhancements along with some code refactoring. The primary changes include:

  1. Automatic Model Downloader: Previously, the model_path attribute in model_definitions.py had to be the actual filename of a model. It can now be the name of a HuggingFace repository instead, in which case the specified model is downloaded automatically when needed. For instance, if you define TheBloke/NewHope-GPTQ as the model_path, the necessary files are downloaded into models/gptq/thebloke_newhope_gptq. This works the same way for GGML models. (See the first sketch after this list.)

  2. Simpler Log Message: Log messages are now more concise when using the Completions, Chat Completions, or Embeddings endpoints. Each log consistently reports elapsed time, token usage, and tokens generated per second. (See the second sketch after this list.)

  3. Improved Responsiveness for Job Cancellation: An Event object created by a SyncManager now carries the interrupt signal to worker processes. The worker checks its is_interrupted property at the lowest-level point it can reach and attempts to cancel the operation there. (See the third sketch after this list.)
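
To illustrate item 1, here is a minimal sketch of how a repo-style model_path could be resolved into a local download using huggingface_hub. The resolve_model_path helper and its directory logic are assumptions for illustration; only the TheBloke/NewHope-GPTQ → models/gptq/thebloke_newhope_gptq mapping comes from the description above.

```python
# Hypothetical sketch: resolve a HuggingFace repo ID into a local model dir.
# `resolve_model_path` is an assumed helper name, not the PR's actual code.
from pathlib import Path

from huggingface_hub import snapshot_download


def resolve_model_path(model_path: str, model_type: str = "gptq") -> Path:
    """Treat `model_path` as a HuggingFace repo ID and download it on demand."""
    # Derive a local directory such as models/gptq/thebloke_newhope_gptq.
    local_dir = Path("models") / model_type / model_path.replace("/", "_").lower()
    if not local_dir.exists():
        # Download every file in the repo into the derived directory.
        snapshot_download(repo_id=model_path, local_dir=str(local_dir))
    return local_dir


# e.g. resolve_model_path("TheBloke/NewHope-GPTQ")
# -> models/gptq/thebloke_newhope_gptq
```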
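
For item 2, a rough sketch of a log helper that reports the three fields mentioned. The exact log format used by this PR is not shown here, so the wording below is an assumption.

```python
# Hypothetical condensed per-request log line: elapsed time, token usage,
# and tokens generated per second.
import logging
import time

logger = logging.getLogger("llama-api")


def log_completion(start: float, prompt_tokens: int, completion_tokens: int) -> None:
    elapsed = time.time() - start
    total = prompt_tokens + completion_tokens
    tps = completion_tokens / elapsed if elapsed > 0 else 0.0
    logger.info(
        "elapsed: %.2fs | tokens: %d (prompt %d + completion %d) | %.1f tok/s",
        elapsed, total, prompt_tokens, completion_tokens, tps,
    )
```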
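
For item 3, a minimal sketch of what an InterruptMixin built around a SyncManager Event could look like. The class and property names follow the PR description; everything else is assumed.

```python
# Hypothetical sketch of an InterruptMixin; details beyond the names
# `InterruptMixin` and `is_interrupted` are assumptions.
import multiprocessing as mp


class InterruptMixin:
    """Exposes a cancellation flag that the parent process can set remotely."""

    def __init__(self, interrupt_event) -> None:
        # `interrupt_event` is an Event proxy created by a SyncManager,
        # so it can be shared across process boundaries.
        self.interrupt_event = interrupt_event

    @property
    def is_interrupted(self) -> bool:
        return self.interrupt_event.is_set()


if __name__ == "__main__":
    manager = mp.Manager()          # starts a SyncManager
    event = manager.Event()         # process-shared Event proxy
    worker = InterruptMixin(event)  # mixed into the real worker class
    event.set()                     # parent signals cancellation
    # The worker would poll this in its innermost loop (e.g. per token)
    # and abort generation as soon as it returns True.
    assert worker.is_interrupted
```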

These changes make the application more intuitive to use and more responsive overall. Model handling is streamlined by allowing automatic downloads from a repository rather than relying on specific filenames. Job cancellation is now more reactive, saving compute time when a request needs to be halted. Finally, the log messages are cleaner and more informative, surfacing the essentials for monitoring performance and usage.