Update it so that a user may select a local model from their drive for use with llamafile, and provide a means of launching and killing the llamafile process as well.
The user should also be able to configure the various settings llamafile exposes.
The user should also be able to use Hugging Face Transformers as the inference engine instead if they wish.
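The launch/kill and model-selection requirements above could be handled by a small process wrapper. The sketch below is a minimal, hypothetical implementation; the class name and structure are assumptions, and the flags used (`--server`, `--nobrowser`, `--port`, `-m`) are the llama.cpp server flags that llamafile inherits, so verify them against the llamafile binary's `--help` output before relying on them.

```python
import subprocess


class LlamafileRunner:
    """Hypothetical wrapper that launches and kills a llamafile server process."""

    def __init__(self, binary, model=None, port=8080, extra_args=None):
        self.binary = binary          # path to the llamafile executable
        self.model = model            # optional GGUF model selected from the user's drive
        self.port = port
        self.extra_args = list(extra_args or [])  # user-configured settings as raw flags
        self.proc = None

    def build_command(self):
        # Assumed flags: --server/--nobrowser/--port/-m come from the
        # llama.cpp server interface that llamafile embeds.
        cmd = [self.binary, "--server", "--nobrowser", "--port", str(self.port)]
        if self.model:
            cmd += ["-m", self.model]
        return cmd + self.extra_args

    def launch(self):
        if self.proc is not None and self.proc.poll() is None:
            raise RuntimeError("llamafile is already running")
        self.proc = subprocess.Popen(self.build_command())
        return self.proc.pid

    def kill(self):
        if self.proc is not None and self.proc.poll() is None:
            self.proc.terminate()
            try:
                self.proc.wait(timeout=5)
            except subprocess.TimeoutExpired:
                self.proc.kill()  # escalate if it ignores SIGTERM
        self.proc = None
```

Keeping the command construction separate from process management makes the user-facing settings easy to surface in a UI: each configurable option maps to one entry in `extra_args` or a constructor parameter.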
If Transformers is selected as the inference engine:
Figure out API access
Allow the user to download a model based on URL/name from HF
Allow the user to select an already downloaded model for usage
Allow the user to modify/set the various configuration settings
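The Transformers sub-tasks above (download by name, select an already-downloaded model, set configuration) could share one resolution step that decides whether the user's input is a local path or a Hub repo id. This is a sketch under assumptions: the `GenSettings` fields are a hypothetical subset of knobs the UI might expose (they mirror `transformers.GenerationConfig` field names), and the actual download would use `huggingface_hub.snapshot_download`, which is only referenced in comments here so the sketch stays dependency-free.

```python
from dataclasses import dataclass, asdict
from pathlib import Path


@dataclass
class GenSettings:
    """Hypothetical user-configurable generation settings.

    Field names mirror transformers.GenerationConfig so the dict from
    asdict() can be passed straight to model.generate(**asdict(settings)).
    """
    max_new_tokens: int = 256
    temperature: float = 0.8
    top_p: float = 0.95
    do_sample: bool = True


def resolve_model(source, cache_dir):
    """Return (model_path_or_id, needs_download).

    A path that exists on disk is used as-is (the "already downloaded"
    case); anything else is treated as a Hub repo id such as
    "org/model-name", to be fetched with huggingface_hub.snapshot_download
    before loading via AutoModelForCausalLM.from_pretrained.
    """
    p = Path(source).expanduser()
    if p.exists():
        return str(p), False
    # Hypothetical local cache layout: one directory per repo id,
    # with "/" flattened so the id is filesystem-safe.
    cached = Path(cache_dir) / source.replace("/", "--")
    if cached.exists():
        return str(cached), False
    return source, True
```

The boolean return lets the UI prompt for download confirmation (and API-token entry for gated models) only when a fetch is actually needed.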