Enable a user to launch llamafile from the GUI with a selected model, chosen either from the script or by browsing the local file system. In addition, expose llamafile's CLI options through the GUI for execution.
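A minimal sketch of the launch step, assuming the GUI collects a model path and flag values and hands them to a helper like the hypothetical `launch_llamafile()` below. The flags shown (`--server`, `--nobrowser`, `--port`, `-m`) are common llamafile server options inherited from llama.cpp; verify them against your llamafile version.

```python
# Hypothetical sketch: start a llamafile as a local server with user-selected
# options. The model path and flag values would come from the GUI controls.
import subprocess

def launch_llamafile(llamafile_path, model_path=None, port=8080, extra_args=None):
    """Start llamafile in server mode; returns the Popen handle so the GUI
    can stop the process later."""
    cmd = [llamafile_path, "--server", "--nobrowser", "--port", str(port)]
    if model_path:                 # external .gguf chosen via the file browser
        cmd += ["-m", model_path]
    if extra_args:                 # any other CLI options exposed in the GUI
        cmd += list(extra_args)
    return subprocess.Popen(cmd)

# Example: proc = launch_llamafile("./llava-v1.5-7b-q4.llamafile", port=8080)
```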
Options for launching llamafile are exposed through the llamafile tab, and a separate inference tab embeds the hosted llama.cpp server in an iframe.
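For illustration, here is a sketch of the two-tab layout assuming a Gradio-based GUI (the original does not name the toolkit, so `gradio`, the widget choices, and the server address are all assumptions). The inference tab simply points an iframe at the server that the launch helper started.

```python
# Hypothetical two-tab layout, assuming a Gradio GUI. The inference tab embeds
# the llama.cpp server UI hosted by the running llamafile process.
import gradio as gr

SERVER_URL = "http://127.0.0.1:8080"   # assumed address of the hosted server

with gr.Blocks() as app:
    with gr.Tab("llamafile"):          # launch options: model picker, flags, etc.
        model = gr.Textbox(label="Model path (.gguf)")
        port = gr.Number(label="Port", value=8080)
        gr.Button("Launch")            # wired to launch_llamafile() in the real app
    with gr.Tab("Inference"):          # iframe of the hosted llama.cpp server UI
        gr.HTML(f'<iframe src="{SERVER_URL}" width="100%" height="700"></iframe>')

app.launch()
```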