GoogleCloudPlatform / localllm

Suggest picking a name other than "llm" for the CLI tool #8

Closed: simonw closed this issue 6 months ago

simonw commented 7 months ago

https://github.com/GoogleCloudPlatform/localllm/blob/d27376fa3f6e6bcfcd3ae9c9c8f61e163a3c1899/llm-tool/setup.py#L19-L23

And:

https://github.com/GoogleCloudPlatform/localllm/blob/d27376fa3f6e6bcfcd3ae9c9c8f61e163a3c1899/llm-tool/setup.py#L34-L38

I'm the author of https://pypi.org/project/llm/, which installs a package called llm and a CLI tool called llm as well. My llm tool is similar to localllm inasmuch as it lets you execute prompts in the terminal, against both remote models and local models (using llama-cpp-python).

As it stands, using my tool and this tool in the same environment won't work because of the namespace clash.
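
To illustrate the shape of the clash: both projects declare a console script literally named llm, and pip keeps only one executable per script name in an environment, so whichever package is installed last wins. The fragment below is a hedged sketch; the module path and version are made up, not copied from llm-tool/setup.py.

```python
# Illustrative only: module path and version are invented, not taken from
# either project. The point is the shared distribution and script name.
from setuptools import setup

setup(
    name="llm",  # same distribution name as the existing PyPI package
    version="0.1",
    entry_points={
        "console_scripts": [
            "llm=llm_tool.cli:main",  # claims the `llm` command on PATH
        ],
    },
)
```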

If you pick a different name for this, you can also publish it to PyPI, which would make for a more convenient installation experience for end users.

https://llm.datasette.io/ has more about how my tool works and what it does.

bobcatfish commented 7 months ago

Hey @simonw! Whoops, thanks for pointing this out. Two things to figure out before renaming: whether there's a backward-compatible path for existing users, and what the new name should be.

bobcatfish commented 7 months ago

Is there any way we can do this in a backward-compatible way, so anyone who is already using it (or just reading the blog post talking about it) can use the previous command without trouble?

Inside the Cloud Workstations image I'm thinking we could provide llm as a symlink to whatever the new name is, and it seems like it might be easy enough to update the blog post.
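
As a minimal sketch of that idea (in the image's Dockerfile this would just be an ln -s line; the name local-llm below anticipates the decision at the end of this thread, and the /usr/local/bin paths are assumptions):

```python
# Hypothetical post-install step for the Workstations image: keep the old
# `llm` command working as an alias for the renamed tool.
import shutil
from pathlib import Path

new_tool = shutil.which("local-llm")  # assumes the renamed tool is on PATH
legacy = Path("/usr/local/bin/llm")   # assumed install location
if new_tool and not legacy.exists():
    legacy.symlink_to(new_tool)       # /usr/local/bin/llm -> local-llm
```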

Just need to think of a new name now... 🤔 🤔 🤔

gfsmfk commented 7 months ago

localai / localgenai / localmodel?


bobcatfish commented 7 months ago

Thanks @gfsmfk!

Did some brainstorming with some other folks on my team and we came up with a bunch of options.

Of all those options, I'm thinking localllm is the clearest and least confusing (and it removes the weird inconsistency between the repo being called localllm and the command being something else), but I'll want to make sure that locallm works too, since there are going to be a lot of typos.
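
If it stays a Python console script, the typo alias is cheap, since setuptools lets several script names share one entry point. A sketch, with an invented module path:

```python
from setuptools import setup

setup(
    name="localllm",
    version="0.1",
    entry_points={
        "console_scripts": [
            "localllm=llm_tool.cli:main",  # module path is illustrative
            "locallm=llm_tool.cli:main",   # common-typo alias, same function
        ],
    },
)
```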

RyzeNGrind commented 7 months ago

If the tool is going to depend on platform- or cloud-vendor-specific aspects, perhaps it wouldn't hurt to keep the naming along those lines as well, to avoid misleading users. I just came across this tool, and the naming and its capabilities made it hard to tell whether it could be used in conjunction with other FLOSS tools or not.

E.g., Workstations seems to be specific to Google Cloud or Compute Engine, and Hugging Face isn't the only registry for models (I understand Google has a partnership with them, though).

bobcatfish commented 7 months ago

Good point @RyzeNGrind! There's nothing cloud-specific in the tool itself - the motivation for making it initially was to have something in Cloud Workstations that would easily run local models, and the image-building steps in the README build a Cloud Workstations image, but the command-line tool doesn't have anything specific to this.

The Hugging Face integration is pretty specific, though 🤔 (and some of the hackier parts are actually specific to TheBloke 🤭). So I'm almost tempted to use one of the hf names, but I've talked myself out of it: if the tool proves useful, there's no reason it couldn't be extended to support other model registries - pull, run, kill, and ps could work equally well for those too (see the sketch below). Whether there's enough interest to actually do that is another question, though!
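
As a rough sketch of what that extension point might look like (every name here is invented for illustration; none of it comes from the localllm codebase):

```python
# Hypothetical registry abstraction: the CLI verbs dispatch through an
# interface, so supporting another registry means adding one class.
from abc import ABC, abstractmethod


class ModelRegistry(ABC):
    """Anything that can resolve a model name to local weights."""

    @abstractmethod
    def pull(self, model: str) -> str:
        """Download the model and return a local file path."""


class HuggingFaceRegistry(ModelRegistry):
    def pull(self, model: str) -> str:
        # Today's behavior: fetch a quantized GGUF from Hugging Face.
        raise NotImplementedError
```

With something like this in place, pull/run/kill/ps would stay exactly as they are; only the download step would know which registry it is talking to.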

TL;DR I'm still leaning toward localllm as the best/least-worst option atm.

bobcatfish commented 6 months ago

Quick update: ended up going with local-llm instead of localllm since it is SO HARD to type localllm properly consistently.