Open jim123ture opened 1 year ago
By default, all our services call external APIs, which has no special hardware requirements. Only if you deploy an LLM or embedding model locally may there be hardware requirements.
There's a dependency on PyTorch, which can be pretty heavy, especially if you only run it on CPU.
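If you only need CPU inference, one way to shrink the PyTorch footprint considerably is installing the CPU-only wheel instead of the default CUDA build (this is the standard pip install documented on pytorch.org, not something specific to this project):

```shell
# Install the CPU-only PyTorch build, which is much smaller
# than the default CUDA-enabled wheel.
pip install torch --index-url https://download.pytorch.org/whl/cpu
```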
Are there any server requirements? I tried to run it, but it failed because my server only has 2.5 GB RAM and 2 cores.
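To make the check concrete, here is a minimal preflight sketch. The 4 GB RAM / 2 core minimums are assumptions for illustration, not documented requirements of the project, and the memory read is Linux-specific (`/proc/meminfo`):

```python
# Rough preflight check before running the service locally.
# ASSUMPTION: 4 GB RAM and 2 cores as a plausible minimum for
# local LLM/embedding deployment; adjust for your model.
import os


def meets_minimum(ram_gb: float, cores: int,
                  min_ram_gb: float = 4.0, min_cores: int = 2) -> bool:
    """Return True if the machine meets the assumed minimums."""
    return ram_gb >= min_ram_gb and cores >= min_cores


def current_resources() -> tuple[float, int]:
    """Read total RAM in GB from /proc/meminfo (Linux) plus the CPU count."""
    kb = 0
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                kb = int(line.split()[1])  # value is in kB
                break
    return kb / (1024 ** 2), os.cpu_count() or 1


if __name__ == "__main__":
    ram_gb, cores = current_resources()
    print(f"RAM: {ram_gb:.1f} GB, cores: {cores}")
    print("meets assumed minimum:", meets_minimum(ram_gb, cores))
```

On the 2.5 GB / 2-core server described above, `meets_minimum(2.5, 2)` returns `False` under these assumed thresholds, which is consistent with the failure you saw.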