withcatai / node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
https://node-llama-cpp.withcat.ai
MIT License

fix package.json to use module instead of main #99

Closed · mainfraame closed this 9 months ago

mainfraame commented 9 months ago

fix: build

Description of change

Pull-Request Checklist

giladgd commented 9 months ago

@mainfraame The `module` key is not supported by Node.js, and without `exports`, your proposed change will make importing this module fail, so I don't see any benefit to this change.
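For context: Node.js resolves a package's entry point from the `main` and `exports` fields; the `module` field is a bundler convention (webpack, Rollup) that Node.js itself ignores. A minimal sketch of a `package.json` carrying both fields, with illustrative names and paths rather than this library's actual configuration:

```jsonc
// Illustrative package.json, not node-llama-cpp's actual one:
{
  "name": "example-package",
  "type": "module",
  "main": "./dist/index.js",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js"
    }
  }
}
```

When `exports` is present, Node.js prefers it over `main`, so the two fields can coexist to serve both older and newer resolvers.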

If you encounter any problem that this change was supposed to fix, please let me know so I can investigate.

Thanks anyway for your contribution :)

mainfraame commented 9 months ago

@giladgd You're correct. What I'm trying to fix seems to be a deeper issue with this library, i.e. its top-level awaits. Unfortunately, others and I are not yet able to use ES modules.

I highly suggest you consider refactoring those dynamic imports in llamaEvaluator/LlamaBins.js. Without that, it'll never be possible to use this library with a bundler like webpack.
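To make the constraint concrete, a hedged sketch of the two patterns involved (this is not the actual contents of llamaEvaluator/LlamaBins.js, and `getBindingPath` is a hypothetical helper): a module that awaits at its top level can only ship as ESM, since CommonJS has no equivalent construct, whereas deferring the await into a function keeps the module itself synchronous and bundler-friendly.

```js
// Illustrative only; not the real LlamaBins.js.

// Hypothetical helper that picks the binding module for this platform:
function getBindingPath() {
    return "./bindings/" + process.platform + "-" + process.arch + ".js";
}

// The pattern described above: a top-level await forces this module,
// and every module that imports it, to be ESM, which blocks CJS builds:
const llamaBins = await import(getBindingPath());

// A bundler-friendlier alternative: defer the await into a lazy loader,
// so the module itself loads synchronously:
let llamaBinsPromise = null;
export function loadLlamaBins() {
    if (llamaBinsPromise === null)
        llamaBinsPromise = import(getBindingPath());
    return llamaBinsPromise;
}
```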

giladgd commented 9 months ago

@mainfraame See my comment on the other issue that you've opened; I've gone into detail on the matter there.

jshph commented 7 months ago

Hi @giladgd, I'm similarly constrained and am using esbuild. My output file must be CJS because the application I'm writing for doesn't support ESM. I'd really appreciate it if you could refactor the dynamic imports in LlamaBins.js.

Thanks for your work on this library; it's great and well documented :)

I might consider forking just to see if that's the only place that needs to change.
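For reference, a sketch of the kind of esbuild setup described here (the entry point and file names are hypothetical, not jshph's actual project). esbuild rejects top-level await when the output format is CJS, so one workaround is to mark node-llama-cpp as external:

```js
// build.mjs -- illustrative build script, not an actual working setup.
import * as esbuild from "esbuild";

await esbuild.build({
    entryPoints: ["src/main.js"], // hypothetical entry point
    bundle: true,
    platform: "node",
    format: "cjs",                // required by the host application
    outfile: "dist/main.cjs",
    // Bundling node-llama-cpp into a CJS output fails on its top-level
    // await; keeping it external sidesteps the error:
    external: ["node-llama-cpp"],
});
```

The CJS bundle can then still load the ESM package lazily via `await import("node-llama-cpp")` inside an async function, since dynamic `import()` is available from CommonJS.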