-
- The deliverable here is to be able to run quantized models with the tinygrad inference engine
- Bonus (+$200) bounty as an easy follow-up is to add support for MLX community models: https://github.…
-
I am trying to get this working on Windows 11 with GPT4All. There was an issue with setting the port, but I think I have resolved that by changing the package.json from
"proc_serve": "PORT=13000 next…
-
~/g/chatgpt-api on main
node:events:491
      throw er; // Unhandled 'error' event
      ^

Error: write EPIPE
    at W…
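For context on the crash above: `EPIPE` is raised when a process writes to a pipe whose reader has already exited (for example, piping output into a consumer that closed early), and an unhandled `'error'` event on the stream kills the Node process. A minimal sketch of one way to guard against it (not taken from this project's code):

```javascript
// Attach an 'error' handler to stdout so a broken pipe does not become
// an unhandled 'error' event that crashes the process.
process.stdout.on('error', (err) => {
  if (err.code === 'EPIPE') {
    // The reader closed the pipe; exit quietly instead of crashing.
    process.exit(0);
  }
  // Re-throw anything that is not a broken pipe.
  throw err;
});

// Normal writes proceed as usual; if the consumer closes the pipe,
// the handler above fires instead of "Unhandled 'error' event".
for (let i = 0; i < 3; i++) {
  process.stdout.write(`line ${i}\n`);
}
```

Whether this is appropriate depends on the program: for a CLI that is routinely piped, exiting silently on `EPIPE` is conventional; for a server, logging and continuing may be better.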
-
# Bug report
## Describe the bug
It doesn't start at all; whenever I try to run it locally, it says the service is not healthy.
```
$ npx supabase start
0.28.1-alpine: Pulling from supabase/vector
d2610…
-
### Title: Custom model not being used in UI when modelOptions is changed in Node API setup
---
**Description:**
I changed `modelOptions.temperature` to `modelOptions` in the Node API setup, …
-
root@ASUSET2410-Ubuntu:/home/ron/workarea/openai/chatgpt-plugin-googlesearch# npx ts-node index.ts
/usr/local/lib/node_modules/npm/node_modules/nopt/lib/nopt-lib.js:64
const StringType = typeDefs.…
-
Could this be packaged as a Docker image and deployed on a Hugging Face Space?
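For anyone exploring this, a minimal Dockerfile sketch for a Docker-SDK Hugging Face Space might look like the following. Assumptions (not from this issue): the app is a Node project started with `npm start`, and the Space routes traffic to port 7860, which is the Hugging Face Docker Spaces default.

```dockerfile
# Minimal sketch for a Hugging Face Space using the Docker SDK.
FROM node:18-slim
WORKDIR /app

# Install production dependencies first to take advantage of layer caching.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the application source.
COPY . .

# Hugging Face Docker Spaces expect the app to listen on port 7860.
ENV PORT=7860
EXPOSE 7860

CMD ["npm", "start"]
```

The actual start command, base image, and build steps would need to match this project's setup.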
-
# Asking
- [ ] Ask ChatGPT for help completing this homework
# Feeling, Writing, Thinking
- [x] Post an experience report (a paragraph or several) as a comment on this issue. This experience repo…
-
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
I am using `https://github.com/waylaidwanderer/node-chatgpt-api/pull/481/…
-
0 info it worked if it ends with ok
1 verbose cli [ '/usr/bin/node', '/usr/bin/npm', 'run', 'serve' ]
2 info using npm@6.14.18
3 info using node@v14.21.3
4 verbose run-script [ 'preserve', 'serve'…