coleam00 / bolt.new-any-llm

Prompt, run, edit, and deploy full-stack web applications using any LLM you want!
https://bolt.new
MIT License

FEATURE: Allow using a LAN/remote Ollama inference server #46

Open · PieBru opened this issue 1 month ago

PieBru commented 1 month ago

Is your feature request related to a problem? Please describe: Hi, I modified .env.local with:

# You only need this environment variable set if you want to use oLLAMA models
#EXAMPLE http://localhost:11434
OLLAMA_API_BASE_URL=http://10.4.0.100:11434
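
A quick way to verify that the LAN Ollama server itself is reachable from the Bolt host is to query its model list directly:

curl http://10.4.0.100:11434/api/tags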

Then I started the localhost Bolt with:

pnpm install
pnpm run build
pnpm run start

The console log seems OK:

Lockfile is up to date, resolution step is skipped
Already up to date
Done in 3.6s

> bolt@ build /omitted/bolt.new-any-llm
> remix vite:build

vite v5.3.1 building for production...
✓ 1560 modules transformed.
Generated an empty chunk: "api.enhancer".
Generated an empty chunk: "api.models".
Generated an empty chunk: "api.chat".
build/client/.vite/manifest.json                            130.82 kB │ gzip:  10.22 kB
...
build/client/assets/_index-BUEZZinZ.js                    1,636.84 kB │ gzip: 503.53 kB

(!) Some chunks are larger than 500 kB after minification. Consider:
- Using dynamic import() to code-split the application
- Use build.rollupOptions.output.manualChunks to improve chunking: https://rollupjs.org/configuration-options/#output-manualchunks
- Adjust chunk size limit for this warning via build.chunkSizeWarningLimit.
✓ built in 16.93s
vite v5.3.1 building SSR bundle for production...
✓ 41 modules transformed.
build/server/.vite/manifest.json                   1.78 kB
build/server/assets/tailwind-compat-CC20SAMN.css   2.25 kB
build/server/assets/xterm-lQO2bNqs.css             4.08 kB
build/server/assets/ReactToastify-CYivYX3d.css    14.19 kB
build/server/assets/index-CPTzpSUP.css            17.01 kB
build/server/assets/server-build-BcV5Emg_.css     27.22 kB
build/server/index.js                             50.26 kB
✓ built in 464ms

> bolt@ start /mnt/00aadc36-3e91-4512-b272-3e84356ac527/Piero/AI_Lab/Github/bolt.new-any-llm
> bindings=$(./bindings.sh) && wrangler pages dev ./build/client $bindings

 ⛅️ wrangler 3.63.2 (update available 3.81.0)
-------------------------------------------------------

✨ Compiled Worker successfully
Your worker has access to the following bindings:
- Vars:
  - GROQ_API_KEY: "(hidden)"
  - OPENAI_API_KEY: "(hidden)"
  - ANTHROPIC_API_KEY: "(hidden)"
  - OPEN_ROUTER_API_KEY: "(hidden)"
  - GOOGLE_GENERATIVE_AI_API_KEY: "(hidden)"
  - OLLAMA_API_BASE_URL: "(hidden)"
  - VITE_LOG_LEVEL: "(hidden)"
[wrangler:inf] Ready on http://localhost:8788
⎔ Starting local server...
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ [b] open a browser, [d] open Devtools, [c] clear console, [x] to exit                                                                          │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

But the localhost Bolt instance continues to use the localhost Ollama instance.

Did I miss something? Thank you.

Describe the solution you'd like: Use my LAN Ollama inference server defined in .env.local.

Describe alternatives you've considered:

Additional context:
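The behavior described above is what one would expect if the Ollama provider falls back to a hard-coded default whenever OLLAMA_API_BASE_URL is not visible at runtime. A minimal sketch of that pattern in TypeScript (hypothetical names, not the actual Bolt source):

// Hypothetical resolver with a localhost fallback. If OLLAMA_API_BASE_URL is
// not present in the runtime environment (for example because the wrong .env
// file was loaded), the fallback wins and every request goes to the local
// Ollama instance instead of the LAN server.
const DEFAULT_OLLAMA_BASE_URL = 'http://localhost:11434';

export function resolveOllamaBaseUrl(
  env: Record<string, string | undefined>,
): string {
  return env.OLLAMA_API_BASE_URL ?? DEFAULT_OLLAMA_BASE_URL;
}

// With OLLAMA_API_BASE_URL=http://10.4.0.100:11434 loaded, this returns the
// LAN address; otherwise it silently returns http://localhost:11434.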

fcools commented 1 month ago

See #43

hillct commented 3 weeks ago

Vite also tends to be quite picky about reading from .env vs .env.local when building for production vs development. If you expect to use .env.local for your variable definitions, you should be running pnpm run dev; if you want to use the variable definitions found in .env, you should be building for production and starting it with pnpm start.
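
Put differently, the two workflows described above would be:

pnpm run dev                        # development server; Vite reads .env.local
pnpm run build && pnpm run start    # production build + wrangler; variables expected in .env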