WavezeroLLC opened this issue 1 month ago
Any errors in the console, client or server? I have the same issue, but from the looks of it mine is a header problem...
I've used this for my Vite config:
import { cloudflareDevProxyVitePlugin as remixCloudflareDevProxy, vitePlugin as remixVitePlugin } from '@remix-run/dev';
import UnoCSS from 'unocss/vite';
import { defineConfig, type ViteDevServer } from 'vite';
import { nodePolyfills } from 'vite-plugin-node-polyfills';
import { optimizeCssModules } from 'vite-plugin-optimize-css-modules';
import tsconfigPaths from 'vite-tsconfig-paths';

export default defineConfig((config) => {
  return {
    ...config,
    build: {
      target: 'esnext',
    },
    server: {
      headers: {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, OPTIONS',
        'Access-Control-Allow-Headers': '*',
      },
    },
    plugins: [
      {
        name: 'add-cors',
        configureServer(server) {
          server.middlewares.use((_req, res, next) => {
            res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
            res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
            res.setHeader('Cross-Origin-Resource-Policy', 'cross-origin');
            res.setHeader('Cross-Origin-Isolated-Policy', 'self');
            next();
          });
        },
      },
      nodePolyfills({
        include: ['path', 'buffer'],
      }),
      config.mode !== 'test' && remixCloudflareDevProxy(),
      remixVitePlugin({
        future: {
          v3_fetcherPersist: true,
          v3_relativeSplatPath: true,
          v3_throwAbortReason: true,
        },
      }),
      UnoCSS(),
      tsconfigPaths(),
      chrome129IssuePlugin(),
      config.mode === 'production' && optimizeCssModules({ apply: 'build' }),
    ],
  };
});

function chrome129IssuePlugin() {
  return {
    name: 'chrome129IssuePlugin',
    configureServer(server: ViteDevServer) {
      server.middlewares.use((req, res, next) => {
        const raw = req.headers['user-agent']?.match(/Chrom(e|ium)\/([0-9]+)\./);

        if (raw) {
          const version = parseInt(raw[2], 10);

          if (version === 129) {
            res.setHeader('content-type', 'text/html');
            res.end(
              '<body><h1>Please use Chrome Canary for testing.</h1><p>Chrome 129 has an issue with JavaScript modules & Vite local development, see <a href="https://github.com/stackblitz/bolt.new/issues/86#issuecomment-2395519258">for more information.</a></p><p><b>Note:</b> This only impacts <u>local development</u>. `pnpm run build` and `pnpm run start` will work fine in this browser.</p></body>',
            );

            return;
          }
        }

        next();
      });
    },
  };
}
I have this same issue; I'm unable to get any of the Ollama models to build within the webcontainer, even after watching the YouTube video around the 12-minute mark where Cole points out to stop the model, open the terminal/code window, and then re-send the prompt.
Having the same issue...
Currently I have Gemma2b, Codellama7b, and llama3.1 installed, which I had to do manually over the command line.
My logs for Ollama: app.log, server.log
Currently trying to find my logs for the local bolt.new instance.
Good to know I'm not the only one who's encountered this issue - at least we can all suffer together haha. I'll continue to poke around more tomorrow and update as I find possible solutions.
Also, @chip902, I'm not having any issues in my terminal as far as I can tell, but I'll monitor to see if any arise. Not sure why that's happening to you.
After the changes implemented today (thanks everyone for your contributions) I can now get Ollama (Deepseek-coder-v2:16b) to generate files within the code window! I still didn't run the code, but it appears this is making progress!
@nowjon did you have to adjust anything in the config? I am still unable to get my Ollama to write the code in the web container.
No, but I did need to send a prompt, stop it, open the code workbench, and then re-send the prompt. See Cole's video about how to do this: https://youtu.be/p1YvKuRfEhg?si=sK4IDpFad0qZwTMN&t=718
That didn't seem to fix my issue. I'm using llama3.2:latest and it's generating the code files in the chat window, but not the actual files. Also, no console errors. :(
I think the code-focused models are better for this, and I've noticed that sometimes you need to let the prompt finish: with deepseek-coder-v2:16b the code shows up in the chat view first, but some of the setup happens later, so it may be hit or miss.
I did the pause-and-re-enter-prompt trick after syncing yesterday's changes to my copy, and I got some basic codebase generation with DeepSeek 16b.
However, it seems to be stuck on being able to run terminal commands. Getting farther, but still not a good end-user experience yet. I'd suggest following what @nowjon is saying - it's getting me closer. I'm going to look into why I'm not able to get terminal commands working and will try a few different models.
Try using a higher-parameter model like CodeLlama 13b and/or deepseek-coder-v2:16b; that may help. Also, pausing the reply before any code generation happens is helping me so far.
I only got it to work that one time. Now I'm unable to get anything working regarding code generation in the IDE.
Very inconsistent. I even restarted my local dev server.
Same here. I have tried 3-4 different Ollama models and could never get anything to generate the actual files.
I've had similar issues. I also tried the "stop trick". I think the issue is the context size... maybe even 128k is too small. I've had consistent success with Google's Gemini 1.5 Pro and Flash - these have a context of 1 million tokens.
I've also tried with Qwen 2 Coder. The main issue there, where I was building an app with React, is that it will sometimes forget to add import React from 'react';
I made some comments on #87, which turns out to be essentially a duplicate of this one.
What I have found is that you can force the LLM to use the webcontainer by telling it specifically in your prompt to do so. The output can be jumbled, and it does not always work.
As @YAY-3M-TA3 mentions in the comment above, it is definitely related to the context window of the LLM, but reasoning capability also plays a role. Maybe a distilled version of the prompt for smaller models is the answer?
I think this is related to your issue? This is a note in one of Cole's releases on YouTube: "ALSO we have figured out a fix for the Ollama LLMs not performing very well with Bolt.new! Thank you @alx8439 for providing this suggestion! I have tested it and it works phenomenally.
All you have to do is:
FROM [Ollama model ID such as qwen2.5-coder:7b]
PARAMETER num_ctx 32768
Now you have a new Ollama model that isn't limited in the context length like Ollama models are by default for some reason. The whole reason we had this issue is because the Bolt.new prompt didn't fit into the context length for Ollama models!"
I did this and the Ollama model I used created the code/files. It successfully ran an npm install command that was needed, but it is stuck on "npm run dev" - just the spinning circle.
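For anyone who wants to try this, here's a minimal sketch of that workflow as I understand it. It's my own reconstruction, not taken from Cole's note: the base model qwen2.5-coder:7b, the derived model name qwen2.5-coder-32k, and the 32768 context value are all just examples to adjust for your setup.

```sh
# Write a Modelfile that raises the context window so the full bolt.new
# system prompt fits (base model and num_ctx value are examples).
cat > Modelfile <<'EOF'
FROM qwen2.5-coder:7b
PARAMETER num_ctx 32768
EOF

# Build a new local model from the Modelfile.
ollama create qwen2.5-coder-32k -f Modelfile

# Optionally confirm the parameter was applied.
ollama show qwen2.5-coder-32k --modelfile
```

The new model should then appear in ollama list and in the bolt.new model dropdown like any other local model.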
I tried this approach this morning and am still not able to get a 13b LLM to generate the files. Not sure what to do; my 24 GB of RAM can't handle much larger models without locking up.
I was able to get qwen-2.5-coder-7b to produce the code using the method above on a MacBook M1. But it wouldn't run the preview. It may be the model you are using? If you have Ollama, can you try with qwen-2.5-coder-7b after updating the context?
Having this exact same issue on Ubuntu 22.04 and qwen-2.5-coder:7b. It built the app but preview is broken. Console shows a 404 @ GET https://xxxxxxxxxxxxxxxx.local-corp.webcontainer-api.io/ 404 (Not Found)
please help me team
any update please ??
I've been struggling with bolt.new and every LLM for the past three days, trying to get it to work properly. I've experimented with most of the available models and adjusted various parameters, but nothing seems to help. I carefully followed all the instructions in the documentation and even watched tutorial videos, but the issues persist. I've tried everything I could think of.
Interestingly, the original Bolt.new local installation shows a preview, but this version doesn't. I've also attempted to use different models from OpenAI, Anthropic, and Ollama, all without success.
At first, I thought I must be doing something wrong, which is why I spent so much time troubleshooting. However, after all these attempts, I'm starting to believe the problem might be with the application itself rather than my setup or configuration. This experience has been incredibly frustrating, and I hope the developers can look into these issues to improve the user experience for others.
Describe the bug
I'm attempting to use some of my local LLMs on Ollama in this fork and everything works great; there aren't any issues with the execution itself. However, I'm running into an issue where the code will generate in the chat, but not in the virtual IDE automatically. It's essentially the same as if I were to prompt ChatGPT and copy/paste code into my IDE manually. Not sure why this is happening. I tried to debug it myself a little and asked some of my LLMs about this issue, but didn't have much luck. Hopefully someone can help me resolve this, as I'm really excited about the possibility of using local LLMs on Bolt.new. Thank you. :-)
Link to the Bolt URL that caused the error
N/A
Steps to reproduce
Simply followed the README step by step.
Expected behavior
To reiterate what I said above, it's just an issue where the Code and Preview windows don't automatically populate with the LLM-generated code. I have tested with multiple LLMs: llama3.2:latest, codellama:13b, granite-code:20b, codellama:34b, deepseek-coder-v2:16b, codellama:70b
Here is an example of what happens when I try a basic example prompt (here I'm using codellama:13b):
Screen Recording / Screenshot
No response
Platform
Additional context
No response