coleam00 / bolt.new-any-llm

Prompt, run, edit, and deploy full-stack web applications using any LLM you want!
https://bolt.new
MIT License

Codebase + Preview auto generation failing on my clone #63

Open · WavezeroLLC opened this issue 4 days ago

WavezeroLLC commented 4 days ago

Describe the bug

I'm attempting to use some of my local LLMs on Ollama in this fork and everything works great; there aren't any issues with the execution itself. However, I'm running into an issue where the code is generated in the chat, but not in the virtual IDE automatically. It's essentially the same as if I were to prompt ChatGPT and copy/paste the code into my IDE manually. Not sure why this is happening. I tried to debug it myself a little and asked some of my LLMs about this issue, but didn't have much luck. Hopefully someone can help me resolve this, as I'm really excited about the possibility of using local LLMs on Bolt.new. Thank you. :-)
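
For what it's worth, a likely mechanism (an assumption based on upstream bolt.new's artifact format, which I haven't verified in this fork): the workbench only populates when the model's reply contains the <boltArtifact>/<boltAction> tags that the streaming parser turns into files and shell commands, so plain fenced code in the chat never reaches the Code or Preview panes. A minimal sketch of that distinction:

```ts
// Minimal sketch, assuming this fork keeps upstream bolt.new's artifact format:
// the workbench parser reacts to <boltArtifact>/<boltAction> tags in the stream,
// not to ordinary markdown code fences.
const ARTIFACT_TAG = /<boltArtifact\b[^>]*>/;
const FILE_ACTION_TAG = /<boltAction\s+type="file"[^>]*filePath="[^"]+"/;

export function replyReachesWorkbench(reply: string): boolean {
  // True only when the model emitted the tags the Code/Preview panes consume.
  return ARTIFACT_TAG.test(reply) && FILE_ACTION_TAG.test(reply);
}

// A plain markdown answer (the behaviour described above) returns false,
// so the code stays in the chat and the virtual IDE remains empty.
```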

Link to the Bolt URL that caused the error

N/A

Steps to reproduce

I simply followed the README step by step.

Expected behavior

To reiterate what I said above: the Code and Preview windows should automatically populate with the LLM-generated code, but they don't. I have tested with multiple LLMs: llama3.2:latest, codellama:13b, granite-code:20b, codellama:34b, deepseek-coder-v2:16b, and codellama:70b.

Here is an example of what happens when I try a basic example prompt (here I'm using codellama:13b):

[screenshot]

Screen Recording / Screenshot

No response

Platform

Additional context

No response

chip902 commented 4 days ago

Any errors in the console, client or server? I have the same issue, but from the looks of it I'm having header problems...

[screenshot: Microsoft Remote Desktop - ARK - 2024-10-23 at 16:55:50]

I've used this for my vite config:

```ts
import { cloudflareDevProxyVitePlugin as remixCloudflareDevProxy, vitePlugin as remixVitePlugin } from '@remix-run/dev';
import UnoCSS from 'unocss/vite';
import { defineConfig, type ViteDevServer } from 'vite';
import { nodePolyfills } from 'vite-plugin-node-polyfills';
import { optimizeCssModules } from 'vite-plugin-optimize-css-modules';
import tsconfigPaths from 'vite-tsconfig-paths';

export default defineConfig((config) => {
  return {
    ...config,
    build: {
      target: 'esnext',
    },
    server: {
      headers: {
        'Access-Control-Allow-Origin': '*', 
        'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, OPTIONS',
        'Access-Control-Allow-Headers': '*' 
      }
    },
    plugins: [
      {
        name: 'add-cors',

        configureServer(server) {
          server.middlewares.use((_req, res, next) => {
            res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
            res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
            res.setHeader('Cross-Origin-Resource-Policy', 'cross-origin');
            res.setHeader('Cross-Origin-Isolated-Policy', 'self');
            next();
          });
        },
      },
      nodePolyfills({
        include: ['path', 'buffer'],
      }),
      config.mode !== 'test' && remixCloudflareDevProxy(),
      remixVitePlugin({
        future: {
          v3_fetcherPersist: true,
          v3_relativeSplatPath: true,
          v3_throwAbortReason: true,
        },
      }),
      UnoCSS(),
      tsconfigPaths(),
      chrome129IssuePlugin(),
      config.mode === 'production' && optimizeCssModules({ apply: 'build' }),
    ],
  };
});

function chrome129IssuePlugin() {
  return {
    name: 'chrome129IssuePlugin',
    configureServer(server: ViteDevServer) {
      server.middlewares.use((req, res, next) => {
        const raw = req.headers['user-agent']?.match(/Chrom(e|ium)\/([0-9]+)\./);

        if (raw) {
          const version = parseInt(raw[2], 10);

          if (version === 129) {
            res.setHeader('content-type', 'text/html');
            res.end(
              '<body><h1>Please use Chrome Canary for testing.</h1><p>Chrome 129 has an issue with JavaScript modules & Vite local development, see <a href="https://github.com/stackblitz/bolt.new/issues/86#issuecomment-2395519258">for more information.</a></p><p><b>Note:</b> This only impacts <u>local development</u>. `pnpm run build` and `pnpm run start` will work fine in this browser.</p></body>',
            );

            return;
          }
        }

        next();
      });
    },
  };
}
```

nowjon commented 4 days ago

I have this same issue: I'm unable to get any of the Ollama models to build within the webcontainer, even after watching the YouTube video around the 12-minute mark where Cole points out to stop the model, open the terminal/code window, and then re-send the prompt.

drabspirit commented 4 days ago

Having the same issue...

Currently I have Gemma 2b, CodeLlama 7b, and Llama 3.1 installed, which I had to do manually over the command line.

My Ollama logs: app.log, server.log

Currently trying to find the logs for my local bolt.new instance.

WavezeroLLC commented 4 days ago

Good to know I'm not the only one who's encountered this issue; at least we can all suffer together, haha. I'll continue to poke around more tomorrow and update as I find possible solutions.

Also, @chip902, I'm not having any issues in my terminal that I can tell, but I'll monitor to see if any arise. Not sure why that's happening to you.

nowjon commented 3 days ago

After the changes implemented today (thanks, everyone, for your contributions), I can now get Ollama (deepseek-coder-v2:16b) to generate files within the code window! It still didn't run the code, but it appears this is making progress! [screenshot]

lrush85 commented 3 days ago

@nowjon did you have to adjust anything in the config? I am still unable to get my Ollama to write the code in the web container.

nowjon commented 3 days ago

> @nowjon did you have to adjust anything in the config? I am still unable to get my Ollama to write the code in the web container.

No, but I did need to send a prompt, stop it, open the code workbench, and then re-send the prompt. See Cole's video on how to do this: https://youtu.be/p1YvKuRfEhg?si=sK4IDpFad0qZwTMN&t=718

lrush85 commented 3 days ago

That didn't seem to fix my issue. I'm using llama3.2:latest and it's generating the code files in the chat window, but not the actual files. Also, no console errors. :(

nowjon commented 3 days ago

> That didn't seem to fix my issue. I'm using llama3.2:latest and it's generating the code files in the chat window, but not the actual files. Also, no console errors. :(

I think the code-focused models are better for this, and I noticed that sometimes you need to let the prompt finish: with deepseek-coder-v2:16b the code shows up in the chat view first, but it sets up the files later, so it may be hit or miss.

[screenshot]

WavezeroLLC commented 3 days ago

I did the pause-and-re-enter-prompt trick after syncing yesterday's changes to my copy, and I got some basic codebase generation with DeepSeek 16b. [screenshot]

However, it seems to be stuck on being able to run terminal commands. Getting farther, but still not a good end-user experience yet. I would suggest following what @nowjon is saying; it's getting me closer. Going to see why I'm not able to get terminal commands working and will try a few different models.

WavezeroLLC commented 3 days ago

> That didn't seem to fix my issue. I'm using llama3.2:latest and it's generating the code files in the chat window, but not the actual files. Also, no console errors. :(

Try using a higher-parameter model like CodeLlama 13b and/or deepseek-coder-v2 16b; that may help. Also, pausing the reply before any code generation happens has been helping me so far.

WavezeroLLC commented 3 days ago

I only got it to work that one time. Now I'm unable to get anything working regarding code generation in the IDE: [screenshot]

Very inconsistent. I even restarted my local dev server.

lrush85 commented 3 days ago

Same; I have tried 3-4 different Ollama models and couldn't ever get anything to generate the actual files.

YAY-3M-TA3 commented 3 days ago

I've had similar issues. I also tried the "stop trick". I think the issue is the context size... maybe even 128k is too small. I've had consistent success with Google's Gemini 1.5 Pro and Flash; these have a context of 1 million tokens.

I've also tried Qwen 2 Coder. The main issue there, where I was building an app with React, is that it will sometimes forget to add import React from 'react';
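
If context size is the culprit, one thing worth checking is Ollama's default context window, which is only around 2k tokens unless overridden and will silently truncate a long system prompt. A hedged sketch of raising it per request with the Ollama chat API's options.num_ctx (the endpoint and option come from Ollama's API docs; the model name and the 32768 value are just example choices, and how this fork wires it in is an assumption):

```ts
// Hedged sketch: raise Ollama's context window per request via options.num_ctx.
// The /api/chat endpoint and the num_ctx option are from Ollama's API docs;
// the model name and the 32768 value are example choices, not project defaults.
async function chatWithLargerContext(prompt: string): Promise<string> {
  const response = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'deepseek-coder-v2:16b',
      messages: [{ role: 'user', content: prompt }],
      stream: false,
      // Ollama's default context (~2k tokens) can truncate a long system prompt.
      options: { num_ctx: 32768 },
    }),
  });

  const data = await response.json();
  return data.message.content;
}
```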

MisterTricky commented 17 hours ago

I made some comments on #87, which turns out to be a duplicate of this in essence.

What I have found is that you can force the LLM to use the webcontainer by telling it specifically in your prompt to use the WebContainer. The output can be jumbled, and it does not always work.

[screenshot: Bolt-webcontainer-issues-ollama]

As @YAY-3M-TA3 mentions in the comment above, it is definitely related to the context window of the LLM, but reasoning capability also plays a role. Maybe a distilled version of the system prompt for smaller models is the answer?
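
Along those lines, here is a minimal sketch of the "tell it to use the WebContainer" workaround as a prompt prefix; the helper name and the wording are illustrative assumptions, not code from this repo:

```ts
// Illustrative only: a prompt prefix for the "tell it to use the WebContainer"
// workaround described above. The helper name and wording are assumptions,
// not code from this repo.
const WEBCONTAINER_HINT =
  'Build this inside the WebContainer: create every file with file actions ' +
  'and start the app with a shell action. Do not reply with plain markdown ' +
  'code blocks.';

export function withWebcontainerHint(userPrompt: string): string {
  return `${WEBCONTAINER_HINT}\n\n${userPrompt}`;
}

// Usage: send withWebcontainerHint('Build a simple todo app') as the chat message.
```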