coleam00 / bolt.new-any-llm

Prompt, run, edit, and deploy full-stack web applications using any LLM you want!
https://bolt.new
MIT License
4.3k stars · 1.76k forks

APICallError [AI_APICallError]: x-api-key header is required #156

Open bitsentinel-cell opened 3 weeks ago

bitsentinel-cell commented 3 weeks ago

Describe the bug

bolt.new-any-llm> pnpm run dev

> bolt@ dev I:\Ai_Lab2\bolt.new-any-llm
> remix vite:dev

  ➜  Local:   http://localhost:5173/
  ➜  Network: use --host to expose
  ➜  press h + enter to show help

APICallError [AI_APICallError]: x-api-key header is required
    at file:///I:/Ai_Lab2/bolt.new-any-llm/node_modules/.pnpm/@ai-sdk+provider-utils@1.0.9_zod@3.23.8/node_modules/@ai-sdk/provider-utils/dist/index.mjs:405:14
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async postToApi (file:///I:/Ai_Lab2/bolt.new-any-llm/node_modules/.pnpm/@ai-sdk+provider-utils@1.0.9_zod@3.23.8/node_modules/@ai-sdk/provider-utils/dist/index.mjs:310:28)
    at async AnthropicMessagesLanguageModel.doStream (file:///I:/Ai_Lab2/bolt.new-any-llm/node_modules/.pnpm/@ai-sdk+anthropic@0.0.39_zod@3.23.8/node_modules/@ai-sdk/anthropic/dist/index.mjs:357:50)
    at async fn (file:///I:/Ai_Lab2/bolt.new-any-llm/node_modules/.pnpm/ai@3.4.9_react@18.3.1_sswr@2.1.0_svelte@4.2.18svelte@4.2.18_vue@3.4.30_typescript@5.5.2zod@3.23.8/node_modules/ai/dist/index.mjs:3938:23)
    at async file:///I:/Ai_Lab2/bolt.new-any-llm/node_modules/.pnpm/ai@3.4.9_react@18.3.1_sswr@2.1.0_svelte@4.2.18svelte@4.2.18_vue@3.4.30_typescript@5.5.2zod@3.23.8/node_modules/ai/dist/index.mjs:256:22
    at async _retryWithExponentialBackoff (file:///I:/Ai_Lab2/bolt.new-any-llm/node_modules/.pnpm/ai@3.4.9_react@18.3.1_sswr@2.1.0_svelte@4.2.18svelte@4.2.18_vue@3.4.30_typescript@5.5.2zod@3.23.8/node_modules/ai/dist/index.mjs:86:12)
    at async startStep (file:///I:/Ai_Lab2/bolt.new-any-llm/node_modules/.pnpm/ai@3.4.9_react@18.3.1_sswr@2.1.0_svelte@4.2.18svelte@4.2.18_vue@3.4.30_typescript@5.5.2zod@3.23.8/node_modules/ai/dist/index.mjs:3903:13)
    at async fn (file:///I:/Ai_Lab2/bolt.new-any-llm/node_modules/.pnpm/ai@3.4.9_react@18.3.1_sswr@2.1.0_svelte@4.2.18svelte@4.2.18_vue@3.4.30_typescript@5.5.2zod@3.23.8/node_modules/ai/dist/index.mjs:3977:11)
    at async file:///I:/Ai_Lab2/bolt.new-any-llm/node_modules/.pnpm/ai@3.4.9_react@18.3.1_sswr@2.1.0_svelte@4.2.18svelte@4.2.18_vue@3.4.30_typescript@5.5.2zod@3.23.8/node_modules/ai/dist/index.mjs:256:22
    at async chatAction (I:/Ai_Lab2/bolt.new-any-llm/app/routes/api.chat.ts:48:20)
    at async Object.callRouteAction (I:\Ai_Lab2\bolt.new-any-llm\node_modules.pnpm\@remix-run+server-runtime@2.10.0_typescript@5.5.2\node_modules\@remix-run\server-runtime\dist\data.js:37:16)
    at async I:\Ai_Lab2\bolt.new-any-llm\node_modules.pnpm\@remix-run+router@1.17.0\node_modules\@remix-run\router\dist\router.cjs.js:4612:21
    at async callLoaderOrAction (I:\Ai_Lab2\bolt.new-any-llm\node_modules.pnpm\@remix-run+router@1.17.0\node_modules\@remix-run\router\dist\router.cjs.js:4677:16)
    at async Promise.all (index 1)
    at async callDataStrategyImpl (I:\Ai_Lab2\bolt.new-any-llm\node_modules.pnpm\@remix-run+router@1.17.0\node_modules\@remix-run\router\dist\router.cjs.js:4552:17)
    at async callDataStrategy (I:\Ai_Lab2\bolt.new-any-llm\node_modules.pnpm\@remix-run+router@1.17.0\node_modules\@remix-run\router\dist\router.cjs.js:4041:19)
    at async submit (I:\Ai_Lab2\bolt.new-any-llm\node_modules.pnpm\@remix-run+router@1.17.0\node_modules\@remix-run\router\dist\router.cjs.js:3900:21)
    at async queryImpl (I:\Ai_Lab2\bolt.new-any-llm\node_modules.pnpm\@remix-run+router@1.17.0\node_modules\@remix-run\router\dist\router.cjs.js:3858:22)
    at async Object.queryRoute (I:\Ai_Lab2\bolt.new-any-llm\node_modules.pnpm\@remix-run+router@1.17.0\node_modules\@remix-run\router\dist\router.cjs.js:3827:18)
    at async handleResourceRequest (I:\Ai_Lab2\bolt.new-any-llm\node_modules.pnpm\@remix-run+server-runtime@2.10.0_typescript@5.5.2\node_modules\@remix-run\server-runtime\dist\server.js:413:20)
    at async requestHandler (I:\Ai_Lab2\bolt.new-any-llm\node_modules.pnpm\@remix-run+server-runtime@2.10.0_typescript@5.5.2\node_modules\@remix-run\server-runtime\dist\server.js:156:18)
    at async I:\Ai_Lab2\bolt.new-any-llm\nodemodules.pnpm\@remix-run+dev@2.10.0@remix-run+react@2.10.2_react-dom@18.3.1_react@18.3.1__react@18.3.1_typ_qwyxqdhnwp3srgtibfrlais3ge\node_modules\@remix-run\dev\dist\vite\cloudflare-proxy-plugin.js:70:25 {
  cause: undefined,
  url: 'https://api.anthropic.com/v1/messages',
  requestBodyValues: {
    model: 'claude-3-5-sonnet-20240620',
    top_k: undefined,
    max_tokens: 8000,
    temperature: 0,
    top_p: undefined,
    stop_sequences: undefined,
    system: '[Bolt system prompt — WebContainer constraints, allowed HTML elements, diff spec, and artifact instructions — omitted]'... 5141 more characters,
    messages: [ [Object] ],
    tools: undefined,
    tool_choice: undefined,
    stream: true
  },
  statusCode: 401,
  responseHeaders: {
    'cf-cache-status': 'DYNAMIC',
    'cf-ray': '8dc398a37add6708-AMS',
    connection: 'keep-alive',
    'content-length': '97',
    'content-type': 'application/json',
    date: 'Sat, 02 Nov 2024 10:56:08 GMT',
    'request-id': 'req_01KtuuVwnMCAUj963pR8yVMC',
    server: 'cloudflare',
    via: '1.1 google',
    'x-robots-tag': 'none',
    'x-should-retry': 'false'
  },
  responseBody: '{"type":"error","error":{"type":"authentication_error","message":"x-api-key header is required"}}',
  isRetryable: false,
  data: { type: 'error', error: { type: 'authentication_error', message: 'x-api-key header is required' } }
}

        Link to the Bolt URL that caused the error

        https://github.com/coleam00/bolt.new-any-llm

        Steps to reproduce

        Run pnpm run dev, then choose Ollama as the provider, pick a model, and give it a prompt — and then this nonsense error comes up.

        Expected behavior

        I just can't understand why this is happening.

        Screen Recording / Screenshot

        No response

        Platform

        • OS: Windows 11
        • Browser: Chrome Canary
        • Version: [e.g. 91.1]

        Additional context

        No response

        khalidmaquilang commented 3 weeks ago

        Same here, I'm encountering this. I'm using Ollama on my Mac.

        lapiequichante commented 3 weeks ago

        Same here. In the request I specify an Ollama model, but the server still makes the request to Claude...

        ElG0hary commented 3 weeks ago

        If you're using Llama 3.2:

        go to app/utils/constants.ts and edit this:

        export const DEFAULT_MODEL = 'llama3.2:latest';
        export const DEFAULT_PROVIDER = 'Ollama';
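
        For context, the stack trace at the top of this issue shows the request going to api.anthropic.com with a Claude model, so presumably the chosen provider isn't being applied and the server falls back to these defaults. A minimal sketch of the edited lines with explanatory comments added (only the two assignments come from this thread):

            // app/utils/constants.ts — sketch of the two edited defaults
            export const DEFAULT_MODEL = 'llama3.2:latest'; // must match a model you have pulled locally (ollama pull llama3.2)
            export const DEFAULT_PROVIDER = 'Ollama';       // fall back to Ollama instead of the Anthropic/Claude default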

        khalidmaquilang commented 3 weeks ago

        If you're using Llama 3.2:

        go to app/utils/constants.ts and edit this:

        export const DEFAULT_MODEL = 'llama3.2:latest';
        export const DEFAULT_PROVIDER = 'Ollama';

        It did work, but now it returns a 500 error at this endpoint: http://localhost:5173/api/chat

        lapiequichante commented 3 weeks ago

        By doing so, it now tries to reach the URL 'http://localhost:11434/api/chat', which throws an error. That path doesn't exist on Ollama.

        khalidmaquilang commented 3 weeks ago

        By doing so, it now tries to reach the URL 'http://localhost:11434/api/chat', which throws an error. That path doesn't exist on Ollama.

        Yeah, I thought it would automatically use the Ollama endpoint. So what is the correct endpoint for Ollama?

        lapiequichante commented 3 weeks ago

        Damn, it should be /api/chat; I don't know why I get a connection refused while everything is fine on my localhost (with a plain fetch request).

        ElG0hary commented 3 weeks ago

        Just make sure you started Ollama before opening the app, because a 500 means a bad or failed connection.
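
        A quick way to confirm Ollama is actually up and that /api/chat responds, before digging further into Bolt — a minimal Node/TypeScript sketch (the base URL and model name are assumptions; use whatever you have pulled locally):

            // Sketch: probe a local Ollama server directly (not part of the Bolt codebase).
            const base = 'http://localhost:11434';

            // /api/tags lists installed models; a failure here means Ollama isn't reachable at all.
            const tags = await fetch(`${base}/api/tags`);
            console.log('tags:', tags.status, await tags.json());

            // /api/chat is the endpoint Bolt's Ollama provider calls, per the URL in the error above.
            const chat = await fetch(`${base}/api/chat`, {
              method: 'POST',
              headers: { 'Content-Type': 'application/json' },
              body: JSON.stringify({
                model: 'llama3.2:latest',
                messages: [{ role: 'user', content: 'ping' }],
                stream: false,
              }),
            });
            console.log('chat:', chat.status, await chat.json());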


        lapiequichante commented 3 weeks ago

        I think the issue is that I'm using Docker, and port 11434 isn't reachable from inside the container. But I can't put anything else there, because the same variable is also used by the FE to get tags (in that case the request for tags works, but the server can't reach Ollama). I guess I'll have to try running it without Docker.

        lapiequichante commented 3 weeks ago

        Made it work with Docker by setting the Ollama base URL to "http://host.docker.internal:11434" in app/lib/server/llm/api-keys.ts and to "http://localhost:11434" in app/utils/constants.ts. No idea what the cleanest way to solve this is.
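
        Concretely, that workaround splits the base URL between the server (inside the container) and the browser (on the host) — a sketch only; the variable names are assumptions, not taken from the repo:

            // app/lib/server/llm/api-keys.ts — runs inside the Docker container (sketch)
            // host.docker.internal lets the containerized server reach Ollama on the host machine.
            export const OLLAMA_API_BASE_URL = 'http://host.docker.internal:11434';

            // app/utils/constants.ts — also used by the browser to fetch the model/tag list (sketch)
            // The browser runs on the host, so it reaches Ollama via localhost.
            export const OLLAMA_BASE_URL = 'http://localhost:11434';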

        khalidmaquilang commented 3 weeks ago

        Made it work with Docker by setting the Ollama base URL to "http://host.docker.internal:11434" in app/lib/server/llm/api-keys.ts and to "http://localhost:11434" in app/utils/constants.ts. No idea what the cleanest way to solve this is.

        This means we use localhost for getting tags and host.docker.internal for the chat API.

        khalidmaquilang commented 3 weeks ago

        So the next problem is that Ollama doesn't use the editor, since the prompt is written for Claude.

        ElG0hary commented 3 weeks ago

        Same problem, Ollama can't use the editor.


        bitsentinel-cell commented 3 weeks ago

        export const DEFAULT_MODEL = 'llama3.2:latest';
        export const DEFAULT_PROVIDER = 'Ollama';

        It simply won't work. I did everything by the repo instructions many times.

        ElG0hary commented 3 weeks ago

        Using Docker?


        khalidmaquilang commented 3 weeks ago

        export const DEFAULT_MODEL = 'llama3.2:latest';
        export const DEFAULT_PROVIDER = 'Ollama';

        It simply won't work. I did everything by the repo instructions many times.

        This is working for me. What error are you receiving?

        bitsentinel-cell commented 3 weeks ago

        I was using Chrome Canary and that was the problem. I switched to regular Chrome and now it's working, but the editor does not work.

        bitsentinel-cell commented 3 weeks ago

        export const DEFAULT_MODEL = 'llama3.2:latest';
        export const DEFAULT_PROVIDER = 'Ollama';

        It simply won't work. I did everything by the repo instructions many times.

        This is working for me. What error are you receiving?

        It was because of Chrome Canary; on regular Chrome it's now OK, but the editor somehow won't work now.

        tazomatalax commented 3 weeks ago

        I am getting errors with Ollama running in Docker too. It has worked before though; not sure what's changed. I have set http://host.docker.internal:11434 in constants.ts and .env and it's still not working.

        Can confirm all available models are working correctly in Open WebUI.

        2024-11-04 23:39:04 bolt-ai-dev-1  |         at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1607:16)
        2024-11-04 23:39:04 bolt-ai-dev-1  |         at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
        2024-11-04 23:39:04 bolt-ai-dev-1  |       errno: -111,
        2024-11-04 23:39:04 bolt-ai-dev-1  |       code: 'ECONNREFUSED',
        2024-11-04 23:39:04 bolt-ai-dev-1  |       syscall: 'connect',
        2024-11-04 23:39:04 bolt-ai-dev-1  |       address: '127.0.0.1',
        2024-11-04 23:39:04 bolt-ai-dev-1  |       port: 11434
        2024-11-04 23:39:04 bolt-ai-dev-1  |     },
        2024-11-04 23:39:04 bolt-ai-dev-1  |     url: 'http://localhost:11434/api/chat',
        2024-11-04 23:39:04 bolt-ai-dev-1  |     requestBodyValues: {
        2024-11-04 23:39:04 bolt-ai-dev-1  |       format: undefined,
        2024-11-04 23:39:04 bolt-ai-dev-1  |       model: 'llama3.1:latest',
        2024-11-04 23:39:04 bolt-ai-dev-1  |       options: [Object],
        2024-11-04 23:39:04 bolt-ai-dev-1  |       messages: [Array],
        2024-11-04 23:39:04 bolt-ai-dev-1  |       tools: undefined
        2024-11-04 23:39:04 bolt-ai-dev-1  |     },
        2024-11-04 23:39:04 bolt-ai-dev-1  |     statusCode: undefined,
        2024-11-04 23:39:04 bolt-ai-dev-1  |     responseHeaders: undefined,
        2024-11-04 23:39:04 bolt-ai-dev-1  |     responseBody: undefined,
        2024-11-04 23:39:04 bolt-ai-dev-1  |     isRetryable: true,
        2024-11-04 23:39:04 bolt-ai-dev-1  |     data: undefined,
        2024-11-04 23:39:04 bolt-ai-dev-1  |     [Symbol(vercel.ai.error)]: true,
        2024-11-04 23:39:04 bolt-ai-dev-1  |     [Symbol(vercel.ai.error.AI_APICallError)]: true
        2024-11-04 23:39:04 bolt-ai-dev-1  |   },
        2024-11-04 23:39:04 bolt-ai-dev-1  |   [Symbol(vercel.ai.error)]: true,
        2024-11-04 23:39:04 bolt-ai-dev-1  |   [Symbol(vercel.ai.error.AI_RetryError)]: true
        2024-11-04 23:39:04 bolt-ai-dev-1  | }
        khalidmaquilang commented 3 weeks ago

        I am getting errors with Ollama running in Docker too. It has worked before though; not sure what's changed. I have set http://host.docker.internal:11434 in constants.ts and .env and it's still not working.

        Can confirm all available models are working correctly in Open WebUI.


        Follow this:

        If you're using Llama 3.2:

        go to app/utils/constants.ts and edit this:

        export const DEFAULT_MODEL = 'llama3.2:latest';
        export const DEFAULT_PROVIDER = 'Ollama';

        then this:

        Made it work with Docker by setting the Ollama base URL to "http://host.docker.internal:11434" in app/lib/server/llm/api-keys.ts and to "http://localhost:11434" in app/utils/constants.ts. No idea what the cleanest way to solve this is.

        tazomatalax commented 2 weeks ago

        Ah, nice one! Got it going; I had missed setting the Ollama base URL to "http://host.docker.internal:11434/" in app/lib/server/llm/api-keys.ts. Thanks for the assistance.

        dvicuna98 commented 2 weeks ago

        This only works on Linux: if you are running the project with Docker Compose and have the Ollama service running locally, I resolved this by editing the docker-compose file and adding network_mode: "host" (see the sketch below).
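
        For reference, in a docker-compose file that setting sits at the service level — a sketch only (the service name and build settings are assumptions, not taken from the repo):

            # docker-compose.yaml — sketch
            services:
              bolt-ai-dev:
                build: .
                # Linux only: the container shares the host's network stack,
                # so http://localhost:11434 inside the container reaches Ollama on the host.
                network_mode: "host"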
