hdresearch / nolita

Work with web-enabled agents quickly — whether running a quick task or bootstrapping a full-stack product.
https://nolita.ai
MIT License

Ty/ollama local models #124

Closed AlephNotation closed 2 months ago

AlephNotation commented 3 months ago

adds ollama models

matildepark commented 2 months ago

If we can't get tiny models to .browse(), do we feel ok about them doing .get() and .do()? It would be nice to incorporate this, even with limitations.

matildepark commented 2 months ago

Current status (over phone):

matildepark commented 2 months ago

Been doing some experiments with examples/findEmail.ts and node-llama-cpp, and I needed to set the context and batch size in generateObjectLocal.ts:

  const context = new LlamaContext({ model, contextSize: 32168, batchSize: 32168 });

Otherwise, it would throw:

/Users/maru/git/hdr/nolita/node_modules/node-llama-cpp/llama/llama.cpp/src/llama.cpp:14466: GGML_ASSERT(n_tokens_all <= cparams.n_batch) failed
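For context, here is a fuller sketch of how that context setup fits together with node-llama-cpp's v2-style API. The model path and surrounding code are assumptions for illustration, not the actual contents of `generateObjectLocal.ts`:

```typescript
// Sketch only: loading a local GGUF model with node-llama-cpp (v2-style API).
// The modelPath below is a hypothetical example, not Nolita's configuration.
import { LlamaModel, LlamaContext } from "node-llama-cpp";

const model = new LlamaModel({
  modelPath: "models/capybarahermes-2.5-mistral-7b.Q4_K_M.gguf", // assumed path
});

// Without an explicit contextSize/batchSize, llama.cpp asserts when the
// prompt's token count exceeds n_batch:
//   GGML_ASSERT(n_tokens_all <= cparams.n_batch)
const context = new LlamaContext({
  model,
  contextSize: 32168,
  batchSize: 32168,
});
```

Setting `batchSize` equal to `contextSize` guarantees the whole prompt fits in a single batch, which is what sidesteps the assertion.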

The model I picked was capybarahermes-2.5-mistral-7b. Once I set contextSize and batchSize, it did get the emails off the page. If you try to get it to browse, though, it won't generate a command:

  issues: [
    {
      code: 'too_small',
      minimum: 1,
      type: 'array',
      inclusive: true,
      exact: false,
      message: 'Array must contain at least 1 element(s)',
      path: [ 'command' ]
    }
  ],
  addIssue: [Function (anonymous)],
  addIssues: [Function (anonymous)],
  errors: [
    {
      code: 'too_small',
      minimum: 1,
      type: 'array',
      inclusive: true,
      exact: false,
      message: 'Array must contain at least 1 element(s)',
      path: [ 'command' ]
    }
  ]
}
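That failure is Zod's `too_small` issue on the `command` field: the model returned an empty array where the schema requires at least one command. A self-contained mimic of the check (illustrative only; Nolita's real schema uses Zod, and these names are assumptions):

```typescript
// Minimal mimic of a Zod `z.array(...).min(1)` check producing the
// `too_small` issue shown above. Not Nolita's actual schema code.
type Issue = {
  code: string;
  minimum: number;
  message: string;
  path: string[];
};

function validateCommands(output: { command?: unknown[] }): Issue[] {
  const issues: Issue[] = [];
  // A model that emits no command at all trips this branch.
  if (!output.command || output.command.length < 1) {
    issues.push({
      code: "too_small",
      minimum: 1,
      message: "Array must contain at least 1 element(s)",
      path: ["command"],
    });
  }
  return issues;
}
```

So the 7B model can extract data (`.get()`) but fails schema validation when asked to plan a browse step, which is consistent with the earlier question about limiting tiny models to `.get()` and `.do()`.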
AlephNotation commented 2 months ago

@matildepark good to go?