-
### Feature Description
Compatibility with
https://github.com/mlc-ai/web-llm
### Use Case
Running an LLM in the browser, with no need for a server
### Additional context
I've been using `ai` with `mlc-llm…
-
### Issue with current documentation:
When using the quickstart code for local development:
```js
import { RAGApplicationBuilder } from '@llm-tools/embedjs';
import { OllamaEmbeddings } from '@llm-t…
```
-
### 🐛 Describe the bug
My current code:
```js
import { RAGApplicationBuilder, LocalPathLoader } from '@llm-tools/embedjs';
import { OpenAiEmbeddings } from '@llm-tools/embedjs-openai';
import { …
```
-
# Architecture
This document outlines the architecture of the AI Nutrition-Pro application, including system context, containers, and deployment views. The architecture is depicted using C4 diagram…
-
# Description
Today, I use web scraping and LLMs to extract data from URLs. If, for some reason, I encounter an error with the LLM, my plan is to retry processing during the next scheduled ru…
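A minimal sketch of that retry idea, assuming a generic async extraction call (the helper name `withRetry` and the backoff parameters are illustrative, not part of the original report): failed attempts are retried with exponential backoff within a run, and the final error is re-thrown so the scheduler's next run can pick the URL up again.

```js
// Hypothetical retry helper: runs `fn` up to `maxAttempts` times,
// backing off exponentially between attempts, and re-throws the last
// error so the outer scheduler can retry on its next run.
async function withRetry(fn, maxAttempts = 3, baseDelayMs = 100) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ... before retrying.
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1))
      );
    }
  }
  throw lastError; // surfaces to the scheduler for the next scheduled run
}
```

Re-throwing (rather than swallowing) the last error keeps the URL eligible for the next scheduled run, which matches the behavior described above.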
-
### ⚠️ Please check that this feature request hasn't been suggested before.
- [X] I searched previous [Ideas in Discussions](https://github.com/homanp/superagent/discussions/categories/ideas) and didn't …
-
Hi,
I'm trying to compile a Llama-3.2 model. I have followed the setup instructions, but before I can get to running the `mlc_llm compile` command, I am running `./web/prep_emcc_deps.sh`, which fails with …