Context
WASM would be a great technology for writing an LLM tool once and using it from both Python and TS.
The problem is that even a simple HTTP fetch is not standardized across WASM runtimes, yet it is the minimum requirement for any runtime hosting LLM tools, since tools usually make API calls.
Status
LlamaIndexTS can already use WASM tools; see https://github.com/run-llama/LlamaIndexTS/blob/main/packages/wasm-tools/README.md
Options
How can we do an HTTP fetch?
Extism looks like the more mature option (as of August 2024).
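Extism's answer to the fetch problem, as I understand it, is that the host grants a plugin outbound HTTP access through an allow-list in the plugin manifest (`allowed_hosts`), and the guest goes through Extism's HTTP host function rather than a raw socket. A minimal host-side sketch in Python, assuming the Extism Python SDK (`pip install extism`); the plugin file `wikipedia_tool.wasm` and its `search` export are hypothetical:

```python
# Host-side sketch: granting HTTP access to a WASM tool via an Extism
# manifest. `allowed_hosts` is Extism's allow-list for outbound HTTP
# from a plugin; everything not listed is denied.

def build_manifest(wasm_path: str) -> dict:
    """Build an Extism manifest permitting HTTP calls to Wikipedia only."""
    return {
        "wasm": [{"path": wasm_path}],
        # Without this entry, the plugin's HTTP requests are rejected,
        # so the host stays in control of what a tool may reach.
        "allowed_hosts": ["en.wikipedia.org"],
    }

def run_tool(wasm_path: str, query: str) -> str:
    """Load the plugin and call its (hypothetical) `search` export.

    Requires `pip install extism` and a compiled plugin; imported lazily
    so the sketch itself has no hard dependency.
    """
    import extism  # assumed: the Extism Python host SDK

    manifest = build_manifest(wasm_path)
    with extism.Plugin(manifest, wasi=True) as plugin:
        return plugin.call("search", query).decode()
```

The interesting design point is that the capability lives in the manifest, not in the tool: the same plugin binary can be run with different allow-lists by different hosts.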
Next step
Try Extism to implement a Wikipedia tool.
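The tool logic itself is independent of the fetch mechanism, so it can be sketched now: build a MediaWiki search request and extract page titles from the JSON response. This is a sketch of what the plugin would do, with the actual HTTP call left to the runtime's host function; the function names are my own, not from any SDK:

```python
import json
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def search_url(query: str, limit: int = 5) -> str:
    """Build a MediaWiki search request URL for the given query."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": query,
        "srlimit": limit,
        "format": "json",
    }
    return f"{API}?{urlencode(params)}"

def extract_titles(response_body: str) -> list[str]:
    """Pull page titles out of a MediaWiki search response."""
    data = json.loads(response_body)
    return [hit["title"] for hit in data["query"]["search"]]

# Inside a plugin, the fetch between these two steps would go through
# the runtime's HTTP host function (e.g. Extism's), not a direct socket.
```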