rustformers / llm

[Unmaintained, see README] An ecosystem of Rust libraries for working with large language models
https://docs.rs/llm/latest/llm/
Apache License 2.0
6.07k stars · 360 forks

Build and execute our own computation graph #137

Open philpax opened 1 year ago

philpax commented 1 year ago

At present, we are using GGML's computation graph. This works well, but it has a few flaws:

1) We're reliant on whatever support GGML has for threading; the Rust threading ecosystem is more versatile and OS-agnostic.
2) Adding new operations requires patching GGML.
3) We're coupled quite tightly to GGML, so switching to an alternate backend would be difficult; this will only get worse as we support more models.
4) Abstracting shared pieces of functionality gets a little finicky with the exposed API.

After reading https://github.com/ggerganov/llama.cpp/discussions/915, I had a flash of inspiration and realised we could address these problems by using our own computation graph.

The code would be fairly similar to what it is now - but instead of building up a GGML computation graph, we build up our own in Rust code with all of the usual strong-typing guarantees.

To begin with, this computation graph would then be "compiled" to a GGML computation graph, so that it works identically.

Once that's done, we would look at reimplementing the actual execution of the graph in Rust, using GGML's low-level operations (e.g. its vec_dot_q4_0) to do so.

This would allow us to decouple from GGML in the future (#3), and gives us freedom to implement new operations that aren't supported by GGML without having to maintain our own patched version.

Ideally, we would just use burn or something similar directly, but none of the existing libraries are in a position to serve our needs (GGML-like performance with quantization support). This lets us side-step that issue for now, and focus on describing models that could be executed by anything once support is available.


Constructing our own computation graph and compiling it to GGML should be fairly simple (this could be done with petgraph or our own graph implementation, it's not that difficult).
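The "compile to GGML" step is essentially a topological sort: GGML's flat `cgraph` executes nodes in order, so our DAG just needs to be emitted in a dependency-respecting sequence. A hand-rolled sketch of Kahn's algorithm (petgraph provides this out of the box, but the logic is small enough to show directly; the `topo_order` name and `(node, depends_on)` edge encoding are illustrative):

```rust
// Order `num_nodes` graph nodes so every node appears after all of its
// dependencies. `deps` lists edges as (node, depends_on). Returns None if
// the graph contains a cycle (which a valid cgraph never should).
fn topo_order(num_nodes: usize, deps: &[(usize, usize)]) -> Option<Vec<usize>> {
    let mut indegree = vec![0usize; num_nodes];
    let mut dependents: Vec<Vec<usize>> = vec![Vec::new(); num_nodes];
    for &(node, dep) in deps {
        indegree[node] += 1;
        dependents[dep].push(node);
    }
    // Start from the nodes with no unmet dependencies (e.g. input tensors).
    let mut ready: Vec<usize> =
        (0..num_nodes).filter(|&n| indegree[n] == 0).collect();
    let mut order = Vec::with_capacity(num_nodes);
    while let Some(n) = ready.pop() {
        order.push(n);
        for &d in &dependents[n] {
            indegree[d] -= 1;
            if indegree[d] == 0 {
                ready.push(d);
            }
        }
    }
    // A cycle leaves some nodes with unmet dependencies, never scheduled.
    (order.len() == num_nodes).then(|| order)
}
```

Emitting the GGML graph would then be a single pass over the returned order, translating each of our ops into the corresponding `ggml_*` call.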

The main problem comes in the executor reimplementation - a lot of GGML's more complex operations are coupled to the executor, so we'd have to reimplement them (e.g. all the ggml_compute_forward_... functions). Additionally, a lot of the base operations are static void and not exposed to the outside world, so it's likely we'd have to patch GGML anyway.

An alternate approach to full graph reimplementation might be to add support for custom elementwise operations once (as @KerfuffleV2 has done in their fork), so that we can polyfill custom operations from our computation graph.
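The elementwise-map polyfill amounts to two tiny kernels that take a user closure, roughly like the following sketch (names are illustrative; the real GGML-side hooks in that fork operate on tensor data via C function pointers rather than Rust closures):

```rust
// Apply a user-supplied unary function elementwise: dst[i] = f(src[i]).
// Any custom activation GGML lacks can be expressed this way.
fn map_unary(src: &[f32], dst: &mut [f32], f: impl Fn(f32) -> f32) {
    assert_eq!(src.len(), dst.len());
    for (d, &s) in dst.iter_mut().zip(src) {
        *d = f(s);
    }
}

// Binary variant: dst[i] = f(a[i], b[i]).
fn map_binary(a: &[f32], b: &[f32], dst: &mut [f32], f: impl Fn(f32, f32) -> f32) {
    assert_eq!(a.len(), b.len());
    assert_eq!(a.len(), dst.len());
    for ((d, &x), &y) in dst.iter_mut().zip(a).zip(b) {
        *d = f(x, y);
    }
}
```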

KerfuffleV2 commented 1 year ago

I think this is a great idea. Also, it's probably even more of a reason to decouple llama-rs from the GGML crates, and I would think what you're talking about should also be its own crate. (Using "crate" pretty much interchangeably with "repo" here.)

You'd also be able to do something like I mentioned in #130.

> This would allow us to decouple from GGML in the future (https://github.com/rustformers/llama-rs/issues/3), and gives us freedom to implement new operations that aren't supported by GGML without having to maintain our own patched version.

It looks like my mapping operations stuff is likely to get merged (https://github.com/ggerganov/llama.cpp/pull/874), so at least for operations that work with unary/binary mapping it won't be necessary to do that. Maybe the only other things missing would be fold or 3D operations (not sure what would even need the latter). You could emulate a fold (albeit inefficiently) using map plus something like statics.
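The fold-via-map trick described above can be sketched like so: the "map" kernel ignores its output and accumulates into shared state as a side effect. A `Mutex` stands in here for the `static` mentioned (safe Rust discourages mutable statics); this is a hypothetical illustration of the technique, not real GGML plumbing:

```rust
use std::sync::Mutex;

// Emulate a sum-fold using only an elementwise map plus shared mutable
// state. Inefficient (one lock per element) but demonstrates the idea.
fn fold_via_map(src: &[f32]) -> f32 {
    let acc = Mutex::new(0.0f32);
    let mut scratch = vec![0.0f32; src.len()];
    for (d, &s) in scratch.iter_mut().zip(src) {
        *acc.lock().unwrap() += s; // the fold, smuggled in as a side effect
        *d = s; // identity output, to satisfy the map's shape contract
    }
    let total = *acc.lock().unwrap();
    total
}
```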

KerfuffleV2 commented 1 year ago

I found this crate which looks pretty interesting: https://crates.io/crates/dagga

It's for scheduling directed acyclic graphs (like GGML's graph, and I assume other ML type graphs would be similar). You can do stuff like give the nodes semantics reflecting uses of resources, borrowing, dependencies, etc.

If nothing else, it might be useful for stealing ideas.

9876691 commented 1 year ago

Is using ONNX Runtime an option here?

There's a Rust binding here: https://github.com/microsoft/onnxruntime/tree/main/rust

The compute graph is basically formed from a protobuf definition, so using a Rust protoc compiler you would get a bunch of Rust structs auto-generated. Then at runtime you assemble the structs into the compute graph and pass it to the runtime.
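For illustration, assembling an ONNX graph from structs at runtime looks roughly like this. The real generated types are `NodeProto` / `GraphProto` from onnx.proto (via prost or similar); the `*Sketch` structs below are hand-written stand-ins mirroring a few of their fields:

```rust
// Hypothetical stand-ins for protoc-generated ONNX message structs.
#[derive(Debug, Clone)]
struct NodeProtoSketch {
    op_type: String,      // e.g. "Add", "MatMul"
    input: Vec<String>,   // names of input tensors
    output: Vec<String>,  // names of output tensors
}

#[derive(Debug, Default)]
struct GraphProtoSketch {
    node: Vec<NodeProtoSketch>,
}

// Build a one-node graph computing sum = x + y.
fn build_add_graph() -> GraphProtoSketch {
    let mut g = GraphProtoSketch::default();
    g.node.push(NodeProtoSketch {
        op_type: "Add".into(),
        input: vec!["x".into(), "y".into()],
        output: vec!["sum".into()],
    });
    g
}
```

The assembled graph would then be serialized to protobuf bytes and handed to the runtime's session API.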

As far as I can see onnx runtime supports

It would perhaps be possible in the future to swap in the Wonnx rust version https://github.com/webonnx/wonnx

philpax commented 1 year ago

We're already in talks with wonnx to see if we can use them as a computation backend: https://github.com/webonnx/wonnx/issues/169

As for using onnxruntime directly... I don't know. Maybe, but we'd like to avoid having to synthesize an entire ONNX graph at runtime, especially as ONNX is quite an intricate format and has lots of details we don't care about.

9876691 commented 1 year ago

For reference there's some ongoing work in ggml for graph support https://github.com/ggerganov/ggml/pull/108

> These are initial steps towards GPU support via computation graph export. Still figuring out the basics needed. Playing with the mnist example