dottxt-ai / outlines

Structured Text Generation
https://dottxt-ai.github.io/outlines/
Apache License 2.0

Outlines' cache is not reusable across vllm restarts #1130

Open Lap1n opened 2 months ago

Lap1n commented 2 months ago

Describe the issue as clearly as possible:

When using vllm with outlines on a VM, the diskcache functionality does not appear to work correctly: every time the server starts up, it fails to reuse the previously computed FSM cache.

One way to fix this issue is to serialize the cache key object as a string. The changes can be found in the PR that I submitted.
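The idea behind the fix can be sketched as follows: instead of keying the cache on objects whose identity changes between Python sessions, build a stable string from their contents and hash it. This is a minimal stdlib-only sketch, not outlines' actual API; the `Tokenizer` class and the two key helpers are hypothetical, and `type()` is used to simulate the "same" class being re-created in a fresh session.

```python
import hashlib

def make_session_class():
    # Simulate the "same" class being defined again in a new
    # Python session (e.g. after a server restart).
    return type("Tokenizer", (), {"vocab": ("a", "b")})

def unstable_key(tokenizer, pattern):
    # Keying on the class object itself: the key changes whenever
    # the class changes identity between sessions.
    return (type(tokenizer), pattern)

def stable_key(tokenizer, pattern):
    # Serialize the relevant state to a string and hash it:
    # identical content yields identical keys across sessions.
    payload = f"{type(tokenizer).__name__}:{tokenizer.vocab}:{pattern}"
    return hashlib.sha256(payload.encode()).hexdigest()

tok_a = make_session_class()()  # instance from the "first session"
tok_b = make_session_class()()  # instance from the "second session"

# Object-based keys differ between sessions: persistent cache miss.
assert unstable_key(tok_a, "[0-9]+") != unstable_key(tok_b, "[0-9]+")
# String-serialized keys match: the cached FSM can be reused.
assert stable_key(tok_a, "[0-9]+") == stable_key(tok_b, "[0-9]+")
```

Because diskcache persists keys to disk, only the second kind of key survives a restart with the same value.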

Steps/code to reproduce the bug:

- Start the vllm server
- Send a request
- FSM computation happens
- Stop and relaunch the server
- Send a request
- FSM computation happens again

Expected result:

- Start the vllm server
- Send a request
- FSM computation happens
- Stop and relaunch the server
- Send a request
- FSM computation does not happen, as the result is already in the cache

Error message:

No response

Outlines/Python version information:


Latest from main.

Context for the issue:

No response

brandonwillard commented 2 months ago

The source of this issue appears to be vllm's use of outlines.cache on functions that are ultimately used as class methods. Those functions include class and type instances in their signatures, which affects caching: equality does not necessarily hold for such objects after they are deserialized in a different Python session.

See #1145.
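The equality problem described above can be illustrated with a short stdlib-only sketch (the `Tokenizer` class is hypothetical, and `pickle` stands in for whatever serialization happens between sessions): a round-tripped instance is a new object, and without a content-based `__eq__` it compares unequal to the original, so any cache key containing it misses.

```python
import pickle

class Tokenizer:
    # No __eq__/__hash__ defined: equality falls back to object identity.
    def __init__(self, vocab):
        self.vocab = vocab

t1 = Tokenizer(("a", "b"))
t2 = pickle.loads(pickle.dumps(t1))  # stand-in for a new session's copy

# Same content, but a different object, so equality fails...
assert t1.vocab == t2.vocab
assert t1 != t2
# ...and a tuple cache key containing the instance misses the cache.
assert (t1, "[0-9]+") != (t2, "[0-9]+")
```

Defining content-based `__eq__`/`__hash__` on such objects, or serializing the key to a string as in the linked PR, restores cache hits across sessions.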