lambdaclass / cairo-vm

cairo-vm is a Rust implementation of the Cairo VM. Cairo (CPU Algebraic Intermediate Representation) is a programming language for writing provable programs, where one party can prove to another that a certain computation was executed correctly without the verifying party having to re-execute the program.
https://lambdaclass.github.io/cairo-vm
Apache License 2.0

Feature request: Enable Loading Arguments from File Path to Overcome Shell Argument Length Limitations #1728

Closed. Okm165 closed this issue 5 months ago.

Okm165 commented 5 months ago

Problem Description: Currently, the cairo1-run tool can only receive arguments for the Cairo program's entry point via shell arguments. This imposes a limit on how much data can be passed, because the shell restricts the total length of a command's argument list. The limitation became evident when attempting to supply a zkproof serialized to Vec<Felt252>: the invocation failed with a shell error because the argument list was too long.
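For context, the error the shell reports originates in the kernel's execve(2) limit (errno E2BIG), which caps the combined size of argv plus the environment at ARG_MAX bytes. A minimal probe of that bound, assuming the libc crate (this snippet is illustrative and not part of cairo1-run):

```rust
fn main() {
    // ARG_MAX is the kernel's cap on the combined length of argv and
    // the environment that execve() accepts; exceeding it yields E2BIG,
    // which the shell surfaces as "argument list too long".
    let arg_max = unsafe { libc::sysconf(libc::_SC_ARG_MAX) };
    println!("ARG_MAX on this system: {arg_max} bytes");
}
```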

Proposed Solution: To overcome this limitation, enable loading arguments directly from a file path. Reading the file from within the process bypasses the shell's restriction on argument length, leaving the i16 offset size as the only remaining limit.
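A minimal sketch of how the CLI side might look, assuming clap with the derive feature; the `--args-file` flag name and the wiring are assumptions for illustration, not the tool's actual interface. The file's contents would feed the same string-to-Vec<Felt252> parsing path that inline arguments use today:

```rust
use std::path::PathBuf;

use clap::Parser;

#[derive(Parser, Debug)]
struct Cli {
    /// Inline arguments, as today (subject to shell length limits).
    #[arg(long, conflicts_with = "args_file")]
    args: Option<String>,
    /// Hypothetical: path to a file holding the same
    /// whitespace-separated felts.
    #[arg(long)]
    args_file: Option<PathBuf>,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cli = Cli::parse();
    // Resolve the raw argument string from either source.
    let raw = match (cli.args, cli.args_file) {
        (Some(inline), None) => inline,
        (None, Some(path)) => std::fs::read_to_string(path)?,
        _ => String::new(),
    };
    // The existing parsing of `raw` into Vec<Felt252> would consume
    // the string unchanged from here on.
    println!("loaded {} bytes of program arguments", raw.len());
    Ok(())
}
```

Because the process reads the file itself, execve's ARG_MAX never applies to its contents.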

Additional Context: This feature request is especially relevant given forthcoming L3 and zkML use cases, which are expected to involve large inputs. Addressing the argument-passing limitation now would prepare the tool for those scenarios.

Okm165 commented 5 months ago

My current understanding is that the i16 offset limits the number of data inputs to the cairo entrypoint to 32768 felts, is that right? Is there any way to increase its capacity? The memory segments for arrays won't help afaik, because they still increase the i16 offset by the number of elements inside the array.
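For reference, the 32768 figure follows directly from the signed 16-bit offset range; a quick sanity check:

```rust
fn main() {
    // An i16 spans -32768..=32767, so an offset can reach at most
    // 2^15 = 32768 cells in one direction from its base register.
    assert_eq!(i16::MAX as i32, 32_767);
    assert_eq!(i16::MIN as i32, -32_768);
    assert_eq!(1_i32 << 15, 32_768);
    println!("one-directional reach: 32768 cells");
}
```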

pefontana commented 5 months ago

Hi @Okm165! Thanks for the issue. We will work on a way to pass inputs with a file. Do you need more than 32768 inputs?

Okm165 commented 5 months ago

@pefontana not now, but I think it will be a bottleneck sooner or later.