vyperlang / titanoboa

a vyper interpreter
https://titanoboa.readthedocs.io

feat: `VVMDeployer` to deploy older vyper contract #271

Closed. AlbertoCentonze closed this 1 month ago.

AlbertoCentonze commented 1 month ago

What I did

Added the capability to compile and deploy code written for older Vyper versions, using the Vyper Version Manager (vvm) to fetch the required compiler version.
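A minimal sketch of the core idea: detect the version pragma in the source, then hand the source to vvm for compilation with the matching compiler. The helper name `detect_vyper_version` is an assumption for illustration, not the actual titanoboa API.

```python
import re

# Hypothetical helper (not the real titanoboa API): pull the version
# out of a `# pragma version X.Y.Z` line, if one exists.
PRAGMA_RE = re.compile(r"\s*#\s*pragma\s+version\s+(\d+\.\d+\.\d+)")

def detect_vyper_version(source):
    """Return the pragma version string, or None if no pragma is found."""
    for line in source.splitlines():
        m = PRAGMA_RE.match(line)
        if m:
            return m.group(1)
    return None

source = "# pragma version 0.3.10\n@external\ndef foo() -> uint256:\n    return 1\n"
print(detect_vyper_version(source))  # 0.3.10

# With the version in hand, vvm fetches and runs the matching compiler
# (network access is needed the first time a version is installed):
#   import vvm
#   vvm.install_vyper("0.3.10")
#   out = vvm.compile_source(source, vyper_version="0.3.10")
```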

How I did it

How to verify it

Unit tests are available

Description for the changelog

feat: VVMDeployer to deploy legacy vyper contracts

Cute Animal Picture

image

socket-security[bot] commented 1 month ago

New and removed dependencies detected.

| Package | New capabilities | Transitives | Size | Publisher |
| --- | --- | --- | --- | --- |
| pypi/vvm@0.2.1 | environment, eval, filesystem, network, shell | 0 | 36.4 kB | charles-cooper, iamdefinitelyahuman |


charles-cooper commented 1 month ago

with straight regex it could be much faster --

In [1]: import re

In [2]: s = "# pragma version 0.3.10"  # example source line (assumed for this session)

In [3]: r = re.compile(r"\s*#\s*pragma\s+version\s+\d+\.\d+\.\d+")

In [4]: r.match(s)
Out[4]: <re.Match object; span=(0, 23), match='# pragma version 0.3.10'>

In [5]: %timeit r.match(s)
341 ns ± 24.5 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
DanielSchiavini commented 1 month ago

> with straight regex it could be much faster --

@charles-cooper Isn't the pragma guaranteed to be in the beginning of the file? As soon as we encounter anything but comments the rest of the file can be ignored.
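The early-exit idea above could be sketched like this. The function name and the stop-at-first-code-line rule are assumptions of this sketch, not code from the PR:

```python
import re

PRAGMA_RE = re.compile(r"#\s*pragma\s+version\s+(\d+\.\d+\.\d+)")

def find_pragma_early_exit(source):
    # Scan from the top of the file; bail out at the first line that is
    # neither blank nor a comment, on the assumption that a pragma past
    # that point would not be honored anyway.
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        if not stripped.startswith("#"):
            break
        m = PRAGMA_RE.match(stripped)
        if m:
            return m.group(1)
    return None
```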

charles-cooper commented 1 month ago

> > with straight regex it could be much faster --
>
> @charles-cooper Isn't the pragma guaranteed to be in the beginning of the file? As soon as we encounter anything but comments the rest of the file can be ignored.

no it's not -- the proper way is to use pre_parse. the regex method can, for example, produce a different result than the tokenization method, but i think that's ok: it's a fast heuristic for picking which compiler to use.

we could do something more failsafe, like searching for all the regex matches and, in the rare case that the regex returns more than one, falling back to the tokenization method. but i think that's overkill until somebody actually runs into the issue
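That fallback might look like the sketch below. The tokenization path is stubbed out here since pre_parse is internal to the vyper compiler; everything else is an assumption for illustration:

```python
import re

PRAGMA_RE = re.compile(r"^\s*#\s*pragma\s+version\s+(\d+\.\d+\.\d+)", re.MULTILINE)

def tokenization_fallback(source):
    # Placeholder for the exact, tokenization-based pre_parse path in the
    # vyper compiler; not reproduced here.
    raise NotImplementedError("defer to pre_parse")

def detect_version_with_fallback(source):
    matches = PRAGMA_RE.findall(source)
    if len(matches) == 1:
        return matches[0]                     # unambiguous: trust the fast regex
    if len(matches) > 1:
        return tokenization_fallback(source)  # rare ambiguous case: be exact
    return None                               # no pragma found
```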