lcompilers / lpython

`--jit` option to execute without creation of executable file #2564

Closed Vipul-Cariappa closed 5 months ago

Vipul-Cariappa commented 7 months ago

Using LLVM JIT.

Fixes #2563.

Example: with the --jit flag:

❯ time ./src/bin/lpython --jit ./examples/expr2.py
25
./src/bin/lpython --jit ./examples/expr2.py  0.01s user 0.00s system 97% cpu 0.013 total

Without the flag:

❯ time ./src/bin/lpython ./examples/expr2.py      
25
./src/bin/lpython ./examples/expr2.py  0.02s user 0.01s system 99% cpu 0.037 total
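For context, the flag compiles and runs the program entirely in memory instead of writing an executable to disk. The sketch below illustrates the general technique with llvmlite in Python; this is only an illustration of MCJIT-style in-process execution, not LPython's actual driver code (which uses the LLVM C++ API), and the IR string is made up.

# Minimal sketch of in-process JIT execution of LLVM IR using llvmlite.
import ctypes
import llvmlite.binding as llvm

llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()

ir = r"""
define i32 @main() {
  ret i32 25
}
"""

mod = llvm.parse_assembly(ir)
mod.verify()
target = llvm.Target.from_default_triple()
engine = llvm.create_mcjit_compiler(mod, target.create_target_machine())
engine.finalize_object()

# Fetch the JIT-compiled entry point and call it directly; no object
# file or executable is ever written to disk.
addr = engine.get_function_address("main")
main = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
print(main())  # -> 25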
Vipul-Cariappa commented 7 months ago

Both of the failing workflows are WASM-related. One of them says error: use of undeclared identifier 'execute_python_using_jit', even though it is declared and defined in the preceding lines.

Thirumalai-Shaktivel commented 7 months ago

Please mark this PR ready for review once it is ready.

Vipul-Cariappa commented 6 months ago

@Thirumalai-Shaktivel, regarding testing this feature: I was thinking of adding the following

run_test(
    f"{filename} JIT",
    "llvm",
    "lpython --jit {infile}",
    filename,
    update_reference)

after the llvm test in run_tests.py.

But I get

raise FileNotFoundError(
FileNotFoundError: The reference json file 'tests/reference/llvm-expr14-f21489f.json' for expr14.py JIT does not exist

when I run it.

I do not properly understand the methodology used to test existing features. Suggestions and guidance would be helpful here.
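For what it's worth, the error seems to come from how the reference runner works: it derives a short hash from the test description and expects a pre-generated JSON file under tests/reference with that name (e.g. llvm-expr14-f21489f.json). A rough Python sketch of the idea; the hashing details below are approximate, not LPython's exact code:

import hashlib
import os

def reference_path(testname: str, command: str) -> str:
    # The runner names reference files after a short hash of the test spec.
    h = hashlib.sha256(f"{testname} {command}".encode()).hexdigest()[:7]
    return f"tests/reference/llvm-{testname}-{h}.json"

path = reference_path("expr14", "lpython --jit {infile}")
if not os.path.exists(path):
    # A brand-new "JIT" test variant has no stored reference yet, hence
    # the FileNotFoundError until the reference output is generated
    # (via the update_reference mechanism).
    raise FileNotFoundError(
        f"The reference json file '{path}' for expr14.py JIT does not exist")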

Thirumalai-Shaktivel commented 6 months ago

I think we need to test this using integration_tests/run_tests.py, which is used for most of our runtime tests. Check out integration_tests/CMakeLists.txt. If you are blocked, do let me know.

Shaikh-Ubaid commented 6 months ago

We need an integration test for this. Please mark this as "Ready for review" when ready. Thanks for working on this! I appreciate it.

Vipul-Cariappa commented 6 months ago

We expect

exit_02_jit
exit_04_jit
exit_02b_jit
exit_02c_jit

expr_18_jit
if_03_jit

to fail (i.e. exit with a non-zero code).

What should I do here, given that the exit code is supposed to be non-zero?
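One possible way to express this in a runner, as a hedged sketch (the function and set names here are invented, not the project's actual harness):

import subprocess

# Tests that exercise error paths are expected to exit non-zero.
EXPECTED_TO_FAIL = {"exit_02", "exit_04", "exit_02b", "exit_02c",
                    "expr_18", "if_03"}

def jit_test_passes(name: str) -> bool:
    proc = subprocess.run(["lpython", "--jit", f"integration_tests/{name}.py"])
    if name in EXPECTED_TO_FAIL:
        return proc.returncode != 0  # failing to fail is itself a failure
    return proc.returncode == 0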


const_03_jit
expr_13_jit
test_c_interop_02_jit
test_c_interop_03_jit
test_c_interop_05_jit
bindc_03_jit
bindc_05_jit
bindc_06_jit
structs_07_jit
structs_13_jit
structs_18_jit
structs_19_jit
structs_20_jit
sizeof_01_jit
enum_05_jit

fail because they have an external dependency.

I believe we should skip these tests under JIT.
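A conceptual sketch of such a skip (the real mechanism would live in integration_tests/CMakeLists.txt; this Python filter is only an illustration):

# JIT variants of tests that link against external C code cannot run
# in-process yet, so the runner would leave them out.
EXTERNAL_DEP_TESTS = {
    "const_03", "expr_13", "test_c_interop_02", "test_c_interop_03",
    "test_c_interop_05", "bindc_03", "bindc_05", "bindc_06",
    "structs_07", "structs_13", "structs_18", "structs_19",
    "structs_20", "sizeof_01", "enum_05",
}

def jit_tests(all_tests):
    for name in all_tests:
        if name in EXTERNAL_DEP_TESTS:
            print(f"Skipping {name}_jit (external dependency)")
            continue
        yield name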


test_argv_01_jit and print_list_tuple_03_jit look like genuine errors; debugging is required.


EDITED:

I would also add that test_dict_nested1_jit fails in the GitHub workflow but passes on my local computer. I suspect a memory-related bug: returning an element allocated on the stack, or a use-after-free (this is a guess; I am not sure). print_list_tuple_03_jit may be a bug along similar lines.

Vipul-Cariappa commented 6 months ago

@Thirumalai-Shaktivel, @Shaikh-Ubaid, please look at #2595. It would be helpful if you could verify that these are definitely memory-related bugs. I guess progress on this PR is halted until we figure out and resolve #2595.


I did not look at the AST and ASR. I will investigate them tomorrow.

Shaikh-Ubaid commented 6 months ago

Please mark it as "Ready for review" when ready.

Vipul-Cariappa commented 6 months ago

I believe this PR is ready for review/merge.

CC: @Shaikh-Ubaid

I will explain some aspects of the modifications I have made to run the tests:

Only two tests fail. They fail due to a segfault related to #2595. They are not caused by any new bug I have introduced; my code merely triggers it.

I tried investigating the segfault but have not found its cause yet; it is very difficult to debug the LLVM IR directly, which is what I am presently attempting. In any case, fixing those tests should not be part of this PR.


Also, the Windows CI is not running to completion. Please read the comment I made in the Zulip chat.

Shaikh-Ubaid commented 6 months ago

@Vipul-Cariappa From a high level it looks good. I will review it thoroughly tomorrow.

> Only two tests fail. They fail due to a segfault related to https://github.com/lcompilers/lpython/issues/2595. They are not caused by any new bug I have introduced; my code merely triggers it.

We will need the CI to be green to merge this. So we either need to get the two failing tests to work, or we need to completely skip running these two tests with --jit (I think right now we just skip checking the return code).

certik commented 6 months ago

Thanks!

The advantages of this approach:

Disadvantages:

The handling of shared libraries is something we will need for interactive mode as well, so I think we need this. The current implementation works for Conda, but it will fail for, say, Ubuntu, which uses a lib64 prefix (I think). We will need to add options to select where these libraries are installed. This code should be refactored into some utility file, so that we don't have these platform-dependent ifdefs in the main driver.
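As an illustration of the kind of utility being suggested (names and layout here are invented for the sketch; the real driver is C++):

import os
import sys

def find_runtime_library(prefix: str, name: str = "lpython_runtime") -> str:
    # Probe platform-specific install locations instead of hard-coding
    # a single layout such as Conda's.
    if sys.platform == "win32":
        candidates = [os.path.join(prefix, "bin", f"{name}.dll")]
    elif sys.platform == "darwin":
        candidates = [os.path.join(prefix, "lib", f"lib{name}.dylib")]
    else:
        # Conda uses lib/; some Linux distros put 64-bit libraries in lib64/.
        candidates = [os.path.join(prefix, d, f"lib{name}.so")
                      for d in ("lib", "lib64")]
    for path in candidates:
        if os.path.exists(path):
            return path
    raise FileNotFoundError(f"could not locate {name} under {prefix}")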

To finish this, let's get all tests passing, and then let's polish the implementation, so that we can merge it. Then we can improve upon it.

Vipul-Cariappa commented 6 months ago

> To finish this, let's get all tests passing

> we need to completely skip running these two tests with --jit (I think right now we just skip checking the return code).

I am trying to debug and fix #2595. It would be better to fix the issues and then merge this PR. Skipping the failing tests would require quite a lot of modifications to integration_tests/CMakeLists.txt, which would later have to be undone once the issue is fixed. Please have a look at the recent comments I have made at #2595 (Help Required).

Vipul-Cariappa commented 6 months ago

e7760b02c7fbfccba9f2384278fe8b28e0afe3b7 fixes #2595. I tested with both AddressSanitizer and Valgrind.

Vipul-Cariappa commented 6 months ago

Ready for review. @Shaikh-Ubaid @certik

Shaikh-Ubaid commented 5 months ago

@Vipul-Cariappa I think we can merge this once the above two comments are handled. Thanks for fixing this!

Shaikh-Ubaid commented 5 months ago

A slight concern I have is the printing of skipped tests. Previously, I think, the output of ./integration_tests/run_tests.py used to be clean. Now it seems a bit cluttered.

Label Time Summary:
c           =  77.43 sec*proc (576 tests)
cpython     =  94.00 sec*proc (650 tests)
llvm        =  98.89 sec*proc (680 tests)
wasm        =   8.98 sec*proc (74 tests)
wasm_x64    =   2.18 sec*proc (44 tests)
wasm_x86    =   0.91 sec*proc (22 tests)

Total Test time (real) =  13.09 sec

The following tests did not run:
    100 - const_03_jit (Skipped)
    128 - expr_13_jit (Skipped)
    410 - test_c_interop_02_jit (Skipped)
    412 - test_c_interop_03_jit (Skipped)
    416 - test_c_interop_05_jit (Skipped)
    418 - bindc_03_jit (Skipped)
    420 - bindc_05_jit (Skipped)
    422 - bindc_06_jit (Skipped)
    480 - structs_07_jit (Skipped)
    492 - structs_13_jit (Skipped)
    502 - structs_18_jit (Skipped)
    504 - structs_19_jit (Skipped)
    506 - structs_20_jit (Skipped)
    538 - sizeof_01_jit (Skipped)
    550 - enum_05_jit (Skipped)
    650 - test_argv_01_jit (Skipped)

@certik Do you have any thoughts on this?

Shaikh-Ubaid commented 5 months ago

Overall I think the PR is good to merge.

Vipul-Cariappa commented 5 months ago

@Shaikh-Ubaid, I believe the log of skipped tests will serve as a reminder to implement the changes required to support the missing JIT features, mainly interoperability with C.

Shaikh-Ubaid commented 5 months ago

@Vipul-Cariappa Can you rebase the branch on top of the latest main and resolve any conflicts?