Chia-Network / clvm

make op_div bug-compatible with clvm_rs #109

Closed · arvidn closed this 2 years ago

arvidn commented 2 years ago

here's a reference: https://github.com/Chia-Network/clvm_rs/blob/main/src/test_ops.rs#L468
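
For context: Python's divmod floors the quotient toward negative infinity, while the buggy behavior being preserved bumps a quotient of exactly -1 with a nonzero remainder up to 0. A minimal sketch in plain Python (not the actual clvm source):

    # Python floors toward negative infinity:
    q, r = divmod(-3, 10)        # q == -1, r == 7
    # The bug-compatibility branch (see the diff below) bumps it to 0:
    if q == -1 and r != 0:
        q += 1
    assert q == 0                # clvm prints zero as the empty atom ()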

ChiaMineJP commented 2 years ago

I think tests/brun/div-6.txt through div-9.txt are just regression tests; I've confirmed that clvm@0.9.2 already passes the same tests. I think we should have tests specifically for this part:

    if q == -1 and r != 0:
        q += 1
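
For example, a direct unit test of that branch might look like the following. This is a hypothetical sketch: the test name is made up, and it assumes op_div takes an SExp argument list and returns a (cost, result) tuple (as the diff below suggests) and that clvm exports SExp at the package top level:

    from clvm import SExp
    from clvm.more_ops import op_div

    def test_div_neg_quotient_rounds_toward_zero():
        # (/ -3 10) floors to -1 in plain Python, but the
        # bug-compatibility branch bumps it to 0 (the empty atom).
        cost, result = op_div(SExp.to([-3, 10]))
        assert result.as_int() == 0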
arvidn commented 2 years ago

@ChiaMineJP how did you confirm that?

If I apply this patch:

index 43df6cb..62623c5 100644
--- a/clvm/more_ops.py
+++ b/clvm/more_ops.py
@@ -178,8 +178,8 @@ def op_div(args):

     # this is to preserve a buggy behavior from the initial implementation
     # of this operator.
-    if q == -1 and r != 0:
-        q += 1
+#    if q == -1 and r != 0:
+#        q += 1

     return malloc_cost(cost, args.to(q))

and rerun the tests, I get this result:

$ pytest tests/cmds_test.py 
=============================== test session starts ===============================
platform linux -- Python 3.8.10, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
rootdir: /home/arvid/dev/clvm
collected 820 items                                                                                                                                                                                                                                                        

tests/cmds_test.py ...............................................................FF................................................................................................................................................................................ [ 29%]
.................................................................................................................................................................................................................................................................... [ 61%]
.................................................................................................................................................................................................................................................................... [ 92%]
...........................................................                                                                                                                                                                                                          [100%]

==================================== FAILURES ====================================
______________________ TestCmds.test_brun_div-7_txt ______________________

self = <tests.cmds_test.TestCmds testMethod=test_brun_div-7_txt>

    def f(self):
        cmd = "".join(cmd_lines)
        for c in cmd.split(";"):
            r, actual_output, actual_stderr = self.invoke_tool(c)
        if actual_output != expected_output:
            print(path)
            print(cmd)
            print(actual_output)
            print(expected_output)
            if REPAIR:
                f = open(path, "w")
                f.write("".join(comments))
                for line in cmd_lines[:-1]:
                    f.write(line)
                    f.write("\\\n")
                f.write(cmd_lines[-1])
                f.write("\n")
                f.write(actual_output)
                f.close()
>       self.assertEqual(expected_output, actual_output)
E       AssertionError: 'cost = 1046\n()\n' != 'cost = 1056\n-1\n'
E       - cost = 1046
E       ?          ^
E       + cost = 1056
E       ?          ^
E       - ()
E       + -1

tests/cmds_test.py:90: AssertionError
------------------------------ Captured stdout call ------------------------------
/home/arvid/dev/clvm/tests/brun/div-7.txt
brun -c '(/ (q . -3) (q . 10))'
cost = 1056
-1

cost = 1046
()

______________________ TestCmds.test_brun_div-8_txt ______________________

self = <tests.cmds_test.TestCmds testMethod=test_brun_div-8_txt>

    def f(self):
        cmd = "".join(cmd_lines)
        for c in cmd.split(";"):
            r, actual_output, actual_stderr = self.invoke_tool(c)
        if actual_output != expected_output:
            print(path)
            print(cmd)
            print(actual_output)
            print(expected_output)
            if REPAIR:
                f = open(path, "w")
                f.write("".join(comments))
                for line in cmd_lines[:-1]:
                    f.write(line)
                    f.write("\\\n")
                f.write(cmd_lines[-1])
                f.write("\n")
                f.write(actual_output)
                f.close()
>       self.assertEqual(expected_output, actual_output)
E       AssertionError: 'cost = 1046\n()\n' != 'cost = 1056\n-1\n'
E       - cost = 1046
E       ?          ^
E       + cost = 1056
E       ?          ^
E       - ()
E       + -1

tests/cmds_test.py:90: AssertionError
------------------------------ Captured stdout call ------------------------------
/home/arvid/dev/clvm/tests/brun/div-8.txt
brun -c '(/ (q . 3) (q . -10))'
cost = 1056
-1

cost = 1046
()

============================= short test summary info =============================
FAILED tests/cmds_test.py::TestCmds::test_brun_div-7_txt - AssertionError: 'cost = 1046\n()\n' != 'cost = 1056\n-1\n'
FAILED tests/cmds_test.py::TestCmds::test_brun_div-8_txt - AssertionError: 'cost = 1046\n()\n' != 'cost = 1056\n-1\n'
========================= 2 failed, 818 passed in 5.32s =========================
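
The two failures are exactly the negative-operand cases: with the branch commented out, (/ (q . -3) (q . 10)) and (/ (q . 3) (q . -10)) evaluate to the true floor quotient -1 instead of the bug-compatible 0, which brun prints as (). The 10-point cost difference fits too, assuming clvm's usual MALLOC_COST_PER_BYTE of 10: -1 occupies one byte as an atom while zero is the empty atom. A quick check in Python:

    from clvm import SExp
    # -1 encodes as the single byte 0xff; zero encodes as the empty
    # atom, which brun prints as ().
    assert SExp.to(-1).atom == b"\xff"
    assert SExp.to(0).atom == b""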
ChiaMineJP commented 2 years ago

@arvidn This is really interesting. I thought it was a simple mistake on my part, but as I investigated further, it turned out that combinations of different clvm/clvm_tools versions change the actual output.

The steps I followed:

  1. I did a simple test in the clvm_tools repository.

    cd <my local clvm_tools git repos dir>
    brun -c '(/ (q . -3) (q . 10))'
    # Got following result
    # cost = 1046
    # ()

    I ran the above simple test on Windows 10. I looked into the venv\Lib\site-packages folder, and it seems clvm-0.9.7 was installed (clvm-0.9.7.dist-info was there). The commit hash of my clvm_tools checkout is d15593699d4e83579d1e883511314a4d0bcb9735.

  2. Then I tried to write down steps to reproduce this behavior (this time on Ubuntu 20.04.2):

    cd <some directory>
    git clone https://github.com/Chia-Network/clvm_tools
    cd clvm_tools
    python3 -m venv venv
    . ./venv/bin/activate
    pip install -e .
    brun -c '(/ (q . -3) (q . 10))'
    # This time I got the following result
    # cost = 64
    # -1

    I also searched venv/lib/python3.8/site-packages/ and found clvm-0.9.7.dist-info, so I assume clvm@0.9.7 was installed.

I suspect the difference comes from different commit hashes of clvm_tools. What I've figured out so far is that different combinations of clvm/clvm_tools versions can produce different output, though some combinations agree. So at the very least I suggest pinning the clvm dependency version in clvm_tools's setup.py:

dependencies = [
    "clvm>=0.9.2", # <- Shouldn't be "clvm==0.9.7"?
]
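
(Incidentally, rather than inspecting site-packages for a dist-info folder, the installed version can be queried directly; importlib.metadata has been in the standard library since Python 3.8:)

    import importlib.metadata
    print(importlib.metadata.version("clvm"))  # e.g. "0.9.7"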

Sorry, I need to investigate further to find the root cause of this issue.

arvidn commented 2 years ago

I should have mentioned that we attempt to import clvm_rs, and if we find it, we use it. Otherwise we import clvm. That's why we added the --backend command line option to brun.

To force the python backend, say brun --backend=python. For the tests, I run it in a venv, and I make sure to uninstall clvm_rs.
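
The selection is essentially this fallback pattern (a minimal sketch of the behavior described above, not the actual clvm_tools source):

    try:
        import clvm_rs  # prefer the Rust implementation when installed
        backend = "rust"
    except ImportError:
        import clvm     # otherwise fall back to the pure-Python clvm
        backend = "python"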

ChiaMineJP commented 2 years ago

Right, that explains everything. My excuse: in clvm_tools-js, using clvm_rs is experimental and not the default option, so my mindset hadn't switched back correctly.

By the way, rather than making sure clvm_rs is uninstalled, it seems easier to add the --backend=python option to the test files.

arvidn commented 2 years ago

We actually use the same test files in the clvm_rs CI as well; that's why they need to be backend-neutral.

We've talked about moving the tests into a separate repo for this reason.