Closed. ErikSchierboom closed this 8 months ago.
Thank you for this! Apologies that I haven't gotten to it sooner. I am in the process of re-working the representer to return an AST as opposed to a code text file (and also do some more normalizations), and of changing the Docker image to Alpine. I've just been bogged down with having to re-do the error messages for concept exercises, among other things.
I am tempted to wait until the representer work is done before we implement a bunch of tests that might need to change.
I am also sorta uncomfortable with the checks happening within a bash script, since the tooling is mostly Python -- I would really prefer that we have a Python test script that is called, similar to the way the test-runner tests work.
But let me think on this a bit. Both the Python std lib and the third-party library we use (astor) have tools for validating an AST and an AST round-trip, so it might be better to use those instead of doing a flat diff -- especially since the AST may or may not vary by Python version.
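As a rough illustration of the round-trip idea mentioned above -- using only the std lib's `ast` module rather than astor (`ast.unparse` needs Python 3.9+; `astor.to_source` is the pre-3.9 equivalent) -- such a check could look something like:

```python
import ast

def round_trips(source: str) -> bool:
    """Parse source, unparse it back to code, re-parse, and compare AST dumps."""
    tree = ast.parse(source)
    regenerated = ast.unparse(tree)  # astor.to_source(tree) on older Pythons
    return ast.dump(tree) == ast.dump(ast.parse(regenerated))

# Formatting details are discarded by the parser, so valid code round-trips:
print(round_trips("def add(a, b):\n    return a + b\n"))  # True
```

This is only a sketch of the std-lib approach, not the representer's actual validation logic.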
> I am tempted to wait until the representer work is done before we implement a bunch of tests that might need to change.
Well, I would argue that this should be merged and that we'll update later. The representer is becoming more and more important (as it is now vital for showing community solutions), so having proper E2E tests is something we value a lot (and that virtually all representers now have).
> I am also sorta .... uncomfortable with the checks happening within a bash script, since the tooling is mostly Python - I would really prefer that we have a python test script that is called, similar to the way the test-runner tests work.
I can see that. I am happy for it to be converted to a Python script of course, but didn't want to spend time on that myself.
> Both the Python std lib and the third-party library we use (astor) have tools for validating an AST and an AST round-trip, so it might be better to use those instead of doing a flat diff
Well, that's fine by me, as long as we validate the actual generated file, not some internal state.
So all in all, my suggested course of action would be to just merge this PR for the added security guarantees and then to change things later on when needed.
@BethanyG Have you had time to think on this for a bit?
I tried to see if I can convert this to Python, but when I run `pylint` in the root directory I get lots of:

```
No module named 'pylint'
```

I thought running `pip install -r requirements.txt -r dev-requirements.txt` would do the trick, but it didn't. Thoughts?
@ErikSchierboom my apologies. I got sidetracked updating the concept exercises and doing the exercise tagging. However, I do have time set aside today and the remainder of the week to work on this, and am hoping I can move fairly quickly. I intend to change the docker file to Alpine, as well as change the representations to AST output and add tests.
> I tried to see if I can convert this to Python, but when I run `pylint` in the root directory I get lots of: `No module named 'pylint'`. I thought running `pip install -r requirements.txt -r dev-requirements.txt` would do the trick, but it didn't. Thoughts?
Hummm. Sounds like Pylint is either not installed, or not invokable from where you ran it. Let me (quickly) set up an env and see what's going on.
Looks like PyLint is not part of `requirements.txt` nor `dev-requirements.txt`, so you'll need to run `python -m pip install pylint==2.17.1` on your machine, or in whichever environment you're doing your dev work. But that's really only if you need/want to run the linter or want to use `astroid` to write transform rules for the AST. If what you are intending to do is run tests, `pytest` in the root should do the trick.
For the rules we typically have enabled (or had enabled the last time I did work on the analyzer), you can use this `.pylintrc` file.
Thanks! Does it make sense for me to convert these smoke tests to Python-based tests then? For me the key points are:

- the `bin/run.sh` entrypoint

> Does it make sense for me to convert these smoke tests to Python-based tests then?
I will happily do it for you. 😄 As long as you can wait a bit for me. If not, maybe we do merge this while I update things. I hope to be done by the weekend, but it may take a bit longer than that.
I can make sure I fulfill your points. I am going to more or less use the same strategy as the test-runner tests -- except of course we will be testing AST output and not pytest flags.
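To make the "testing AST output" point concrete, here is a toy stand-in (my own sketch, not the representer's actual normalization pipeline) showing why AST-based comparison is attractive for tests: solutions that differ only in formatting produce identical output.

```python
import ast

def toy_representation(source: str) -> str:
    """Toy stand-in for the representer: dump the parsed AST as text."""
    return ast.dump(ast.parse(source))

# Two solutions that differ only in whitespace yield the same representation:
dense = "x=1\ny=x+2"
spaced = "x = 1\n\ny = x + 2"
assert toy_representation(dense) == toy_representation(spaced)
```

A real test would of course invoke the representer's entrypoint and compare the generated files, as discussed above.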
I think the only questions for me are:
Is it OK that I switch the representation output to AST? Is it too late for me to do that?
It's okay, but we do request that you package this with any other changes you're making, to prevent us from having to re-run the representer multiple times.
Is this now urgent enough that we need to merge this PR as is while I retool things?
It sounds like you're building this soon-ish, so it's fine to wait.
Is there anything else I need to keep in mind?
Not really. Just the things I mentioned above.
edit: the representer version should also be updated (in a `representation.json` file, see the docs)
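For reference, a minimal sketch of what such a `representation.json` file might look like, assuming it just tracks the representer version as the docs describe (the version number here is purely illustrative):

```json
{
  "version": 2
}
```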
I think we can close this. But if there are changes here we still need to incorporate, please let me know and I can reopen. 😄
This PR adds smoke tests to verify the behavior of the representer end-to-end using Docker.
I've created two smoke tests from the existing Python tests. Not sure which ones make the most sense, but I'll leave that up to you to decide.