I was about to create an issue to track this @abhijithnraj. This is a good point to start integrating tests NOW. @initialcommit-io I was also worried while adding support for the stash subcommand that I might break something in other commands by overriding the GitSimBaseCommand methods. I think we need to start designing the test plan aggressively now. The problems you caught with the Typer integration could have been caught easily with some functional test plans.
For Manim-based tests, we can look at how ManimCE tests its own code: https://docs.manim.community/en/stable/contributing/testing.html and https://github.com/ManimCommunity/manim/tree/main/tests
I didn't know how to test the images that will be generated, but this looks promising.
Once done we can add an actions pipeline to ensure continuous integration.
Exactly. This will ensure great test coverage, and with tools like pytest we'll be in good shape to support this long term, adding new features while guaranteeing support for the already existing ones.
Python versions are already an issue with the new Typer implementation, which I have discussed here.
@abhijithnraj Thank you for creating this issue and for your proposed contributions! Yes, it's clear we need to start on this now, as @abhijitnathwani brought up some breaking changes across Python versions that we had not been testing for.
For context - I didn't have any tests going into the project because I wanted to gauge user interest before spending too much time on it, but now that we know the project has a solid user base, we need to prioritize a robust test suite for the reasons you both mentioned.
@abhijithnraj I don't have any specific test design in mind, but I agree with all of your points, and good find on the Manim test option. Maybe a good place to start is if you want to put together a PR with a simplified version of an initial test suite that covers a single aspect of the codebase - probably the `git-sim log` subcommand, since that is the simplest - from all perspectives (1), (2), and (3) that you mentioned.
Then we can all discuss the PR and see if it will work for the whole codebase/program.
@abhijitnathwani @abhijithnraj FYI I meant to mention - I will be working on a small tool that can generate dummy Git repos with specific numbers of commits / branch structures / etc. I am thinking we can use this in functional tests to create the structure we need for testing certain commands, like merge/rebase, etc.
@initialcommit-io Yes, that can help with generating test data.
@abhijithnraj Do you have a timeline for an initial version of the test suite that provides coverage for the `git log` feature?
@abhijithnraj @abhijitnathwani @paketb0te Here is the first version of a tool I created to generate dummy Git repos:
https://github.com/initialcommit-com/git-dummy
I'm thinking we can use this in our functional tests.
@initialcommit-io I am studying the graphical test cases for Manim. I think I can raise the first PR within 2 days at most. We will add more test cases once we finalize the framework in the PR. Meanwhile I will check out git-dummy; currently, for testing, I wrote some simple code to generate commits.
@abhijithnraj Sounds great. FYI I added a `--branches=N` option to git-dummy, so now it can easily generate dummy Git repos with an arbitrary number of branches. This will be good for simulating commands that show multiple branches, like merge, rebase, etc.
@abhijithnraj @abhijitnathwani @paketb0te Hi again - last update for a bit on git-dummy. I added the flag `--diverge-at` to set the commit number at which the branches will diverge from `main`. I also added `--merge=x,y,...,n` so the user can select which branches to merge back into `main` if desired.
It's not much, but I think at this point git-dummy has enough so that we can use it for basic functional test coverage, to set up automated scenarios/repos we need for the tests.
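For example, a functional test could shell out to git-dummy to build a tailored repo - a rough sketch (assuming the `git-dummy` console script and that these flags combine this way):

```python
import subprocess

# Hypothetical example: 3 branches diverging from main at commit 4,
# with branches 1 and 2 merged back into main.
subprocess.run(
    ["git-dummy", "--branches=3", "--diverge-at=4", "--merge=1,2"],
    check=True,
)
```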
I am also planning to add hooks for git-dummy into Git-Sim, with something like a `--generate` flag. This will allow users to generate a test repo automatically before running a Git-Sim command.
@abhijithnraj Hi again! Just wanted to check in and see if this is still something you were working on?
Hi, still working on it.
Hi @abhijithnraj @abhijitnathwani @paketb0te ! Hope you all are doing great =)
Just a note I released a significant refactor in git-sim v0.2.6 which overhauls the way it parses the commit history and adds several important new options/flags. Git-Sim now properly traces parent/child relationships, which means it can display more accurate and complex graph structures, with multiple branches and the relationships between them.
See the release notes for more details: https://github.com/initialcommit-com/git-sim/releases/tag/v0.2.6
I figured I'd mention it here since it might affect the test suite, depending on how it was being written.
Let me know if you get the chance to try it out. It's likely there are some bugs since we still don't have the test suite set up yet, but I really wanted to get this released since I feel it's an important milestone for the project, will make the tool much more useful for users, and will make future development smoother.
Hey @initialcommit-io These are some great features! I'll try these out over the weekend and let you know if I find something broken.
Hi @abhijithnraj, @abhijitnathwani. This is such a cool project! Do either of you have the work you've done public at all? I saw your forks, but don't see any additional tests. I would like to jump in, but don't want to step on your toes if you're about to share some progress.
Hey @ehmatthes Thanks for taking the time to check out the project. Sadly, I don't have any Python testing experience, and given the nature of the project we need to design a testing framework for the images it renders. AFAIK, we don't have any major progress on the testing part. We are all open to contributions on this front! You can share your thoughts about how we can achieve this and we can review them with @initialcommit-io :)
Sure, here's a quick take on how I'd start this.
Because this is still an evolving project, I'd focus on end-to-end tests and not worry too much about unit tests. I think I can write a demo test suite this week that would set up a sample Git repo in a temp directory, run a command like `git-sim status` against the test project, and check the results. pytest handles this kind of setup really nicely. If everything passes, you never even see the temp dir. If the tests fail, you can drop into that temp dir and see what happened. In that temp dir you can then run the command manually, and do a number of other troubleshooting actions.
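Roughly, I'm imagining something like this - just a sketch, with placeholder names and a placeholder git-sim invocation that I'd firm up in the actual demo:

```python
import shutil
import subprocess
from pathlib import Path

import pytest

# Placeholder location for the sample repo used by the tests.
SAMPLE_REPO = Path(__file__).parent / "sample_repo"


@pytest.fixture
def tmp_repo(tmp_path):
    """Copy the sample repo into a pytest-managed temp dir."""
    repo_dir = tmp_path / "sample_repo"
    shutil.copytree(SAMPLE_REPO, repo_dir)
    return repo_dir


def test_status(tmp_repo):
    """Run git-sim status against the test project."""
    result = subprocess.run(
        ["git-sim", "status"],
        cwd=tmp_repo,
        capture_output=True,
        text=True,
    )
    # A crash in git-sim would surface here as a nonzero exit code.
    assert result.returncode == 0
    # Comparing the generated image against a reference file would go here.
```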
If that sounds good to you all, I should be able to share a demo by the end of the week.
@ehmatthes This sounds great. I completely agree that end-to-end tests make more sense for the project at this point, to provide baseline assurance that code changes won't break functionality.
I think the tests should catch 2 types of errors caused by code changes (let me know if you have other ideas):
1) Unhandled Python exceptions
2) Changes that cause deviation from the existing reference image for each test case
The nice thing about comparing to a reference image is that it is comprehensive - the image captures all elements drawn by git-sim, including commits, arrows, text, etc. So any unintended change to those would cause a test failure. The other benefit is that the default git-sim output is an image file, and this corresponds to the final frame of git-sim video output if the user uses the `--animate` flag, so verifying the images should be sufficient to guarantee a good video as well.
One downside of comparing images might be performance, so I'm curious how that turns out. Another consideration is that reference files will need to be updated/maintained as functionality changes over time.
For setting up the Git repos to use in each test case, I recommend (again :D) using git-dummy for reproducibility and flexibility. Certain test cases require a Git repo to be in a certain state, and the ability to generate a consistent repo to meet that criteria is why I created git-dummy. Ideally the dummy repos would just be created/cleaned up as a part of the test cases. If you run into a scenario that git-dummy can't currently generate a repo for, let me know and I can add that functionality into git-dummy.
Let us know if you have any other thoughts! Looking forward to seeing what you put together as a demo!
The nice thing about comparing to a reference image is that it is comprehensive - the image captures all elements drawn by git-sim, including commits, arrows, text, etc. So any unintended change to those would cause a test failure. The other benefit is that the default git-sim output is an image file, and this corresponds to the final frame of git-sim video output if the user uses the --animate flag, so verifying the images should be sufficient to guarantee a good video as well.
Yes, this matches my thinking as well.
One downside of comparing images might be performance, so I'm curious how that turns out. Another consideration is that reference files will need to be updated/maintained as functionality changes over time.
I have done this in another project, and performance was not a significant issue. For end to end tests you have to generate the images that the project depends on. Once they're generated, the comparison itself does not take long.
One really nice thing about this approach is that it makes it really straightforward to update the reference files. When you run a test, you can pass a flag to stop after the first failed test. You can then drop into the test's temp dir, and look at the generated image that doesn't match the reference file. If this image is correct, you can simply copy that into the reference file folder, and the test will run. You don't have to generate the test file in a separate process; running the tests and failing becomes a way to keep the test suite up to date. The test suite becomes much more than just a pass/fail test, it shows you exactly what the current code would do for an end user.
For setting up the Git repos to use in each test case, I recommend (again :D) using git-dummy for reproducibility and flexibility. Certain test cases require a Git repo to be in a certain state, and the ability to generate a consistent repo to meet that criteria is why I created git-dummy. Ideally the dummy repos would just be created/cleaned up as a part of the test cases. If you run into a scenario that git-dummy can't currently generate a repo for, let me know and I can add that functionality into git-dummy.
Does git-dummy generate the same hashes each time it's run? That seems like an issue for testing.
Does git-dummy generate the same hashes each time it's run? That seems like an issue for testing.
Great point. The hashes would screw that up, since they depend on the timestamp of the commit which would be regenerated each time git-dummy is executed.
Git-Sim actually has a global flag called `--highlight-commit-messages` which hides the hash values. We could use this, however I just noticed there is a visual bug that makes the commit message text too big so it often overlaps between commits, which would give us ugly reference images. One option is just to fix the overlapping and use the existing flag.
Another option is that I can create a new global option for Git-Sim called `--show-hashes`, which is true by default. That way in our test scripts we can supply something like `git-sim --show-hashes=false log` so that we avoid the non-matching hash issue you mentioned.
Alternatively, we might be able to update git-dummy to use hardcoded, expected timestamps via the environment variables `GIT_AUTHOR_DATE` and `GIT_COMMITTER_DATE`. These could be set and then unset during each run of Git-Dummy, so that we always get the expected SHA1s on our commits...
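For example, something roughly like this (made-up values; the actual wiring into git-dummy would differ):

```python
import os
import subprocess

# Hypothetical fixed values; any constant date/identity will do.
FIXED_DATE = "2023-01-01T00:00:00"

env = os.environ.copy()
env["GIT_AUTHOR_DATE"] = FIXED_DATE
env["GIT_COMMITTER_DATE"] = FIXED_DATE
# The author/committer identity also feeds into the SHA, so pin those as well.
env["GIT_AUTHOR_NAME"] = env["GIT_COMMITTER_NAME"] = "Dummy Author"
env["GIT_AUTHOR_EMAIL"] = env["GIT_COMMITTER_EMAIL"] = "dummy@example.com"

subprocess.run(
    ["git", "commit", "--allow-empty", "-m", "Dummy commit"],
    env=env,
    check=True,
)
```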
Do you have any thoughts/preferences?
I don't think any of this is an issue. I think it's possible to create a single git repo to use in all tests. There will be new git-sim commands run, but not new git commands run. So git-dummy might help to build the initial test suite, but it doesn't need to be used on each test run.
Hmm, we could do that, but using a single repo to cover all test cases could get increasingly complex as more and more test cases need to be covered. To eventually get full scenario coverage we may need to jump around in the Git repo to do stuff like checkouts/switches/merges/rebases from different starting states. There are also certain scenarios captured by git-sim logic that are based on other factors, like the number of commits in the repo / on the active branch. For example, in a repo with fewer than 5 commits, there is special logic to handle the display.
My thinking with git-dummy is that it would be used to create a tailor-made repo for each test case with the desired number of commits, branches, merges, HEAD location, etc, and then just clean it up after each test run. This guarantees a minimally complex repo that suits the specific goals of each test case. Of course it does add time since git-dummy would need to repeatedly create/clean up the reference repos, but it performs pretty fast and has the benefit that the git-dummy command for each test case could be hard coded into the test cases, instead of having to ship a test repo around with the codebase.
Maybe there is a happy medium where we can use git-dummy to create a handful of test repos that are each suited to subsets of test cases, instead of generating and cleaning up a new repo for each test case... Thoughts?
Is there any way to prevent the image from opening automatically when calling something like `git-sim log`?
I tried `--quiet`, but that just seems to suppress CLI output. I didn't see an obvious place in the code where the generated image is opened; maybe that's something from manim?
Never mind, I just found it: git-sim -d log
Yes! Use the `-d` flag to suppress the image auto-open, like: `git-sim -d log`. You can use it in conjunction with `--quiet` to suppress the CLI output as well.
Okay, I think the test for `git-sim log` is working. @initialcommit-io, are you on macOS? If so, I think it will be pretty straightforward for you to run this demo in a little bit.
Awesome - I actually switched to PC last year, but I do have a Mac as well that I can try it on... Feel free to submit a PR to the dev branch and I'll take a look.
Here's the repo, and the issue tracking my work.
I think you can try this out with the following steps, without dealing with a PR right away:
$ git clone -b start_testing https://github.com/ehmatthes/git-sim.git
$ cd git-sim
$ python3 -m venv .venv
$ source .venv/bin/activate
(.venv)$ pip install pytest
(.venv)$ pytest -s
========= test session starts ================================================
platform darwin -- Python 3.11.2, pytest-7.3.2, pluggy-1.0.0
rootdir: /Users/eric/test_code/git-sim
collected 3 items
tests/e2e_tests/test_core_commands.py
Temp repo directory:
/private/var/folders/.../pytest-51/sample_repo0
Obtaining file:///Users/eric/test_code/git-sim
Preparing metadata (setup.py) ... done
...
[notice] A new release of pip available: 22.3.1 -> 23.1.2
[notice] To update, run: python3.11 -m pip install --upgrade pip
...
========= 3 passed in 33.52s =================================================
The most important part in this output isn't really the passing tests. In the first part of the output, look for these lines:
Temp repo directory:
/private/var/folders/.../pytest-51/sample_repo0
That gives you the absolute path to the sample repo that was used in the tests. You can poke around that folder, and you'll find the `git-sim_media` directory. It took a couple hours to set up the whole test suite, but once that was working, the second two tests took ten minutes. For example, once the first test for `git-sim log` ran, I copied the test and changed the command that was being tested to `git-sim status`. That test failed in the final file comparison, but I just popped into the test sample repo and copied the file `git-sim-status_06-12-23_14-26-40.jpg` to the reference files, calling it `git-sim_status.jpg`.
You can also manually run additional git-sim commands in the test repo after the test has finished running:
sample_repo0$ source .venv/bin/activate
(.venv) sample_repo0$ git-sim -h
If this looks good, I'll do some cleanup and submit a PR. The tests aren't super fast (this is on an M1 Mac Studio), but one advantage of this approach is that it works exactly the way end users work. It's not just running some code from the project; it's issuing the same commands end users issue, and testing the results of those commands. These are the kinds of tests that really let you sleep at night. :)
You can also manually run additional git-sim commands in the test repo after the test has finished running:
I meant to add that this is fantastic for bug fixing. Here's the process:
Last comments until you've had a chance to try it out. Please don't feel any obligation to accept this. I'm a maintainer as well, and sometimes people have taken my work in a direction I don't want to go. I've enjoyed this, and learned some things I wouldn't have if I didn't spend a morning on this project. No hard feelings in the least if you don't want to use this approach!
Hey wow this is really cool, and it definitely moves us many steps closer to having a test suite for git-sim!
And thanks for your last message - this is my first time maintaining a project that has gotten some real traction, and it can be easy to feel pressured into accepting code changes.
I ran the steps like you mentioned on my Mac and it worked (except the 3 tests seem to have failed due to the files differing, which I confirmed using `vim -d file1 file2` in the terminal). Didn't dig into why they didn't match though.
There are a few design choices (possibly due to the biases of my current workflow) that I'd prefer to handle differently:
I'm still not a huge fan of storing and shipping the sample repo with the codebase. I think it feels like an extra piece of baggage that can be generated on the client, so why carry it around? Especially if the current code is taking the time to copy it to a tempdir, which might be comparable to the time of just autogenerating the sample repo(s) the first time the tests are run.
Also, I'm not sure how I feel about creating a virtual env as part of the test suite and installing everything in it. The manim deps are not fun, and my first run took 96 seconds, and subsequent runs took 70 seconds (it still takes time to do the venv checks even when the deps are installed). If the user wants a venv they can create one and activate it before running the test suite (which they likely already would have done if developing with a venv), or they can just use a manually-set-up editable git-sim install like `python -m pip install -e .` (after navigating into the working codebase). Wouldn't this provide the benefits of the bug fixing scenario you outlined above while saving the time of configuring the venvs each time?
Would love to hear your thoughts and let me know if I'm missing anything...
I'm still not a huge fan of storing and shipping the sample repo with the codebase
Do you have an idea how we could make the hashes be always the same? Maybe we could create the commit timestamps randomly (not with the current time, but randomly chosen - not sure if that's possible?) and then use a fixed seed for the random module (so it will always output the same for testing) :thinking:
Or is there a way to hardcode the commit hashes directly?
Otherwise I think this is not too bad, since we will not ship the tests and test data with the package, but of course it will be part of the git repo. Also, I assume that the "copying over the test repo" part is decently fast; the venv setup + installation of dependencies is probably what takes most of the time.
not sure how I feel about creating a virtual env as a part of the test suite and installing everything in it.
Agree, I don't really see the advantage of creating a venv from "inside" the tests - when developing, you are (hopefully) already in a venv, most likely an editable install... maybe I am missing something?
But I am glad to see that you took the initiative, thanks @ehmatthes !
Do you have an idea how we could make the hashes be always the same?
See my https://github.com/initialcommit-com/git-sim/issues/55#issuecomment-1587871492 above. (esp the last paragraph)
Hi again, and thanks for giving this a try.
it worked (except the 3 tests seem to have failed due to the files differing
That's not good! Tests that pass on my system and fail on yours are not useful!
It turns out that generating jpg or png output can vary slightly across systems. I had hoped that using pngs would generate identical output, but that's not the case. The output images look the same on both systems, but the pixels differ slightly. Image comparison is a huge topic. In the end I replaced the use of `filecmp.cmp()` with a custom `compare_images()` function that compares pixels, and fails the comparison if they differ above a certain threshold. The comparison takes about 0.15s for each test.
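In case it helps to see the idea, here's roughly the shape of that comparison (the actual `compare_images()` in the branch may differ in details like the default threshold):

```python
import numpy as np
from PIL import Image


def compare_images(generated_path, reference_path, ratio_diff=0.01):
    """Pass if fewer than ratio_diff of the pixel positions differ."""
    generated = np.asarray(Image.open(generated_path).convert("RGB"))
    reference = np.asarray(Image.open(reference_path).convert("RGB"))

    if generated.shape != reference.shape:
        return False

    # Fraction of pixel positions where any channel differs.
    differing = np.any(generated != reference, axis=-1)
    return differing.mean() < ratio_diff
```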
If you run the updated version, they should pass on your system:
$ git pull origin start_testing
$ pytest -s
For the development phase of the test suite, I put in some diagnostics. Here's some selected output, on the M1 Mac Studio:
==== SETUP TIME ====
Build venv: 1.917367167014163
Install git-sim: 21.972012166981585
Copied repo: 21.990174583974294
Finished setup: 21.990259875019547
====
Test log: 23.29906716599362
Test status: 2.1890487499767914
Test merge: 2.6912805839674547
I'm still not a huge fan of storing and shipping the sample repo with the codebase. I think it feels like an extra piece of baggage that can be generated on the client, so why carry it around? Esp if the current code is taking the time to copy it to a tempdir, which might be comparable to the time of just autogenerating the sample repo(s) the first time the tests are run.
The sample repo is 39kb. I think the biggest reason to include it for now is that we need stable hashes in order to test output. If we generate a repo every time we run tests, we'll have different hashes and the reference images won't match. I absolutely think hashes should be included in the reference images, because most users will include hashes.
In the test setup, most of the time is spent installing git-sim and its dependencies to the tmp environment. Copying the test repo to that environment takes less than 0.02s.
Also not sure how I feel about creating a virtual env as a part of the test suite and installing everything in it. The manim deps are not fun, and my first run took 96 seconds, and subsequent runs took 70 seconds (still takes time to do the venv checks even when the deps are installed). If the user wants a venv they can create one and activate it before running the test suite (which they likely already would have done if developing with a venv), or they can just use a manually-set-up editable git-sim install like python -m pip install -e . (after navigating into the working codebase). Wouldn't this provide the benefits of the bug fixing scenario you outlined above while saving time of configuring the venvs each time?
I hear you, it's not fun seeing an initial test suite take on the order of minutes with just three tests. The main reason for this approach is that ideally, test environments should be disposable. That ensures that test runs are consistent, and aren't impacted by leftover state from previous test runs, or inconsistent environments across systems.
I think the alternative to creating a tmp venv on every test run is to create one in a specified location, say `~/git-sim-tests/`. If that directory is not found, create it and create the full test environment. If it is found, make sure there's an editable install of git-sim available. You also have to do some work to make sure there's nothing left over from previous tests that will affect the new test run. This can be done, but there's a reason people really don't prefer it. It's easy to think you've reset the environment, and end up with some accumulated cruft that affects new test runs in subtle ways. This can lead to passing tests that should fail, and failing tests that should pass. You can make a recommendation that people destroy their test environment and rerun the test suite when tests fail for reasons you can't quickly identify. But then you can find yourself manually doing the destroy/rebuild work that a tmp dir approach handles automatically.
People should definitely be able to run tests from the project root, without having to go set up an environment first, and certainly not having to go to an external directory to run the tests. Otherwise you lose the benefit of an automated test suite, and when people report failures you have to ask them about how they are running their tests. Also, people should not try to run tests in an environment where they're running git-sim against any project other than the test repo.
One of the things that slows the test down is that the first time you run a git-sim command in a fresh environment, it takes a long time. On my intel Mac, setup for the test suite takes ~30s, but the first test of a git-sim command takes 36s. Subsequent commands take closer to 10s. That roughly matches what I see if I set up a new git project, install git-sim, and try using it.
If you're curious, I'd be happy to take a look at building out a setup fixture that looks for an existing test environment. Talking this through, it would be interesting to do that, and then add a CLI arg. You could have `pytest` run tests in the stable test environment by default, and `pytest --use-tmp-env` run tests in a fresh tmp env.
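A sketch of what that could look like in `conftest.py` (the option name and fixture are just illustrative):

```python
# conftest.py (sketch)
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--use-tmp-env",
        action="store_true",
        default=False,
        help="Build a fresh throwaway test environment instead of reusing a stable one.",
    )


@pytest.fixture(scope="session")
def test_env(request, tmp_path_factory):
    if request.config.getoption("--use-tmp-env"):
        # Disposable environment, discarded after the run.
        return tmp_path_factory.mktemp("git-sim-env")
    # Reuse (or create) a stable environment in an agreed location.
    ...
```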
Last thought on this note, some of my hesitation also comes from being reluctant to write to a part of the developer's file system that they might not be expecting. pytest handles temp dirs by tucking them away on a part of the filesystem that's really safe, established, and well-tested. We can't make a directory in the git-sim repository itself for a variety of reasons. There are probably ways to inform the developer of what we're doing, but generating output and prompting for input is not straightforward with pytest, because tests are expected to run non-interactively.
If you were going to build a stable test environment on a developer's system, where would you build that environment? My guess would be `~/git-sim_test_runs/`, or maybe alongside wherever they placed `git-sim/`.
One more quick note. You can uncomment this block, and the tests will fail. Those lines run a `git reset` command, which makes the generated git-sim images differ from the reference files.
For example, here's the reference file for testing `git-sim log`:
With those lines uncommented, here's the file that's generated during the test run:
These images look pretty similar, but they do fail the comparison test. When evaluating the `compare_images()` function, it's nice to be able to quickly make a test run that should fail, even though images are successfully generated and look quite similar to what's expected.
Agree, I don't really see the advantage of creating a venv from "inside" the tests - when developing, you are (hopefully) already in a venv, most likely an editable install... maybe I am missing something?
Hi @paketb0te, I may well be unnecessarily carrying over a mindset from a different project to this one, and making the setup more involved than it needs to be. Can you describe your setup for developing this project? I would guess it's something like this:
Is that accurate? Are you then proposing we'd run tests in the git-sim-work-dir/ directory? Or at least use that environment to run tests?
To be fair it's been a while since I last touched `git-sim` :smile:
My usual flow (not only for `git-sim`) is:
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
(for projects that use `poetry`, these three steps are instead a single `poetry install`)
-> I use the same virtual environment for developing and testing.
This has been working pretty well for me, but I am always happy to learn about new / improved / better approaches :smiley:
The git repo on which the tests are run can live wherever; I guess pytest's tmpfile stuff is great for that. As long as the virtual environment where git-sim is installed is active, it does not matter where the test repo is.
That's great! This is really helpful. I got in the habit long ago of making my venv in the project where I'm playing around, not in the library itself. I think that's because I did a lot of work in venvs before I started working directly on libraries. I'm also influenced by a project that does require a completely separate environment for testing.
I'll be back with something more efficient. :)
@ehmatthes Sweet! The tests pass now (although I did initially get errors until I manually installed pillow and numpy into the venv).
It turns out that generating jpg or png output can vary slightly across systems.
Wow... I had also assumed until now that the generated image content would match across environments... Good to know that's not the case (that knowledge may save headaches down the line), and nice alternative solution with NumPy, even if it may require a bit of tweaking of the `ratio_diff` over time.
The sample repo is 39kb. I think the biggest reason to include it for now is that we need stable hashes in order to test output.
I may be nitpicking on this a bit, but it's not so much the size of the sample repo, but the idea of its inclusion (including object database and other git config files like sample hook files) inside the git-sim source code. It just feels a bit sloppy and like something that should be generated if we can. As for getting consistent hashes, I had this suggestion above:
Alternatively, we might be able to update git-dummy to use hardcoded, expected timestamps via environment variables GIT_AUTHOR_DATE and GIT_COMMITTER_DATE. These could be set and then unset during each run of Git-Dummy, so that we always get the expected SHA1s on our commits...
I'll be back with something more efficient. :)
I was just writing up a response about how your thorough points about the venv setup make a lot of sense, but now I'm curious what the more efficient option will be!
@paketb0te, thank you so much for jumping in. I had never seen `pip install -e .` before.
The current version of the tests run in 6s on my Studio, and 24s on my intel Mac. They take closer to a minute when first cloning and getting all set up, but after that they run consistently at those two speeds. Most of that time seems to be just the time that git-sim typically takes to run when generating an image.
@initialcommit-io I think you can pull `start_testing` again and you should see that same speedup.
I was wary of using Pillow and NumPy, but they were already installed in the environment where I cloned git-sim. I just cloned the project again to verify that. (Clone your git-sim, make a new venv, run `pip install -e .`, and NumPy and Pillow are both present.)
I may be nitpicking on this a bit, but it's not so much the size of the sample repo, but the idea of its inclusion (including object database and other git config files like sample hook files) inside the git-sim source code. It just feels a bit sloppy and like something that should be generated if we can. As for getting consistent hashes, I had this suggestion above:
I can understand your hesitation to have a sample repo within the project. I would argue that it's not unusual at all to have a number of static resources in a tests/ directory. If it were straightforward to generate a sample repo for each test run I'd be all for that. But overriding something as fundamental as git hashing seems like a lot more work than simply having a small sample repository in the test resources.
That said, I did rename the .git dir to .git_not_a_submodule, otherwise Git thinks the overall project contains a submodule. Are there likely to be other issues as that sample repo grows more complex?
I think you can pull start_testing again and you should see that same speedup.
After a clean setup, I got FileNotFoundErrors (tried with and without venv, but maybe doing something silly):
FileNotFoundError: [Errno 2] No such file or directory: '/Users/jstopak/Desktop/litterally-my-entire-desktop/testrepos/git-sim/.venv/bin/git-sim'
/usr/local/Cellar/python@3.11/3.11.1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/subprocess.py:1901: FileNotFoundError
================================================================== short test summary info ==================================================================
FAILED tests/e2e_tests/test_core_commands.py::test_log - FileNotFoundError: [Errno 2] No such file or directory: '/Users/jstopak/Desktop/litterally-my-entire-desktop/testrepos/git-sim/.venv/bin/git-sim'
FAILED tests/e2e_tests/test_core_commands.py::test_status - FileNotFoundError: [Errno 2] No such file or directory: '/Users/jstopak/Desktop/litterally-my-entire-desktop/testrepos/git-sim/.venv/bin/git-sim'
FAILED tests/e2e_tests/test_core_commands.py::test_merge - FileNotFoundError: [Errno 2] No such file or directory: '/Users/jstopak/Desktop/litterally-my-entire-desktop/testrepos/git-sim/.venv/bin/git-sim'
===================================================================== 3 failed in 6.04s =====================================================================
Clone your git-sim, make new venv, run pip install -e ., NumPy and Pillow are both present.
Ahh got it!
But overriding something as fundamental as git hashing seems like a lot more work than simply having a small sample repository in the test resources.
I see it less as overriding the hashing and more as customizing a timestamp, but yes, let me test that out in git-dummy and see if it's actually as simple as I claim...
Here's the block that determines what command to use for running git-sim:
def test_log(tmp_repo):
    """Test a simple `git-sim log` command."""
    git_sim_path = Path(os.environ.get('VIRTUAL_ENV')) / 'bin/git-sim'
    git_sim_cmd = f"{git_sim_path} -d --output-only-path --img-format=png log"
    cmd_parts = split(git_sim_cmd)
    ...
So it's trying to run the command:
/Users/jstopak/Desktop/litterally-my-entire-desktop/testrepos/git-sim/.venv/bin/git-sim -d --output-only-path --img-format=png log
When I ran `pip install -e .` in the git-sim directory, I got a git-sim command in bin/. Where is your git-sim command?
I see it less as overriding the hashing and more as customizing a timestamp, but yes, let me test that out in git-dummy and see if it's actually as simple as I claim...
I'm not one to talk about what's simple and what's complex today.
Oops I knew I was doing something silly... I had git-sim installed in every location except the venv in the clone of your fork that I was working in...
Nice, it works now in 45s on the first run and 18s after that (on my Intel MacBook Pro).
Speaking of the time difference between the first and subsequent runs - this was something I had noticed and has also been reported various times by users. My only thought atm is it may be due to not having the pyc files on the first run, which are then generated and available in future runs? Manim does invoke some programs like ffmpeg for rendering, but I would think calls to that would be constant time. Any other thoughts on that?
I have seen that kind of behavior with other libraries as well. Matplotlib definitely comes to mind. I'm not sure there's anything to do about it, though.
If things are heading in the right direction, I can do some cleanup and submit a PR. I'll see that it runs on Windows as well.
Sounds perfect!
Plz submit PR to dev branch
@ehmatthes I am happy to see you have taken over this issue. @initialcommit-io I did not get much time to work on this beyond the time I spent initially researching Manim graphical unit tests.
I followed Manim's own graphical unit tests, since they have a really good underlying testing framework for frame comparison, both of the final result and frame-by-frame comparison of the video.
I have updated all that I had to this branch: https://github.com/abhijithnraj/git-sim/tree/unit_test. @ehmatthes Please check it out if you think it will help. But I see you have already made significant progress.
@initialcommit-io @ehmatthes I saw some discussion above regarding maintaining the same commit hashes. I faced the same problem when I tried git-dummy. That's why I created https://github.com/abhijithnraj/git-sim/blob/unit_test/test/temp_git_creator.py to work around this blocker until such changes could be introduced to git-dummy. Feel free to use any of the code changes in the branch.
@ehmatthes Thanks for the PR and for adding that documentation!! Overall looks excellent.
I ended up having some extra time tonight, so I added a new flag to git-dummy called `--constant-sha` which keeps the commit SHAs consistent by hardcoding the commit author, email, and commit date into all commits.
I released this as git-dummy version 0.0.8. Since git-dummy is a dependency of git-sim (and is installed along with it) all users using pip to install git-sim should get 0.0.8 now.
Can you update the PR to replace the sample repo with a call (in the code) to git-dummy to autogenerate one using the `--constant-sha` flag? I think you can use the same git-dummy command you used to create the sample repo but just add that flag. For now we can just use a single repo call to cover all test cases, and later on we can add cleanup and/or additional calls to create new repo structures as needed.
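I'm imagining something along these lines in the fixture (anything other than `--constant-sha` here is a guess at the flags you used to build the original sample repo):

```python
import subprocess

import pytest


@pytest.fixture
def tmp_repo(tmp_path):
    """Generate a throwaway repo with stable SHAs instead of shipping one."""
    subprocess.run(
        ["git-dummy", "--constant-sha"],  # plus whatever flags built the original sample repo
        cwd=tmp_path,
        check=True,
    )
    # git-dummy creates the repo as a subdirectory of the working dir (name assumed here).
    return tmp_path / "dummy"
```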
I know this involves some code changes, documentation changes that reference the sample repo, and the reference images will need to be redone with the new constant-sha repo... Sorry for the late change-up, I didn't think I'd have time to get git-dummy updated this soon.
Can you update the PR to replace the sample repo with a call (in the code) to git-dummy to autogenerate one using the --constant-sha flag? I think you can use the same git-dummy command you used to create the sample repo but just add that flag. For now we can just use a single repo call to cover all test cases, and later on we can add cleanup and/or additional calls to create new repo structures as needed.
No problem! "92 files changed" was a nice motivation to land that flag, right? :)
Hi, I have been following this project for some time since its release. Great work!
But with support for several Git features coming in, are there any plans or ideas for testing these features? I would like to work on this.
For Manim-based tests, we can look at how ManimCE tests its own code: https://docs.manim.community/en/stable/contributing/testing.html and https://github.com/ManimCommunity/manim/tree/main/tests
Hi @initialcommit-io, is there any test design you have in mind?
Some of my ideas:
Once done we can add an actions pipeline to ensure continuous integration.
I am working on this, let me know if there are any existing plans on this.
Once again, thank you for this wonderful project.