TabbycatDebate / tabbycat

Debating tournament tabulation software for British Parliamentary and a variety of two-team parliamentary formats
https://tabbycat.readthedocs.io/
GNU Affero General Public License v3.0

Add default line for Equity in tournament staff #1827

Closed tienne-B closed 2 years ago

tienne-B commented 3 years ago

Knowing who is on the equity team and how to contact them is important, but a line listing them is not in the default template, so it is less frequently done.

For this task, you'll need to add a line that asks to list Equity here:

https://github.com/TabbycatDebate/tabbycat/blob/3a9ca120dd30bf32a1e252a0b9f7add81694d970/tabbycat/tournaments/forms.py#L98-L100
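
For context, the linked lines build the default placeholder text for the tournament staff page out of example lines. Below is a minimal, hedged sketch of the intended change: the strings and markup here are illustrative, not copied from the repository, and the no-op `_` stub stands in for Django's `gettext` so the snippet runs standalone.

```python
# Illustrative sketch only -- the real default text lives in
# tabbycat/tournaments/forms.py and wraps its strings in Django's
# translation function. The exact wording below is invented.
_ = lambda s: s  # stand-in for django.utils.translation.gettext

default_staff_lines = [
    _("<strong>Tabulation:</strong> [list tabulation staff here]"),
    _("<strong>Organisation:</strong> [list organising committee here]"),
    _("<strong>Equity:</strong> [list equity team members here]"),  # the new line this issue asks for
]

default_staff_text = "<p>" + "</p>\n<p>".join(default_staff_lines) + "</p>"
print(default_staff_text)
```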

hecker900 commented 3 years ago

I'll do it too

rhe4n commented 2 years ago

Hi, I can get this done.

tienne-B commented 2 years ago

Great! Don't hesitate to ask if you need anything!

rhe4n commented 2 years ago

Alright, the line is added and I've manually checked that it shows up where it should. I'm having a bit of a problem passing the Django tests, though: apparently there are other sessions using the test database. Should I keep trying to solve this, or is the test suite broken?

czlee commented 2 years ago

That seems like an environment setup issue. If you can share the error message with us, hopefully we can figure it out?

philipbelesky commented 2 years ago

@rhe4n I think I remember hitting that error when I was running the Django server at the same time as the tests.

rhe4n commented 2 years ago

> running the Django server at the same time

That was my first conclusion too, but even after rebooting and restarting the postgres service I'm still getting it.

> if you can share the error message with us

Of course. I'm not sure how you usually share logs this large, so in the meantime here's the whole `dj test` output:

```
(venv) rhean@rhean-VirtualBox:~/tabbycat$ dj test
Creating test database for alias 'default'...
System check identified no issues (17 silenced).
[2022-04-15 10:47:09,265] INFO breakqual.utils: Liveness in demo R1/4 with break size 8, 24 teams: safe at 3, dead at -3
[2022-04-15 10:47:09,270] INFO breakqual.utils: Liveness in demo R1/4 with break size 4, 24 teams: safe at 4, dead at -5
[2022-04-15 10:47:09,281] INFO breakqual.utils: Liveness in demo R1/4 with break size 2, 24 teams: safe at 4, dead at -5
u.[2022-04-15 10:47:11,393] INFO breakqual.utils: Liveness in demo R1/4 with break size 8, 24 teams: safe at 3, dead at -3
[2022-04-15 10:47:11,397] INFO breakqual.utils: Liveness in demo R1/4 with break size 4, 24 teams: safe at 4, dead at -5
[2022-04-15 10:47:11,402] INFO breakqual.utils: Liveness in demo R1/4 with break size 2, 24 teams: safe at 4, dead at -5
u....................................................................................[2022-04-15 10:47:40,532] INFO breakqual.utils: Liveness in demo R1/4 with break size 8, 24 teams: safe at 3, dead at -3
[2022-04-15 10:47:40,537] INFO breakqual.utils: Liveness in demo R1/4 with break size 4, 24 teams: safe at 4, dead at -5
[2022-04-15 10:47:40,541] INFO breakqual.utils: Liveness in demo R1/4 with break size 2, 24 teams: safe at 4, dead at -5
u.....................................................................................................................xx......................................................................[2022-04-15 10:48:47,162] INFO breakqual.utils: Liveness in demo R4/4 with break size 8, 24 teams: safe at 3, dead at 0
[2022-04-15 10:48:47,168] INFO breakqual.utils: Liveness in demo R4/4 with break size 4, 24 teams: safe at 4, dead at 0
[2022-04-15 10:48:47,173] INFO breakqual.utils: Liveness in demo R4/4 with break size 2, 24 teams: safe at 3, dead at 0
u..[2022-04-15 10:49:04,650] INFO standings.metrics: Annotation in AverageReplyScoreMetricAnnotator: Avg(F(speakerscore__score), filter=(AND: ('speakerscore__ballot_submission__confirmed', True), ('speakerscore__debate_team__debate__round__seq__lte', 4), ('speakerscore__debate_team__debate__round__stage', 'P'), ('speakerscore__ghost', False), ('speakerscore__position', 4)))
[2022-04-15 10:49:04,651] INFO standings.metrics: Annotation in StandardDeviationReplyScoreMetricAnnotator: StdDev(F(speakerscore__score), filter=(AND: ('speakerscore__ballot_submission__confirmed', True), ('speakerscore__debate_team__debate__round__seq__lte', 4), ('speakerscore__debate_team__debate__round__stage', 'P'), ('speakerscore__ghost', False), ('speakerscore__position', 4)), sample=False)
[2022-04-15 10:49:04,652] INFO standings.metrics: Annotation in NumberOfRepliesMetricAnnotator: Count(F(speakerscore__score), filter=(AND: ('speakerscore__ballot_submission__confirmed', True), ('speakerscore__debate_team__debate__round__seq__lte', 4), ('speakerscore__debate_team__debate__round__stage', 'P'), ('speakerscore__ghost', False), ('speakerscore__position', 4)))
/home/rhean/tabbycat/venv/lib/python3.8/site-packages/selenium/webdriver/remote/webelement.py:359: UserWarning: find_elements_by_tag_name is deprecated. Please use find_elements(by=By.TAG_NAME, value=name) instead
  warnings.warn("find_elements_by_tag_name is deprecated. Please use find_elements(by=By.TAG_NAME, value=name) instead")
.[2022-04-15 10:49:15,202] INFO standings.metrics: Annotation in AverageSpeakerScoreMetricAnnotator: Avg(F(speakerscore__score), filter=(AND: ('speakerscore__ballot_submission__confirmed', True), ('speakerscore__debate_team__debate__round__seq__lte', 4), ('speakerscore__debate_team__debate__round__stage', 'P'), ('speakerscore__ghost', False), ('speakerscore__position__lte', 3)))
[2022-04-15 10:49:15,204] INFO standings.metrics: Annotation in StandardDeviationSpeakerScoreMetricAnnotator: StdDev(F(speakerscore__score), filter=(AND: ('speakerscore__ballot_submission__confirmed', True), ('speakerscore__debate_team__debate__round__seq__lte', 4), ('speakerscore__debate_team__debate__round__stage', 'P'), ('speakerscore__ghost', False), ('speakerscore__position__lte', 3)), sample=False)
[2022-04-15 10:49:15,205] INFO standings.metrics: Annotation in NumberOfSpeechesMetricAnnotator: Count(F(speakerscore__score), filter=(AND: ('speakerscore__ballot_submission__confirmed', True), ('speakerscore__debate_team__debate__round__seq__lte', 4), ('speakerscore__debate_team__debate__round__stage', 'P'), ('speakerscore__ghost', False), ('speakerscore__position__lte', 3)))
...........................................................
----------------------------------------------------------------------
Ran 340 tests in 182.355s

FAILED (expected failures=2, unexpected successes=4)
Destroying test database for alias 'default'...
/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/db/backends/postgresql/base.py:304: RuntimeWarning: Normally Django will use a connection to the 'postgres' database to avoid running initialization queries against the production database when it's not needed (for example, when running tests). Django was unable to create a connection to the 'postgres' database and will use the first PostgreSQL database instead.
  warnings.warn(
Traceback (most recent call last):
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 82, in _execute
    return self.cursor.execute(sql)
psycopg2.errors.ObjectInUse: database "test_tabbycatdb" is being accessed by other users
DETAIL: There are 3 other sessions using the database.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/db/backends/postgresql/base.py", line 302, in _nodb_cursor
    yield cursor
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/db/backends/base/creation.py", line 298, in _destroy_test_db
    cursor.execute("DROP DATABASE %s"
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 66, in execute
    return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
    return executor(sql, params, many, context)
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 82, in _execute
    return self.cursor.execute(sql)
django.db.utils.OperationalError: database "test_tabbycatdb" is being accessed by other users
DETAIL: There are 3 other sessions using the database.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/rhean/tabbycat/manage.py", line 18, in <module>
    execute_from_command_line(sys.argv)
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
    utility.execute()
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 413, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/core/management/commands/test.py", line 23, in run_from_argv
    super().run_from_argv(argv)
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/core/management/base.py", line 354, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/core/management/base.py", line 398, in execute
    output = self.handle(*args, **options)
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/core/management/commands/test.py", line 55, in handle
    failures = test_runner.run_tests(test_labels)
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/test/runner.py", line 736, in run_tests
    self.teardown_databases(old_config)
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/test/runner.py", line 674, in teardown_databases
    _teardown_databases(
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/test/utils.py", line 313, in teardown_databases
    connection.creation.destroy_test_db(old_name, verbosity, keepdb)
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/db/backends/base/creation.py", line 282, in destroy_test_db
    self._destroy_test_db(test_database_name, verbosity)
  File "/home/rhean/tabbycat/venv/lib/python3.8/site-packages/django/db/backends/base/creation.py", line 298, in _destroy_test_db
    cursor.execute("DROP DATABASE %s"
  File "/usr/lib/python3.8/contextlib.py", line 162, in __exit__
    raise RuntimeError("generator didn't stop after throw()")
RuntimeError: generator didn't stop after throw()
```

As you can see, the tests themselves fail without raising an exception, but when the suite tries to drop the test database it crashes, because the database is still being accessed. I have no clue what else might be using it (possibly a stray thread from the tests themselves?).

Again, if this is not already a known problem, I can probably figure it out on my own. The VM I'm using for this is a bit old; that might be the culprit.
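
For anyone hitting the same `ObjectInUse` error later: the "other sessions" can usually be identified, and if necessary terminated, directly in Postgres, after which the `DROP DATABASE` teardown succeeds. A hedged sketch, run in `psql` as a superuser (the database name is taken from the log above):

```sql
-- List the sessions holding the test database open.
SELECT pid, usename, application_name, state
FROM pg_stat_activity
WHERE datname = 'test_tabbycatdb';

-- Terminate every such session except our own, so the test runner
-- (or a manual DROP DATABASE) can remove the database.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'test_tabbycatdb'
  AND pid <> pg_backend_pid();
```

Seeing what `application_name` the lingering sessions report is often enough to tell whether they belong to a runserver process, a stray worker, or a forgotten `psql` shell.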

czlee commented 2 years ago

Ah, yeah, that one's new to me, sorry.

For what it's worth, you'd be welcome to just submit a PR to see how the tests go; we have Django CI set up to run the test suite on all PRs. It'd be very surprising if something like this broke the regression tests, since I don't even think we have a test case for the tournament staff template.

rhe4n commented 2 years ago

Yeah, it makes no sense to me that such a small change would make the tests fail; I literally only changed one string. I'll open the PR and we'll see what happens. On another note, how should I make the request? Should I finish my local feature and then PR develop -> develop, or feature -> develop?

czlee commented 2 years ago

I just had a quick peek at your repo—if it's on your feature/equity-line branch, then that seems like a great place to create the PR from.

It's your fork, so whether you merge it into your develop branch is up to you—it won't affect this main repo. But I'd recommend against making the PR from your develop branch, because whenever you push to the branch used to create the PR, the PR will automatically update (to track the branch). So it's better always to make PRs from feature branches.
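
The feature-branch flow described above can be sketched as follows. The branch name is illustrative, and a throwaway repository stands in for a fork of tabbycat so the commands run standalone:

```shell
set -e
# Throwaway repository standing in for a fork; in practice you would
# start from an existing clone of your fork of tabbycat.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "baseline"
git checkout -q -b develop              # your fork's develop branch
git checkout -q -b feature/equity-line  # do the work on a feature branch
# ...commit changes here, then push the branch and open the PR from
# feature/equity-line into the upstream develop branch.
git branch --show-current               # prints feature/equity-line
```

Because the PR tracks whichever branch it was opened from, keeping the work on `feature/equity-line` leaves `develop` free for unrelated pushes without disturbing the open PR.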

If I can be a little pedantic about how the commits are phrased:

Thanks for your work on this!

tienne-B commented 2 years ago

Fixed in 7fa47eb7e2ff996b01171c05e1aad64d1719033d.