I would be fine with adding a test, but I had trouble figuring out how testing works in FARM.
Running python setup.py test
shows a deprecation warning and then fails with the following exception:
convert_tf_checkpoint_to_pytorch (unittest.loader._FailedTest) ... ERROR
======================================================================
ERROR: convert_tf_checkpoint_to_pytorch (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: convert_tf_checkpoint_to_pytorch
Traceback (most recent call last):
File "/home/johann/.conda/envs/farm-dev/lib/python3.8/unittest/loader.py", line 154, in loadTestsFromName
module = __import__(module_name)
File "/home/johann/work-git/FARM-forked/farm/conversion/convert_tf_checkpoint_to_pytorch.py", line 24, in <module>
from transformers.modeling_bert import (
ModuleNotFoundError: No module named 'transformers.modeling_bert'
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
Test failed: <unittest.runner.TextTestResult run=1 errors=1 failures=0>
error: Test failed: <unittest.runner.TextTestResult run=1 errors=1 failures=0>
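For reference, it looks like the flat transformers.modeling_bert module was moved under a per-model subpackage in transformers 4.x, so the old import in the conversion script can no longer resolve. A minimal sketch of what I assume the equivalent imports would be (not verified against the script):

# transformers < 4.0 exposed flat per-model modules (used by the failing script):
#   from transformers.modeling_bert import BertForPreTraining, load_tf_weights_in_bert
# transformers >= 4.0 nests them under transformers.models.<model>:
from transformers.models.bert.modeling_bert import BertForPreTraining, load_tf_weights_in_bert
# most classes are also re-exported at the package top level:
from transformers import BertConfig, BertForPreTraining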
Is there anything that needs to be done in addition to the normal installation procedure to get set up for development, and what is the proper way to run the unit tests?
Hey @johann-petrak,
You can run the tests via cd test && pytest
You can also deselect the conversion tests, which take quite some time to run, via cd test && pytest -m "not conversion"
(see our CI https://github.com/deepset-ai/FARM/blob/751b48f0310e11c0e4cbdb95f264e47a587d86e1/.github/workflows/ci.yml#L34 )
Let me know if you face any problems with this.
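In case it helps, the -m "not conversion" filter relies on a pytest marker on the conversion tests. A minimal sketch of how such a marker is typically registered and applied (names here are illustrative, not necessarily the exact ones in our test suite):

# pytest.ini (or setup.cfg) registers the custom marker so pytest doesn't warn:
# [pytest]
# markers =
#     conversion: tests that convert model checkpoints (slow)

import pytest

@pytest.mark.conversion   # tests tagged like this are skipped by: pytest -m "not conversion"
def test_convert_some_checkpoint():
    ...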
Thanks - I still get what looks like a dependency problem:
$ cd test && pytest -m "not conversion"
05/21/2021 10:22:12 - INFO - transformers.file_utils - PyTorch version 1.3.1 available.
05/21/2021 10:22:12 - INFO - transformers.modeling_xlnet - Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex .
ImportError while loading conftest '/home/johann/work-git/FARM-forked/test/conftest.py'.
conftest.py:5: in <module>
from farm.data_handler.data_silo import DataSilo
../../../nlp/GIT/FARM-mine/farm/data_handler/data_silo.py:20: in <module>
from farm.data_handler.processor import Processor
../../../nlp/GIT/FARM-mine/farm/data_handler/processor.py:15: in <module>
from farm.data_handler.input_features import (
../../../nlp/GIT/FARM-mine/farm/data_handler/input_features.py:17: in <module>
from farm.modeling.tokenization import insert_at_special_tokens_pos
../../../nlp/GIT/FARM-mine/farm/modeling/tokenization.py:25: in <module>
from transformers.tokenization_albert import AlbertTokenizer
E ModuleNotFoundError: No module named 'transformers.tokenization_albert'
I do have transformers version 4.5.0 installed (which matches requirements.txt), but the module transformers.tokenization_albert does not exist.
A quick look at the current transformers source code suggests there is a class with that name in
transformers.models.albert.tokenization_albert
It seems that your tests are using another (outdated) FARM installation. The import flagged in your logs (farm/modeling/tokenization.py:25) shouldn't need to point at transformers.models.albert.tokenization_albert but should rather look like this:
https://github.com/deepset-ai/FARM/blob/master/farm/modeling/tokenization.py#L26
If you run this from an IDE, you probably need to make sure that the tests are using your current project code. If you run this from the terminal, pip install -e .
in FARM's root folder should do the trick.
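For comparison, a minimal sketch of imports that resolve under transformers 4.x (the exact form used on FARM master is in the linked line and may differ from this):

# old flat layout, fails on transformers 4.x:
#   from transformers.tokenization_albert import AlbertTokenizer
# transformers 4.x: either the per-model subpackage ...
from transformers.models.albert.tokenization_albert import AlbertTokenizer
# ... or the top-level re-export:
from transformers import AlbertTokenizer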
Thanks, your observation helped point me in the right direction. This is something I would consider a pytest bug: if pytest is installed in a base or system-wide environment, it will not respect the PYTHONPATH of the current environment, even if pytest is also installed in the current environment.
After uninstalling pytest from the base and all other non-env locations, it worked.
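In case someone else runs into this, a minimal diagnostic sketch to check which interpreter and which pytest installation actually get picked up (invoking pytest as python -m pytest also forces the current environment's installation):

import sys
import pytest

print(sys.executable)   # the interpreter actually running the tests
print(pytest.__file__)  # where the imported pytest package lives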
Of the 92 collected tests, 11 were deselected and 81 selected; 80 passed and 1 failed:
FAILED test_ner_amp.py::test_ner_amp - assert 0.12325342 > 0.12
Since I did not find an existing unit test module for the metric module, I have created an initial one that tests the list support. It could cover many more of the functions in evaluation.metric, but since unit testing is an opinionated topic I just added that initial code for now.
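Roughly along these lines (an illustrative sketch only; the assumed compute_metrics signature and return format may differ from the actual code in the PR):

# test_metrics.py - illustrative sketch; assumes compute_metrics(metric, preds, labels)
# accepts either a single metric name or a list of names and returns a dict.
from farm.evaluation.metrics import compute_metrics  # assumed module path

def test_compute_metrics_accepts_list():
    preds = [0, 1, 1, 0]
    labels = [0, 1, 0, 0]
    single = compute_metrics("acc", preds, labels)
    combined = compute_metrics(["acc", "f1_macro"], preds, labels)
    # the combined result should contain the same accuracy as the single-metric call
    assert combined["acc"] == single["acc"]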
Ok, that's really nasty and seems like a pytest bug. Good to know.
Regarding the failing test: it seems to pass in the CI, so it's probably just a minor fluctuation in the precision of your local run.
From my side this seems ready to be merged. Or do you want to add anything else here?
Fine from my side!
I came across this failing test too when I upgraded PyTorch to 1.8.1 in this PR. We will relax the test, see https://github.com/deepset-ai/FARM/blob/24a5b05b9ade56c645c29702df5cddfaa9185df8/test/test_ner_amp.py#L97
Also, please add a docstring for the compute_metrics function.
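Something along these lines would do, purely as a sketch (parameter names and types here are assumptions, not taken from the actual function):

def compute_metrics(metric, preds, labels):
    """
    Compute one or several evaluation metrics for a set of predictions.

    :param metric: name of a registered metric (e.g. "acc") or a list of such names.
    :param preds: predicted labels/values, one entry per sample.
    :param labels: gold labels/values, aligned with preds.
    :return: dict mapping each metric name to its computed value.
    """
    ...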