Closed eelucio closed 1 month ago
@eelucio I think test_plugin_status
got PASSED in the action:
https://github.com/FREVA-CLINT/freva/actions/runs/9098881830/job/25010139164#step:7:100
Let me check it this afternoon on my machine, but it seems to be a matter of different configurations.
@MoSHad91 thanks,
I have run the same whole test suite several times, and more often than not it passes, but sometimes it does not, without me modifying (yet) any test.
I am running with Python 3.12.3 (the version I got after the latest freva-dev conda installation). I see that the CI test was run with Python 3.11, but I doubt that is the issue.
For example, just now I ran make test
and make test_coverage
and got no issue.
It must be some timeout or the time freva waits for a response; maybe sometimes my laptop is too slow and the job does not enter status=running directly?
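If the failure really is a scheduled → running race, a bounded poll in the test would tolerate it. A minimal sketch (`wait_for_status` is an assumption for illustration, not freva API; `result` stands for the object returned by freva.run_plugin):

```python
import time

def wait_for_status(result, expected, timeout=30.0, interval=0.5):
    """Poll result.status until it equals `expected` or the timeout expires.

    Sketch only: assumes `result` exposes a .status attribute like the
    object returned by freva.run_plugin; this helper is not freva API.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if result.status == expected:
            return True
        time.sleep(interval)
    return result.status == expected
```

A test asserting `wait_for_status(res, "running")` instead of checking `res.status == "running"` once would pass on both fast and slow machines.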
I doubt that your laptop is too slow. Can you post the plugin output file for a test that fails at this position?
I do not know if this answers the question, but I believe the log of the failed job must be:
DummyPlugin-6004.out -----
[09:33:14] INFO freva - INFO - Running dummyplugin as _plugin.py:504
scheduled in history with ID 2320
ERROR freva - ERROR - This is not a scheduled job utils.py:99
(status 3)! - increase verbosity flags (-v) for
more information
╭───── Traceback (most recent call last) ──────╮
│ /home/etor/work/freva/github/freva/src/evalu │
│ ation_system/api/plugin_manager.py:1253 in │
│ load_scheduled_conf │
│ │
│ 1250 │ row = h[0] │
│ 1251 │ # scheduled jobs only │
│ 1252 │ if row.status != History.processS │
│ ❱ 1253 │ │ raise Exception("This is not │
│ 1254 │ return row.config_dict() │
│ 1255 │
│ 1256 │
╰──────────────────────────────────────────────╯
Exception: This is not a scheduled job (status
3)!
where
STATUS_CHOICES = (
(processStatus.finished, "finished"),
(processStatus.finished_no_output, "finished (no output)"),
(processStatus.broken, "broken"),
(processStatus.running, "running"),
(processStatus.scheduled, "scheduled"),
(processStatus.not_scheduled, "not scheduled"),
)
It seems that it was expecting a running status but got a scheduled one. I do not know which output rowid it corresponds to, since every time I run a test the DB is wiped and rewritten.
On another note: the job_ids are randomly generated, right? Because my latest job_id is 2,469, yet I am wiping the DB every time we run these tests, and in /tmp/evaluation_system_test/etor/evaluation_system
I can see earlier job_ids that are even higher than that.
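(A possible explanation, judging from the `core.py:334` line quoted in the pytest warnings later in this thread: the local workload manager builds job ids from a microsecond wall-clock timestamp, so they grow over time regardless of the DB being wiped. A sketch of that scheme, with `utcnow()` swapped for the timezone-aware call the warning itself recommends:)

```python
from datetime import datetime, timezone

def make_job_id():
    """Microsecond-timestamp job id, mirroring the scheme visible in the
    core.py deprecation warning quoted further down in this thread."""
    return int(datetime.now(timezone.utc).timestamp() * 10**6)
```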
It seems that that job is already taken or never gets scheduled. Can you reproduce this error outside of your pytest environment?
I tried to run
e.g.
import os
import freva
os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true"
res_batchmode = freva.run_plugin("dummyplugin", the_number=2, other=-5, batchmode=True)
Scheduled job with history id: 2111
You can view the job's status with the command squeue
Your job's progress will be shown with the command
tail -f /tmp/share/slurm/dummyplugin/DummyPlugin-2226.out
where
$ tail -f /tmp/share/slurm/dummyplugin/DummyPlugin-2226.out
/tmp/tmp4_ijk8dp.sh: line 13: freva-plugin: command not found
and
$ cat /tmp/tmp4_ijk8dp.sh
#!/usr/bin/env bash
cleanup() {
echo 'Caught SIGINT, cleaning up...'
kill -QUIT $PLUGIN_ID
exit 1
}
trap 'cleanup' QUIT
PID=$(pgrep -f $0)
export EVALUATION_SYSTEM_CONFIG_FILE=/home/etor/work/freva/github/freva/compose/local-eval-system.conf
freva-plugin dummyplugin --unique-output True --scheduled-id 2111 &> /tmp/share/slurm/dummyplugin/DummyPlugin-$PID.out &
PLUGIN_ID=$!
wait $PLUGIN_ID
I used to have the whole
├── activate_csh
├── activate_fish
├── activate_sh
├── completions
│ ├── complete_csh
│ ├── complete_fish
│ └── complete_sh
└── loadfreva.modules
in freva/compose/
but those were outdated, pointing to an old freva.
How can I recreate them?
I am reading the setup.py and I do not see how. (I can always replace the old conda env path with the new one if necessary.)
I just did the same on my machine with a new env, a new docker running, and the latest freva repo, but got finished out of this after 5 secs:
In [1]: import os
...: import time
...: import freva
...: os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true"
...: res_batchmode = freva.run_plugin("dummyplugin", the_number=2, other=-5, batchmode=True)
...: while res_batchmode.status != "finished":
...: print(f"Current batchmode status: {res_batchmode.status}")
...: time.sleep(1)
...: print(f"Final batchmode status: {res_batchmode.status}")
Scheduled job with history id: 2670
You can view the job's status with the command squeue
Your job's progress will be shown with the command
tail -f /tmp/share/slurm/dummyplugin/DummyPlugin-71367.out
Current batchmode status: scheduled
Current batchmode status: running
Current batchmode status: running
Current batchmode status: running
Current batchmode status: running
Current batchmode status: running
Final batchmode status: finished
it's kinda strange
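One thing worth noting about the loop above: if the job ever lands in broken, `status` never becomes "finished" and the loop spins forever. A bounded variant (a sketch; the terminal state names are taken from the STATUS_CHOICES shown earlier, the helper itself is not freva API):

```python
import time

def poll_until_done(result, timeout=60.0, interval=1.0,
                    terminal=("finished", "finished (no output)", "broken")):
    """Wait until result.status reaches a terminal state, or raise on timeout.

    Sketch only: `result` stands for the object returned by
    freva.run_plugin, assumed to expose a .status attribute.
    """
    deadline = time.monotonic() + timeout
    while result.status not in terminal:
        if time.monotonic() > deadline:
            raise TimeoutError(f"job still in state {result.status!r}")
        time.sleep(interval)
    return result.status
```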
It seems you didn't set up your environment? Is this a jupyter notebook by any chance?
@MoSHad91 did you try the pytest, to see if you can reproduce this?
@antarcticrainforest of course, shit, I forgot that the python kernel and the shell env are independent in jupyter.
I just ran it in ipython and it goes well,
which makes sense, as most of the time the tests work...
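A quick sanity check one could run inside the notebook kernel to catch this earlier (an assumption for illustration, not freva tooling): confirm that the kernel's own bin directory is on PATH and that `freva-plugin` resolves at all:

```python
import os
import shutil
import sys

def check_kernel_env():
    """Return (env_bin_on_PATH, path_to_freva_plugin_or_None).

    Hypothetical helper: the directory containing sys.executable should
    be on PATH if the conda env was activated for this kernel, and
    shutil.which() shows whether the shell would find freva-plugin.
    """
    exe_dir = os.path.dirname(sys.executable)
    on_path = exe_dir in os.environ.get("PATH", "").split(os.pathsep)
    return on_path, shutil.which("freva-plugin")
```

If the second value is None inside the notebook, the generated batch script's `freva-plugin: command not found` is expected.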
@MoSHad91 did you try the pytest, to see if you can reproduce this?
Yep sir, I got PASSED
in all tests and funcs:
========================================== test session starts ==========================================
platform darwin -- Python 3.12.2, pytest-8.1.1, pluggy-1.4.0 -- /opt/anaconda3/envs/freva-dev/envs/freva-dev/bin/python
cachedir: .pytest_cache
metadata: {'Python': '3.12.2', 'Platform': 'macOS-14.2-arm64-arm-64bit', 'Packages': {'pytest': '8.1.1', 'pluggy': '1.4.0'}, 'Plugins': {'html': '4.1.1', 'env': '1.1.3', 'metadata': '3.1.1', 'cov': '4.1.0', 'allure-pytest': '2.13.3', 'requests-mock': '1.11.0', 'nbval': '0.11.0'}}
rootdir: /Users/mo/dev/20240522/freva
configfile: pytest.ini
plugins: html-4.1.1, env-1.1.3, metadata-3.1.1, cov-4.1.0, allure-pytest-2.13.3, requests-mock-1.11.0, nbval-0.11.0
collected 166 items
src/evaluation_system/tests/A_Notebook_Smoke_Test::ipynb::Cell 0 PASSED [ 0%]
src/evaluation_system/tests/A_Notebook_Smoke_Test::ipynb::Cell 1 PASSED [ 1%]
src/evaluation_system/tests/A_Notebook_Smoke_Test::ipynb::Cell 2 PASSED [ 1%]
src/evaluation_system/tests/cli_argcomplete_test.py::test_main_complete PASSED [ 2%]
src/evaluation_system/tests/cli_argcomplete_test.py::test_subcommand_help PASSED [ 3%]
src/evaluation_system/tests/cli_argcomplete_test.py::test_databrowser PASSED [ 3%]
src/evaluation_system/tests/cli_argcomplete_test.py::test_plugin PASSED [ 4%]
src/evaluation_system/tests/cli_argcomplete_test.py::test_wrong_choice PASSED [ 4%]
src/evaluation_system/tests/cli_help_test.py::test_main_help PASSED [ 5%]
src/evaluation_system/tests/cli_help_test.py::test_subcommand_help PASSED [ 6%]
src/evaluation_system/tests/crawl_my_data_test.py::test_invalid_data_files PASSED [ 6%]
src/evaluation_system/tests/crawl_my_data_test.py::test_add_valid_data PASSED [ 7%]
src/evaluation_system/tests/crawl_my_data_test.py::test_get_time_frequency PASSED [ 7%]
src/evaluation_system/tests/crawl_my_data_test.py::test_get_file_name_from_metadata PASSED [ 8%]
src/evaluation_system/tests/crawl_my_data_test.py::test_versions PASSED [ 9%]
src/evaluation_system/tests/crawl_my_data_test.py::test_iter_data_files PASSED [ 9%]
src/evaluation_system/tests/crawl_my_data_test.py::test_link_my_data PASSED [ 10%]
src/evaluation_system/tests/crawl_my_data_test.py::test_add_my_data PASSED [ 10%]
src/evaluation_system/tests/crawl_my_data_test.py::test_add_methods PASSED [ 11%]
src/evaluation_system/tests/crawl_my_data_test.py::test_delete_my_data PASSED [ 12%]
src/evaluation_system/tests/crawl_my_data_test.py::test_index_my_data PASSED [ 12%]
src/evaluation_system/tests/crawl_my_data_test.py::test_wrong_datatype PASSED [ 13%]
src/evaluation_system/tests/crawl_my_data_test.py::test_validate_path PASSED [ 13%]
src/evaluation_system/tests/databrowser_test.py::test_index_len PASSED [ 14%]
src/evaluation_system/tests/databrowser_test.py::test_time_subsets PASSED [ 15%]
src/evaluation_system/tests/databrowser_test.py::test_freva_databrowser_method PASSED [ 15%]
src/evaluation_system/tests/databrowser_test.py::test_search_files_cmd PASSED [ 16%]
src/evaluation_system/tests/databrowser_test.py::test_search_facets PASSED [ 16%]
src/evaluation_system/tests/databrowser_test.py::test_solr_backwards PASSED [ 17%]
src/evaluation_system/tests/db_test.py::test_store_history PASSED [ 18%]
src/evaluation_system/tests/db_test.py::test_schedule_entry PASSED [ 18%]
src/evaluation_system/tests/db_test.py::test_upgrade_status PASSED [ 19%]
src/evaluation_system/tests/db_test.py::test_change_flag PASSED [ 19%]
src/evaluation_system/tests/db_test.py::test_get_history PASSED [ 20%]
src/evaluation_system/tests/db_test.py::test_add_history_tag PASSED [ 21%]
src/evaluation_system/tests/db_test.py::test_update_history_tag PASSED [ 21%]
src/evaluation_system/tests/db_test.py::test_store_results PASSED [ 22%]
src/evaluation_system/tests/db_test.py::test_version PASSED [ 22%]
src/evaluation_system/tests/db_test.py::test_create_user PASSED [ 23%]
src/evaluation_system/tests/db_test.py::test_create_user_crawl PASSED [ 24%]
src/evaluation_system/tests/db_test.py::test_timestamp_to_string PASSED [ 24%]
src/evaluation_system/tests/db_test.py::test_timestamp_from_string PASSED [ 25%]
src/evaluation_system/tests/esgf_test.py::test_show_facet PASSED [ 25%]
src/evaluation_system/tests/esgf_test.py::test_query PASSED [ 26%]
src/evaluation_system/tests/esgf_test.py::test_freva_esgf_method PASSED [ 27%]
src/evaluation_system/tests/esgf_test.py::test_find_files PASSED [ 27%]
src/evaluation_system/tests/esgf_test.py::test_download_script PASSED [ 28%]
src/evaluation_system/tests/file_test.py::test_solr_search PASSED [ 28%]
src/evaluation_system/tests/file_test.py::test_compare PASSED [ 29%]
src/evaluation_system/tests/file_test.py::test_json_path PASSED [ 30%]
src/evaluation_system/tests/file_test.py::test_find_structure_in_path PASSED [ 30%]
src/evaluation_system/tests/file_test.py::test_structure_from_path PASSED [ 31%]
src/evaluation_system/tests/file_test.py::test_from_dict PASSED [ 31%]
src/evaluation_system/tests/file_test.py::test_from_json PASSED [ 32%]
src/evaluation_system/tests/file_test.py::test_to_dataset PASSED [ 33%]
src/evaluation_system/tests/history_command_test.py::test_freva_history_method PASSED [ 33%]
src/evaluation_system/tests/history_command_test.py::test_history_cmd PASSED [ 34%]
src/evaluation_system/tests/history_models_test.py::test_history_model PASSED [ 34%]
src/evaluation_system/tests/history_models_test.py::test_result_model PASSED [ 35%]
src/evaluation_system/tests/history_models_test.py::test_similar_results PASSED [ 36%]
src/evaluation_system/tests/parameters_test.py::test_infer_type PASSED [ 36%]
src/evaluation_system/tests/parameters_test.py::test_parameter_type PASSED [ 37%]
src/evaluation_system/tests/parameters_test.py::test_parsing PASSED [ 37%]
src/evaluation_system/tests/parameters_test.py::test_parameters_dictionary PASSED [ 38%]
src/evaluation_system/tests/parameters_test.py::test_parse_arguments PASSED [ 39%]
src/evaluation_system/tests/parameters_test.py::test_complete PASSED [ 39%]
src/evaluation_system/tests/parameters_test.py::test_defaults PASSED [ 40%]
src/evaluation_system/tests/parameters_test.py::test_validate_errors PASSED [ 40%]
src/evaluation_system/tests/parameters_test.py::test_help PASSED [ 41%]
src/evaluation_system/tests/parameters_test.py::test_special_cases PASSED [ 42%]
src/evaluation_system/tests/parameters_test.py::test_parameter_options PASSED [ 42%]
src/evaluation_system/tests/plugin_command_test.py::test_cli PASSED [ 43%]
src/evaluation_system/tests/plugin_command_test.py::test_killed_jobs_set_to_broken PASSED [ 43%]
src/evaluation_system/tests/plugin_command_test.py::test_list_tools PASSED [ 44%]
src/evaluation_system/tests/plugin_command_test.py::test_run_pyclientplugin PASSED [ 45%]
src/evaluation_system/tests/plugin_command_test.py::test_plugin_status PASSED [ 45%]
src/evaluation_system/tests/plugin_command_test.py::test_plugin_output PASSED [ 46%]
src/evaluation_system/tests/plugin_command_test.py::test_empty_status PASSED [ 46%]
src/evaluation_system/tests/plugin_command_test.py::test_run_plugin PASSED [ 47%]
src/evaluation_system/tests/plugin_manager_test.py::test_missing_plugin_directory_logs_warning PASSED [ 48%]
src/evaluation_system/tests/plugin_manager_test.py::test_modules PASSED [ 48%]
src/evaluation_system/tests/plugin_manager_test.py::test_get_plugins_user PASSED [ 49%]
src/evaluation_system/tests/plugin_manager_test.py::test_plugins PASSED [ 50%]
src/evaluation_system/tests/plugin_manager_test.py::testDefaultPluginConfigStorage PASSED [ 50%]
src/evaluation_system/tests/plugin_manager_test.py::test_plugin_config_storage PASSED [ 51%]
src/evaluation_system/tests/plugin_manager_test.py::test_parse_arguments PASSED [ 51%]
src/evaluation_system/tests/plugin_manager_test.py::test_parse_arguments_with_config_file PASSED [ 52%]
src/evaluation_system/tests/plugin_manager_test.py::test_write_setup PASSED [ 53%]
src/evaluation_system/tests/plugin_manager_test.py::test_get_history PASSED [ 53%]
src/evaluation_system/tests/plugin_manager_test.py::testDynamicPluginLoading PASSED [ 54%]
src/evaluation_system/tests/plugin_manager_test.py::test_load_invalid_plugin PASSED [ 54%]
src/evaluation_system/tests/plugin_manager_test.py::test_get_plugin_dict PASSED [ 55%]
src/evaluation_system/tests/plugin_manager_test.py::test_preview_generation PASSED [ 56%]
src/evaluation_system/tests/plugin_manager_test.py::test_get_command_string PASSED [ 56%]
src/evaluation_system/tests/plugin_manager_test.py::test_load_scheduled_conf PASSED [ 57%]
src/evaluation_system/tests/plugin_manager_test.py::test_2dict_to_conf PASSED [ 57%]
src/evaluation_system/tests/plugin_manager_test.py::test_scheduletool PASSED [ 58%]
src/evaluation_system/tests/plugin_manager_test.py::test_get_config_name PASSED [ 59%]
src/evaluation_system/tests/plugin_test.py::test_incomplete_abstract PASSED [ 59%]
src/evaluation_system/tests/plugin_test.py::test_complete_abstract PASSED [ 60%]
src/evaluation_system/tests/plugin_test.py::test_setup_configuration PASSED [ 60%]
src/evaluation_system/tests/plugin_test.py::test_parse_arguments PASSED [ 61%]
src/evaluation_system/tests/plugin_test.py::test_parse_metadict PASSED [ 62%]
src/evaluation_system/tests/plugin_test.py::test_read_config_parser PASSED [ 62%]
src/evaluation_system/tests/plugin_test.py::test_save_config PASSED [ 63%]
src/evaluation_system/tests/plugin_test.py::test_suggest_batchscript_name PASSED [ 63%]
src/evaluation_system/tests/plugin_test.py::test_read_config PASSED [ 64%]
src/evaluation_system/tests/plugin_test.py::testSubstitution PASSED [ 65%]
src/evaluation_system/tests/plugin_test.py::test_help PASSED [ 65%]
src/evaluation_system/tests/plugin_test.py::test_show_config PASSED [ 66%]
src/evaluation_system/tests/plugin_test.py::test_usage PASSED [ 66%]
src/evaluation_system/tests/plugin_test.py::test_run PASSED [ 67%]
src/evaluation_system/tests/plugin_test.py::test_plugin_help PASSED [ 68%]
src/evaluation_system/tests/plugin_test.py::test_get_class_base_dir PASSED [ 68%]
src/evaluation_system/tests/plugin_test.py::test_special_variables PASSED [ 69%]
src/evaluation_system/tests/plugin_test.py::test_compose_command PASSED [ 69%]
src/evaluation_system/tests/plugin_test.py::test_append_unique_output PASSED [ 70%]
src/evaluation_system/tests/plugin_test.py::test_set_environment PASSED [ 71%]
src/evaluation_system/tests/plugin_test.py::test_run_tool PASSED [ 71%]
src/evaluation_system/tests/plugin_test.py::test_prepare_output PASSED [ 72%]
src/evaluation_system/tests/plugin_test.py::test_call PASSED [ 72%]
src/evaluation_system/tests/plugin_test.py::test_link_mydata PASSED [ 73%]
src/evaluation_system/tests/solr_core_test.py::test_ingest PASSED [ 74%]
src/evaluation_system/tests/solr_core_test.py::test_reload PASSED [ 74%]
src/evaluation_system/tests/solr_core_test.py::test_unload_and_create PASSED [ 75%]
src/evaluation_system/tests/solr_test.py::test_solr_search PASSED [ 75%]
src/evaluation_system/tests/solr_test.py::test_facet_search PASSED [ 76%]
src/evaluation_system/tests/user_config_test.py::test_load_config PASSED [ 77%]
src/evaluation_system/tests/user_test.py::test_dummy_user PASSED [ 77%]
src/evaluation_system/tests/user_test.py::test_getters PASSED [ 78%]
src/evaluation_system/tests/user_test.py::test_directory_creation PASSED [ 78%]
src/evaluation_system/tests/user_test.py::test_directory_creation2 PASSED [ 79%]
src/evaluation_system/tests/user_test.py::test_central_directory_Creation PASSED [ 80%]
src/evaluation_system/tests/user_test.py::test_config_file PASSED [ 80%]
src/evaluation_system/tests/utils_test.py::test_time_ranges PASSED [ 81%]
src/evaluation_system/tests/utils_test.py::test_struct PASSED [ 81%]
src/evaluation_system/tests/utils_test.py::test_template_dict PASSED [ 82%]
src/evaluation_system/tests/utils_test.py::test_metadict_creation PASSED [ 83%]
src/evaluation_system/tests/utils_test.py::test_metadict_copy PASSED [ 83%]
src/evaluation_system/tests/utils_test.py::test_printable_list PASSED [ 84%]
src/evaluation_system/tests/utils_test.py::test_super_make_dirs PASSED [ 84%]
src/evaluation_system/tests/utils_test.py::test_mp_wrap_fn PASSED [ 85%]
src/evaluation_system/tests/workload_manager/test_core.py::test_get_cancel_command PASSED [ 86%]
src/evaluation_system/tests/workload_manager/test_core.py::test_parse_bytes PASSED [ 86%]
src/evaluation_system/tests/workload_manager/test_core.py::test_string_to_bytes PASSED [ 87%]
src/evaluation_system/tests/workload_manager/test_core.py::test_get_format_bytes PASSED [ 87%]
src/evaluation_system/tests/workload_manager/test_local.py::test_header PASSED [ 88%]
src/evaluation_system/tests/workload_manager/test_local.py::test_job_script PASSED [ 89%]
src/evaluation_system/tests/workload_manager/test_lsf.py::test_header PASSED [ 89%]
src/evaluation_system/tests/workload_manager/test_lsf.py::test_job_script PASSED [ 90%]
src/evaluation_system/tests/workload_manager/test_lsf.py::test_informative_errors PASSED [ 90%]
src/evaluation_system/tests/workload_manager/test_lsf.py::test_lsf_unit_detection[LSF_UNIT_FOR_LIMITS=MB-mb] PASSED [ 91%]
src/evaluation_system/tests/workload_manager/test_lsf.py::test_lsf_unit_detection[LSF_UNIT_FOR_LIMITS=G # And a comment-gb] PASSED [ 92%]
src/evaluation_system/tests/workload_manager/test_lsf.py::test_lsf_unit_detection[#LSF_UNIT_FOR_LIMITS=NotDetected-kb] PASSED [ 92%]
src/evaluation_system/tests/workload_manager/test_lsf.py::test_lsf_unit_detection_without_file PASSED [ 93%]
src/evaluation_system/tests/workload_manager/test_oar.py::test_header PASSED [ 93%]
src/evaluation_system/tests/workload_manager/test_oar.py::test_job_script PASSED [ 94%]
src/evaluation_system/tests/workload_manager/test_pbs.py::test_header[PBSJob] PASSED [ 95%]
src/evaluation_system/tests/workload_manager/test_pbs.py::test_header[MoabJob] PASSED [ 95%]
src/evaluation_system/tests/workload_manager/test_pbs.py::test_job_script[PBSJob] PASSED [ 96%]
src/evaluation_system/tests/workload_manager/test_pbs.py::test_job_script[MoabJob] PASSED [ 96%]
src/evaluation_system/tests/workload_manager/test_pbs.py::test_informative_errors PASSED [ 97%]
src/evaluation_system/tests/workload_manager/test_sge.py::test_job_script PASSED [ 98%]
src/evaluation_system/tests/workload_manager/test_slurm.py::test_header PASSED [ 98%]
src/evaluation_system/tests/workload_manager/test_slurm.py::test_job_script PASSED [ 99%]
src/evaluation_system/tests/workload_manager/test_slurm.py::test_slurm_format_bytes_ceil PASSED [100%]
=========================================== warnings summary ============================================
src/evaluation_system/tests/db_test.py::test_create_user_crawl
/Users/mo/dev/20240522/freva/src/evaluation_system/model/solr_models/models.py:12: PendingDeprecationWarning: The evaluation_system.model.solr_models module will be removed from v2304.0.0
warnings.warn(
src/evaluation_system/tests/plugin_command_test.py::test_cli
src/evaluation_system/tests/plugin_command_test.py::test_plugin_status
src/evaluation_system/tests/plugin_command_test.py::test_plugin_status
src/evaluation_system/tests/plugin_manager_test.py::test_get_history
src/evaluation_system/tests/plugin_manager_test.py::test_scheduletool
src/evaluation_system/tests/workload_manager/test_local.py::test_header
src/evaluation_system/tests/workload_manager/test_local.py::test_job_script
src/evaluation_system/tests/workload_manager/test_local.py::test_job_script
/Users/mo/dev/20240522/freva/src/evaluation_system/api/workload_manager/core.py:334: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
job_id = int(datetime.utcnow().timestamp() * 10**6)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=================================== 166 passed, 9 warnings in 28.93s ====================================
When running the pytests (make test) I am encountering that the tests FAIL or PASS in a seemingly random fashion; I wonder whether this could happen when pushing the new branch to GitHub and block the CI/CD.
I reinstalled the conda-env for freva-dev to be sure that it is not related to any old dependency I had.
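If the flakiness cannot be fixed before pushing, one stop-gap for CI would be retrying failed tests. This is only a sketch and assumes installing the pytest-rerunfailures plugin, which is not among the plugins in the session output above:

```ini
; hypothetical addition to pytest.ini, requires pytest-rerunfailures
[pytest]
addopts = --reruns 2 --reruns-delay 1
```

Retries only mask a race, of course; fixing the scheduled/running timing in the test itself would be the real solution.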