Closed gsarma closed 8 years ago
@brijeshmodi12 Please post your start on this here on this issue.
I wrote the Python script shell_script_tester.py, which calls each shell script in the current folder and prints its exit code. The code can be found in my fork.
There is an issue that I am facing: each shell script returns its exit code only after the output plots are closed, and at this stage (02/17/2016) I am closing the plots manually. The open plots block the execution of the remaining files. I will fix this in my next commit so that shell_script_tester.py suppresses the GUI and continues its operations automatically.
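The approach described above can be sketched as follows. This is a minimal illustration, not the actual shell_script_tester.py: the helper names `collect_files` and `run_scripts` are hypothetical, and it assumes each script accepts a `-nogui` argument that it forwards to the underlying tool so no plot windows block execution.

```python
# Sketch: run each *.sh in the current directory with -nogui so that no
# plot windows block execution, and collect the exit codes.
import glob
import subprocess

def collect_files():
    """Return the shell scripts in the current directory, sorted by name."""
    return sorted(glob.glob('*.sh'))

def run_scripts(file_list):
    """Run each script with -nogui and return a {script: exit_code} mapping."""
    results = {}
    for script in file_list:
        # -nogui is assumed to be passed through to the plotting backend
        results[script] = subprocess.call(['sh', script, '-nogui'])
    return results
```

An exit code of 0 then indicates the script ran successfully, which is what the test asserts on.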
@brijeshmodi12 Have a look at https://github.com/openworm/muscle_model/blob/master/.travis.yml. This calls most of the shell scripts which require testing already. It uses the -nogui option to suppress the windows opening.
@pgleeson Thank you for the link. @slarson also suggested using -nogui. I shall soon use it in the code and commit the changes. Thank you for your time.
I have made the required changes and updated the code. Please have a look. https://github.com/brijeshmodi12/muscle_model/blob/shell_script_tester/NeuroML2/shell_script_tester.py
From @brijeshmodi12 --
Here is the output when I run `py.test shell_script_tester.py` on the command line:

```
========================= test session starts =========================
platform linux2 -- Python 2.7.10 -- py-1.4.27 -- pytest-2.7.1
rootdir: /home/brijesh/PycharmProjects/ow/muscle_model, inifile:
collected 1 items

shell_script_tester.py F

============================== FAILURES ===============================
_________________________ test_shell_scripts __________________________

file_list = ['analyse_k_fast.sh', 'analyse_k_slow.sh', 'ivcurve_ca_boyle.sh', 'analyse_ca_boyle.sh', 'analyse_all.sh', 'test.sh']

    def test_shell_scripts(file_list=collect_files()):
        '''
        Tests all the shell scripts based on its exit code
        :param file_list: contains files that are to be tested
        :return: exitcode of each file
        '''
        returncode_list = []
        for each_script in file_list:
            file_name = ' "./' + each_script + '"'
            returncode = sp.call(file_name + ' -nogui', shell=True)
            returncode_list.append(returncode)
            # if returncode == 0:
            #     print(file_name + '\n Executed Successfully with code: ' + str(returncode))
            # else:
            #     print(file_name + '\n Execution failed with code: ' + str(returncode))

        # prints the final output in the tabular format
        # print("Result.\t\tExitCode\tFile name")
        # for i in range(len(file_list)):
        #     if returncode_list[i] == 0:
        #         result = "Success"
        #     else:
        #         result = "Failed."
        #     print(result + "\t\t" + str(returncode_list[i]) + " \t\t " + file_list[i])

        for i in range(len(file_list)):
>           assert(returncode_list[i] == 0), "Test Failed for " + file_list[i]
E           AssertionError: Test Failed for test.sh
E           assert 127 == 0

shell_script_tester.py:50: AssertionError
------------------------- Captured stdout call ------------------------
pyNeuroML >>> Analysing channels from files: ['k_fast.channel.nml']
pyNeuroML >>> Generating LEMS file to investigate k_fast in k_fast.channel.nml, -55mV->80mV, 34.0degC
pyNeuroML >>> Analysing channels from files: ['k_slow.channel.nml']
pyNeuroML >>> Generating LEMS file to investigate k_slow in k_slow.channel.nml, -55mV->80mV, 34.0degC
pyNeuroML >>> Analysing channels from files: ['ca_boyle.channel.nml']
pyNeuroML >>> Generating LEMS file to investigate ca_boyle in ca_boyle.channel.nml, -40mV->80mV, 6.3degC
pyNeuroML >>> Analysing channels from files: ['ca_boyle.channel.nml']
pyNeuroML >>> Generating LEMS file to investigate ca_boyle in ca_boyle.channel.nml, -55mV->80mV, 34.0degC
pyNeuroML >>> Analysing channels from files: ['k_fast.channel.nml', 'k_slow.channel.nml', 'ca_boyle.channel.nml']
pyNeuroML >>> Generating LEMS file to investigate k_fast in k_fast.channel.nml, -100mV->100mV, 34.0degC
pyNeuroML >>> Generating LEMS file to investigate k_slow in k_slow.channel.nml, -100mV->100mV, 34.0degC
pyNeuroML >>> Generating LEMS file to investigate ca_boyle in ca_boyle.channel.nml, -100mV->100mV, 34.0degC
pyNeuroML >>> Written HTML info to: /home/brijesh/PycharmProjects/ow/muscle_model/NeuroML2/channel_summary/ChannelInfo.html
pyNeuroML >>> Written Markdown info to: /home/brijesh/PycharmProjects/ow/muscle_model/NeuroML2/channel_summary/README.md
test passed for analyse_k_fast.sh
test passed for analyse_k_slow.sh
test passed for ivcurve_ca_boyle.sh
test passed for analyse_ca_boyle.sh
test passed for analyse_all.sh
```
@brijeshmodi12 -- looking good; going in the right direction. It looks like you have correctly used the pytest framework. Have you been able to commit this to your fork of the muscle model?
To make the next step, now we need to make the test pass instead of fail, as it currently does. It is not clear if you are correctly capturing the return code because it looks like the script is exiting correctly. The goal is to make the test only fail if the script doesn't exit correctly, and only succeed if the script does exit correctly. Do you know what we might do to make that happen?
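One possible refactoring (a sketch, not the actual shell_script_tester.py) that makes pass/fail behavior clearer is to parametrize the test so pytest runs one test per script: each parametrized case fails only when its script exits with a non-zero code, and succeeds only when it exits with 0.

```python
# Sketch: one pytest test per shell script via parametrization.
# Each case fails exactly when its script's exit code is non-zero.
import glob
import subprocess

import pytest

@pytest.mark.parametrize('script', sorted(glob.glob('*.sh')))
def test_shell_script(script):
    returncode = subprocess.call(['sh', script, '-nogui'])
    assert returncode == 0, 'Test failed for %s (exit code %d)' % (script, returncode)
```

With this layout, the pytest report names the failing script directly instead of aborting the single combined test at the first failure.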
@slarson Yes, I have committed the code to my fork of the muscle model.
The test is failing for the following reason: there are 6 shell scripts in muscle_model/NeuroML2/. Of these 6 files, test.sh does not exit with code 0, so the test fails.
To make the test pass, all the shell scripts would need to exit with code 0. It would be great if you could try this script on your machine; on mine, test.sh fails every time.
Here is the output when I run test.sh independently:

```
brijesh@brijesh-TECRA-R850:~/PycharmProjects/ow/muscle_model/NeuroML2$ ./test.sh
./test.sh: line 3: jnml: command not found
```
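The exit code 127 seen in the earlier pytest failure is the shell's conventional status for "command not found", which matches the `jnml: command not found` message here. One way the tester could surface this more clearly (a sketch; the `missing_commands` helper is hypothetical, and `shutil.which` requires Python 3.3+, unlike the Python 2.7 used in this thread) is to check for required commands on PATH before running the scripts:

```python
# Sketch: detect missing prerequisite commands (e.g. jnml) up front,
# instead of letting scripts fail with the shell's exit code 127.
import shutil

def missing_commands(required=('jnml',)):
    """Return the required commands that are not found on PATH."""
    return [cmd for cmd in required if shutil.which(cmd) is None]
```

A test run could then report "jnml is not installed" instead of an opaque `assert 127 == 0`.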
Here is the test result when I excluded test.sh from the test:

```
brijesh@brijesh-TECRA-R850:~/PycharmProjects/ow/muscle_model/NeuroML2$ py.test shell_script_tester.py
========================= test session starts =========================
platform linux2 -- Python 2.7.10 -- py-1.4.27 -- pytest-2.7.1
rootdir: /home/brijesh/PycharmProjects/ow/muscle_model, inifile:
collected 1 items

shell_script_tester.py .

====================== 1 passed in 47.46 seconds ======================
```
@brijeshmodi12 there is no need to re-run test.sh; it was intended as a quick local check that jnml and analyse_k_fast.sh work correctly when jNeuroML is installed locally. I've removed it now. Your tests, together with the OMV tests in .travis.yml (which install and test with jnml), should be sufficient now.
@pgleeson Thank you for taking care of that. @slarson The test now runs without any failures, since all the shell scripts exit with code 0. What should be the next step?
@brijeshmodi12 Terrific. Now it's time to modify .travis.yml to run the test, and to try it with your fork of the repo on Travis-CI. Google a quickstart guide for Travis-CI if you need one. Let's see if you can post a link to Travis-CI successfully running your test.
Here is the link to Travis-CI successfully running the test:
https://travis-ci.org/brijeshmodi12/muscle_model/jobs/120125839#L934
https://github.com/openworm/muscle_model/blob/tests/.travis.yml#L35