The test would take the working copy, tarball it, and install it in a VM or container; a rough sketch of such a wrapper is given after the command listing below. Then something like the following would be run:
python setup.py install
# should cause a dbus error since the daemon is not running yet
oscapd-cli task
systemctl enable oscapd
systemctl start oscapd
# now that the daemon is running, listing tasks (none specified) should produce no error
oscapd-cli task
oscapd-cli task_create -i
# put the necessary values in stdin to make the interactive task create finish
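# one possible way to script this step is to feed the answers via a here-doc;
# this is only a sketch and the placeholder lines would have to match the
# actual task_create -i prompts (title, target, content, profile, schedule, etc.):
#   oscapd-cli task_create -i <<'EOF'
#   <one answer per interactive prompt, in prompt order>
#   EOF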
# check that the task exists and is saved
oscapd-cli task 1
oscapd-cli result 1 # should be empty
systemctl restart oscapd
# the task and the empty result list should persist even after a restart of the service
oscapd-cli task 1
oscapd-cli result 1 # should be empty
# this should fail, the task is disabled!
oscapd-cli task 1 run
oscapd-cli task 1 enable
# this should run fine
oscapd-cli task 1 run
# should have 1 result
oscapd-cli result 1
oscapd-cli result 1 1
oscapd-cli result 1 1 arf
oscapd-cli result 1 1 html
oscapd-cli result 1 1 stdout
oscapd-cli result 1 1 stderr
oscapd-cli result 1 1 exit_code
oscapd-cli result 1 1 remove
# this should run fine
oscapd-cli task 1 run
oscapd-cli result 1 remove
# and so on...
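The tarball-and-install wrapper mentioned at the top could look roughly like the sketch below. This is only an illustration, not a worked-out implementation: the container runtime (podman), the Fedora image, the dependency list, and the tarball/directory names are all assumptions, and a VM or a systemd-capable container is needed because the steps above use systemctl.

# build a source tarball from the current working copy
python setup.py sdist
TARBALL=$(ls dist/*.tar.gz | head -n 1)

# start a throwaway systemd-capable container and copy the tarball in
# (image name and --systemd behaviour are assumptions, adjust as needed)
CID=$(podman run -d --systemd=always fedora:latest /sbin/init)
podman cp "$TARBALL" "$CID:/root/"

# install the assumed dependencies and the tarball inside the container
podman exec "$CID" bash -c 'dnf -y install python3 python3-dbus openscap-scanner && cd /root && tar xf *.tar.gz && cd */ && python setup.py install'

# ... then drive the oscapd-cli / systemctl sequence above via further podman exec calls ...

# clean up
podman rm -f "$CID"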
A test like this would ensure we don't break basic functionality during development.
PS: I know Beaker would be pretty good for this test, but I'd like to have an upstream test that all devs can run on their machines. If that turns out to be too difficult, we can resort to Beaker.