Closed · s-celles closed this issue 8 years ago
It's going to be much harder with Qt, unfortunately. Some tests we could pull over (like data parsing). I had some success testing GUIs using a combination of Xnest + xmacro + screenshotting, but it's a lot of work.
I think that even basic integration tests (not full unit tests) could catch some simple errors, i.e. ensure that the package installs correctly on both Python 2.7 and 3.x, and that view(data) doesn't raise an error.
I'm aware that's probably not enough, but it can catch some errors.
On Travis you need to have the DISPLAY environment variable defined. You may need to add this to your .travis.yml:

```yaml
before_install:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"
  - "/sbin/start-stop-daemon --start --quiet --pidfile /tmp/custom_xvfb_99.pid --make-pidfile --background --exec /usr/bin/Xvfb -- :99 -ac -screen 0 1280x1024x16"
```

see http://docs.travis-ci.com/user/gui-and-headless-browsers/
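Putting that snippet in context, a complete minimal .travis.yml could look like the following sketch; the Python versions, the dependency list, and the nosetests invocation are illustrative assumptions, not taken from this thread:

```yaml
language: python
python:
  - "2.7"
  - "3.4"
before_install:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"
install:
  - pip install numpy pandas nose
  - pip install -e .
script:
  - nosetests -s -v
```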
I was thinking of a very basic set of tests, i.e. a file test_gtabview.py inside a tests directory:
```python
from gtabview import view
import gtabview
import numpy as np
import pandas as pd

lst2 = [[1, 2], [3, 4]]

gtabview.TESTING = True

def display_row(x):
    return "R%02d" % x

def display_col(x):
    return "C%02d" % x

v_display_row = np.vectorize(display_row)
v_display_col = np.vectorize(display_col)

def test_list():
    view(lst2)

def test_numpy_array():
    a2 = np.array(lst2)
    view(a2)

def test_numpy_matrix():
    m = np.matrix(lst2)
    view(m)

def test_numpy_array_3d():
    a3 = np.array(np.random.random((3, 2, 4)))
    view(a3)

def test_numpy_array_4d():
    a4 = np.array(np.random.random((3, 2, 4, 5)))
    view(a4)

def test_pandas_series():
    N = 10
    a_index = v_display_row(np.arange(N) + 1)
    s = pd.Series(np.random.random(N), index=a_index)
    s.index.name = "Idx"
    s.name = "Serie"
    view(s)

def test_pandas_dataframe():
    Ny, Nx = 10, 3
    a_index = v_display_row(np.arange(Ny) + 1)
    a_columns = v_display_col(np.arange(Nx) + 1)
    df = pd.DataFrame(np.random.random((Ny, Nx)), index=a_index, columns=a_columns)
    df.index.name = "Idx"
    df.columns.name = "Col"
    view(df)

def test_pandas_panel():
    # Ny, Nx = 10, 3
    # a_index = v_display_row(np.arange(Ny) + 1)
    # a_columns = v_display_col(np.arange(Nx) + 1)
    # d = {}
    # def new_dataframe(Ny, Nx):
    #     return pd.DataFrame(np.random.random((Ny, Nx)), index=a_index, columns=a_columns)
    # for i in range(3):
    #     d["I%d" % (i+1)] = new_dataframe(Ny, Nx)
    # panel = pd.Panel.from_dict(d)
    items = ['Open', 'High', 'Low', 'Close']
    minor_axis = ['AAAA', 'BBBB', 'CCCC']
    periods = 5
    panel = pd.Panel(np.random.random((len(items), periods, len(minor_axis))),
                     items=items,
                     major_axis=pd.date_range('1/1/2000', periods=periods),
                     minor_axis=minor_axis)
    # print(panel)
    # print(panel.loc['a', :, :])
    view(panel)

def test_pandas_panel4d():
    labels = ['Label1', 'Label2']
    items = ['Item1', 'Item2']
    periods = 5
    minor_axis = ['A', 'B', 'C', 'D']
    p4d = pd.Panel4D(np.random.random((len(labels), len(items), periods, len(minor_axis))),
                     labels=labels,
                     items=items,
                     major_axis=pd.date_range('1/1/2000', periods=periods),
                     minor_axis=minor_axis)
    view(p4d)

def test_pandas_panel5d():
    """see http://stackoverflow.com/questions/17261678/can-panel4d-and-panelnd-objects-be-saved"""
    pass

def test_csv():
    pass

def test_blaze_csv():
    pass

def test_blaze_table_uri():
    pass

def test_excel_xls():
    pass

def test_excel_xlsx():
    pass
```
You can run all tests using:

```shell
$ nosetests
```

or, if you want verbose output without stdout capture:

```shell
$ nosetests -s -v
```

You can run only one test using:

```shell
$ nosetests -s -v tests/test_gtabview.py:test_pandas_dataframe
```

or:

```shell
$ nosetests -s -v tests.test_gtabview:test_pandas_dataframe
```

(you must ensure that the tests directory contains a blank __init__.py file.)
My problem is that I need to close windows manually
On 25/07/15 14:32, scls19fr wrote:

> My problem is that I need to close windows manually

We could use wait=False, detach=True for testing.
view() returns a Viewer instance that you can close() at the end as well.
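A minimal sketch of what such a test could look like, assuming view() accepts wait/detach keyword arguments and returns a handle with a close() method (as described above). The FakeViewer stand-in below replaces the real gtabview so the pattern is runnable without Qt:

```python
class FakeViewer(object):
    """Stand-in for gtabview's Viewer handle (assumption: the real one
    exposes a close() method, per the discussion)."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def view(data, wait=False, detach=True):
    """Stand-in for gtabview.view: with wait=False it returns immediately
    instead of blocking on the Qt event loop."""
    return FakeViewer()

def test_list_closes_window():
    v = view([[1, 2], [3, 4]], wait=False, detach=True)
    try:
        pass  # assertions about the displayed data would go here
    finally:
        v.close()  # make sure the window never needs manual closing
    assert v.closed
```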
Could we have a sort of global setting?

`gtabview.TESTING = True`

By default it would be `False`. Inside the view function you could then use wait=False, detach=True.
On 25/07/15 14:43, scls19fr wrote:

> Could we have a sort of global setting?
>
> `gtabview.TESTING = True`
>
> By default it would be `False`. Inside the view function you could then use wait=False, detach=True.

It's already there: gtabview.DETACH and gtabview.WAIT control the default behavior.
I'll add a couple of basic tests later today.
Ok, my idea was to hide these details, e.g.:

`gtabview.set_testing_mode()`

Thanks
On 25/07/15 14:48, scls19fr wrote:

> Ok my idea was to hide these details.
>
> `gtabview.set_testing_mode()`

In a couple of other projects, I just set the variables in the setup phase of the test.
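That setup-phase approach could look like the following sketch. The WAIT/DETACH names come from the thread; the SimpleNamespace stands in for the imported gtabview module so the pattern is runnable here, and everything else is an assumption:

```python
import types

# Stand-in for `import gtabview`; per the discussion above, the real module
# exposes module-level WAIT and DETACH flags controlling view()'s defaults.
gtabview = types.SimpleNamespace(WAIT=True, DETACH=False)

def setup_module(module):
    """nose calls this once before any test in the module runs."""
    gtabview.WAIT = False    # don't block until the window is closed
    gtabview.DETACH = True   # run the GUI detached from the caller

def teardown_module(module):
    """Restore the defaults after the module's tests finish."""
    gtabview.WAIT = True
    gtabview.DETACH = False
```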
My idea is that when testing mode is set you could, for example, also print to the console what you display using Qt. That helps to compare output in "manual" tests. In fact we should have both a manual testing mode and an auto testing mode.
On 25/07/15 16:03, scls19fr wrote:

> My idea is that when testing mode is set you can for example also print to console what you display using Qt. It helps to compare display in "manual" tests
This would be hard to do.
What I was thinking is that I can test the results of one of the core functions (as_model) and compare the results item-by-item with known good results. This exercises 100% of the I/O work, which is not bad.
Then you can also run view() directly via Xvfb, simply screenshotting at each view() call and comparing the output with a known image (pretty much what matplotlib does).
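A sketch of that item-by-item comparison: the as_model name is from the thread, but its signature and the model structure here are assumptions, with a toy stand-in so the pattern is runnable:

```python
def as_model(data):
    """Toy stand-in for gtabview's internal as_model: normalize the input
    into a (shape, cells) structure suitable for display."""
    rows = [list(row) for row in data]
    return {'shape': (len(rows), len(rows[0]) if rows else 0),
            'cells': rows}

def assert_model_equal(model, expected):
    """Compare a model against a known-good result, item by item,
    so a failure pinpoints the exact cell that differs."""
    assert model['shape'] == expected['shape']
    for r, (got_row, exp_row) in enumerate(zip(model['cells'], expected['cells'])):
        for c, (got, exp) in enumerate(zip(got_row, exp_row)):
            assert got == exp, "mismatch at (%d, %d): %r != %r" % (r, c, got, exp)

def test_as_model_list():
    model = as_model([[1, 2], [3, 4]])
    assert_model_equal(model, {'shape': (2, 2), 'cells': [[1, 2], [3, 4]]})
```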
It might over-trigger due to UI changes, but by comparing the results with the lower level tests you can get a pretty good idea.
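A rough sketch of the screenshot-comparison step, operating on raw pixel rows so it stays independent of any imaging library; the tolerance value is an arbitrary assumption (matplotlib's image comparison uses an RMS threshold in a similar spirit):

```python
def diff_fraction(img_a, img_b):
    """Return the fraction of pixels that differ between two images,
    each given as a list of rows of (r, g, b) tuples."""
    if len(img_a) != len(img_b) or any(len(ra) != len(rb)
                                       for ra, rb in zip(img_a, img_b)):
        return 1.0  # different dimensions: treat as completely different
    total = sum(len(row) for row in img_a)
    if total == 0:
        return 0.0
    differing = sum(pa != pb
                    for ra, rb in zip(img_a, img_b)
                    for pa, pb in zip(ra, rb))
    return differing / float(total)

def assert_screenshot_matches(img, reference, tolerance=0.01):
    """Fail when more than `tolerance` of the pixels changed."""
    frac = diff_fraction(img, reference)
    assert frac <= tolerance, "%.1f%% of pixels differ" % (100 * frac)
```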
The only thing is that I don't want to add too much stuff now, since I'm thinking about implementing editing, and that might again shuffle things around. From experience, I prefer to spend a bit more time testing manually in the beginning.
Getting there: https://travis-ci.org/wavexx/gtabview
Hello,
You might also check code quality (don't expect 100% ... it's just a metric to keep you informed).
You can check PEP8 conformance locally with pylint, pychecker, or pyflakes, see:
http://stackoverflow.com/questions/1428872/pylint-pychecker-or-pyflakes
Some people use tox to automate this (and more). I haven't used it myself.
There are also interesting free online services (especially for Python and open-source projects).
Test coverage might also be considered; there are online services for test coverage as well.
You should probably add to your setup.py:

```python
extras_require={
    'dev': ['check-manifest', 'nose'],
    'test': ['coverage', 'nose'],
},
```

NumPy and pandas might also be dependencies for 'test' and 'dev'.
You can also probably add badges for the PyPI package: http://shields.io/
Kind regards
coverage is already submitted to coveralls.io, although right now the service doesn't seem to work (https://coveralls.io/github/wavexx/gtabview).
I've set up a large test matrix, in order to have all combinations of dependencies (mostly, to avoid extra dependencies creeping in).
I don't think coveralls itself is broken. I think a .coveralls.yml is missing, or .travis.yml should be modified (depending on whether you use python-coveralls or coveralls-python):
https://coveralls.zendesk.com/hc/en-us/articles/201342869-Python
https://github.com/z4r/python-coveralls
https://github.com/coagulant/coveralls-python
or a .coverage file is missing (https://github.com/z4r/python-coveralls).
In .travis.yml: create a .coverage file using coverage, pytest-cov, or nosexcover, then submit it in the after_success step.
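For reference, a hedged sketch of what that could look like in .travis.yml; the exact commands depend on which coveralls client is installed:

```yaml
install:
  - pip install coverage coveralls nose
script:
  - nosetests --with-coverage --cover-package=gtabview
after_success:
  - coveralls
```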
On 27/07/15 14:01, scls19fr wrote:

> or a `.coverage` file is missing
I already have 3 projects submitting stuff to coveralls, so I'm not sure what's wrong. .coveralls.yml is only needed if you're not using travis-ci, and .coverage is correctly generated as well by nosetests --with-coverage (the build logs show the real coverage, and submission seems to work as well).
No idea, I submitted an email for feedback earlier today.
Thanks. gtabview will become my new example for my new Python projects ;-)
You might have a look at Landscape and Codacy. They take 2 minutes to enable and are very useful.
coveralls seems to have caught up. I added more tests regarding data handling, but of course the tests for view() and the matplotlib interaction will require something more hackish..
I'll try to do something about it (at least for matplotlib) later on..
Travis is driving me insane. I've never had so many problems with package installations and versions as with their new "containers". They're faster, but break in so many possible ways it's not funny. I'm going to switch back to their "sudo: required" approach.
Good luck @wavexx
I think this is covered successfully now.
Hello,
adding unit tests and continuous integration (e.g. a .travis.yml file) could help to catch some bugs. You might have a look at @mdbartos's PR to firecat53/tabview: https://github.com/firecat53/tabview/pull/116/files
Kind regards