niazim3 closed this issue 6 years ago.
Note: interp.py in the Python subfolder defines

```python
def lin_interp(y1, y2, x1, x2, input_param):
    ...
def proper_index(index1, index2, data, value):
    ...
# find_bounds: numpy.ndarray numpy.ndarray Num Num -> Int Int Int Int Int
def find_bounds(data1, data2, value1, value2):
    ...
# interp: Int Int Int Int Int Int numpy.ndarray numpy.ndarray numpy.ndarray Num Num -> Num
def interp(idx, jdx, kdx, num_interp1, num_interp2, data1, data2, data3, value1, value2):
    ...
```

which was simplified to interp.py in the Python_Simplified subfolder, defining

```python
class BoundError(Exception):
    ...
def lin_interp(x1, y1, x2, y2, x):
    ...
# arr => Sj (i.e. sequence of numbers representing possible standOff distances)
# v => SD calculated (i.e. number)
def indInSeq(arr, v):
    ...
def matrixCol(mat, c):
    ...
def interpY(x_array, y_array, z_array, x, z):
    ...
def interpZ(x_array, y_array, z_array, x, y):
    ...
```
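The simplified signature lin_interp(x1, y1, x2, y2, x) suggests standard two-point linear interpolation. A minimal sketch of that behavior (an assumption based only on the signature; the repo's actual body may differ, e.g. in how it uses BoundError):

```python
def lin_interp(x1, y1, x2, y2, x):
    # Interpolate linearly between the points (x1, y1) and (x2, y2),
    # evaluated at x. Assumes x1 != x2; the real implementation may
    # handle that case differently (e.g. by raising an exception).
    return y1 + (y2 - y1) / (x2 - x1) * (x - x1)

print(lin_interp(0.0, 0.0, 10.0, 5.0, 4.0))  # 2.0
```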
Because tests and implementations exist in both Python and Python_Simplified, and in the subfolders Python_Simplified/Test and Python_Simplified/OldTests, the content is duplicated in multiple areas, making the structure and content of the interpolation code for GlassBR somewhat confusing. @smiths, is there original documentation for testing being followed, against which I can double-check the inputs and outputs?
Edit: If there is no such file, it's fine, as I'll comb through the files one-by-one then...
@niazim3, there is no such file. 😞 All the files for the case studies are in the repo. Unfortunately, the tests were thrown together, rather than being done systematically. It would be wonderful if you could make order out of the test cases, and produce the kind of documentation you had hoped to find. 😄 The eventual goal of Drasil will be to generate the test cases and their documentation. The Case Study example is intended to show an example of the target that the Drasil generator should aim for. We also want to identify the minimum information that the human user needs to provide to Drasil, since the human will have to make some decisions and add some new knowledge.
Note to self for cleanup of infrastructure (WIP)
For INTERPOLATION TESTING, /Test contains testInterp.py, which parameterizes the hard-coded inputs of the /OldTests testInterp{#}.py files (# = '' to 14); all the comments from these files were entered under the "Significant inputs" column heading in Table 5 of TestingPythonGlassBR.pdf. This means the files in /OldTests can be removed after:
[ ] ensuring the comments in the documentation PDF are correct
[ ] ensuring the inputs in interp.txt were copied over correctly
For INTERPOLATION IMPLEMENTATION, as mentioned in JacquesCarette/Drasil/issues/262, _PythonSimplified contains the "improved" version of Python, and "The code is correct, it is the tests that are not." This means new/updated test cases are required for the functions matrixCol(mat, c), interpY(x_array, y_array, z_array, x, z), and interpZ(x_array, y_array, z_array, x, y), as they replace find_bounds(data1, data2, value1, value2) and interp(idx, jdx, kdx, num_interp1, num_interp2, data1, data2, data3, value1, value2).
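To make the needed new test cases concrete, here is a hypothetical sketch of matrixCol (column extraction from a matrix stored as a list of rows) together with a candidate check; the actual GlassBR implementation may differ:

```python
def matrixCol(mat, c):
    # Return column c of a matrix represented as a list of rows.
    # This body is an assumption based on the function's name and signature.
    return [row[c] for row in mat]

# A candidate test case in the style the /Test files could adopt:
assert matrixCol([[1, 2], [3, 4], [5, 6]], 1) == [2, 4, 6]
```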
For READTABLE TESTING, the tests are currently broken; running make test produces the following error, which is related to the implementation:
AttributeError: 'bool' object has no attribute 'all'
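A likely cause of this AttributeError (an assumption, since the failing assertion itself is not shown here) is comparing plain Python lists with ==, which yields a single bool, and then calling .all() on the result numpy-style:

```python
import numpy as np

expected = [1.0, 2.0, 3.0]
actual = [1.0, 2.0, 3.0]

# Comparing two plain lists returns one bool, so .all() fails:
try:
    (expected == actual).all()
except AttributeError as e:
    print(e)  # 'bool' object has no attribute 'all'

# With numpy arrays, == broadcasts elementwise and .all() works:
assert (np.array(expected) == np.array(actual)).all()
```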
For READTABLE IMPLEMENTATION, three functions (read_z_array(filename), read_x_array(filename, length), and read_y_array(filename, length)) were introduced in the _PythonSimplified version of readTable.py to replace the Python version, which contained only read_table(filename).
MAINFUN IMPLEMENTATION differences between the Python and _PythonSimplified versions: https://www.diffchecker.com/U73K51SG
MAINFUN TESTING is improved in /Test, since main.txt contains all the necessary input and output files (which were the only discrepancies between the testMain{#}.py files in /OldTests); it seems to be breaking in relation to interp.
Note: the testTable1.txt and testTable2.txt files are no longer useful, since testReadTable completely contains all the data they could have provided, just split into different arrays:

```
testTable{#}.txt:
list of weights
a, b, c, d, e, f, g, h
i, j, k, l, m, n, o, p
...
```

```
readTable.txt:
testTable{#}.txt
list of weights
[[[a, c, e, g], [i, k, m, o]...]]
[[[b, d, f, h], [j, l, n, p]...]]
```

for both files. In other words, the original text looks like a merged version of the lines in the new text.
Next steps: determine the functionality of reading in x, y, and z arrays and the semantics of doing so.
Update (6/5/2018):
Semantics were determined (see below). New input files are not necessary, since the new functions took into account the merged formatting.
older function --> w_array = num_col; data_sd = array1; data_q = array2
newer function --> w_array = read_z_array; data_sd = read_x_array; data_q = read_y_array
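Under that mapping, a minimal sketch of the three newer functions shows how the merged format (first line of weights, then rows of interleaved x/y values) can be split without new input files. This is hypothetical: the repo's versions take a filename rather than a list of lines, and their details may differ.

```python
# Assumed merged format (an assumption based on the note above):
#   line 1:   z values (the "list of weights")
#   lines 2+: x and y values interleaved per row (x0, y0, x1, y1, ...)

def read_z_array(lines):
    # First line holds the z (weight) values.
    return [float(tok) for tok in lines[0].split(',')]

def read_x_array(lines, length):
    # Even-indexed entries of each data row are x values.
    return [[float(tok) for tok in row.split(',')][0::2][:length]
            for row in lines[1:]]

def read_y_array(lines, length):
    # Odd-indexed entries of each data row are y values.
    return [[float(tok) for tok in row.split(',')][1::2][:length]
            for row in lines[1:]]

lines = ["1.0, 2.0", "10, 100, 20, 200"]
print(read_z_array(lines))     # [1.0, 2.0]
print(read_x_array(lines, 2))  # [[10.0, 20.0]]
print(read_y_array(lines, 2))  # [[100.0, 200.0]]
```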
@niazim3, please feel free to improve the test code and documentation as you see fit. The eventual goal is to write something that you wish you had found when you first looked at the test code. 😄 In writing the new test documentation and test code, please change the current names to more meaningful names. testTable1 and testTable2 do not tell us anything. It is also good if the test case summaries do not needlessly repeat information. If many test cases share inputs, then we should only summarize these inputs once, and simply distinguish each test case by how it is different from the others. I made a similar comment for:
https://github.com/JacquesCarette/Drasil/issues/330
Perhaps you and @elwazana should discuss the best way to document the test cases? Our goal is something that makes sense to humans, but that we can also imagine generating from the knowledge in Drasil.
As of commit 01e1337, testReadTable is no longer causing failures or errors. However, the test definitions use numbers that seem to come out of nowhere when looking at just the code. Either read_x_array or read_y_array will be updated to use the length input parameters to produce these number lists (the semantics of such a list will still need to be determined). For now, just going to point out that these numbers are the first length items of the lists produced from the first length+1 lines returned by the functions. E.g. for https://github.com/smiths/caseStudies/blob/01e13378b237d23756bfd41af5efb8732feb9fca/CaseStudies/glass/Implementations/Python/Test/testReadTable.py#L40-L45, the read_x_array('TSD.txt', len(w_array)) call produced the following list (a partial list of the approximately 500 lines produced):
Not sure why these weren't coming up earlier, but printing (self.errorMsg[i] + " is " + context.exception.args[0]) in test_check_constraints leads to the following output. We need to look through the input files to ensure different types of invalid inputs are being entered (which can be done systematically once the test cases are documented, following #80).
```
InputError: a and b must be greater than 0 is InputError: a and b must be greater than 0
InputError: a and b must be greater than 0 is InputError: a and b must be greater than 0
InputError: a/b must be between 1 and 5 is InputError: a/b must be between 1 and 5
InputError: a/b must be between 1 and 5 is InputError: a/b must be between 1 and 5
InputError: t must be in [2.5,2.7,3.0,4.0,5.0,6.0,8.0,10.0,12.0,16.0,19.0,22.0] is InputError: a/b must be between 1 and 5
InputError: wtnt must be between 4.5 and 910 is InputError: a/b must be between 1 and 5
InputError: wtnt must be between 4.5 and 910 is InputError: a/b must be between 1 and 5
InputError: TNT must be greater than 0 is InputError: a/b must be between 1 and 5
InputError: SD must be between 6 and 130 is InputError: a/b must be between 1 and 5
InputError: SD must be between 6 and 130 is InputError: a/b must be between 1 and 5
InputError: a and b must be greater than 0 is InputError: a and b must be greater than 0
InputError: a and b must be greater than 0 is InputError: a and b must be greater than 0
InputError: TNT must be greater than 0 is InputError: a/b must be between 1 and 5
```
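The mismatches above suggest asserting on exception messages, not just exception types. A hedged sketch, using a stand-in InputError and check_constraints (hypothetical names and bodies modeled on the output above, not the repo's actual code), of how test_check_constraints could compare the exact message:

```python
import unittest

class InputError(Exception):
    pass

def check_constraints(a, b):
    # Stand-in for the GlassBR constraint checks; only two checks shown.
    if a <= 0 or b <= 0:
        raise InputError("a and b must be greater than 0")
    if not (1 <= a / b <= 5):
        raise InputError("a/b must be between 1 and 5")

class TestCheckConstraints(unittest.TestCase):
    def test_invalid_aspect_ratio(self):
        with self.assertRaises(InputError) as context:
            check_constraints(12.0, 1.0)  # a/b = 12, out of range
        # Comparing the message catches cases where the wrong check fired.
        self.assertEqual(context.exception.args[0],
                         "a/b must be between 1 and 5")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestCheckConstraints)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```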
@smiths Should this issue be closed since #134 is tracking the new code for GlassBR and new tests will be implemented for that implementation?
Yes, I think we should close this issue. Part of my motivation for wanting to redesign GlassBR was the confusion around testing in the previous version. I think things should fit together better with the redesign.
This issue is a continuation of JacquesCarette/Drasil#347.
testInterp.py: broken, as the Implementation module has no attributes find_bound and interp, and x1 is not defined (mentioned in the comments of this commit).
testReadTable.py: broken, as mentioned in the comments of this commit.
testMainFun.py: has 1 failure upon running make test.
Please update TestingPythonGlassBR.tex and TestingPythonGlassBR.pdf as deemed necessary.